Personal finance tracker
Project Overview
Project Name: Personal Finance Tracker
Group Members:
- 289229, Lukáš Trkan, lukastrkan
- 289258, Dejan Ribarovski, ribardej (derib2613)
Brief Description: Our application allows users to easily track their cash flow through multiple bank accounts. Users can label their transactions with custom categories that can be later used for filtering and visualization. New transactions are automatically fetched in the background.
Architecture Overview
Our system is a full-stack web application composed of a React frontend, a FastAPI backend, an asynchronously accessed MariaDB database with MaxScale, and background workers powered by Celery with RabbitMQ. The backend exposes REST endpoints for authentication (email/password and OAuth), users, categories, transactions, exchange rates, and bank APIs. Infrastructure for Kubernetes is managed via Terraform/OpenTofu, and the application is packaged as a Helm chart. All of this is deployed on a private TalosOS cluster running on Proxmox VE with CI/CD, with public access provided through Cloudflare tunnels. The frontend's static files are served via Cloudflare Pages. Other services deployed in the cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.
High-Level Architecture
flowchart LR
n3(("User")) <--> client["Frontend"]
proc_queue["Message Queue"] --> proc_queue_worker["Worker Service"]
proc_queue_worker -- SMTP --> ext_mail[("Email Service")]
proc_queue_worker <-- HTTP request/response --> ext_bank[("Bank API")]
proc_queue_worker <--> db[("Database")]
proc_cron["Cron"] <-- HTTP request/response --> svc["Backend API"]
svc --> proc_queue
n2["Cloudflare tunnel"] <-- HTTP request/response --> svc
svc <--> db
svc <-- HTTP request/response --> api[("UniRate API")]
client <-- HTTP request/response --> n2
The workflow is as follows:
- The client connects to the frontend. After login, the frontend automatically fetches the stored transactions from the database via the backend API, and currency rates from the UniRate API.
- When the client opts in to fetching new transactions via the Bank API, cron triggers periodic fetching using a background worker.
- After a successful load, these transactions are stored in the database and displayed to the client.
Features
- Stored transactions are encrypted in the database for security reasons.
- For every pull request, the full app is deployed on a separate URL and the tests are run by GitHub CI/CD.
- On every push to main, the production app is automatically updated.
- The UI is responsive on mobile devices.
- Slow operations (emails, transaction fetching) are handled in the background by Celery workers.
- The app exposes a Prometheus metrics endpoint, and metrics are shown in a Grafana dashboard.
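The transaction-encryption feature can be illustrated with a small sketch. This is not the project's actual code: it assumes DB_ENCRYPTION_KEY holds a Fernet key and uses the `cryptography` package to encrypt a field before it is persisted.

```python
# Illustrative sketch only: assumes DB_ENCRYPTION_KEY is a Fernet key
# and that the backend encrypts sensitive columns before persisting them.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in the app this would come from DB_ENCRYPTION_KEY
fernet = Fernet(key)

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a sensitive value (e.g. a transaction description)."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def decrypt_field(ciphertext: bytes) -> str:
    """Decrypt a value read back from the database."""
    return fernet.decrypt(ciphertext).decode("utf-8")
```

With this approach a database dump reveals only ciphertext; the key must be supplied to the backend via the environment.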
Components
- Frontend (frontend/): React + TypeScript app built with Vite. Talks to the backend via REST, handles login/registration, shows the latest transactions, supports filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange rates, and the bank API. Uses FastAPI Users for auth (JWT + OAuth), SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling background tasks (emails, transactions fetching).
- Database (MariaDB with MaxScale): Persists users, categories, and transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Queues background tasks for Celery workers.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel, etc.).
- Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.
Technologies Used
- Backend: Python, FastAPI, FastAPI Users, SQLAlchemy, Pydantic, Alembic, Celery
- Frontend: React, TypeScript, Vite
- Database: MariaDB with MaxScale
- Background jobs: RabbitMQ, Celery
- Containerization/Orchestration: Docker, Docker Compose (dev), Kubernetes, Helm
- IaC/Platform: Proxmox, Talos, Cloudflare pages, OpenTofu (Terraform), cert-manager, MetalLB, Cloudflare Tunnel, Prometheus, Loki
Prerequisites
System Requirements
Development
- Minimum RAM: 8 GB
- Storage: 10 GB+ free
Production
- Nodes: 1 control plane + 4 workers
- CPU: 4 cores
- RAM: 8 GB
- Storage: 200 GB
Required Software
Development
- Docker
- Docker Compose
- Node.js and npm
- Python 3.12
- MariaDB 11
Production
Minimal:
- domain name using Cloudflare's nameservers (for Tunnel and Pages)
- Kubernetes cluster
- kubectl
- Helm
- OpenTofu
Our setup specifics:
- Proxmox VE
- TalosOS cluster
- talosctl
- GitHub self-hosted runner with access to the cluster
- TailScale for remote access to cluster
Environment Variables
Backend
- MOJEID_CLIENT_ID, MOJEID_CLIENT_SECRET - OAuth client ID and secret for MojeID - https://www.mojeid.cz/en/provider/
- BANKID_CLIENT_ID, BANKID_CLIENT_SECRET - OAuth client ID and secret for BankID - https://developer.bankid.cz/
- CSAS_CLIENT_ID, CSAS_CLIENT_SECRET - OAuth client ID and secret for Česká spořitelna - https://developers.erstegroup.com/docs/apis/bank.csas
- DATABASE_URL (or MARIADB_HOST, MARIADB_PORT, MARIADB_DB, MARIADB_USER, MARIADB_PASSWORD) - MariaDB connection details
- RABBITMQ_USERNAME, RABBITMQ_PASSWORD - credentials for RabbitMQ
- SENTRY_DSN - Sentry DSN for error reporting
- DB_ENCRYPTION_KEY - symmetric key for encrypting sensitive data in the database
- SMTP_HOST, SMTP_PORT, SMTP_USERNAME, SMTP_PASSWORD, SMTP_USE_TLS, SMTP_USE_SSL, SMTP_FROM - SMTP configuration (host, port, auth credentials, TLS/SSL options, sender)
- UNIRATE_API_KEY - API key for UniRate
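As a hedged illustration of how the backend might resolve its database settings (the exact logic lives in the backend's config code; the driver prefix and default port here are assumptions), a helper could prefer DATABASE_URL and fall back to the individual MARIADB_* variables:

```python
import os

def database_url(env=os.environ) -> str:
    # Prefer a full DATABASE_URL; otherwise assemble one from the
    # individual MARIADB_* variables. The "mysql+asyncmy" driver
    # prefix and the default port 3306 are assumptions for this sketch.
    if "DATABASE_URL" in env:
        return env["DATABASE_URL"]
    return "mysql+asyncmy://{u}:{p}@{h}:{port}/{db}".format(
        u=env["MARIADB_USER"],
        p=env["MARIADB_PASSWORD"],
        h=env["MARIADB_HOST"],
        port=env.get("MARIADB_PORT", "3306"),
        db=env["MARIADB_DB"],
    )
```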
Frontend
VITE_BACKEND_URL- URL of the backend API
Dependencies (key libraries)
Backend: FastAPI, fastapi-users, SQLAlchemy, Pydantic v2, Alembic, Celery, uvicorn, pytest
Frontend: React, TypeScript, Vite
Local development
You can run the project with Docker Compose and a Python virtual environment for testing and development purposes.
1) Clone the Repository
git clone https://github.com/dat515-2025/Group-8.git
cd Group-8/7project
2) Install dependencies
Backend
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
3) Run Docker containers
cd ..
docker compose up -d
4) Prepare the database
bash upgrade_database.sh
5) Run backend
cd backend
# Set the backend environment variables first (see the Environment Variables section)
uvicorn app.app:fastApi --reload --host 0.0.0.0 --port 8000
6) Run Celery worker (optional, in another terminal)
cd Group-8/7project/backend
source .venv/bin/activate
celery -A app.celery_app.celery_app worker -l info
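The queue/worker hand-off that RabbitMQ and Celery provide can be illustrated with a stdlib-only sketch (this mimics the pattern, not the project's actual Celery tasks):

```python
import queue
import threading

task_queue: queue.Queue = queue.Queue()
sent = []

def worker() -> None:
    # Drain tasks until a None sentinel arrives, the way a Celery
    # worker keeps consuming from RabbitMQ in the background.
    while True:
        task = task_queue.get()
        if task is None:
            break
        if task["kind"] == "send_email":
            sent.append(f"emailed {task['to']}")

t = threading.Thread(target=worker, daemon=True)
t.start()
task_queue.put({"kind": "send_email", "to": "user@example.com"})
task_queue.put(None)  # shut the worker down once the queue is drained
t.join()
```

The web request only enqueues the task and returns immediately; the worker does the slow part (SMTP, bank API calls) out of band.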
7) Install frontend dependencies and run
cd ../frontend
npm i
npm run dev
- Backend available at: http://127.0.0.1:8000 (OpenAPI at /docs)
- Frontend available at: http://localhost:5173
Build Instructions
Backend
cd 7project/backend
# Don't forget to set the correct image tag with your registry and name
# For example lukastrkan/cc-app-demo or gitea.ltrk.dev/lukas/cc-app-demo
docker buildx build --platform linux/amd64,linux/arm64 -t CHANGE_ME --push .
Frontend
cd 7project/frontend
npm ci
npm run build
Deployment Instructions
Setup Cluster
Deployment should work on any Kubernetes cluster. However, we are using 4 TalosOS virtual machines (1 control plane, 3 workers) running on top of Proxmox VE.
- Create at least 4 VMs with TalosOS (4 cores, 8 GB RAM, 200 GB disk)
- Install talosctl for your OS: https://docs.siderolabs.com/talos/v1.10/getting-started/talosctl
- Generate Talos config
- Navigate to tofu directory
cd 7project/tofu
- Set IP addresses in environment variables
CONTROL_PLANE_IP=<control-plane-ip>
WORKER1_IP=<worker1-ip>
WORKER2_IP=<worker2-ip>
WORKER3_IP=<worker3-ip>
WORKER4_IP=<worker4-ip>
....
- Create config files
# change my-cluster to your desired cluster name
talosctl gen config my-cluster https://$CONTROL_PLANE_IP:6443
- Edit the generated configs
Apply the following changes to worker.yaml:
- Add mounts for persistent storage to the machine.kubelet.extraMounts section:
extraMounts:
  - destination: /var/lib/longhorn
    type: bind
    source: /var/lib/longhorn
    options:
      - bind
      - rshared
      - rw
- Change machine.install.image to an image with extra modules:
image: factory.talos.dev/metal-installer/88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b:v1.11.1
or you can use the latest image generated at https://factory.talos.dev with the following options:
- Bare-metal machine
- your Talos OS version
- amd64 architecture
- siderolabs/iscsi-tools
- siderolabs/util-linux-tools
- (optionally) siderolabs/qemu-guest-agent
Then copy the "Initial Installation" value and paste it into the image field.
- Add a Docker registry mirror to the machine.registries.mirrors section:
registries:
  mirrors:
    docker.io:
      endpoints:
        - https://mirror.gcr.io
        - https://registry-1.docker.io
- Apply configs to the VMs
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file controlplane.yaml
talosctl apply-config --insecure --nodes $WORKER1_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER2_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER3_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER4_IP --file worker.yaml
- Bootstrap the cluster and retrieve the kubeconfig
export TALOSCONFIG=$(pwd)/talosconfig
talosctl config endpoint https://$CONTROL_PLANE_IP:6443
talosctl config node $CONTROL_PLANE_IP
talosctl bootstrap
talosctl kubeconfig .
You can now use a Kubernetes client such as https://headlamp.dev/ with the generated kubeconfig file.
Install base services to the cluster
- Copy and edit variables
cp terraform.tfvars.example terraform.tfvars
- metallb_ip_range - set to a range available in your network for load-balancer services
- mariadb_password - password for the internal MariaDB user
- mariadb_root_password - password for the root user
- mariadb_user_name - username for the admin user
- mariadb_user_host - allowed hosts for the admin user
- mariadb_user_password - password for the admin user
- metallb_maxscale_ip, metallb_service_ip, metallb_primary_ip, metallb_secondary_ip - IPs for the database cluster; set them to static IPs from the metallb_ip_range
- s3_enabled, s3_bucket, s3_region, s3_endpoint, s3_key_id, s3_key_secret - S3-compatible storage for backups (optional)
- phpmyadmin_enabled - set to false if you want to disable phpMyAdmin
- rabbitmq-password - password for RabbitMQ
- cloudflare_account_id - your Cloudflare account ID
- cloudflare_api_token - your Cloudflare API token with permissions to manage tunnels and DNS
- cloudflare_email - your Cloudflare account email
- cloudflare_tunnel_name - name for the tunnel
- cloudflare_domain - your domain name managed in Cloudflare
- Deploy without Cloudflare module first
tofu init
tofu apply -exclude module.cloudflare
- Deploy the rest of the modules
tofu apply
Configure deployment
- Create self-hosted runner with access to the cluster or make cluster publicly accessible
- Change jobs.deploy.runs-on in .github/workflows/deploy-prod.yml and in .github/workflows/deploy-pr.yaml to your runner label
- Add variables to GitHub in repository settings:
  - PROD_DOMAIN - base domain for deployments (e.g. ltrk.cz)
  - DEV_FRONTEND_BASE_DOMAIN - base domain for your Cloudflare Pages
- Add secrets to GitHub in repository settings:
- CLOUDFLARE_ACCOUNT_ID - same as in tofu/terraform.tfvars
- CLOUDFLARE_API_TOKEN - same as in tofu/terraform.tfvars
- DOCKER_USER - your docker registry username
- DOCKER_PASSWORD - your docker registry password
- KUBE_CONFIG - content of your kubeconfig file for the cluster
- PROD_DB_PASSWORD - same as MARIADB_PASSWORD
- PROD_RABBITMQ_PASSWORD - same as RABBITMQ_PASSWORD
- PROD_DB_ENCRYPTION_KEY - same as DB_ENCRYPTION_KEY
- MOJEID_CLIENT_ID
- MOJEID_CLIENT_SECRET
- BANKID_CLIENT_ID
- BANKID_CLIENT_SECRET
- CSAS_CLIENT_ID
- CSAS_CLIENT_SECRET
- SENTRY_DSN
- SMTP_HOST
- SMTP_PORT
- SMTP_USERNAME
- SMTP_PASSWORD
- SMTP_FROM
- UNIRATE_API_KEY
- On GitHub, open the Actions tab, select "Deploy Prod", and run the workflow manually
Testing Instructions
The tests are located in 7project/backend/tests directory. All tests are run by GitHub actions on every pull request and push to main. See the workflow here.
If you want to run the tests locally, the preferred way is to use a bash script that starts a test DB container with Docker Compose and removes it afterwards.
cd 7project/backend
bash test_locally.sh
Unit Tests
There are only 5 basic unit tests, since our service logic is very simple.
bash test_locally.sh --only-unit
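For flavour, a unit test in this style might look like the following. This is a hypothetical example; the helper and its number format are invented for illustration, and the real tests live in backend/tests.

```python
# Hypothetical service-level unit test; normalize_amount and its
# bank-statement number format are invented for illustration.
def normalize_amount(raw: str) -> float:
    """Parse a bank-formatted amount such as '1 234,56' into a float."""
    return float(raw.replace(" ", "").replace(",", "."))

def test_normalize_amount():
    assert normalize_amount("1 234,56") == 1234.56
    assert normalize_amount("-12,00") == -12.0

test_normalize_amount()
```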
Integration Tests
There are 9 integration tests, testing individual backend API logic.
bash test_locally.sh --only-integration
End-to-End Tests
There are 7 end-to-end tests, testing more complex app logic.
bash test_locally.sh --only-e2e
Usage Examples
All endpoints are documented at OpenAPI: http://127.0.0.1:8000/docs
Auth: Register and Login (JWT)
# Register
curl -X POST http://127.0.0.1:8000/auth/register \
-H 'Content-Type: application/json' \
-d '{
"email": "user@example.com",
"password": "StrongPassw0rd",
"first_name": "Jane",
"last_name": "Doe"
}'
# Login (JWT)
TOKEN=$(curl -s -X POST http://127.0.0.1:8000/auth/jwt/login \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'username=user@example.com&password=StrongPassw0rd' | jq -r .access_token)
echo $TOKEN
# Call a protected route
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
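The same pieces can be built from Python using only the standard library; the endpoint paths and field names follow the fastapi-users defaults shown in the curl commands above.

```python
from urllib.parse import urlencode

def jwt_login_body(email: str, password: str) -> str:
    # /auth/jwt/login expects an x-www-form-urlencoded body whose
    # "username" field carries the email address.
    return urlencode({"username": email, "password": password})

def bearer_header(token: str) -> dict:
    """Header for calling protected routes with the issued JWT."""
    return {"Authorization": f"Bearer {token}"}
```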
Frontend
- Start with: npm run dev in 7project/frontend
- Ensure VITE_BACKEND_URL is set to the backend URL (e.g., http://127.0.0.1:8000)
- Open http://localhost:5173
- Login, view latest transactions, filter, and add new transactions from the UI.
Presentation Video
YouTube Link: [Insert your YouTube link here]
Duration: [X minutes Y seconds]
Video Includes:
- Project overview and architecture
- Live demonstration of key features
- Code walkthrough
- Build and deployment showcase
Troubleshooting
Common Issues
Issue 1: [Common problem]
Symptoms: [What the user sees] Solution: [Step-by-step fix]
Issue 2: [Another common problem]
Symptoms: [What the user sees] Solution: [Step-by-step fix]
Debug Commands
# Useful commands for debugging
# Log viewing commands
# Service status checks
Progress Table
Be honest and detailed in your assessments. This information is used for individual grading. Link to the specific commit on GitHub for each contribution.
| Task/Component | Assigned To | Status | Time Spent | Difficulty | Notes |
|---|---|---|---|---|---|
| Project Setup & Repository | Lukas | ✅ Complete | [X hours] | Medium | [Any notes] |
| Design Document | Both | ✅ Complete | 4 Hours | Easy | [Any notes] |
| Backend API Development | Dejan | ✅ Complete | 12 hours | Medium | [Any notes] |
| Database Setup & Models | Lukas | ✅ Complete | [X hours] | Medium | [Any notes] |
| Frontend Development | Dejan | ✅ Complete | 17 hours | Medium | [Any notes] |
| Docker Configuration | Lukas | ✅ Complete | 3 hours | Easy | [Any notes] |
| Cloud Deployment | Lukas | ✅ Complete | [X hours] | Hard | Using Talos cluster running in proxmox - easy snapshots etc. Frontend deployed at Cloudflare pages. |
| Testing Implementation | Dejan | ✅ Complete | 16 hours | Medium | [Any notes] |
| Documentation | Both | 🔄 In Progress | [X hours] | Easy | [Any notes] |
| Presentation Video | Both | ❌ Not Started | [X hours] | Medium | [Any notes] |
Legend: ✅ Complete | 🔄 In Progress | ⏳ Pending | ❌ Not Started
Hour Sheet
Link to the specific commit on GitHub for each contribution.
[Lukáš]
Hour Sheet
Name: Lukáš Trkan
| Date | Activity | Hours | Description | Representative Commit / PR |
|---|---|---|---|---|
| 18.9. - 19.9. | Initial Setup & Design | 40 | Repository init, system design diagrams, basic Terraform setup | feat(infrastructure): add basic terraform resources |
| 20.9. - 5.10. | Core Infrastructure & CI/CD | 12 | K8s setup (ArgoCD), CI/CD workflows, RabbitMQ, Redis, Celery workers, DB migrations | PR #2, feat(infrastructure): add rabbitmq cluster |
| 6.10. - 9.10. | Frontend Infra & DB | 5 | Deployed frontend to Cloudflare, setup metrics, created database models | PR #16 (Cloudflare), PR #19 (DB structure) |
| 10.10. - 11.10. | Backend | 5 | Implemented OAuth support (MojeID, BankID) | feat(auth): add support for OAuth and MojeID |
| 12.10. | Infrastructure | 2 | Added database backups | feat(infrastructure): add backups |
| 16.10. | Infrastructure | 4 | Implemented secrets management, fixed deployment/env variables | PR #29 (Deployment envs) |
| 17.10. | Monitoring | 1 | Added Sentry logging | feat(app): add sentry loging |
| 21.10. - 22.10. | Backend | 8 | Added ČSAS bank connection | PR #32 (Fix React OAuth) |
| 29.10. - 30.10. | Backend | 5 | Implemented transaction encryption, add bank scraping | PR #39 (CSAS Scraping) |
| 30.10. | Monitoring | 6 | Implemented Loki logging and basic Prometheus metrics | PR #42 (Prometheus metrics) |
| 9.11. | Monitoring | 2 | Added custom Prometheus metrics | PR #46 (Prometheus custom metrics) |
| 11.11. | Tests | 1 | Investigated and fixed broken Pytest environment | fix(tests): set pytest env |
| 11.11. - 12.11. | Features & Deployment | 6 | Added cron support, email sender service, updated workers & image | PR #49 (Email), PR #50 (Update workers) |
| 18.9 - 14.11 | Documentation | 8 | Updated report.md, design docs, and tfvars.example | Create design.md, update report |
| Total | | 105 | | |
Dejan
| Date | Activity | Hours | Description | Representative Commit / PR |
|---|---|---|---|---|
| 25.9. | Design | 2 | Design | |
| 9.10 to 11.10. | Backend APIs | 14 | Implemented Backend APIs | PR #26, 20-create-a-controller-layer-on-backend-side |
| 13.10 to 15.10. | Frontend Development | 8 | Created user interface mockups | PR #28, frontend basics |
| Continually | Documentation | 7 | Documenting the dev process | |
| 21.10 to 23.10 | Tests, frontend | 10 | Test basics, balance charts, and frontend improvement | PR #31, 30 create tests and set up a GitHub pipeline |
| 28.10 to 30.10 | CI | 6 | Integrated tests with test database setup on github workflows | PR #28, frontend basics |
| 28.10 to 30.10 | Frontend | 8 | UI improvements and exchange rate API integration | PR #28, frontend basics |
| 4.11 to 6.11 | Tests | 6 | Test fixes improvement, more integration and e2e | PR #28, frontend basics |
| 4.11 to 6.11 | Frontend | 6 | Fixes, Improved UI, added support for mobile devices | PR #28, frontend basics |
| 11.11 | Backend APIs | 4 | Moved rates API, mock bank to Backend, few fixes | PR #28, frontend basics |
| 11.11 to 12.11 | Tests | 3 | Local testing DB container, few fixes | PR #28, frontend basics |
| 12.11 | Frontend | 3 | Enabled multiple transaction edits at once, CSAS button state | PR #28, frontend basics |
| 13.11 | Video | 3 | Video | |
| Total | | 80 | | |
Group Total: [XXX.X] hours
Final Reflection
What We Learned
[Reflect on the key technical and collaboration skills learned during this project]
Challenges Faced
Slow cluster performance
This was caused by a single SATA SSD running all the VMs. It was solved by adding a second NVMe disk dedicated to the Talos VMs.
Stuck IaC deployment
If a deployed module (a Helm chart, for example) was not configured properly, it would get stuck and time out, leaving a namespace that could not be deleted. This was solved by taking snapshots in Proxmox and restoring them whenever this happened.
If We Did This Again
Different framework
FastAPI lacks usable built-in support for database migrations, and integrating Alembic was a bit tricky. Integrating FastAPI's auth system with the React frontend was also tricky, since there is no official project template. Using .NET (which we considered initially) would probably have solved these issues.
[What would you do differently? What worked well that you'd keep?]
Individual Growth
[Lukas]
This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years). I had some prior experience with Terraform/OpenTofu from work, but this improved my understanding of it.
The biggest challenge for me was time tracking, since I am used to tracking time per project, not per task (and I am bad even at that :) ).
It was also an interesting experience to be the one responsible for the initial project structure/design/setup used not only by myself.
[Personal reflection on growth, challenges, and learning]
[Dejan]
Since I do not have a job, this project was probably the most complex one I have ever worked on. It was also the first school project where I was encouraged to use AI.
Report Completion Date: [Date] Last Updated: 13.11.2025