45 Commits

Author SHA1 Message Date
ribardej
16f660ea5b feat(docs): finalized checklist.md 2025-11-16 21:07:16 +01:00
7a0d7dc4af update report 2025-11-16 17:30:51 +01:00
fabdff3bef update checklist 2025-11-16 17:27:13 +01:00
1c1130b9b0 update checklist 2025-11-16 17:21:45 +01:00
7d7698450d feat(deployment): Optimize Dockerfile 2025-11-16 17:07:32 +01:00
ribardej
db9092b78f feat(docs): checklist.md and report.md update 2025-11-15 23:48:29 +01:00
3557b3ea13 updated Dockerfile 2025-11-15 18:23:09 +01:00
4a1a9f03a1 Merge remote-tracking branch 'origin/main' 2025-11-15 13:55:51 +01:00
b1be15f559 updated docs 2025-11-15 13:55:41 +01:00
ribardej
515106b238 feat(docs): report.md update 2025-11-14 22:54:45 +01:00
ribardej
b5290119e9 feat(docs): report.md update 2025-11-14 17:56:08 +01:00
0beb889f5e updated docs 2025-11-14 17:32:11 +01:00
ribardej
0a96c32c93 Merge remote-tracking branch 'origin/main' 2025-11-14 17:24:04 +01:00
ribardej
f1034f6ed5 feat(docs): report.md update 2025-11-14 17:23:55 +01:00
0f729d28d1 Merge pull request #54 from dat515-2025/merge/core_simplificcation
refactor(core): simplify core module
2025-11-14 17:14:47 +01:00
c689caea88 refactor(core): fix tests 2025-11-14 16:51:21 +01:00
8c20deb690 refactor(core): simplify core module 2025-11-14 16:42:35 +01:00
39979b51ee update report 2025-11-14 15:20:16 +01:00
da0c77101d Merge pull request #53 from dat515-2025/test_arm_build
build arm64 image
2025-11-14 00:58:33 +01:00
a5a83e5d07 update docs 2025-11-14 00:20:19 +01:00
3749aa4525 Also add amd64 2025-11-14 00:16:36 +01:00
94aa64addc build arm64 image 2025-11-14 00:03:06 +01:00
ba1677b2d3 add README.md
2025-11-13 15:50:19 +01:00
ribardej
8ea1ef9eea Merge remote-tracking branch 'origin/main' 2025-11-13 14:33:50 +01:00
ribardej
4b614902b2 feat(docs): report.md update 2025-11-13 14:33:42 +01:00
a152ecbe4d fix main.py 2025-11-13 14:30:31 +01:00
7d7dd98d0f Merge remote-tracking branch 'origin/main' 2025-11-13 14:16:30 +01:00
5aca071ac2 update report 2025-11-13 14:16:21 +01:00
ribardej
80991c7390 Merge remote-tracking branch 'origin/main' 2025-11-13 14:09:04 +01:00
ribardej
1403e0029b feat(docs): report.md update 2025-11-13 14:08:52 +01:00
aa63e51e6a update report 2025-11-13 14:06:35 +01:00
Dejan Ribarovski
4aaaba3956 Merge pull request #52 from dat515-2025/51-refactor-project-structure
feat(docs): codebase refactor - added src directory
2025-11-13 13:58:24 +01:00
ribardej
f0c28ba9e1 feat(docs): codebase refactor - added src directory 2025-11-13 13:55:40 +01:00
ribardej
b560c07d62 feat(docs): codebase refactor - added src directory 2025-11-13 13:52:27 +01:00
ribardej
f0b1452e30 feat(docs): codebase refactor - added src directory 2025-11-13 13:45:41 +01:00
6effb2793a update report 2025-11-13 13:24:24 +01:00
ribardej
ba7798259c feat(docs): report.md update 2025-11-13 12:36:05 +01:00
deb67f421e Create README.md 2025-11-13 12:24:29 +01:00
74557eeea8 update report 2025-11-13 12:06:15 +01:00
2e0619d03f update report 2025-11-13 11:52:07 +01:00
31add42d6d update report 2025-11-13 11:13:11 +01:00
4de79169a2 update report 2025-11-13 11:11:16 +01:00
59d53967b0 update report
2025-11-13 01:35:13 +01:00
f3086f8c73 update report, edit deployment, update tfvars.example 2025-11-13 00:04:31 +01:00
ribardej
fd437b1caf feat(frontend): implemented CSAS button responsiveness 2025-11-12 20:21:31 +01:00
176 changed files with 898 additions and 467 deletions

View File

@@ -15,7 +15,7 @@ on:
context:
description: "Docker build context path"
required: false
default: "7project/backend"
default: "7project/src/backend"
type: string
pr_number:
description: "PR number (required when mode=pr)"
@@ -94,7 +94,7 @@ jobs:
tags: |
${{ env.IMAGE_REPO }}:${{ env.TAG1 }}
${{ env.IMAGE_REPO }}:${{ env.TAG2 }}
platforms: linux/amd64
platforms: linux/arm64,linux/amd64
- name: Set outputs
id: set

View File

@@ -21,7 +21,7 @@ jobs:
with:
mode: pr
image_repo: lukastrkan/cc-app-demo
context: 7project/backend
context: 7project/src/backend
pr_number: ${{ github.event.pull_request.number }}
secrets: inherit
@@ -33,7 +33,7 @@ jobs:
runner: vhs
mode: pr
pr_number: ${{ github.event.pull_request.number }}
base_domain: ${{ vars.DEV_BASE_DOMAIN }}
base_domain: ${{ vars.PROD_DOMAIN }}
secrets: inherit
frontend:
@@ -77,7 +77,7 @@ jobs:
- name: Helm upgrade/install PR preview
env:
DEV_BASE_DOMAIN: ${{ secrets.BASE_DOMAIN }}
DEV_BASE_DOMAIN: ${{ vars.BASE_DOMAIN }}
RABBITMQ_PASSWORD: ${{ secrets.PROD_RABBITMQ_PASSWORD }}
DB_PASSWORD: ${{ secrets.PROD_DB_PASSWORD }}
DIGEST: ${{ needs.build.outputs.digest }}
@@ -90,9 +90,9 @@ jobs:
PR=${{ github.event.pull_request.number }}
RELEASE=myapp-pr-$PR
NAMESPACE=pr-$PR
helm upgrade --install "$RELEASE" ./7project/charts/myapp-chart \
helm upgrade --install "$RELEASE" ./7project/src/charts/myapp-chart \
-n "$NAMESPACE" --create-namespace \
-f 7project/charts/myapp-chart/values-dev.yaml \
-f 7project/src/charts/myapp-chart/values-dev.yaml \
--set prNumber="$PR" \
--set deployment="pr-$PR" \
--set domain="$DOMAIN" \

View File

@@ -4,9 +4,9 @@ on:
push:
branches: [ "main" ]
paths:
- 7project/backend/**
- 7project/frontend/**
- 7project/charts/myapp-chart/**
- ../../7project/src/backend/**
- ../../7project/src/frontend/**
- ../../7project/src/charts/myapp-chart/**
- .github/workflows/deploy-prod.yaml
- .github/workflows/build-image.yaml
- .github/workflows/frontend-pages.yml
@@ -32,7 +32,7 @@ jobs:
with:
mode: prod
image_repo: lukastrkan/cc-app-demo
context: 7project/backend
context: 7project/src/backend
secrets: inherit
get_urls:
@@ -103,9 +103,9 @@ jobs:
SMTP_FROM: ${{ secrets.SMTP_FROM }}
UNIRATE_API_KEY: ${{ secrets.UNIRATE_API_KEY }}
run: |
helm upgrade --install myapp ./7project/charts/myapp-chart \
helm upgrade --install myapp ./7project/src/charts/myapp-chart \
-n prod --create-namespace \
-f 7project/charts/myapp-chart/values-prod.yaml \
-f 7project/src/charts/myapp-chart/values-prod.yaml \
--set deployment="prod" \
--set domain="$DOMAIN" \
--set domain_scheme="$DOMAIN_SCHEME" \

View File

@@ -35,7 +35,7 @@ jobs:
runs-on: ubuntu-latest
defaults:
run:
working-directory: 7project/frontend
working-directory: 7project/src/frontend
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -45,7 +45,7 @@ jobs:
with:
node-version: '20'
cache: 'npm'
cache-dependency-path: 7project/frontend/package-lock.json
cache-dependency-path: 7project/src/frontend/package-lock.json
- name: Install dependencies
run: npm ci
@@ -61,7 +61,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: frontend-dist
path: 7project/frontend/dist
path: 7project/src/frontend/dist
deploy:
name: Deploy to Cloudflare Pages

View File

@@ -46,21 +46,21 @@ jobs:
- name: Add test dependencies to requirements
run: |
echo "pytest==8.4.2" >> ./7project/backend/requirements.txt
echo "pytest-asyncio==1.2.0" >> ./7project/backend/requirements.txt
echo "pytest==8.4.2" >> ./7project/src/backend/requirements.txt
echo "pytest-asyncio==1.2.0" >> ./7project/src/backend/requirements.txt
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r ./7project/backend/requirements.txt
pip install -r ./7project/src/backend/requirements.txt
- name: Run Alembic migrations
run: |
alembic upgrade head
working-directory: ./7project/backend
working-directory: ./7project/src/backend
- name: Run tests with pytest
env:
PYTEST_RUN_CONFIG: "True"
run: pytest
working-directory: ./7project/backend
working-directory: ./7project/src/backend

8
.idea/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml

16
7project/.gitignore vendored
View File

@@ -1,8 +1,8 @@
/tofu/controlplane.yaml
/tofu/kubeconfig
/tofu/talosconfig
/tofu/terraform.tfstate
/tofu/terraform.tfstate.backup
/tofu/worker.yaml
/tofu/.terraform.lock.hcl
/tofu/.terraform/
/src/tofu/controlplane.yaml
/src/tofu/kubeconfig
/src/tofu/talosconfig
/src/tofu/terraform.tfstate
/src/tofu/terraform.tfstate.backup
/src/tofu/worker.yaml
/src/tofu/.terraform.lock.hcl
/src/tofu/.terraform/

8
7project/.idea/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml

View File

@@ -1,43 +1,6 @@
# Lab 6: Design Document for Course Project
| Lab 6: | Design Document for Course Project |
| ----------- | ---------------------------------- |
| Subject: | DAT515 Cloud Computing |
| Deadline: | **September 19, 2025 23:59** |
| Grading: | No Grade |
| Submission: | Group |
## Table of Contents
- [Table of Contents](#table-of-contents)
- [1. Design Document (design.md)](#1-design-document-designmd)
The design document is the first deliverable for your project.
We separated this out as a separate deliverable, with its own deadline, to ensure that you have a clear plan before you start coding.
This part only needs a cursory review by the teaching staff to ensure it is sufficiently comprehensive, while still realistic.
The teaching staff will assign you to a project mentor who will provide guidance and support throughout the development process.
## 1. Design Document (design.md)
You are required to prepare a design document for your application.
The design doc should be brief, well-organized and easy to understand.
The design doc should be prepared in markdown format and named `design.md` and submitted in the project group's repository.
Remember that you can use [mermaid diagrams](https://github.com/mermaid-js/mermaid#readme) in markdown files.
The design doc **should include** the following sections:
- **Overview**: A brief description of the application and its purpose.
- **Architecture**: The high-level architecture of the application, including components, interactions, and data flow.
- **Technologies**: The cloud computing technologies or services used in the application.
- **Deployment**: The deployment strategy for the application, including any infrastructure requirements.
The design document should be updated throughout the development process and reflect the final implementation of your project.
Optional sections may include:
- Security: The security measures implemented in the application to protect data and resources.
- Scalability: The scalability considerations for the application, including load balancing and auto-scaling.
- Monitoring: The monitoring and logging strategy for the application to track performance and detect issues.
- Disaster Recovery: The disaster recovery plan for the application to ensure business continuity in case of failures.
- Cost Analysis: The cost analysis of running the application on the cloud, including pricing models and cost-saving strategies.
- References: Any external sources or references used in the design document.
# Personal Finance Tracker
## Folder Structure
- meetings: Contains notes from meetings
- src: Source code for the project
- checklist: Project checklist and self-assessment tracking
- report.md: Detailed report of the project

View File

@@ -1,8 +0,0 @@
FROM python:3.11-trixie
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD alembic upgrade head && uvicorn app.app:fastApi --host 0.0.0.0 --port 8000
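The commit history above includes `feat(deployment): Optimize Dockerfile`, and the checklist credits a rootless image. As a rough, hypothetical sketch only (base image, user name and layer layout are assumptions, not the repository's actual replacement), an optimized non-root variant of the file removed above might look like:
```dockerfile
# Hypothetical sketch of an optimized, rootless Dockerfile; not the repository's actual file.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Run the application as an unprivileged user instead of root
RUN useradd --create-home appuser && chown -R appuser /app
USER appuser
EXPOSE 8000
CMD ["sh", "-c", "alembic upgrade head && uvicorn app.app:fastApi --host 0.0.0.0 --port 8000"]
```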

View File

@@ -1,6 +0,0 @@
import app.celery_app # noqa: F401
from app.workers.celery_tasks import send_email
def enqueue_email(to: str, subject: str, body: str) -> None:
send_email.delay(to, subject, body)
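The removed helper above only enqueues the task; the worker-side counterpart is not shown in this diff. A minimal, hypothetical sketch of such a Celery task (module path, broker URL and SMTP settings are assumptions; in production they would come from the RABBITMQ_* and SMTP_* variables) could look like:
```python
# Hypothetical sketch of the worker-side task that send_email.delay() would invoke.
import smtplib
from email.message import EmailMessage

from celery import Celery

# Broker URL is a placeholder; the project reads RabbitMQ credentials from environment variables.
celery_app = Celery("app", broker="amqp://guest:guest@localhost:5672//")

@celery_app.task(name="send_email")
def send_email(to: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    # Host/port are placeholders; SMTP_HOST, SMTP_PORT, SMTP_FROM etc. would be used in production.
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.send_message(msg)
```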

View File

@@ -1,4 +0,0 @@
import uvicorn
if __name__ == "__main__":
uvicorn.run("app.app:app", host="0.0.0.0", log_level="info")

View File

@@ -7,64 +7,65 @@ Focus on areas that align with your project goals and interests.
The core deliverables are required.
This means that you must get at least 2 points for each item in this category.
| **Category** | **Item** | **Max Points** | **Points** |
|----------------------------------| --------------------------------------- | -------------- |-------------------------------------------------|
| **Core Deliverables (Required)** | | | |
| Codebase & Organization | Well-organized project structure | 5 | 5 |
| | Clean, readable code | 5 | 4 |
| | Use planning tool (e.g., GitHub issues) | 5 | 4 |
| | Proper version control usage | 5 | 5 |
| 23 | Complete source code | 5 | 5 |
| Documentation | Comprehensive reproducibility report | 10 | 4-5 |
| | Updated design document | 5 | 2 |
| | Clear build/deployment instructions | 5 | 2 |
| | Troubleshooting guide | 5 | 1 |
| | Completed self-assessment table | 5 | 2 |
| 14 | Hour sheets for all members | 5 | 3 |
| Presentation Video | Project demonstration | 5 | 0 |
| | Code walk-through | 5 | 0 |
| 0 | Deployment showcase | 5 | 0 |
| **Technical Implementation** | | | |
| Application Functionality | Basic functionality works | 10 | 8 |
| | Advanced features implemented | 10 | 0 |
| | Error handling & robustness | 10 | 4 |
| 16 | User-friendly interface | 5 | 4 |
| Backend & Architecture | Stateless web server | 5 | 5 |
| | Stateful application | 10 | ? WHAT DOES THIS MEAN |
| | Database integration | 10 | 10 |
| | API design | 5 | 5 |
| 20 | Microservices architecture | 10 | 0 |
| Cloud Integration | Basic cloud deployment | 10 | 10 |
| | Cloud APIs usage | 10 | ? WHAT DOES THIS MEAN |
| | Serverless components | 10 | 0 |
| 10 | Advanced cloud services | 5 | 0 |
| **DevOps & Deployment** | | | |
| Containerization | Basic Dockerfile | 5 | 5 |
| | Optimized Dockerfile | 5 | 0 |
| | Docker Compose | 5 | 5 - dev only |
| 15 | Persistent storage | 5 | 5 |
| Deployment & Scaling | Manual deployment | 5 | 5 |
| | Automated deployment | 5 | 5 |
| | Multiple replicas | 5 | 5 |
| 20 | Kubernetes deployment | 10 | 10 |
| **Quality Assurance** | | | |
| Testing | Unit tests | 5 | 2 |
| | Integration tests | 5 | 2 |
| | End-to-end tests | 5 | 5 |
| 9 | Performance testing | 5 | 0 |
| Monitoring & Operations | Health checks | 5 | 5 |
| | Logging | 5 | 2 - only to terminal add logstash |
| 9 | Metrics/Monitoring | 5 | 2 - only DB, need to create Prometheus endpoint |
| Security | HTTPS/TLS | 5 | 5 |
| | Authentication | 5 | 5 |
| 15 | Authorization | 5 | 5 |
| **Innovation & Excellence** | | | |
| Advanced Features and | AI/ML Integration | 10 | 0 |
| Technical Excellence | Real-time features | 10 | 0 |
| | Creative problem solving | 10 | ? |
| | Performance optimization | 5 | 2 |
| 2 | Exceptional user experience | 5 | 0 |
| **Total** | | **255** | **153** |
| **Category** | **Item** | **Max Points** | **Points** | **Comment** |
|:---------------------------------|:----------------------------------------|:---------------|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Core Deliverables (Required)** | | | | |
| Codebase & Organization | Well-organized project structure | 5 | 5 | Project is well-organized, each part is separated (backend, frontend, IaC) and these parts are separated even more (modules, packages...) |
| | Clean, readable code | 5 | 4 | Should be readable (function names should help), but readability can always be improved |
| | Use planning tool (e.g., GitHub issues) | 5 | 4 | We used GitHub issues |
| | Proper version control usage | 5 | 5 | We used branches for development and pull request reviews |
| 23 | Complete source code | 5 | 5 | The code is complete - the entire codebase is in this repository |
| Documentation | Comprehensive reproducibility report | 10 | 10 | Our report is precise; anybody should be able to reproduce our deployment by following the provided instructions |
| | Updated design document | 5 | 4 | Our design document was updated and merged into the report |
| | Clear build/deployment instructions | 5 | 5 | Should be clear |
| | Troubleshooting guide | 5 | 3 | When it comes to troubleshooting, there is never enough documentation |
| | Completed self-assessment table | 5 | 5 | Completed. |
| 32 | Hour sheets for all members | 5 | 5 | Filled. |
| Presentation Video | Project demonstration | 5 | 5 | Yes |
| | Code walk-through | 5 | 3 | There was not enough time to go through all of our code, so we just mentioned some parts of it. |
| 13 | Deployment showcase | 5 | 5 | Yes |
| **Technical Implementation** | | | | |
| Application Functionality | Basic functionality works | 10 | 10 | The app works as intended |
| | Advanced features implemented | 10 | 5 | OAuth, BankAPI connection (not only mock bank) |
| | Error handling & robustness | 10 | 5 | App notifies the user about errors; errors in code are also logged by Sentry and we get notified |
| 24 | User-friendly interface | 5 | 4 | Responsive interface with dark mode support; should be user-friendly enough |
| Backend & Architecture | Stateless web server | 5 | 5 | Yes, the web server is stateless - authentication uses JWT, not sessions. |
| | Stateful application | 10 | 10 | Yes, the app is stateful - data is persistently stored in the database |
| | Database integration | 10 | 10 | We have deployed 3 MariaDB nodes with replication, a MaxScale proxy and periodic backups. Connecting the app to this setup is the same as with a standalone DB. |
| | API design | 5 | 5 | Backend APIs are implemented with public Swagger docs |
| 33 | Microservices architecture | 10 | 3 | We have separated the API deployment and the worker deployment. Workers process slow tasks - emails, payment scraping. There is no need for another service in the current state, but adding one is easy. |
| Cloud Integration | Basic cloud deployment | 10 | 10 | Yes (In private cluster), using GH Actions and self-hosted runner. |
| | Cloud APIs usage | 10 | 8 | GH Actions deploys frontend to Cloudflare Pages, deployment creates CF tunnel record automatically |
| | Serverless components | 10 | 10 | We are using CF pages for frontend deployment |
| 33 | Advanced cloud services | 5 | 5 | Using CF provides us with DDoS protection and access rules, and it hides our IP |
| **DevOps & Deployment** | | | | |
| Containerization | Basic Dockerfile | 5 | 5 | Yes |
| | Optimized Dockerfile | 5 | 5 | Rootless Dockerfile |
| | Docker Compose | 5 | 5 | For development environment |
| 20 | Persistent storage | 5 | 5 | Yes, using Longhorn. |
| Deployment & Scaling | Manual deployment | 5 | 5 | Yes, possible by using Helm manually |
| | Automated deployment | 5 | 5 | Yes, with Github actions |
| | Multiple replicas | 5 | 5 | Yes, 3 pods with API, 3 pods with workers, 3 database pods |
| 25 | Kubernetes deployment | 10 | 10 | Yes |
| **Quality Assurance** | | | | |
| Testing | Unit tests | 5 | 4 | All workflows are covered by tests |
| | Integration tests | 5 | 5 | Yes |
| | End-to-end tests | 5 | 5 | Yes |
| 14 | Performance testing | 5 | 0 | No |
| Monitoring & Operations | Health checks | 5 | 5 | Yes |
| | Logging | 5 | 4 | Logs can be accessed easily using Grafana |
| | Metrics/Monitoring | 2 | 2 | Yes, visualised in Grafana |
| 14 | Custom Metrics for your project | 3 | 3 | Yes, API has /metrics endpoint providing information about FastAPI itself and custom information such as number of users or transactions. |
| Security | HTTPS/TLS | 5 | 5 | Yes |
| | Authentication | 5 | 5 | Yes |
| 15 | Authorization | 5 | 5 | Yes |
| **Innovation & Excellence** | | | | |
| Advanced Features and | AI/ML Integration | 10 | 0 | No |
| Technical Excellence | Real-time features | 10 | 0 | No |
| | Creative problem solving | 10 | 4 | Cron jobs for bank scraping |
| | Performance optimization | 5 | 4 | Delegating emails and scraping to workers, hosting frontend on CF |
| 11 | Exceptional user experience | 5 | 3 | |
| **Total** | | **255** | **257** | |
## Grading Scale
@@ -72,7 +73,7 @@ This means that you must get at least 2 points for each item in this category.
- **Maximum: 200+ points**
| Grade | Points |
| ----- | -------- |
|-------|----------|
| A | 180-200+ |
| B | 160-179 |
| C | 140-159 |

View File

@@ -1,73 +0,0 @@
# React + TypeScript + Vite
This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
Currently, two official plugins are available:
- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Babel](https://babeljs.io/) (or [oxc](https://oxc.rs) when used in [rolldown-vite](https://vite.dev/guide/rolldown)) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
## React Compiler
The React Compiler is not enabled on this template because of its impact on dev & build performances. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).
## Expanding the ESLint configuration
If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:
```js
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
// Other configs...
// Remove tseslint.configs.recommended and replace with this
tseslint.configs.recommendedTypeChecked,
// Alternatively, use this for stricter rules
tseslint.configs.strictTypeChecked,
// Optionally, add this for stylistic rules
tseslint.configs.stylisticTypeChecked,
// Other configs...
],
languageOptions: {
parserOptions: {
project: ['./tsconfig.node.json', './tsconfig.app.json'],
tsconfigRootDir: import.meta.dirname,
},
// other options...
},
},
])
```
You can also install [eslint-plugin-react-x](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-x) and [eslint-plugin-react-dom](https://github.com/Rel1cx/eslint-react/tree/main/packages/plugins/eslint-plugin-react-dom) for React-specific lint rules:
```js
// eslint.config.js
import reactX from 'eslint-plugin-react-x'
import reactDom from 'eslint-plugin-react-dom'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
// Other configs...
// Enable lint rules for React
reactX.configs['recommended-typescript'],
// Enable lint rules for React DOM
reactDom.configs.recommended,
],
languageOptions: {
parserOptions: {
project: ['./tsconfig.node.json', './tsconfig.app.json'],
tsconfigRootDir: import.meta.dirname,
},
// other options...
},
},
])
```

View File

@@ -1,5 +0,0 @@
export const BACKEND_URL: string =
import.meta.env.VITE_BACKEND_URL ?? '';
export const VITE_UNIRATE_API_KEY: string =
import.meta.env.VITE_UNIRATE_API_KEY ?? 'wYXMiA0bz8AVRHtiS9hbKIr4VP3k5Qff8XnQdKQM45YM3IwFWP6y73r3KMkv1590';

View File

@@ -9,6 +9,8 @@
**Project Name**: Personal Finance Tracker
**Deployment URL**: https://finance.ltrk.cz/
**Group Members**:
- 289229, Lukáš Trkan, lukastrkan
@@ -16,58 +18,129 @@
**Brief Description**:
Our application allows users to easily track their cash flow
through multiple bank accounts. Users can label their transactions with custom categories that can be later used for
through multiple bank accounts. Users can label their transactions with custom categories that can be later used for
filtering and visualization. New transactions are automatically fetched in the background.
## Architecture Overview
Our system is a fullstack web application composed of a React frontend, a FastAPI backend,
a PostgreSQL database, and asynchronous background workers powered by Celery with RabbitMQ.
Redis is available for caching/kv and may be used by Celery as a result backend. The backend
exposes REST endpoints for authentication (email/password and OAuth), users, categories,
transactions, exchange rates and bank APIs. A thin controller layer (FastAPI routers) lives under app/api.
Infrastructure for Kubernetes is provided via OpenTofu (Terraformcompatible) modules and
the application is packaged via a Helm chart.
Our system is a fullstack web application composed of a React frontend, a FastAPI backend,
a MariaDB database with MaxScale, and asynchronous background workers powered by Celery with RabbitMQ.
The backend exposes REST endpoints for authentication (email/password and OAuth), users, categories,
transactions, exchange rates and bank APIs. Infrastructure for Kubernetes is managed via Terraform/OpenTofu and
the application is packaged via a Helm chart. All of this is deployed on a private TalosOS cluster running on Proxmox VE
with CI/CD, and with public access provided over Cloudflare tunnels. Static files for the frontend are served via Cloudflare Pages.
Other services deployed in the cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.
### High-Level Architecture
```mermaid
flowchart LR
proc_queue[Message Queue] --> proc_queue_worker[Worker Service]
proc_queue_worker --> ext_mail[(Email Service)]
proc_cron[Cron] --> svc
proc_queue_worker --> ext_bank[(Bank API)]
proc_queue_worker --> db
client[Client/Frontend] <--> svc[Backend API]
flowchart TB
n3(("User")) <--> client["Frontend"]
proc_queue["Message Queue"] --> proc_queue_worker["Worker Service"]
proc_queue_worker -- SMTP --> ext_mail[("Email Service")]
proc_queue_worker <-- HTTP request/response --> ext_bank[("Bank API")]
proc_queue_worker <--> db[("Database")]
proc_cron["Cron"] <-- HTTP request/response --> svc["Backend API"]
svc --> proc_queue
svc <--> db[(Database)]
svc <--> api[(UniRate API)]
n2["Cloudflare tunnel"] <-- HTTP request/response --> svc
svc <--> db
svc <-- HTTP request/response --> api[("UniRate API")]
client <-- HTTP request/response --> n2
```
The workflow works in the following way:
- Client connects to the frontend. After login, frontend automatically fetches the stored transactions from
the database via the backend API and currency rates from UniRate API.
- When the client opts for fetching new transactions via the Bank API, the backend delegates the task
to a background worker service via the Message queue.
- Client connects to the frontend. After login, frontend automatically fetches the stored transactions from
the database via the backend API and currency rates from UniRate API.
- When the client opts in to fetching new transactions via the Bank API, a cron job triggers periodic fetching
using a background worker.
- After a successful load, these transactions are stored in the database and displayed to the client
- There is also a task planner that executes periodic tasks, such as automatically fetching new transactions from the Bank APIs (a sketch of such a periodic trigger follows this list)
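As a hypothetical illustration of the periodic trigger mentioned above (the service name, endpoint path and schedule are assumptions, not taken from the Helm chart), a Kubernetes CronJob calling the backend could look like:
```yaml
# Hypothetical sketch only; names, path and schedule are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: fetch-transactions
spec:
  schedule: "0 * * * *"          # every hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trigger
              image: curlimages/curl:8.10.1
              # POST to the backend endpoint that enqueues transaction fetching for the workers
              args: ["-fsS", "-X", "POST", "http://myapp-backend:8000/bank/fetch"]
```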
### Database Schema
```mermaid
classDiagram
direction BT
class alembic_version {
varchar(32) version_num
}
class categories {
varchar(100) name
varchar(255) description
char(36) user_id
int(11) id
}
class category_transaction {
int(11) category_id
int(11) transaction_id
}
class oauth_account {
char(36) user_id
varchar(100) oauth_name
varchar(4096) access_token
int(11) expires_at
varchar(1024) refresh_token
varchar(320) account_id
varchar(320) account_email
char(36) id
}
class transaction {
blob amount
blob description
char(36) user_id
date date
int(11) id
}
class user {
varchar(100) first_name
varchar(100) last_name
varchar(320) email
varchar(1024) hashed_password
tinyint(1) is_active
tinyint(1) is_superuser
tinyint(1) is_verified
longtext config
char(36) id
}
categories --> user: user_id -> id
category_transaction --> categories: category_id -> id
category_transaction --> transaction: transaction_id -> id
oauth_account --> user: user_id -> id
transaction --> user: user_id -> id
```
### Features
- The stored transactions are encrypted in the DB for security reasons.
- For every pull request, the full app is deployed on a separate URL and the tests are run by GitHub CI/CD
- On every push to main, the production app is automatically updated
- The UI is responsive on mobile devices
- Slow operations (emails, transactions fetching) are handled
in the background by Celery workers.
- The app is monitored via a Prometheus metrics endpoint and metrics are shown in a Grafana dashboard (a sketch of such an endpoint follows this list).
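As a minimal sketch of such a metrics endpoint (metric names and values below are placeholders, not the project's actual implementation, which also reports FastAPI request metrics):
```python
# Hypothetical sketch of a /metrics endpoint exposing custom gauges via prometheus_client.
from fastapi import FastAPI, Response
from prometheus_client import Gauge, generate_latest, CONTENT_TYPE_LATEST

app = FastAPI()

# Custom gauges; in the real app these would be refreshed from the database.
users_total = Gauge("app_users_total", "Number of registered users")
transactions_total = Gauge("app_transactions_total", "Number of stored transactions")

@app.get("/metrics")
async def metrics() -> Response:
    users_total.set(42)            # placeholder value instead of a DB query
    transactions_total.set(1234)   # placeholder value instead of a DB query
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
```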
### Components
- Frontend (frontend/): React + TypeScript app built with Vite. Talks to the backend via REST, handles login/registration, shows latest transactions, filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange rates and bankAPI. Uses FastAPI Users for auth (JWT + OAuth), SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling asynchronous tasks (e.g., sending verification emails, future background processing).
- Database (PostgreSQL): Persists users, categories, transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Transports background jobs from the API to the worker.
- Cache/Result Store (Redis): Available for caching or Celery result backend.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Argo CD, cert-manager, Cloudflare tunnel, etc.).
- Frontend (frontend/): React + TypeScript app built with Vite. Talks to the backend via REST, handles
login/registration, shows latest transactions, filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange
rates and bankAPI. Uses FastAPI Users for auth (JWT + OAuth), SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling background tasks (emails, transactions fetching).
- Database (MariaDB with Maxscale): Persists users, categories, transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Queues background tasks for Celery workers.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel,
etc.).
- Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.
### Other services deployed in the cluster
- Longhorn: distributed storage system providing persistent volumes for the database and other services
- Prometheus + Grafana: monitoring stack collecting metrics from the app and cluster, visualized in Grafana dashboards
- MariaDB operator: manages the MariaDB cluster based on Custom Resources, creates databases and users, handles backups
- RabbitMQ operator: manages the RabbitMQ cluster based on Custom Resources
- Cloudflare Tunnel: allows public access to the backend API running in the private cluster, providing HTTPS
### Technologies Used
- Backend: Python, FastAPI, FastAPI Users, SQLAlchemy, Pydantic, Alembic, Celery
@@ -75,227 +148,388 @@ to a background worker service via the Message queue.
- Database: MariaDB with Maxscale
- Background jobs: RabbitMQ, Celery
- Containerization/Orchestration: Docker, Docker Compose (dev), Kubernetes, Helm
- IaC/Platform: Proxmox, Talos, Cloudflare pages, OpenTofu (Terraform), cert-manager, MetalLB, Cloudflare Tunnel, Prometheus, Loki
- IaC/Platform: Proxmox, Talos, Cloudflare pages, OpenTofu (Terraform), cert-manager, MetalLB, Cloudflare Tunnel,
Prometheus, Loki
## Prerequisites
Here are software and hardware prerequisites for the development and production environments. This section also
describes
necessary environment variables and key dependencies used in the project.
### System Requirements
- Operating System (dev): Linux, macOS, or Windows with Docker support
- Operating System (prod): Linux with kubernetes
- Minimum RAM: 4 GB (8 GB recommended for running backend, frontend, and database together)
- Storage: 4 GB free (Docker images may require additional space)
#### Development
- OS: Tested on MacOS, Linux and Windows should work as well
- Minimum RAM: 8 GB
- Storage: 10 GB+ free
#### Production
- 1 control plane + 4 worker nodes
- CPU: 4 cores per node
- RAM: 8 GB per node
- Storage: 200 GB per node
### Required Software
- Docker Desktop or Docker Engine
- Docker Compose
#### Development
- Docker
- Docker Compose
- Node.js and npm
- Python 3.12+
- Python 3.12
- MariaDB 11
- Helm 3.12+ and kubectl 1.29+
#### Production
##### Minimal:
- domain name using Cloudflare's nameservers (needed for the tunnel and Pages)
- Kubernetes cluster
- kubectl
- Helm
- OpenTofu
### Environment Variables (common)
##### Our setup specifics:
# TODO: UPDATE
- Backend: SECRET, FRONTEND_URL, BACKEND_URL, DATABASE_URL, RABBITMQ_URL, REDIS_URL, UNIRATE_API_KEY
- Proxmox VE
- TalosOS cluster
- talosctl
- GitHub self-hosted runner with access to the cluster
- TailScale for remote access to cluster
- OAuth vars (Backend): MOJEID_CLIENT_ID/SECRET, BANKID_CLIENT_ID/SECRET (optional)
- Frontend: VITE_BACKEND_URL
### Environment Variables
#### Backend
- `MOJEID_CLIENT_ID`, `MOJEID_CLIENT_SECRET` \- OAuth client ID and secret for
[MojeID](https://www.mojeid.cz/en/provider/)
- `BANKID_CLIENT_ID`, `BANKID_CLIENT_SECRET` \- OAuth client ID and secret for [BankID](https://developer.bankid.cz/)
- `CSAS_CLIENT_ID`, `CSAS_CLIENT_SECRET` \- OAuth client ID and secret for [Česká
spořitelna](https://developers.erstegroup.com/docs/apis/bank.csas)
- `DATABASE_URL`(or `MARIADB_HOST`, `MARIADB_PORT`, `MARIADB_DB`, `MARIADB_USER`, `MARIADB_PASSWORD`) \- MariaDB
connection details
- `RABBITMQ_USERNAME`, `RABBITMQ_PASSWORD` \- credentials for RabbitMQ
- `SENTRY_DSN` \- Sentry DSN for error reporting
- `DB_ENCRYPTION_KEY` \- symmetric key for encrypting sensitive data in the database
- `SMTP_HOST`, `SMTP_PORT`, `SMTP_USERNAME`, `SMTP_PASSWORD`, `SMTP_USE_TLS`, `SMTP_USE_SSL`, `SMTP_FROM` \- SMTP
configuration (host, port, auth credentials, TLS/SSL options, sender).
- `UNIRATE_API_KEY` \- API key for UniRate.
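For a local run, these can be exported in the shell before starting the backend; the values below are placeholders and the exact variable names the application reads should be checked against the code:
```bash
# Placeholder values for local development; adjust to your own setup.
export MARIADB_HOST=127.0.0.1
export MARIADB_PORT=3306
export MARIADB_DB=finance
export MARIADB_USER=finance
export MARIADB_PASSWORD=CHANGE_ME
export RABBITMQ_USERNAME=guest
export RABBITMQ_PASSWORD=CHANGE_ME
export DB_ENCRYPTION_KEY=CHANGE_ME
export SMTP_HOST=smtp.example.com
export SMTP_PORT=587
export SMTP_FROM=noreply@example.com
export UNIRATE_API_KEY=CHANGE_ME
```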
#### Frontend
- `VITE_BACKEND_URL` \- URL of the backend API
### Dependencies (key libraries)
Backend: FastAPI, fastapi-users, SQLAlchemy, pydantic v2, Alembic, Celery, uvicorn
Backend: FastAPI, fastapi-users, SQLAlchemy, pydantic v2, Alembic, Celery, uvicorn, pytest
Frontend: React, TypeScript, Vite
## Local development
You can run the project with Docker Compose and Python virtual environment for testing and dev purposes
You can run the project with Docker Compose and Python virtual environment for testing and development purposes
### 1) Clone the Repository
```bash
git clone https://github.com/dat515-2025/Group-8.git
cd 7project
cd Group-8/7project/src
```
### 2) Install dependencies
Backend
```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Frontend
### 3) Run Docker containers
```bash
# In 7project/frontend
npm install
cd ..
docker compose up -d
```
### 3) Manual Local Run
### 4) Prepare the database
Backend
```bash
# From the 7project/ directory
docker compose up --build
# This starts: MariaDB, RabbitMQ
# Set environment variables (or create .env file)
# TODO: fix
export SECRET=CHANGE_ME_SECRET
export FRONTEND_DOMAIN_SCHEME=http://localhost:5173
export BANKID_CLIENT_ID=CHANGE_ME
export BANKID_CLIENT_SECRET=CHANGE_ME
export CSAS_CLIENT_ID=CHANGE_ME
export CSAS_CLIENT_SECRET=CHANGE_ME
export MOJEID_CLIENT_ID=CHANGE_ME
export MOJEID_CLIENT_SECRET=CHANGE_ME
# Apply DB migrations (Alembic)
# From 7project
bash upgrade_database.sh
```
# Run API
### 5) Run backend
Before running the backend, make sure to set the necessary environment variables, either by exporting them in your shell
or by setting them in a run configuration in your IDE.
```bash
cd backend
uvicorn app.app:fastApi --reload --host 0.0.0.0 --port 8000
```
### 6) Run Celery worker (optional, in another terminal)
```bash
cd Group-8/7project/src/backend
source .venv/bin/activate
celery -A app.celery_app.celery_app worker -l info
```
Frontend
### 7) Install frontend dependencies and run
```bash
# Configure backend URL for dev
echo 'VITE_BACKEND_URL=http://127.0.0.1:8000' > .env
cd ../frontend
npm i
npm run dev
# Open http://localhost:5173
```
- Backend default: http://127.0.0.1:8000 (OpenAPI at /docs)
- Frontend default: http://localhost:5173
- Backend available at: http://127.0.0.1:8000 (OpenAPI at /docs)
- Frontend available at: http://localhost:5173
## Build Instructions
### Backend
The app is separated into backend and frontend, so it also needs to be built separately. The backend is built into a Docker image
and the frontend is deployed as static files.
```bash
# run in project7/backend
docker buildx build --platform linux/amd64,linux/arm64 -t your_container_registry/your_name --push .
cd 7project/src/backend
# Don't forget to set the correct image tag with your registry and name
# For example lukastrkan/cc-app-demo or gitea.ltrk.dev/lukas/cc-app-demo
docker buildx build --platform linux/amd64,linux/arm64 -t CHANGE_ME --push .
```
### Frontend
```bash
# run in project7/frontend
cd 7project/src/frontend
npm ci
npm run build
```
## Deployment Instructions
### Setup Cluster
Deployment should work on any Kubernetes cluster. However, we are using 4 TalosOS virtual machines (1 control plane, 3 workers)
running on top of Proxmox VE.
1) Create 4 VMs with TalosOS
Deployment is tested on a TalosOS cluster with 1 control plane and 4 workers; the cluster needs to be set up and configured
manually. Terraform/OpenTofu is then used to deploy base services to the cluster. The app itself is deployed automatically
via GitHub Actions and a Helm chart. Frontend files are deployed to Cloudflare Pages.
### Setup Cluster
Deployment should work on any Kubernetes cluster. However, we are using 5 TalosOS virtual machines (1 control plane, 4
workers)
running on top of Proxmox VE.
1) Create at least 4 VMs with TalosOS (4 cores, 8 GB RAM, 200 GB disk)
2) Install talosctl for your OS: https://docs.siderolabs.com/talos/v1.10/getting-started/talosctl
3) Generate Talos config
```bash
# TODO: add commands
```
4) Edit the generated worker.yaml
- add google container registry mirror
- add modules from config generator
- add extramounts for persistent storage
- add kernel modules
4) Navigate to tofu directory
5) Apply the config to the VMs
```bash
#TODO: add config apply commands
cd 7project/src/tofu
```
5) Set IP addresses in environment variables
```bash
CONTROL_PLANE_IP=<control-plane-ip>
WORKER1_IP=<worker1-ip>
WORKER2_IP=<worker2-ip>
WORKER3_IP=<worker3-ip>
WORKER4_IP=<worker4-ip>
....
```
6) Verify the cluster is up
6) Create config files
```bash
# change my-cluster to your desired cluster name
talosctl gen config my-cluster https://$CONTROL_PLANE_IP:6443
```
7) Export kubeconfig
```bash
# TODO: add export command
7) Edit the generated configs
Apply the following changes to `worker.yaml`:
1) Add mounts for persistent storage to `machine.kubelet.extraMounts` section:
```yaml
extraMounts:
- destination: /var/lib/longhorn
type: bind
source: /var/lib/longhorn
options:
- bind
- rshared
- rw
```
2) Change `machine.install.image` to image with extra modules:
```yaml
image: factory.talos.dev/metal-installer/88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b:v1.11.1
```
or you can use the latest image generated at https://factory.talos.dev with the following options:
- Bare-metal machine
- your Talos OS version
- amd64 architecture
- siderolabs/iscsi-tools
- siderolabs/util-linux-tools
- (Optionally) siderolabs/qemu-guest-agent
Then copy "Initial Installation" value and paste it to the image field.
3) Add docker registry mirror to `machine.registries.mirrors` section:
```yaml
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
- https://registry-1.docker.io
```
8) Apply configs to the VMs
```bash
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file controlplane.yaml
talosctl apply-config --insecure --nodes $WORKER1_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER2_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER3_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER4_IP --file worker.yaml
```
9) Bootstrap the cluster and retrieve kubeconfig
```bash
export TALOSCONFIG=$(pwd)/talosconfig
talosctl config endpoint https://$CONTROL_PLANE_IP:6443
talosctl config node $CONTROL_PLANE_IP
talosctl bootstrap
talosctl kubeconfig .
```
You can now use a Kubernetes client like https://headlamp.dev/ with the generated kubeconfig file.
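Before installing anything, it may be worth confirming the cluster is reachable with the retrieved kubeconfig; a minimal check (assuming kubectl is already installed) is:
```bash
export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes -o wide   # all nodes should eventually report Ready
```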
### Install base services to the cluster
1) Copy and edit variables
### Install
1) Install base services to cluster
```bash
cd tofu
# copy and edit variables
cp terraform.tfvars.example terraform.tfvars
# authenticate to your cluster/cloud as needed, then:
```
- `metallb_ip_range` - set to range available in your network for load balancer services
- `mariadb_password` - password for internal mariadb user
- `mariadb_root_password` - password for root user
- `mariadb_user_name` - username for admin user
- `mariadb_user_host` - allowed hosts for admin user
- `mariadb_user_password` - password for admin user
- `metallb_maxscale_ip`, `metallb_service_ip`, `metallb_primary_ip`, `metallb_secondary_ip` - IPs for database
cluster,
set them to static IPs from the `metallb_ip_range`
- `s3_enabled`, `s3_bucket`, `s3_region`, `s3_endpoint`, `s3_key_id`, `s3_key_secret` - S3 compatible storage for
backups (optional)
- `phpmyadmin_enabled` - set to false if you want to disable phpmyadmin
- `rabbitmq-password` - password for RabbitMQ
- `cloudflare_account_id` - your Cloudflare account ID
- `cloudflare_api_token` - your Cloudflare API token with permissions to manage tunnels and DNS
- `cloudflare_email` - your Cloudflare account email
- `cloudflare_tunnel_name` - name for the tunnel
- `cloudflare_domain` - your domain name managed in Cloudflare
2) Deploy without Cloudflare module first
```bash
tofu init
tofu apply -exclude modules.cloudflare
tofu apply
```
2) Deploy the app using Helm
```bash
# Set the namespace
kubectl create namespace myapp || true
# Install/upgrade the chart with required values
helm upgrade --install myapp charts/myapp-chart \
-n myapp \
-f charts/myapp-chart/values.yaml \
--set image.backend.repository=myorg/myapp-backend \
--set image.backend.tag=latest \
--set env.BACKEND_URL="https://myapp.example.com" \
--set env.FRONTEND_URL="https://myapp.example.com" \
--set env.SECRET="CHANGE_ME_SECRET"
```
Adjust values to your registry and domain. The charts NOTES.txt includes additional examples.
3) Expose and access
- If using Cloudflare Tunnel or an ingress, configure DNS accordingly (see tofu/modules/cloudflare and deployment/tunnel.yaml).
- For quick testing without ingress:
```bash
kubectl -n myapp port-forward deploy/myapp-backend 8000:8000
kubectl -n myapp port-forward deploy/myapp-frontend 5173:80
```
### Verification
3) Deploy rest of the modules
```bash
# Check pods
kubectl -n myapp get pods
# Backend health
curl -i http://127.0.0.1:8000/
# OpenAPI
open http://127.0.0.1:8000/docs
# Frontend (if port-forwarded)
open http://localhost:5173
tofu apply
```
### Configure deployment
1) Create a self-hosted runner with access to the cluster, or make the cluster publicly accessible
2) Change `jobs.deploy.runs-on` in `.github/workflows/deploy-prod.yml` and in `.github/workflows/deploy-pr.yaml` to your
runner label
3) Add variables to GitHub in repository settings:
- `PROD_DOMAIN` - base domain for deployments (e.g. ltrk.cz)
- `DEV_FRONTEND_BASE_DOMAIN` - base domain for your cloudflare pages
4) Add secrets to GitHub in repository settings:
- CLOUDFLARE_ACCOUNT_ID - same as in tofu/terraform.tfvars
- CLOUDFLARE_API_TOKEN - same as in tofu/terraform.tfvars
- DOCKER_USER - your docker registry username
- DOCKER_PASSWORD - your docker registry password
- KUBE_CONFIG - content of your kubeconfig file for the cluster
- PROD_DB_PASSWORD - same as MARIADB_PASSWORD
- PROD_RABBITMQ_PASSWORD - same as rabbitmq-password in tofu/terraform.tfvars
- PROD_DB_ENCRYPTION_KEY - same as DB_ENCRYPTION_KEY
- MOJEID_CLIENT_ID
- MOJEID_CLIENT_SECRET
- BANKID_CLIENT_ID
- BANKID_CLIENT_SECRET
- CSAS_CLIENT_ID
- CSAS_CLIENT_SECRET
- SENTRY_DSN
- SMTP_HOST
- SMTP_PORT
- SMTP_USERNAME
- SMTP_PASSWORD
- SMTP_FROM
- UNIRATE_API_KEY
5) On GitHub, open the Actions tab, select "Deploy Prod" and run the workflow manually
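Alternatively, if the GitHub CLI is installed and authenticated, the same manual trigger can be started from a terminal (assuming the workflow name matches the one shown in the Actions tab):
```bash
gh workflow run "Deploy Prod" --ref main
gh run watch   # optionally follow the run until it finishes
```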
## Testing Instructions
The tests are located in 7project/backend/tests directory. All tests are run by GitHub actions on every pull request and push to main.
The tests are located in 7project/backend/tests directory. All tests are run by GitHub actions on every pull request and
push to main.
See the workflow [here](../.github/workflows/run-tests.yml).
If you want to run the tests locally, the preferred is to use a [bash script](backend/test-with-ephemeral-mariadb.sh)
that will start a [test DB container](backend/docker-compose.test.yml) and remove it afterward.
If you want to run the tests locally, the preferred way is to use a [bash script](backend/test_locally.sh)
that will start a test DB container with [docker compose](backend/docker-compose.test.yml) and remove it afterwards.
```bash
cd 7project/backend
bash test-with-ephemeral-mariadb.sh
cd 7project/src/backend
bash test_locally.sh
```
### Unit Tests
There are only 5 basic unit tests, since our service logic is very simple
```bash
bash test-with-ephemeral-mariadb.sh --only-unit
bash test_locally.sh --only-unit
```
### Integration Tests
There are 9 integration tests, testing the individual backend API logic
```bash
bash test-with-ephemeral-mariadb.sh --only-integration
bash test_locally.sh --only-integration
```
### End-to-End Tests
There are 7 e2e tests, testing more complex app logic
```bash
bash test-with-ephemeral-mariadb.sh --only-e2e
bash test_locally.sh --only-e2e
```
## Usage Examples
@@ -328,7 +562,12 @@ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
### Frontend
- Start with: npm run dev in 7project/frontend
- Start with:
```bash
cd 7project/src/frontend
npm run dev
```
- Ensure VITE_BACKEND_URL is set to the backend URL (e.g., http://127.0.0.1:8000)
- Open http://localhost:5173
- Login, view latest transactions, filter, and add new transactions from the UI.
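For API-only testing without the frontend, a token for the authenticated curl example referenced in this section can be obtained directly, assuming the default fastapi-users JWT login route is mounted under /auth/jwt (adjust the path if the project uses a different prefix; requires jq):
```bash
TOKEN=$(curl -s -X POST http://127.0.0.1:8000/auth/jwt/login \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=user@example.com&password=CHANGE_ME" | jq -r .access_token)
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
```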
@@ -337,37 +576,72 @@ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
## Presentation Video
**YouTube Link**: [Insert your YouTube link here]
**YouTube Link**: https://youtu.be/FKR85AVN8bI
**Duration**: [X minutes Y seconds]
**Duration**: 9 minutes 43 seconds
**Video Includes**:
- [ ] Project overview and architecture
- [ ] Live demonstration of key features
- [ ] Code walkthrough
- [ ] Build and deployment showcase
- [x] Project overview and architecture
- [x] Live demonstration of key features
- [x] Code walkthrough
- [x] Build and deployment showcase
## Troubleshooting
### Common Issues
#### Issue 1: [Common problem]
#### Issue 1: Unable to apply Cloudflare terraform module
**Symptoms**: [What the user sees]
**Solution**: [Step-by-step fix]
**Symptoms**: Terraform/OpenTofu apply fails during Cloudflare module deployment.
This is caused by a variable whose value is not known until the other modules have been applied.
#### Issue 2: [Another common problem]
**Solution**: Apply first without the Cloudflare module and then apply again.
**Symptoms**: [What the user sees]
**Solution**: [Step-by-step fix]
```bash
tofu apply -exclude modules.cloudflare
tofu apply
```
#### Issue 2: Pods are unable to start
**Symptoms**: Pods are unable to start and show an ImagePullBackOff error. This can be caused
either by hitting Docker Hub rate limits or by Docker Hub being down.
**Solution**: Make sure you updated the cluster config to use the registry mirror as described in the
"Setup Cluster" section.
### Debug Commands
Get a detailed description of the Deployment:
```bash
# Useful commands for debugging
# Log viewing commands
# Service status checks
kubectl describe deployment finance-tracker -n prod
```
Get a list of pods in the Deployment:
```bash
kubectl get pods -n prod
```
Check the logs of a specific pod; copy the value for <pod-name> from the command above (the --previous flag shows logs of a
failing pod, remove it if the pod is not failing):
```bash
kubectl logs <pod-name> -n prod --previous
```
See the service description:
```bash
kubectl describe service finance-tracker -n prod
```
Connect to the pod and run a bash shell:
```bash
kubectl exec -it <pod-name> -n prod -- /bin/bash
```
---
@@ -378,53 +652,67 @@ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
> This information is used for individual grading.
> Link to the specific commit on GitHub for each contribution.
| Task/Component | Assigned To | Status | Time Spent | Difficulty | Notes |
|-----------------------------------------------------------------------|-------------| ------------- |------------|------------| ----------- |
| [Project Setup & Repository](https://github.com/dat515-2025/Group-8#) | Lukas | ✅ Complete | [X hours] | Medium | [Any notes] |
| [Design Document](https://github.com/dat515-2025/Group-8/blob/main/6design/design.md) | Both | ✅ Complete | 4 Hours | Easy | [Any notes] |
| [Backend API Development](https://github.com/dat515-2025/Group-8/tree/main/7project/backend/app/api) | Dejan | ✅ Complete | 12 hours | Medium | [Any notes] |
| [Database Setup & Models](https://github.com/dat515-2025/Group-8/tree/main/7project/backend/app/models) | Lukas | 🔄 In Progress | [X hours] | Medium | [Any notes] |
| [Frontend Development](https://github.com/dat515-2025/Group-8/tree/main/7project/frontend) | Dejan | ✅ Complete | 17 hours | Medium | [Any notes] |
| [Docker Configuration](https://github.com/dat515-2025/Group-8/blob/main/7project/compose.yml) | Lukas | ✅ Complete | [X hours] | Easy | [Any notes] |
| [Cloud Deployment](https://github.com/dat515-2025/Group-8/blob/main/7project/deployment/app-demo-deployment.yaml) | Lukas | ✅ Complete | [X hours] | Hard | [Any notes] |
| [Testing Implementation](https://github.com/dat515-2025/group-name) | Dejan | ✅ Complete | 16 hours | Medium | [Any notes] |
| [Documentation](https://github.com/dat515-2025/group-name) | Both | 🔄 In Progress | [X hours] | Easy | [Any notes] |
| [Presentation Video](https://github.com/dat515-2025/group-name) | Both | ❌ Not Started | [X hours] | Medium | [Any notes] |
**Legend**: ✅ Complete | 🔄 In Progress | ⏳ Pending | ❌ Not Started
| Task/Component | Assigned To | Status | Time Spent | Difficulty | Notes |
|:----------------------------------------------------------------------------------------------------------|:------------|:-----------|:-----------|:-----------|:------|
| [Project Setup & Repository](https://github.com/dat515-2025/Group-8/pull/1) | Both | ✅ Complete | 10 Hours | Medium | |
| [Design Document](https://github.com/dat515-2025/Group-8/commit/f09f9eaa82d0953afe41f33c57ff63e0933a81ef) | Both | ✅ Complete | 4 Hours | Easy | |
| [Cluster setup ](https://github.com/dat515-2025/Group-8/commit/c8048d940df00874c290d99cdb4ad366bca6e95d) | Lukas | ✅ Complete | 30 hours | Hard | |
| [Backend API Development](https://github.com/dat515-2025/Group-8/pull/26) | Dejan | ✅ Complete | 22 hours | Medium | |
| [Database Setup & Models](https://github.com/dat515-2025/Group-8/pull/19) | Lukas | ✅ Complete | 5 hours | Medium | |
| [Frontend Development](https://github.com/dat515-2025/Group-8/pull/28) | Dejan | ✅ Complete | 32 hours | Medium | |
| [Docker Configuration](https://github.com/dat515-2025/Group-8/pull/1) | Lukas | ✅ Complete | 3 hours | Easy | |
| [Authentication](https://github.com/dat515-2025/Group-8/pull/23) | Both | ✅ Complete | 11 hours | Medium | |
| [Transactions loading](https://github.com/dat515-2025/Group-8/pull/32) | Lukas | ✅ Complete | 7 hours | Medium | |
| [Monitoring](https://github.com/dat515-2025/Group-8/pull/42/) | Lukas | ✅ Complete | 9 hours | Medium | |
| [Cloud Deployment](https://github.com/dat515-2025/Group-8/pull/16) | Both | ✅ Complete | 21 hours | Hard | |
| [Testing Implementation](https://github.com/dat515-2025/Group-8/pull/31/) | Both | ✅ Complete | 21 hours | Medium | |
| [Documentation](https://github.com/dat515-2025/Group-8/commit/515106b238bc032d5f7d5dcae931b5cb7ee2a281) | Both | ✅ Complete | 14 hours | Medium | |
| [Presentation Video](https://youtu.be/FKR85AVN8bI) | Both | ✅ Complete | 3 hours | Medium | |
## Hour Sheet
> Link to the specific commit on GitHub for each contribution.
### Lukáš
### Lukáš
| Date | Activity | Hours | Description |
|----------------|---------------------|------------|----------------------------------------------------|
| 4.10 to 10.10 | Initial Setup | 40 | Repository setup, project structure, cluster setup |
| 14.10 to 16.10 | Backend Development | 12 | Implemented user authentication - oauth |
| 8.10 to 12.10 | CI/CD | 10 | Created database schema and models |
| [Date] | Testing | [X.X] | Unit tests for API endpoints |
| [Date] | Documentation | [X.X] | Updated README and design doc |
| **Total** | | **[XX.X]** | |
| Date | Activity | Hours | Description | Representative Commit / PR |
|:----------------|:----------------------------|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------|
| 18.9. - 19.9. | Initial Setup & Design | 10 | Repository init, system design diagrams, basic Terraform setup | `feat(infrastructure): add basic terraform resources` |
| 20.9. - 5.10. | Core Infrastructure & CI/CD | 12 | K8s setup (ArgoCD), CI/CD workflows, RabbitMQ, Redis, Celery workers, DB migrations | `PR #2`, `feat(infrastructure): add rabbitmq cluster` |
| 6.10. - 9.10. | Frontend Infra & DB | 5 | Deployed frontend to Cloudflare, setup metrics, created database models | `PR #16` (Cloudflare), `PR #19` (DB structure) |
| 10.10. - 11.10. | Backend | 5 | Implemented OAuth support (MojeID, BankID) | `feat(auth): add support for OAuth and MojeID` |
| 12.10. | Infrastructure | 2 | Added database backups | `feat(infrastructure): add backups` |
| 16.10. | Infrastructure | 4 | Implemented secrets management, fixed deployment/env variables | `PR #29` (Deployment envs) |
| 17.10. | Monitoring | 1 | Added Sentry logging | `feat(app): add sentry loging` |
| 21.10. - 22.10. | Backend | 8 | Added ČSAS bank connection | `PR #32` (Fix React OAuth) |
| 29.10. - 30.10. | Backend | 5 | Implemented transaction encryption, add bank scraping | `PR #39` (CSAS Scraping) |
| 30.10. | Monitoring | 6 | Implemented Loki logging and basic Prometheus metrics | `PR #42` (Prometheus metrics) |
| 9.11. | Monitoring | 2 | Added custom Prometheus metrics | `PR #46` (Prometheus custom metrics) |
| 11.11. | Tests | 1 | Investigated and fixed broken Pytest environment | `fix(tests): set pytest env` |
| 11.11. - 12.11. | Features & Deployment | 6 | Added cron support, email sender service, updated workers & image | `PR #49` (Email), `PR #50` (Update workers) |
| 18.9 - 16.11 | Documentation | 8 | Updated report.md, design docs, and tfvars.example | `Create design.md`, `update report` |
| 15.11 | Video | 2 | Record my video part, edit video | |
| **Total** | | **107** | | |
### Dejan
| Date | Activity | Hours | Description |
|-----------------|----------------------|--------|---------------------------------------------------------------|
| 25.9. | Design | 2 | 6design |
| 9.10 to 11.10. | Backend APIs | 12 | Implemented Backend APIs |
| 13.10 to 15.10. | Frontend Development | 8 | Created user interface mockups |
| Continually | Documentation | 6 | Documenting the dev process |
| 21.10 to 23.10 | Tests, frontend | 10 | Test basics, balance charts, and frontend improvement |
| 28.10 to 30.10 | CI | 6 | Integrated tests with test database setup on github workflows |
| 28.10 to 30.10 | Frontend | 7 | UI improvements and exchange rate API integration |
| 4.11 to 6.11 | Tests | 6 | Test fixes improvement, more integration and e2e |
| 4.11 to 6.11 | Frontend | 6 | Fixes, Improved UI, added support for mobile devices |
| **Total** | | **63** | |
| Date | Activity | Hours | Description | Representative Commit / PR |
|:-----------------|:---------------------|:-------|:----------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------|
| 25.9. | Design | 2 | Initial design work | |
| 9.10. to 11.10. | Backend APIs | 14 | Implemented Backend APIs | `PR #26`, `20-create-a-controller-layer-on-backend-side` |
| 13.10. to 15.10. | Frontend Development | 8 | Created user interface mockups | `PR #28`, `frontend basics` |
| 21.10. to 23.10. | Tests, frontend | 10 | Test basics, balance charts, and frontend improvement | `PR #31`, `30 create tests and set up a GitHub pipeline` |
| 28.10. to 30.10. | CI/CD | 6 | Integrated tests with a test database setup in GitHub workflows | `PR #31`, `30 create tests and set up a GitHub pipeline` |
| 28.10. to 30.10. | Frontend | 8 | UI improvements and exchange rate API integration | `PR #35`, `34 improve frontend functionality` |
| 29.10. | Backend | 4 | Token invalidation, few fixes | `PR #38`, `fix(backend): implemented jwt token invalidation so users cannot use …` |
| 4.11. to 6.11. | Tests | 6 | Test fixes and improvements, more integration and e2e tests | `PR #45`, `feat(test): added more tests ` |
| 4.11. to 6.11. | Frontend | 8 | Fixes, rates API, Improved UI, added support for mobile devices | `PR #41, #44`, `feat(frontend): added CNB API and moved management into a new tab`, `43 fix the UI layout in chrome ` |
| 11.11. | Backend APIs | 4 | Moved rates API, mock bank to Backend, few fixes | `feat(backend): Moved the unirate API to the backend `, `feat(backend): moved mock bank to backend` |
| 11.11. to 12.11. | Tests | 3 | Local testing DB container, few fixes | `PR #48`, `fix(tests): fixed test runtime errors regarding database connection ` |
| 12.11. | Frontend | 3 | Enabled multiple transaction edits at once, CSAS button state | `feat(frontend): implemented multiple transaction selections in UI` |
| 13.11. | Video | 3 | Video | |
| 25.9. to 14.11. | Documentation | 8 | Documenting the dev process | multiple `feat(docs): report.md update` |
| **Total** | | **87** | | |
### Group Total: 194 hours
---
@@ -432,28 +720,86 @@ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
### What We Learned
#### Technical
- We learned how to use AI tools to help us with the project.
- We learned how to use Copilot for PR reviews.
- We learned how to troubleshoot issues across the different areas of the project (infrastructure, backend, frontend, CI/CD).
#### Collaboration
- Weekly meetings with the TA were great for syncing up on progress, discussing issues, and planning future work.
- Using GitHub issues and pull requests was very helpful for keeping track of progress.
### Challenges Faced
#### Slow cluster performance
This was caused by a single SATA SSD running all the VMs. We solved it by adding a second NVMe disk dedicated to the Talos VMs.
#### Stuck IaC deployment
If a deployed module (a Helm chart, for example) was not configured properly, it would get stuck and time out, leaving
behind a namespace that could not be deleted.
We solved this by taking snapshots in Proxmox and restoring them whenever this happened.
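For completeness, the usual manual workaround for a namespace stuck in `Terminating` looks roughly like the sketch below
(release and namespace names are hypothetical); we did not use it and simply restored a Proxmox snapshot instead.

```bash
# Sketch only: manual clean-up of a namespace stuck in "Terminating" after a
# failed Helm install. We did not use this in the project -- we restored
# Proxmox snapshots instead. Release and namespace names are hypothetical.

NS=myapp-preview   # hypothetical preview namespace
RELEASE=myapp      # hypothetical Helm release name

# Remove the failed release first so its resources stop being reconciled.
helm uninstall "$RELEASE" --namespace "$NS" --no-hooks || true

# If the namespace still hangs in Terminating, clear its finalizers via the
# /finalize subresource.
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```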
#### Not enough time to implement all features
Since this course is worth only 5 credits, we often had to prioritize our other courses over this project.
In the end, we were still able to implement all the necessary features.
### If We Did This Again
#### Different framework
FastAPI lacks usable built-in support for database migrations, and wiring in Alembic was a bit tricky.
Integrating the FastAPI auth system with the React frontend was also tricky, since there is no official project template for it.
Using .NET (which we considered initially) would probably have avoided these issues.
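For context, the migration workflow that `create_migration.sh` and `upgrade_database.sh` wrap essentially boils down to
two Alembic CLI calls; a minimal sketch (assuming it is run from the backend directory, with an illustrative revision message):

```bash
# Minimal sketch of the Alembic steps the helper scripts wrap
# (assumed to run from the backend directory; the revision message is illustrative).

# 1) Autogenerate a new revision by diffing the ORM models against the
#    current database schema.
alembic revision --autogenerate -m "add user config column"

# 2) Apply all pending revisions. The same command also runs on container
#    start (see the backend Dockerfile CMD: "alembic upgrade head && uvicorn ...").
alembic upgrade head
```

In practice, autogenerated revisions usually still need a manual review before they are applied.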
#### Private container registry
Using a private container registry would allow us to include environment variables directly in the image during the build.
This would simplify the deployment and CI/CD setup.
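A hedged sketch of how that could work is below; the registry host, image name, and build argument are hypothetical,
and the Dockerfile would need a matching `ARG`/`ENV` pair for the value to end up in the image.

```bash
# Sketch only: baking environment-specific configuration into the image at build
# time and pushing it to a private registry. Registry host, image name and the
# build argument are hypothetical.

docker build \
  --build-arg APP_ENV=prod \
  -t registry.internal.example.com/myapp/backend:prod \
  .

docker push registry.internal.example.com/myapp/backend:prod
```

Baking values in like this is only reasonable when the image is not publicly pullable, which is exactly why it would
require a private registry.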
#### Start sooner
The weekly meetings helped us start planning the project early and avoid spending too much time on details,
but with more time available we could have started the implementation itself sooner.
### Individual Growth
#### Lukas
This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years).
I had some prior experience with Terraform/OpenTofu from work, but this project improved my understanding of it.
The biggest challenge for me was time tracking, since I am used to tracking time per project, not per task
(and I am not great even at that :) ).
It was also an interesting experience to be the one responsible for the initial project structure/design/setup
used by more people than just myself.
#### Dejan
Since I do not have a job and I am a more theoretically oriented student (I am more into math, algorithms, and cryptography),
this project was probably the most complex one I have ever worked on.
For me, it was a great experience to work on an actually deployed fullstack app rather than only the local development
I was used to in the past.
It was also a great experience to collaborate with Lukas, who has prior experience with app deployment and
infrastructure.
Thanks to this, I learned a lot of new technologies and how to work in a team (it was my first time reviewing PRs).
It was challenging to wrap my head around the project structure and how everything was connected (and I still think I
have some gaps in my knowledge).
But I think that if I decide to create my own demo project in the future, I will definitely be able to work on it much
more efficiently.
---
**Report Completion Date**: 15.11.2025
**Last Updated**: 15.11.2025

7project/src/README.md

@@ -0,0 +1,14 @@
## Folder structure
- `src/`
- `backend/` - Python FastAPI backend application. Described in separate [README](./backend/README.md).
- `charts/`
- `myapp-chart/` - Helm chart for deploying the application, supports prod and dev environments. Described in
separate [README](./charts/README.md).
- `frontend/` - React frontend application. Described in separate
[README](./frontend/README.md).
- `tofu/` - Terraform/OpenTofu services deployment configurations. Described in separate
[README](./tofu/README.md).
- `compose.yaml` - Docker Compose file for local development
- `create_migration.sh` - script to create new Alembic database migration
- `upgrade_database.sh` - script to upgrade database to latest Alembic revision

7project/src/backend/.idea/.gitignore

@@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Editor-based HTTP Client requests
/httpRequests/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml


@@ -0,0 +1,16 @@
FROM python:3.11-slim
WORKDIR /app
RUN useradd --create-home --shell /bin/bash app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chown -R app:app /app
USER app
EXPOSE 8000
CMD ["sh", "-c", "alembic upgrade head && uvicorn app.app:fastApi --host 0.0.0.0 --port 8000"]


@@ -0,0 +1,23 @@
# Backend
This directory contains the backend code for the project. It is built with Python and the FastAPI framework, with
database migration support via Alembic.
## Directory structure
- `alembic/` - database migrations
- `app/` - main application code
- `api/` - API endpoints - routers/controllers with request handling logic
- `core/` - core application logic - database session management, security
- `models/` - database models
- `schemas/` - Endpoint schemas
- `services/` - utilities for various tasks
- `workers/` - background tasks
- `app.py` - FastAPI startup script
- `celery_app.py` - Celery startup script
- `tests/` - tests
- `docker-compose.test.yml` - docker compose for testing database
- `Dockerfile` - production Dockerfile
- `main.py` - App entrypoint
- `requirements.txt` - Python dependencies
- `test_locally.sh` - script to run tests with temporary database


@@ -49,4 +49,4 @@ def decode_and_verify_jwt(token: str, secret: str) -> dict:
secret,
algorithms=["HS256"],
options={"verify_aud": False},
) # verify_exp is True by default
) # verify_exp is True by default


@@ -1,10 +1,11 @@
import uuid
from typing import Optional
from typing import Optional, Dict, Any
from fastapi_users import schemas
class UserRead(schemas.BaseUser[uuid.UUID]):
first_name: Optional[str] = None
last_name: Optional[str] = None
config: Optional[Dict[str, Any]] = None
class UserCreate(schemas.BaseUserCreate):
first_name: Optional[str] = None


@@ -14,11 +14,10 @@ from httpx_oauth.oauth2 import BaseOAuth2
from app.models.user import User
from app.oauth.bank_id import BankID
from app.oauth.csas import CSASOAuth
from app.workers.celery_tasks import send_email
from app.oauth.custom_openid import CustomOpenID
from app.oauth.moje_id import MojeIDOAuth
from app.services.db import get_user_db
from app.core.queue import enqueue_email
SECRET = os.getenv("SECRET", "CHANGE_ME_SECRET")
@@ -87,7 +86,7 @@ class UserManager(UUIDIDMixin, BaseUserManager[User, uuid.UUID]):
"Pokud jsi registraci neprováděl(a), tento email ignoruj.\n"
)
try:
enqueue_email(to=user.email, subject=subject, body=body)
send_email.delay(user.email, subject, body)
except Exception as e:
print("[Email Fallback] To:", user.email)
print("[Email Fallback] Subject:", subject)


@@ -0,0 +1,4 @@
import uvicorn
if __name__ == "__main__":
uvicorn.run("app.app:fastApi", host="0.0.0.0", log_level="info")


@@ -4,13 +4,13 @@ set -euo pipefail
# Run tests against a disposable local MariaDB on host port 3307 using Docker Compose.
# Requirements: Docker, docker compose plugin, Python, Alembic, pytest.
# Usage:
# chmod +x ./test-with-ephemeral-mariadb.sh
# chmod +x ./test_locally.sh
# # From 7project/backend directory
# ./test-with-ephemeral-mariadb.sh [--only-unit|--only-integration|--only-e2e] [pytest-args...]
# ./test_locally.sh [--only-unit|--only-integration|--only-e2e] [pytest-args...]
# # Examples:
# ./test-with-ephemeral-mariadb.sh --only-unit -q
# ./test-with-ephemeral-mariadb.sh --only-integration -k "login"
# ./test-with-ephemeral-mariadb.sh --only-e2e -vv
# ./test_locally.sh --only-unit -q
# ./test_locally.sh --only-integration -k "login"
# ./test_locally.sh --only-e2e -vv
#
# This script will:
# 1) Start a MariaDB 11.4 container (ephemeral storage, port 3307)


@@ -34,15 +34,16 @@ def test_authenticated_route_requires_auth(client):
async def test_on_after_request_verify_enqueues_email(monkeypatch):
calls = {}
def fake_enqueue_email(to: str, subject: str, body: str):
calls.setdefault("emails", []).append({
"to": to,
"subject": subject,
"body": body,
})
class FakeCeleryTask:
def delay(to: str, subject: str, body: str):
calls.setdefault("emails", []).append({
"to": to,
"subject": subject,
"body": body,
})
# Patch the send_email Celery task used inside user_service
monkeypatch.setattr(user_service, "enqueue_email", fake_enqueue_email)
monkeypatch.setattr(user_service, "send_email", FakeCeleryTask)
class DummyUser:
def __init__(self, email):


@@ -0,0 +1,30 @@
# Helm chart deployment
This directory contains a Helm chart for deploying the app to a cluster; it supports both production and preview
deployments.
## Directory Structure
- `myapp-chart/`
- `templates/`
- `app-deployment.yaml` - Kubernetes Deployment for the application
- `cron.yaml` - cronjob for periodic tasks - periodically calls app endpoint
- `database.yaml` - Creates database using MariaDB operator. Production database is kept, but preview/dev
database is dropped after uninstalling the chart.
- `database-grant.yaml` - Defines rights for the database user
- `database-user.yaml` - Creates database user
- `monitoring.yaml` - Adds /metrics endpoint to Prometheus scraping
- `prod.yaml` - Application secrets
- `rabbitmq-cluster.yaml` - Defines RabbitMQ cluster for this deployment
- `rabbitmq-permission.yaml` - Defines RabbitMQ user permissions
- `rabbitmq-queue.yaml` - Defines RabbitMQ queue
- `rabbitmq-user.yaml` - Defines RabbitMQ user
- `rabbitmq-user-secret.yaml` - Defines RabbitMQ user secret
- `service.yaml` - Kubernetes Service for the application
- `tunnel.yaml` - Cloudflare tunnel for accessing the application
- `worker-deployment.yaml` - Kubernetes Deployment for the Celery worker, uses same image as the app-deployment,
but with different entrypoint
- `Chart.yaml` - Helm chart metadata
- `values.yaml` - list of all configurable values
- `values-dev.yaml` - default values for development/preview deployment
- `values-prod.yaml` - default values for production deployment

Some files were not shown because too many files have changed in this diff.