Zackriya-Solutions/meeting-minutes
Forks: 240 · Stars: 3453 (updated 2025-04-17 15:28:00)
License: MIT
Language: C++
A free and open-source, self-hosted, AI-based live meeting note taker and minutes summary generator that runs entirely on your local device (macOS and Windows support added; Linux support coming soon) https://meetily.zackriya.com/
Latest release: v0.0.3 (2025-03-03 23:56:20)
Meetily - AI-Powered Meeting Assistant




Open-source AI assistant for taking meeting notes
Website • Author • Discord Channel
An AI-Powered Meeting Assistant that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams who want to focus on discussions while automatically capturing and organizing meeting content without the need for external servers or complex infrastructure.
Overview
Why?
While there are many meeting transcription tools available, this solution stands out by offering:
- Privacy First: All processing happens locally on your device
- Cost Effective: Uses open-source AI models instead of expensive APIs
- Flexible: Works offline, supports multiple meeting platforms
- Customizable: Self-host and modify for your specific needs
- Intelligent: Built-in knowledge graph for semantic search across meetings
Features
✅ Modern, responsive UI with real-time updates
✅ Real-time audio capture (microphone + system audio)
✅ Live transcription using Whisper.cpp
🚧 Speaker diarization
✅ Local processing for privacy
✅ Packaged app for macOS and Windows
🚧 Export to Markdown/PDF
Note: We have a Rust-based implementation that explores better performance and native integration. It currently implements:
- ✅ Real-time audio capture from both microphone and system audio
- ✅ Live transcription using locally-running Whisper
- ✅ Speaker diarization
- ✅ Rich text editor for notes
We are currently working on:
- ✅ Export to Markdown/PDF
- ✅ Export to HTML
Release 0.0.3
A new release is available!
Please check out the release here.
What's New
- Windows Support: Fixed audio capture issues on Windows
- Improved Error Handling: Better error handling and logging for audio devices
- Enhanced Device Detection: More robust audio device detection across platforms
- Windows Installers: Added both .exe and .msi installers for Windows
- Improved transcription quality
- Frontend bug fixes and improvements
- Improved backend app build process
- Improved documentation
What's next?
- Database connection to save meeting minutes
- Improve summarization quality for smaller LLM models
- Add download options for meeting transcriptions
- Add download option for summary
Known issues
- Smaller LLMs can hallucinate, making summarization quality poor; please use a model above 32B parameters
- The backend build process requires CMake, a C++ compiler, etc., which makes it harder to build
- Backend build process requires Python 3.10 or newer
- Frontend build process requires Node.js
LLM Integration
The backend supports multiple LLM providers through a unified interface. Current implementations include:
Supported Providers
- Anthropic (Claude models)
- Groq (Llama 3.2 90B)
- Ollama (local models that support function calling)
Configuration
Create a .env file with your API keys:
# Required for Anthropic
ANTHROPIC_API_KEY=your_key_here
# Required for Groq
GROQ_API_KEY=your_key_here
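For reference, here is a minimal sketch of how a backend could pick up these keys at startup, assuming the python-dotenv package is installed; the select_provider helper is hypothetical and not part of the project's actual code:

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is available

# Load variables from the .env file created above into the process environment.
load_dotenv()

def select_provider() -> str:
    """Hypothetical helper: pick a provider based on which API keys are present."""
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic"
    if os.getenv("GROQ_API_KEY"):
        return "groq"
    # Fall back to a local Ollama model when no hosted API key is configured.
    return "ollama"

print(f"Using LLM provider: {select_provider()}")
```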
System Architecture
Core Components
- Audio Capture Service
  - Real-time microphone/system audio capture
  - Audio preprocessing pipeline
  - Built with Rust (experimental) and Python
- Transcription Engine
  - Whisper.cpp for local transcription
  - Supports multiple model sizes (tiny to large)
  - GPU-accelerated processing
- LLM Orchestrator
  - Unified interface for multiple providers
  - Automatic fallback handling
  - Chunk processing with overlap (see the sketch after this list)
  - Model configuration
- Data Services
  - ChromaDB: vector store for transcript embeddings
  - SQLite: process tracking and metadata storage
- API Layer
  - FastAPI endpoints:
    - POST /upload
    - POST /process
    - GET /summary/{id}
    - DELETE /summary/{id}
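The exact chunking logic lives in the backend, but as a rough illustration of what "chunk processing with overlap" means, here is a minimal sketch; the chunk size and overlap values are arbitrary examples, not the project's actual settings:

```python
def chunk_with_overlap(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split a transcript into fixed-size chunks that share `overlap` characters,
    so context at chunk boundaries is not lost when each chunk is summarized."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: a 5000-character transcript yields chunks of at most 2000 characters,
# each overlapping the previous one by 200 characters.
transcript = "word " * 1000
print([len(c) for c in chunk_with_overlap(transcript)])
```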
Deployment Architecture
- Frontend: Tauri app + Next.js (packaged executables)
- Backend: Python FastAPI
  - Transcript workers
  - LLM inference
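As a usage illustration, the endpoints listed under the API layer could be exercised like this once the backend is running. This is only a hedged sketch: the base URL, port, JSON field names, and response shapes are assumptions, not documented values.

```python
import requests  # assumption: the requests package is installed

BASE_URL = "http://localhost:8000"  # assumption: adjust to wherever the FastAPI backend listens

# Upload a transcript, trigger processing, then fetch the generated summary.
# The JSON field names below are illustrative guesses, not the actual schema.
upload = requests.post(f"{BASE_URL}/upload", json={"transcript": "meeting transcript text"})
upload.raise_for_status()

process = requests.post(f"{BASE_URL}/process", json={"id": upload.json().get("id")})
process.raise_for_status()

summary_id = process.json().get("id")
summary = requests.get(f"{BASE_URL}/summary/{summary_id}")
print(summary.json())

# Clean up the stored summary when no longer needed.
requests.delete(f"{BASE_URL}/summary/{summary_id}")
```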
Prerequisites
- Node.js 18+
- Python 3.10+
- FFmpeg
- Rust 1.65+ (for experimental features)
- CMake 3.22+ (for building the frontend)
- For Windows: Visual Studio Build Tools with C++ development workload
Setup Instructions
1. Frontend Setup
Run packaged version
Go to the releases page and download the latest version.
For Windows:
- Download either the .exe installer or the .msi package
- Once the installer is downloaded, double-click the file to run it
- If Windows warns about running an untrusted app, click "More info" and choose "Run anyway"
- Follow the installation wizard to complete the setup
- The application will be installed and available on your desktop
For macOS:
- Download the dmg_darwin_arch64.zip file
- Extract the file
- Double-click the .dmg file inside the extracted folder
- Drag the application to your Applications folder
- Run the following command in a terminal to remove the quarantine attribute:
xattr -c /Applications/meeting-minutes-frontend.app
Provide the necessary permissions for audio capture and microphone access.
Dev run
# Navigate to frontend directory
cd frontend
# Give execute permissions to clean_build.sh
chmod +x clean_build.sh
# run clean_build.sh
./clean_build.sh
2. Backend Setup
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes/backend
# Create and activate virtual environment
# On macOS/Linux:
python -m venv venv
source venv/bin/activate
# On Windows:
python -m venv venv
.\venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Add environment file with API keys
# On macOS/Linux:
echo -e "ANTHROPIC_API_KEY=your_api_key\nGROQ_API_KEY=your_api_key" | tee .env
# On Windows (PowerShell):
"ANTHROPIC_API_KEY=your_api_key`nGROQ_API_KEY=your_api_key" | Out-File -FilePath .env -Encoding utf8
# Configure environment variables for Groq
# On macOS/Linux:
export GROQ_API_KEY=your_groq_api_key
# On Windows (PowerShell):
$env:GROQ_API_KEY="your_groq_api_key"
# Build dependencies
# On macOS/Linux:
chmod +x build_whisper.sh
./build_whisper.sh
# On Windows:
.\build_whisper.bat
# Start backend servers
# On macOS/Linux:
./clean_start_backend.sh
# On Windows:
.\start_with_output.ps1
Development Guidelines
- Follow the established project structure
- Write tests for new features
- Document API changes
- Use type hints in Python code
- Follow ESLint configuration for JavaScript/TypeScript
Contributing
- Fork the repository
- Create a feature branch
- Submit a pull request
License
MIT License - Feel free to use this project for your own purposes.
Introducing Subscription
We are planning to add a subscription option so that you don't have to run the backend on your own server. This will help you scale better and run the service 24/7. This is based on a few requests we received. If you are interested, please fill out the form here.
Last updated: March 3, 2025
Star History
Recent releases (data as of 2025-04-17 15:27:57):
2025-03-03 23:56:20 v0.0.3
2025-02-08 20:50:23 v0.0.2
2025-02-01 22:55:27 v0.0.1.1
2025-02-01 21:49:43 v0.0.1
Topics:
ai, automation, cross-platform, linux, live, llm, mac, macos-app, meeting-minutes, meeting-notes, recorder, rust, whisper, whisper-cpp, windows