snpathaks/EcoContext-VoxID

🌟 EcoContext VoxID: Voice Recognition with a Green Twist! 🌍


Welcome to EcoContext VoxID, a cutting-edge Python application that blends voice identification with intelligent, context-aware energy optimization. Powered by a sleek neural network and fuzzy logic, this system identifies users through unique voiceprints, adapts to real-world environments, and sips power like a pro—all wrapped in a vibrant Tkinter GUI. Ready to revolutionize voice tech? Let’s dive in! 🎙️⚡️

✨ Why EcoContext VoxID Shines


Voiceprint Magic: Extracts 32D voice signatures using a custom LSTM neural network.
Smart Identification: Matches voices with pinpoint accuracy via cosine similarity.
Context Wizardry: Adapts to noise and movement with a fuzzy logic brain, ensuring peak performance.
Eco Power Mode: Optimizes energy use with dynamic power states—save the planet while you identify!
Slick GUI: A colorful Tkinter interface for registering users, identifying voices, and simulating contexts.
Audio Flexibility: Record live audio or conjure simulated samples for instant testing.

🚀 Get Started


Prerequisites
Python: 3.12+ (the fresher, the better!)
Libraries: pip install numpy librosa scikit-fuzzy tensorflow matplotlib sounddevice (tkinter ships with the Python standard library, so no pip install needed)

🎮 How to Play

Fire It Up:
Run main.py to unleash the vibrant Tkinter GUI.

Register Your Voice:
Hit the "Register User" tab.
Type a cool user ID.
Click "Record Voice Sample" to capture your voice or "Generate Fake Audio" for a quick test.
Press "Register User" to lock in your voiceprint.

Identify the Speaker:
Switch to the "Identify User" tab.
Record a fresh sample or use a simulated one from a registered user.
Click "Identify User" to reveal the match with a confidence score!

Simulate the Scene:
Head to the "Context Simulation" tab.
Tweak noise and movement sliders to mimic real-world vibes.
Click "Simulate Context" to see sensitivity, power mode, and battery life in action, complete with a snazzy bar plot.

🛠️ Under the Hood


1. VoiceprintExtractor: The Voice Alchemist

Mission: Turns audio into unique 32D voiceprints.
Input: 3-second audio clips (16 kHz), transformed into 40 MFCC features (128 time steps).
Tech: A dazzling LSTM neural network (TensorFlow/Keras):
Input: (128, 40)
Layers: LSTM (64 units) → Dropout (0.3) → LSTM (32 units) → Dense (64, ReLU) → Dense (32, no activation)
Output: A 32D voiceprint that’s as unique as you are!
Heads-Up: The model is untrained out of the box—train it with a dataset like VoxCeleb to unlock its full potential.
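For the curious, the architecture above can be sketched in Keras like so. Layer sizes and shapes come straight from this README; the function and variable names are illustrative, not the project's actual code:

```python
import tensorflow as tf

def build_voiceprint_model(time_steps=128, n_mfcc=40, embedding_dim=32):
    """LSTM encoder mapping (128, 40) MFCC frames to a 32-D voiceprint."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(time_steps, n_mfcc)),
        tf.keras.layers.LSTM(64, return_sequences=True),  # feed full sequence onward
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.LSTM(32),                         # collapse to one vector
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(embedding_dim, activation=None),  # raw 32-D embedding
    ])
    return model

model = build_voiceprint_model()
```

As the heads-up says, a model like this outputs meaningless embeddings until it is trained (e.g. with a triplet or contrastive loss on a corpus like VoxCeleb).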

2. Context Engine: The Fuzzy Brain

Mission: Adapts to noise and movement with fuzzy logic, keeping performance at its peak in any environment.

💭 Outputs:
Sensitivity (0–100): Fine-tunes audio processing.
Power mode (0–100): Sets the energy vibe (eco, balanced, performance).

Rules: Five clever rules (e.g., noisy room → crank sensitivity, chilling → go eco).

Note: Movement is currently a random placeholder—add a real sensor for next-level accuracy.

3. EnergyOptimizer: The Power Maestro

Mission: Keeps energy use lean and green.
Power States:
Sleep: 5 mW (snooze mode)
Listening: 14 mW (all ears)
Processing: 25 mW (full throttle)

Logic: Picks the perfect state based on power mode.
Battery Life: Predicts runtime for a 1000 mAh battery at 3.7V—stay powered longer!
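The battery-life math is simple enough to sketch in a few lines. The power draws and the 1000 mAh / 3.7 V battery come from this README; the duty-cycle split in the example is an illustrative assumption, not the project's actual scheduler:

```python
# Power draw (mW) for the three states listed above.
POWER_MW = {"sleep": 5, "listening": 14, "processing": 25}

def battery_life_hours(duty_cycle, capacity_mah=1000, voltage=3.7):
    """Estimated runtime: battery energy (mWh) divided by average draw (mW)."""
    energy_mwh = capacity_mah * voltage  # 1000 mAh * 3.7 V = 3700 mWh
    avg_draw_mw = sum(POWER_MW[state] * frac for state, frac in duty_cycle.items())
    return energy_mwh / avg_draw_mw

# Half asleep, mostly listening otherwise, with occasional processing bursts.
hours = battery_life_hours({"sleep": 0.5, "listening": 0.4, "processing": 0.1})
# Average draw: 5*0.5 + 14*0.4 + 25*0.1 = 10.6 mW -> roughly 349 hours
```

The eco/balanced/performance power modes simply shift this duty cycle toward cheaper or pricier states.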

4. EcoContextVoxID: The Mastermind

Mission: Ties it all together for seamless operation.
Powers:
Stores user voiceprints.
Identifies users with a 0.7 cosine similarity threshold.
Processes context for adaptive performance.
Handles audio recording and simulation like a champ.
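Identification against stored voiceprints can be sketched with plain NumPy. The 0.7 threshold and the in-memory dictionary of voiceprints are from this README; the function names are illustrative:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two voiceprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, enrolled, threshold=0.7):
    """Best match over enrolled voiceprints; None if below the 0.7 threshold."""
    best_id, best_score = None, -1.0
    for user_id, voiceprint in enrolled.items():
        score = cosine_similarity(query, voiceprint)
        if score > best_score:
            best_id, best_score = user_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

Returning the score alongside the user ID is what lets the GUI display a confidence value with each match.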

5. VoxIDApp: The Visual Showstopper

Mission: Delivers a user-friendly, eye-catching GUI.
Tabs:
Register User: Add new voices with flair.
Identify User: Spot speakers with confidence.
Context Simulation: Play with settings and watch the magic unfold.

Perks: Live status updates, a user listbox, and a matplotlib-powered plot that pops.

📊 Data & Models


Datasets:


No external datasets—this system creates its own spark!
Audio:
Live recordings via sounddevice (16 kHz, 3 seconds).
Simulated audio with np.random.normal for testing.

🔘 Context:
Noise: Extracted from audio energy.
Movement: Randomly generated (accelerometer-ready).

User Data: Voiceprints stored in a snappy in-memory dictionary.
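The simulated-data path described above can be sketched as follows. The 16 kHz / 3-second format and the np.random.normal audio come from this README; the RMS-to-noise scaling is an illustrative assumption:

```python
import numpy as np

SAMPLE_RATE, DURATION_S = 16_000, 3.0  # 16 kHz, 3-second clips

def simulated_audio(seed=None):
    """Gaussian-noise stand-in for a live recording, for instant testing."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.1, int(SAMPLE_RATE * DURATION_S)).astype(np.float32)

def noise_level(audio):
    """Map RMS audio energy onto a 0-100 noise score (scaling is illustrative)."""
    rms = float(np.sqrt(np.mean(audio.astype(np.float64) ** 2)))
    return min(100.0, rms * 1000.0)

clip = simulated_audio(seed=42)  # 48,000 samples of fake audio
```

A live recording captured with sounddevice at the same rate and duration would drop into the same pipeline unchanged.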

🤖 AI Models


Voiceprint Extraction: A custom LSTM neural network (needs training to shine).
Fuzzy Logic: A rule-based system that’s sharp and adaptive.

---

EcoContext VoxID: Where voice recognition meets eco-smart innovation. Let’s make waves, not waste! 🌊⚡️
