Depression Detection System
This project detects signs of depression from video input using combined audio and visual features. It extracts MFCC features from the full audio track and samples 20 evenly spaced frames from the video; the two modalities are fused and passed to a DenseNet201 model trained on the DAIC-WOZ dataset. The project includes a Gradio web interface and can be deployed via Hugging Face Spaces or Google Colab.
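
As a rough illustration of the preprocessing pipeline described above, the sketch below extracts MFCCs with librosa, samples 20 evenly spaced frames with OpenCV, and fuses the two modalities. The frame resolution, MFCC coefficient count, and fusion strategy (appending the MFCC map as an extra image-like frame) are assumptions made for illustration; the repository's actual code may differ.

```python
# A minimal sketch of the preprocessing pipeline; shapes and the
# fusion strategy are assumptions, not the project's confirmed code.
import cv2
import librosa
import numpy as np

N_FRAMES = 20            # evenly spaced frames, per the description
N_MFCC = 40              # assumed number of MFCC coefficients
FRAME_SIZE = (224, 224)  # assumed DenseNet201 input resolution

def extract_mfcc(audio_path: str) -> np.ndarray:
    """MFCC features over the full audio track."""
    y, sr = librosa.load(audio_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)  # (n_mfcc, t)

def sample_frames(video_path: str) -> np.ndarray:
    """20 evenly spaced RGB frames from the video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, N_FRAMES, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(cv2.resize(frame, FRAME_SIZE))
    cap.release()
    return np.stack(frames)  # (N_FRAMES, 224, 224, 3)

def fuse(mfcc: np.ndarray, frames: np.ndarray) -> np.ndarray:
    """One possible fusion: resize the MFCC map to the frame resolution
    and append it as an extra grayscale 'frame' alongside the visual
    frames. This is an illustrative choice, not necessarily the one
    used in this repository."""
    mfcc_img = cv2.resize(mfcc.astype(np.float32), FRAME_SIZE)
    mfcc_img = cv2.normalize(mfcc_img, None, 0, 255, cv2.NORM_MINMAX)
    mfcc_rgb = np.repeat(mfcc_img[..., None], 3, axis=-1)
    return np.concatenate([frames.astype(np.float32), mfcc_rgb[None]], axis=0)
```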
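
The Gradio interface could be wired up along the lines below. The `predict` function, its placeholder score, and the output labels are hypothetical stand-ins for the real preprocessing and model inference; only the general `gr.Interface` pattern is shown.

```python
# A minimal sketch of the Gradio app; predict() is a placeholder
# for the real preprocessing + DenseNet201 inference.
import gradio as gr

def predict(video_path: str) -> dict:
    # Preprocessing and model inference would go here (see the
    # pipeline sketch above). The score below is a placeholder.
    score = 0.5
    return {"depressed": score, "not depressed": 1 - score}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Video(),   # passes the uploaded video's file path to fn
    outputs=gr.Label(num_top_classes=2),
    title="Depression Detection System",
)

if __name__ == "__main__":
    # launch() serves the app locally; Spaces runs it directly, and
    # on Colab a shareable link can be created with share=True.
    demo.launch()
```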