Building a Camera Recorder with Human Detection using Python and OpenCV

Gal B.
6 min read · Aug 1, 2023


Introduction

In this tutorial, we’ll explore how to build a camera recorder application with human detection using Python and OpenCV. This application allows us to record video from our computer’s camera and automatically detect and extract the portions where humans are present. With the power of the YOLO object detection algorithm, we can identify and isolate human movements with ease. Let’s dive in and see how we can create this application step by step.

Prerequisites

Before we begin, make sure you have the following prerequisites installed on your system:

  • Python 3.x
  • OpenCV library
  • Pre-trained YOLO model (yolov3.cfg and yolov3.weights)
  • VLC media player (or any other compatible media player)
  • ffmpeg (command-line tool) for video extraction
Powerful Technologies: Python, OpenCV, YOLO and FFmpeg

Setting up the Project

  1. Clone the project repository from GitHub.
    You can find the repository here.
  2. Inside the repository, you will find a detailed README file that provides step-by-step instructions on setting up the project. It includes information about the prerequisites, installation steps, and additional resources.
  3. Follow the instructions in the README file to install the required Python packages, download the pre-trained YOLO model, and set up VLC media player and ffmpeg on your system.
  4. The README file also provides guidance on how to configure the project, including the placement of the YOLO model files and the usage of the application.
Project Code Files: Python Scripts for Camera Recording and Human Detection

Recording Video from the Camera

  1. Open the camera_recorder.py file in your preferred code editor.
  2. Inside the file, you’ll find the necessary code to capture video from the camera using OpenCV. This includes initializing the video capture, setting the desired video resolution, and configuring the video codec.
  3. You can modify the configuration parameters according to your requirements. For example, you can adjust the resolution by changing the values of the frame_width and frame_height variables. Additionally, you can experiment with different video codecs by modifying the fourcc variable.
  4. Run the camera_recorder.py script to start recording video from the camera. You'll see a live video feed from the camera, and a new video file will be created and saved in the "records" folder.

By referring to the camera_recorder.py file inside the project repository, you'll find all the necessary configuration details and code implementation to record video from the camera using Python and OpenCV.
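The steps above can be sketched in a few lines of OpenCV. Note that the function names, the timestamped file-naming scheme, and the default codec here are illustrative assumptions, not the repository's exact code; consult camera_recorder.py for the real values.

```python
import datetime
import os


def make_record_path(folder="records"):
    # Hypothetical naming convention: records/video_YYYY-MM-DD_HH-MM-SS.avi
    os.makedirs(folder, exist_ok=True)
    ts = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return os.path.join(folder, f"video_{ts}.avi")


def record(frame_width=640, frame_height=480, fps=20.0):
    import cv2  # imported here so make_record_path stays usable without OpenCV

    cap = cv2.VideoCapture(0)  # 0 = default camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)

    fourcc = cv2.VideoWriter_fourcc(*"XVID")  # try "mp4v" for .mp4 output
    out = cv2.VideoWriter(make_record_path(), fourcc, fps, (frame_width, frame_height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)
        cv2.imshow("recording", frame)  # live preview
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break

    cap.release()
    out.release()
    cv2.destroyAllWindows()
```

Call record() to start capturing; the preview window closes and the file is finalized when you press q.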

Extracting Human Movement

After detecting humans in the video, we’ll extract the portions where humans are present. All the configuration details and code implementation can be found inside the human_detection.py file in the project repository.

  1. Open the human_detection.py file in your preferred code editor.
  2. Inside the file, you’ll find the necessary code to perform human detection using YOLO and extract the frames representing human movement. This includes initializing the YOLO model, setting the confidence threshold (conf_threshold), and defining the post-processing steps.
  3. You can modify the configuration parameters according to your needs. For example, you can adjust the confidence threshold by changing the value of the conf_threshold variable. Higher values will result in stricter detection, while lower values may include more false positives.

By referring to the human_detection.py file inside the project repository, you'll find all the necessary configuration details and code implementation to extract human movement from the recorded videos using YOLO and OpenCV.
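The core of the detection step can be sketched as follows. The helper names are assumptions for illustration; the real logic lives in human_detection.py. The one fact the sketch relies on is that YOLOv3 is trained on COCO labels, where class id 0 is "person".

```python
def filter_people(class_ids, confidences, conf_threshold=0.5, person_class_id=0):
    """Return indices of detections that are people above the confidence threshold."""
    return [
        i
        for i, (cid, conf) in enumerate(zip(class_ids, confidences))
        if cid == person_class_id and conf >= conf_threshold
    ]


def load_yolo(cfg="yolov3.cfg", weights="yolov3.weights"):
    import cv2  # imported lazily so filter_people works without OpenCV installed

    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    # Names of the unconnected output layers, needed for net.forward(...)
    return net, net.getUnconnectedOutLayersNames()
```

Raising conf_threshold toward 1.0 keeps only confident detections; lowering it admits more candidates (and more false positives), exactly as described in step 3 above.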

Demo: Human Detection in Action — Appearing and Disappearing Twice

Bonus Question: Why Do I Record Myself with a Clock? (Answer is here)

Saving Detected Human Portions as Videos

Once we have the frames representing human movements, we’ll save them as separate video files. All the configuration details and code implementation can be found inside the human_detection.py file in the project repository.

  1. Open the human_detection.py file in your preferred code editor.
  2. Inside the file, you’ll find the necessary code to save the frames representing human movement as separate video files. This includes configuring the output file name, format, and file path.
  3. You can modify the configuration parameters according to your requirements. For example, you can change the output file format, adjust the file naming convention, or specify a different output directory.
  4. Run the human_detection.py script to start the human detection process. The script will analyze the recorded video files from the "records" folder, detect human movements, and identify the frames representing human presence.
  5. The detected portions of the videos will be saved as separate MP4 files in the configured output directory. Each output file will have a name format of “video_timestamp_human.mp4” (e.g., “video_2023-07-06_11-52-34_human.mp4”).

By referring to the human_detection.py file inside the project repository, you'll find all the necessary configuration details and code implementation to save the frames representing human movement as separate video files using OpenCV and ffmpeg.
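Building the ffmpeg invocation in Python might look like the sketch below. The frame-pattern argument and helper names are assumptions; the flags themselves (-framerate for image-sequence input, libx264 with yuv420p for widely playable MP4s) are standard ffmpeg options, though the repository's exact command may differ.

```python
import subprocess


def build_ffmpeg_cmd(frames_pattern, out_path, fps=20):
    """Assemble an ffmpeg command that stitches numbered frames into an MP4.

    frames_pattern is e.g. "frames/frame_%04d.jpg" (a printf-style sequence).
    """
    return [
        "ffmpeg",
        "-y",                      # overwrite output without asking
        "-framerate", str(fps),    # input frame rate of the image sequence
        "-i", frames_pattern,
        "-c:v", "libx264",         # H.264 video codec
        "-pix_fmt", "yuv420p",     # pixel format most players accept
        out_path,
    ]


def stitch(frames_pattern, out_path, fps=20):
    # check=True raises CalledProcessError if ffmpeg exits non-zero
    subprocess.run(build_ffmpeg_cmd(frames_pattern, out_path, fps), check=True)
```

Keeping command construction separate from execution makes the invocation easy to log and unit-test before anything is run.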

Recorded Videos and Detected Human Portions: records vs. output folders

Challenge: subprocess.run vs. subprocess.call

During the implementation of the project, you may come across a challenge when using the subprocess module for executing the ffmpeg command. There are two commonly used functions in the subprocess module: subprocess.run and subprocess.call. Both functions can be used to run external commands, but they have some differences in terms of behavior and return values.

The subprocess.run function was introduced in Python 3.5 and provides a more powerful and flexible interface compared to subprocess.call. It allows capturing the command's output, handling errors, setting timeouts, and more. On the other hand, subprocess.call is a simpler function that runs the command and waits for it to complete, without capturing output or handling errors.

When using ffmpeg to create videos from the detected frames, you may face a decision between subprocess.run and subprocess.call. If you need to capture the output or handle potential errors during the video creation process, subprocess.run is the recommended choice. However, if you only need to execute the command and wait for it to complete without capturing the output, subprocess.call can be a simpler alternative.

ffmpeg Command: Creating Human-Detected Video Clips from Frames

Consider your specific requirements and use the appropriate function accordingly. Be sure to handle any potential errors or exceptions that may occur during the execution of the ffmpeg command.
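The difference between the two functions fits in a few lines. This sketch substitutes the Python interpreter for the real ffmpeg command so it runs anywhere; swap in your ffmpeg argument list in practice.

```python
import subprocess
import sys

# subprocess.run: capture output, raise on failure, enforce a timeout
result = subprocess.run(
    [sys.executable, "-c", "print('frames stitched')"],
    capture_output=True,   # collect stdout/stderr
    text=True,             # decode bytes to str
    timeout=30,            # kill the process if it hangs
    check=True,            # raise CalledProcessError on non-zero exit
)
print(result.stdout.strip())  # prints: frames stitched

# subprocess.call: run, wait, and return only the exit code
code = subprocess.call([sys.executable, "-c", "pass"])
print(code)  # prints: 0
```

With subprocess.call there is nothing to inspect beyond the exit code, which is why subprocess.run is the better fit when you want to surface ffmpeg's error messages.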

Comparison: subprocess.run vs. subprocess.call

Conclusion

In this tutorial, we have built a camera recorder application with human detection using Python and OpenCV. We’ve learned how to record video from the camera, detect humans using YOLO, and extract the portions with human movement. By leveraging the power of OpenCV and YOLO, we can automate the process of detecting and isolating human movements, opening up possibilities for various applications such as surveillance, activity monitoring, and more.

Feel free to customize and extend the project to fit your own use cases and to share any insights you discover along the way.


Written by Gal B.

Backend engineer with a passion for crafting robust solutions. linktr.ee/ga1b
