I use moviepy to edit videos in Python, as it provides a clean and pythonic interface for video editing. Recently, we were exploring options for implementing a background removal feature in a mobile app project I have been working on. I came up with a low-effort solution that gives us the minimum functionality until we decide whether to ditch, keep, or improve it.

Before I dive into the details of the code snippet, I would like to share a short demo video to give a sense of its performance. As my test video, I used an interview with Lena Klenke from the German dark comedy show How to Sell Drugs Online (Fast). Although the title says background removal, I think blurring the background looks a bit more interesting than replacing it with a plain dark one.

First, a short clip from the original video:

Now, with a blurred background:

Now that you know what to expect, let me walk through the code snippet. I use moviepy to import, transform, and export the video frames. First, I will focus on lines 14 through 17 to give a sense of the moviepy API: I read the video file, cut it down to the segment between the 36th and 45th seconds, blur the background, and finally export it.

Line 16 might look a bit confusing to people who are not familiar with the moviepy interface. fl_image is a method that expects an image transform function as its input; by image transform function, I mean a function that accepts an image and returns another image. You can think of fl_image as mapping the original frames to processed frames. As you might guess, the blur_background function on line 7 is our transform function.
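To make the "image transform function" idea concrete before we get to blur_background, here is a minimal, self-contained sketch (separate from the snippet below, using a made-up toy frame): a function that mirrors each frame horizontally. Any function with this shape-in, shape-out contract can be handed to fl_image.

```python
import numpy as np

def mirror_frame(im):
    """Image transform: takes an H x W x 3 frame, returns a frame of the same shape."""
    return im[:, ::-1]  # reverse the column axis to flip the frame horizontally

# A tiny dummy 2x3 "frame" stands in for a real video frame here.
frame = np.arange(18, dtype=np.uint8).reshape(2, 3, 3)
mirrored = mirror_frame(frame)

print(mirrored.shape)                                 # same shape as the input
print(np.array_equal(mirror_frame(mirrored), frame))  # mirroring twice restores the frame
```

With moviepy you would apply it exactly like blur_background in the snippet: video_clip.fl_image(mirror_frame).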

 1  from skimage.filters import gaussian
 2  from moviepy.editor import VideoFileClip
 3
 4  import mediapipe
 5  selfie_segmentation = mediapipe.solutions.selfie_segmentation.SelfieSegmentation()
 6
 7  def blur_background(im):
 8      mask = selfie_segmentation.process(im).segmentation_mask[:, :, None]
 9      mask = mask > 0.8  # keep only confident "person" pixels
10
11      bg = gaussian(im.astype(float), sigma=4, channel_axis=-1)  # multichannel= in older scikit-image
12      return (mask * im + (1 - mask) * bg).astype("uint8")  # moviepy expects uint8 frames
13
14  video_clip = VideoFileClip("video.mp4")
15  video_clip = video_clip.subclip(36, 45)
16  video_clip = video_clip.fl_image(blur_background)
17  video_clip.write_videofile("sample.mp4", audio=False)

Let’s look at the details of blur_background. To distinguish the background from a human in the foreground, we need a model or algorithm that accepts an image as input and outputs a mask telling us which parts of the image belong to the background and which belong to the person in front of the camera. Luckily, Google provides a library called mediapipe, which offers APIs for various image processing algorithms. The one I use for this project is the Selfie Segmentation API.

After we initialize SelfieSegmentation as shown on line 5, all we need to do is call its process method on each video frame and use the returned segmentation_mask, as in line 8. The segmentation mask acts as an alpha channel: with it, we can apply alpha matting to process the background and foreground separately. In other words, I compose two images, the original frame and a blurred copy of it, so that the pixels belonging to the person are left untouched.
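The compositing on line 12 is plain alpha blending, and the [:, :, None] on line 8 adds a trailing axis so the mask broadcasts over the three color channels. A small numpy sketch with toy values (the 2x2 "images" below are made up for illustration, not real frames):

```python
import numpy as np

# Toy stand-ins: a bright "original frame" and a dark "blurred background".
im = np.full((2, 2, 3), 200.0)  # sharp frame
bg = np.full((2, 2, 3), 50.0)   # blurred frame

# Hard alpha mask: 1.0 where the person is (top row), 0.0 elsewhere.
# The trailing axis of length 1 broadcasts across the color channels.
mask = np.array([[1.0, 1.0], [0.0, 0.0]])[:, :, None]
print(mask.shape)  # (2, 2, 1)

# Alpha matting: foreground where mask is 1, background where it is 0.
out = mask * im + (1 - mask) * bg
print(out[0, 0, 0])  # top row keeps the original pixel value, 200.0
print(out[1, 0, 0])  # bottom row takes the blurred pixel value, 50.0
```

The real mask is a float confidence map, which is why line 9 thresholds it at 0.8 before blending.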