Movie trailer genre classification using multimodal pretrained features
My paper “Movie trailer genre classification using multimodal pretrained features” has been published in Expert Systems with Applications.
The paper is available on Elsevier ScienceDirect (paywalled) and here (open). The source code is available on GitHub. Our paper can be cited as:
@article{SULUN2024125209,
  title   = {Movie Trailer Genre Classification Using Multimodal Pretrained Features},
  author  = {Sulun, Serkan and Viana, Paula and Davies, Matthew E.P.},
  year    = {2024},
  journal = {Expert Systems with Applications},
  volume  = {258},
  pages   = {125209},
  issn    = {0957-4174},
  doi     = {10.1016/j.eswa.2024.125209}
}
Abstract
We introduce a novel method for movie genre classification, capitalizing on a diverse set of readily accessible pretrained models. These models extract high-level features related to visual scenery, objects, characters, text, speech, music, and audio effects. To intelligently fuse these pretrained features, we train small classifier models with low time and memory requirements. Employing the transformer model, our approach utilizes all video and audio frames of movie trailers without performing any temporal pooling, efficiently exploiting the correspondence between all elements, in contrast to the small, fixed number of frames typically used by traditional methods. Unlike current approaches, our method fuses features originating from different tasks and modalities, with differing dimensionalities, differing temporal lengths, and complex dependencies. Our method outperforms state-of-the-art movie genre classification models in terms of precision, recall, and mean average precision (mAP). To foster future research, we make the pretrained features for the entire MovieNet dataset, along with our genre classification code and the trained models, publicly available.
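
To make the fusion strategy concrete, here is a minimal PyTorch sketch of the idea described in the abstract: each pretrained feature stream is linearly projected to a shared model dimension, tagged with a learned stream embedding so the transformer can tell modalities apart, concatenated along the time axis without any pooling, and classified from a CLS token. All names, dimensions, and hyperparameters here (`MultimodalGenreClassifier`, `d_model`, `num_genres`, layer counts) are illustrative assumptions, not the paper's exact configuration; see the published code for the real implementation.

```python
import torch
import torch.nn as nn


class MultimodalGenreClassifier(nn.Module):
    """Sketch of a transformer that fuses pretrained feature streams
    of different dimensionalities and temporal lengths (assumed setup)."""

    def __init__(self, feature_dims, d_model=512, num_genres=21):
        super().__init__()
        # One linear projection per stream, mapping each pretrained
        # feature dimensionality to the shared model dimension.
        self.projections = nn.ModuleList(
            nn.Linear(dim, d_model) for dim in feature_dims
        )
        # Learned embedding per stream, added so tokens from different
        # modalities remain distinguishable after concatenation.
        self.stream_embed = nn.Embedding(len(feature_dims), d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(d_model, num_genres)

    def forward(self, streams):
        # streams: list of tensors, one per modality, each of shape
        # (batch, time_i, feature_dims[i]); time_i may differ per stream.
        tokens = []
        for i, (proj, x) in enumerate(zip(self.projections, streams)):
            tokens.append(proj(x) + self.stream_embed.weight[i])
        # Concatenate all frames of all streams: no temporal pooling,
        # so attention can relate every element to every other.
        seq = torch.cat(tokens, dim=1)
        cls = self.cls_token.expand(seq.size(0), -1, -1)
        seq = torch.cat([cls, seq], dim=1)
        out = self.encoder(seq)
        # Multi-label genre logits from the CLS position; train with
        # sigmoid + binary cross-entropy for multi-label prediction.
        return self.head(out[:, 0])


# Usage with two hypothetical streams: 2048-d video features over
# 300 frames and 128-d audio features over 120 frames.
model = MultimodalGenreClassifier(feature_dims=[2048, 128])
video = torch.randn(2, 300, 2048)
audio = torch.randn(2, 120, 128)
logits = model([video, audio])  # shape: (2, 21)
```

Because only the small classifier on top of frozen pretrained features is trained, the time and memory cost stays low even though the sequence contains every video and audio frame.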