The dwerft project aims to preserve all metadata created throughout the processes along the entire media value chain. Why? Because metadata is key when it comes to searching and finding audio-visual content. Creating metadata manually, as is still common practice today, is expensive. That is also why a typical metadata set describing a programme is rather limited in detail and accuracy – usually you only get three or four sentences of content description, optionally paired with genres and actors. AI tools that analyse the video as well as the audio track are therefore increasingly being applied to annotate already existing audio-visual content.
The dwerft project follows a completely different approach – revolutionising the future of media production by gathering, structuring, and re-using all metadata created throughout the entire production process. It will interconnect the so-far disconnected media value chain by collecting metadata from every tool and system involved and turning it into a structured format (following an ontology) that allows semantic, meaningful search – and, combined with the related timecodes coming from post-production, even search on a scene level. Imagine a query like “give me all scenes where Angelina Jolie is wearing a red dress”, returning only correct results. Try it now, on any VoD platform, and you will fail. With dwerft in place, you won’t. Metadata will be structured at a fine enough granularity to allow automated processing, whether in search or in recommendation engines – and beyond that, the data can easily be transformed for export to the many Video-on-Demand platforms on the market.
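To make the idea concrete, here is a minimal sketch of how timecode-linked, scene-level metadata could answer such a query. The data model and field names are purely illustrative assumptions, not dwerft's actual ontology:

```python
# Hypothetical scene-level metadata records: each scene carries structured
# annotations (actors, costume attributes) plus post-production timecodes.
# All field names here are illustrative, not taken from the dwerft ontology.
scenes = [
    {"film": "Example Film", "tc_in": "00:12:03:00", "tc_out": "00:13:45:10",
     "actors": ["Angelina Jolie"], "costumes": {"Angelina Jolie": "red dress"}},
    {"film": "Example Film", "tc_in": "00:20:00:00", "tc_out": "00:21:10:00",
     "actors": ["Angelina Jolie"], "costumes": {"Angelina Jolie": "blue coat"}},
]

def find_scenes(scenes, actor, costume):
    """Return all scenes in which the given actor wears the given costume."""
    return [s for s in scenes
            if actor in s["actors"] and s["costumes"].get(actor) == costume]

# "Give me all scenes where Angelina Jolie is wearing a red dress"
hits = find_scenes(scenes, "Angelina Jolie", "red dress")
for s in hits:
    print(s["film"], s["tc_in"], "-", s["tc_out"])
```

In a real deployment such queries would run against an ontology-backed store (e.g. via a semantic query language) rather than in-memory records, but the principle is the same: structured annotations plus timecodes make scenes directly addressable.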
So, how do we get there? Solution approaches are presented based on several use cases along the media value chain, from pre-production to post-production.
This workshop is presented by the dwerft research project.
Julius Dasche (CEO PreProducer), Mark Guelbahar (Senior Engineer IRT),
Holger Lehmann (CEO Rotor Film)