🔖 Detect chapters
Dividing your timeline into chapters can be very helpful.
When you're close to a final cut, chapter detection can give you markers that you can export as timestamps for YouTube (so YouTube marks them as chapters on the video), and it can even generate polished motion graphics clips to show at the start of each chapter.
You can also use it early in the process to put markers on your timeline (e.g. to mark things like "Introduction", "Topic 1, 2, 3, ...", "Sponsor segment", "Conclusion") for reference while you edit, making the project easier to navigate. These are just like any other markers and can be moved around and edited. However, this is not the recommended use case at the moment, because analyzing a long sequence (>20 mins) can take quite a while with current technology.
Tip: Analyzing a long sequence (>20 mins) can take a while with current technology. We suggest running this feature close to the end of your editing process, after removing silences and repetitions.
Chapter detection in FireCut:
Analyzes the words being spoken in the selected portion of your sequence to detect when new topics start
Shows these topics as timestamps in a list you can review and edit: add topics, rename them, or delete the ones you don't want
Lets you take three actions with the finalized list:
Copy your chapter timestamps to the clipboard as a text snippet, ready to paste into your YouTube video's description (see the example after this list)
Put these timestamps as markers in your timeline
Generate divider motion graphics clips that show an animated title; these are added to a new video track at the top of your sequence
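For reference, the copied text snippet follows YouTube's chapter format: one line per chapter, with a timestamp followed by the chapter title. YouTube expects the first timestamp to be 0:00 and at least three chapters of 10 seconds or more. The timestamps below are purely illustrative:

0:00 Introduction
1:25 Topic 1
4:40 Sponsor segment
6:05 Conclusion

If you ever want to rebuild or tweak this snippet outside FireCut, the conversion is simple. Here is a minimal Python sketch (not FireCut's implementation), assuming you already have your chapter start times in seconds:

def format_timestamp(seconds: int) -> str:
    # Format seconds as M:SS (or H:MM:SS for longer videos), as YouTube expects
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    if hours:
        return f"{hours}:{minutes:02d}:{secs:02d}"
    return f"{minutes}:{secs:02d}"

def chapters_to_description(chapters: list[tuple[int, str]]) -> str:
    # Build the text block to paste into a YouTube video description
    return "\n".join(f"{format_timestamp(start)} {title}" for start, title in chapters)

print(chapters_to_description([
    (0, "Introduction"),
    (85, "Topic 1"),
    (280, "Sponsor segment"),
    (365, "Conclusion"),
]))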
There are two main settings for chapter detection:
Scope: You can decide whether to run the operation on your Full sequence or only on a certain portion that you specify with In / Out points (you can place these in your sequence using the I and O hotkeys)
Language: We are constantly adding support for new languages. However, please note that this is experimental and may produce less predictable results than English. This is true of every AI tool, even if it isn't mentioned, because AI training sets tend to be in English. On occasion this might cause the operation to fail; please let us know (support@firecut.ai) if this happens, so we can improve it for you!
Tips for best results:
Use it towards the end of your workflow, so that your sequence is fairly short (<20 min)
Make sure your audio is loud and clear, with any music / sound effects muted
Try it on a short section of your timeline first to get familiar with the tool; processing larger sequences can take a while (~2 min for every ~10 min of sequence; see the rough estimate after this list)
You can edit the generated motion graphics clips after they are placed in the timeline, e.g. changing the font size and text
Don't work in the timeline while FireCut is working in the background
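To put the processing-time rule of thumb into numbers, here is a minimal sketch (assuming the ~2 min per ~10 min rate quoted above; actual times vary with hardware and content):

def estimated_processing_minutes(sequence_minutes: float) -> float:
    # ~2 minutes of processing for every ~10 minutes of sequence (rough rule of thumb)
    return sequence_minutes * 2 / 10

print(estimated_processing_minutes(15))  # a 15-minute sequence -> roughly 3 minutes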
"I can't see the motion graphics clips" --> These are always added to a new video track at the top of your sequence - this can often be hidden at the top of your sequence, so try scrolling up to make sure it's not hiding from you!
"The chapter topics / timestamps don't match up with my video" --> Make sure you have a clear speaker in the audio so that your script can be picked up and analysed. If there are any sound effects, multiple speakers, music, noise, etc., this can throw off the analysis in some occasions
"It didn't work on my non-English video" --> Please drop a note at support@firecut.ai. We are keen to make sure this feature works reliably on all supported languages!