FFmpeg: Could find no file with path ‘%08d.ppm’ and index in the range 0-4 %08d.ppm: No such file or directory

While working on a project, we used the following command to export the frames of a video using FFmpeg:

ffmpeg -v quiet -i "$video" "$input_frames_folder/%08d.ppm";

The above command exported the video frames into the selected folder, naming the images in increasing order with eight-digit zero-padding, starting from 00000001 (00000001.ppm).

Later on, we processed those frames and deleted some of the first ones (specifically, we deleted the first 61 frames, so the first available frame was named 00000062.ppm). When we tried to rebuild the video using the command below, we got the error listed after it:

ffmpeg -framerate "25/1" -i "$input_frames_folder/%08d.ppm" -pix_fmt yuv420p -y processed.mkv;
Could find no file with path '%08d.ppm' and index in the range 0-4 %08d.ppm: No such file or directory

To fix the issue, we used the -start_number parameter with the value 62. This parameter sets the starting index for the matched image files.

ffmpeg -start_number 62 -framerate "25/1" -i "$input_frames_folder/%08d.ppm" -pix_fmt yuv420p -y processed.mkv;

Please note that we also used -pix_fmt yuv420p because we were getting a video with only black frames; we had to define the pixel format manually.
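
In case the number of deleted frames is not known in advance, the start number can also be derived from the first remaining file. The following is a minimal sketch, not part of our original workflow, and it assumes the folder contains only the zero-padded .ppm frames:

# Find the first remaining frame (assumes only frame files are in the folder).
first_frame="$(ls "$input_frames_folder" | sort | head -n 1)";
# Strip the .ppm extension and force base-10 so the zero-padding is not misread.
start_number="$((10#${first_frame%.ppm}))";
ffmpeg -start_number "$start_number" -framerate "25/1" -i "$input_frames_folder/%08d.ppm" -pix_fmt yuv420p -y processed.mkv;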


Testing free Text to Speech engines on Ubuntu GNU/Linux

Recently, we decided to test a few free text-to-speech (TtS) engines on GNU/Linux. We were curious about their current capabilities, as we wanted to create a few videos. To play a bit, we tested espeak, festival, and pico. For this purpose, we created a text file (called text.txt) and added the following content to it:

I triple E has a lot of scholarships, awards, and opportunities, but it doesn't have a centralized site where members can quickly identify the right ones.
Many problems arise as a result of the lack of this platform.
One crucial issue is that many people are unaware of specific opportunities until it is too late. Many projects are squandered each year because there is insufficient knowledge about these opportunities, resulting in low participation.
Another critical difficulty is having to start over with each application. Many people find it frustrating, and it prevents them from doing so.
The lack of real-time support to answer issues while an applicant is applying is critical, leading to discouragement and abandonment of the application process.
Providing references is a topic that many individuals are uncomfortable with. They are embarrassed to seek references that need to learn new systems and maybe answer the same questions posed in other ways.

Our solution is utilizing the Collabratec platform and storing all of these opportunities there:
Collabratec already has numerous key capabilities in place, lowering development costs.
Each application may have its own community or working group where an applicant can seek special clarifications or support. Collabratec will save money on development by repurposing existing technology. It will also give such features a new purpose.
Through those working groups, experienced members can share their knowledge and potentially coach applicants during their application process. Many members would be willing to help others attain their objectives, especially after they've gone through the process and understand the frustrations others are experiencing. We could utilize badges to reward individuals who aid others and those who apply for these possibilities, which is a frequent practice in Collabratec to make certain members stand out. This approach will assist members in getting to know one another and expanding their network outside their geographic zones, resulting in a genuinely global I triple E experience.
People who create opportunities can utilize the I triple E profile of a user to pre-populate elements of their application. As a result, the applicants' effort will be reduced because they will only fill in the questions related to that particular opportunity.
Without any additional work, the system may reuse earlier references. Assume that a reference has to be updated or validated to ensure that it is still valid. In that situation, the system may send an automatic notification to that person, asking them to approve, alter, or delete their earlier contribution.
Because users can readily share each application form and the corresponding working group information, Collabratec's capabilities as a social network will significantly enhance each opportunity's reach and all related documents, public comments, and discussions.

espeak

We started off with espeak, using the following commands to test it:

# Command to install espeak;
sudo apt install espeak;
# Command that reads the text.txt file and creates an audio file from its content.
espeak -f text.txt -w espeak.wav;

The result from espeak is below:

espeak definitely does not sound human-like. It is a fun tool if you need to create an audio file that sounds robotic! In our case, it was not a solution: we wanted to use a long text, and listening to a robotic voice for a long time can be tiring.
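
For what it is worth, espeak does expose a few parameters that can soften the robotic effect a little, such as -s (speed in words per minute) and -p (pitch, 0 to 99). The values below are arbitrary examples, not settings we tuned:

# Slower speech and lower pitch; the values are illustrative only.
espeak -s 130 -p 40 -f text.txt -w espeak-tuned.wav;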

festival

After that, we tested the text2wave tool of festival as follows:

sudo apt install festival;
cat text.txt | text2wave -o festival.wav;

The results from festival/text2wave are the following:

festival does sound a bit better than espeak; it is almost smooth, but not quite. You can easily tell that this is a computer-generated voice.

pico

Finally, we tested the pico utilities. We set them up and used them as follows:

sudo apt install libttspico-utils;
pico2wave -l en-US -w test.wav "$(cat text.txt)";

The results of pico2wave were pretty good! Not perfect but still good! The voice is nearly human-like and fairly smooth. Below is the result of our test:

Of the three utilities, pico was the most human-like, and it fit our needs best. With this tool, we will be able to create videos with narration that is not too annoying.
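
As a side note, the -l flag of pico2wave selects the voice; to our knowledge, the SVOX Pico data shipped with libttspico-utils also includes en-GB, de-DE, es-ES, fr-FR, and it-IT in addition to en-US. For example:

# British English voice instead of the US English one (illustrative).
pico2wave -l en-GB -w test-gb.wav "$(cat text.txt)";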

Other information

To create the videos, we used ffmpeg. As shown in the following commands, we combined each audio wave file with a static image that was looped indefinitely; the -shortest flag ends each video when its audio stream (the shorter of the two) ends.

ffmpeg -loop 1 -i TtS-pico2wave.png -i test.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-pico2wave.mp4;
ffmpeg -loop 1 -i TtS-espeak.png -i espeak.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-espeak.mp4;
ffmpeg -loop 1 -i TtS-festival.png -i festival.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-festival.mp4;

Some notes on how to record audio from a terminal in Ubuntu 20.04 LTS

Recently, we were trying to record the audio that was playing on the system speakers of an Ubuntu 20.04 LTS desktop. The installation had no dedicated audio recorder, and we did not want to install one. First, we used the following command to get the list of all audio sources available to the system:

pactl list short sources;

The pactl command produced results like so:

3	alsa_output.pci-0000_00_1f.3.analog-stereo.monitor	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
4	alsa_input.pci-0000_00_1f.3.analog-stereo	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
10	alsa_input.usb-Dell_DELL_PROFESSIONAL_SOUND_BAR_AE515-00.iec958-stereo	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
12	alsa_output.usb-Dell_DELL_PROFESSIONAL_SOUND_BAR_AE515-00.analog-stereo.monitor	module-alsa-card.c	s16le 2ch 44100Hz	IDLE

We knew that the system was using the Dell soundbar in analog mode to play the music (as we could see in the Settings, under the Sound category), so we copied the following source name from the line that starts with the index 12:

alsa_output.usb-Dell_DELL_PROFESSIONAL_SOUND_BAR_AE515-00.analog-stereo.monitor

Then we used that monitor name as an input device for FFmpeg like so:

ffmpeg -f pulse -i alsa_output.usb-Dell_DELL_PROFESSIONAL_SOUND_BAR_AE515-00.analog-stereo.monitor test.mp3;

When we were done, we pressed CTRL+C to stop the recording.
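
If you would rather not copy the source name by hand, the monitor of the default sink can usually be derived from pactl info; and if you prefer the capture to stop on its own, ffmpeg's -t option caps the recording duration. A small sketch, assuming the audio plays through the default sink and that 60 seconds is enough:

# Derive the monitor source of the default sink.
default_sink="$(pactl info | grep 'Default Sink:' | cut -d ' ' -f 3)";
# Record for at most 60 seconds (illustrative duration) instead of waiting for CTRL+C.
ffmpeg -f pulse -i "${default_sink}.monitor" -t 60 test.mp3;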


Cut a video based on start and end time using FFmpeg

If you cannot cut a video at a particular moment, you probably do not have a keyframe at the specified second mark.
Non-keyframes encode differences from other frames, so they need all of the data starting from the previous keyframe.

With the mp4 container, it is possible to cut at a non-keyframe without re-encoding by using an edit list.
In other words, if the closest keyframe before 3s is at 0s, the video will be copied starting at 0s, and FFmpeg will use an edit list to tell the player to begin playing 3 seconds in.
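
For reference, a stream-copy cut along those lines looks something like the following sketch; input.mp4, the 3-second start, and the 8-second duration are placeholder values:

# Copy the streams without re-encoding; the edit list handles the non-keyframe start.
ffmpeg -i input.mp4 -ss 00:00:03 -t 00:00:08 -c copy output.mp4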

If you’re using the latest version of FFmpeg from git master, it will use an edit list when you run a stream-copy command like the one above.
If this does not work for you, you are likely using an older version of FFmpeg, or your player does not support edit lists.
Some players simply disregard the edit list and play the entire file from beginning to end.

If you want to cut specifically at a non-keyframe and have it play at the desired point on a player that does not support edit lists, or if you want to make sure the cut section is not in the output file (for example, if it includes sensitive information), you can re-encode so that a keyframe is present at the desired start time.
Re-encoding is the default behavior when you do not specify -c copy.
Consider the following scenario:

ffmpeg -i input.mp4 -ss 00:00:07 -t 00:00:18 -async 1 output.mp4
  • The -t option defines a length rather than an end time.
  • The command above will encode 18 seconds of video beginning at 7 seconds.
  • Use -t 8 to start at 7 seconds and end at 15 seconds.
  • If you’re using a recent version of FFmpeg, you can also use -to instead of -t in the above command to make it end at the required time, as in the sketch below.
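
A minimal sketch of the -to variant, ending at the absolute 15-second mark instead of after a duration (same placeholder file names as above):

# -to takes an end timestamp, so this encodes from 7 seconds to 15 seconds.
ffmpeg -i input.mp4 -ss 00:00:07 -to 00:00:15 -async 1 output.mp4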