A 3D spoon

Introduction

I have pasted some code at the end of this post. If you put it in a file called macro.gmic, you will be able to rerun the commands in this post that call such a file.

Inspiration

I visited a museum recently. It was great, and at some point I saw a spoon, a spoon designed more to be beautiful than useful (see image below).

Spoon designed by Isaie Bloch


I thought that, aesthetic considerations aside, such a spoon was just a conventional spoon with some material subtracted from the handle following a random pattern. I thought I could try to do something like that: I just had to create a 3D shape and send it to a 3D printing website, and I would have my spoon.

3D vector images in G’mic

G’mic easily handles 3D images as long as they are 3D raster images (also called volumetric images), i.e. matrices full of voxels. But to print in 3D, you need to provide a 3D mesh, i.e. a 3D vector representation of your shape.

Limited 3D objects manipulation

G’mic’s capabilities with such 3D objects (as they are called in the documentation) are very limited. As some expert stated: “G’MIC is not really meant to *process* 3d objects”. You are nowhere close to what Blender can do. It has a marching cubes algorithm (command -isosurface3d), it can “elevate” a 2D surface (-elevation3d) and it can create simple shapes (-sphere3d, -box3d, …), but not much more. For example, it doesn’t have any boolean operator, so I can already forget about doing that material subtraction in G’mic.
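Creating one of those simple shapes and saving it as a mesh is at least a one-liner. A minimal sketch (the file name is mine):

gmic -sphere3d 10 -o sphere.off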

A rare format

Another difficulty is that G’mic handles only one 3D object format: the rather obscure OFF format. The standard format for 3D printing would rather be STL.
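At least OFF is dead simple: a header with the vertex and face counts, then the vertex coordinates, then the faces as lists of vertex indices. A complete OFF file describing a single triangle looks like this:

OFF
3 1 0
0 0 0
1 0 0
0 1 0
3 0 1 2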

After some research, I was unable to find any scriptable way to convert OFF to STL and back. The handiest way to do the conversion is meshlab, which isn’t scriptable and doesn’t offer any boolean operation. Neither Blender (for artists) nor Salome (for techies) can handle the OFF format. Both have a Python API, but my tests showed that writing an OFF reader/writer is painful, that learning Blender is painful, that calling the Python API from outside Blender is impossible, and that doing it from outside Salome is merely painful; Salome itself is also painful to learn and install, and it handles STL imperfectly.

The good news is that while Shapeways doesn’t accept OFF files, Sculpteo does. So, at the end of everything, I’ll save a few clicks by sending my file to them.

Strategic conclusion

So, if I want to use G’mic to create my random pattern, I either have to stay with G’mic until the end, or add numerous clicks to my process, many of them painful. And I do want to create my pattern in G’mic. And I am curious to see how far I can go in 3D printing with G’mic alone.

So, I abandon the pattern subtraction; my random shape will be made by addition. I don’t really need any boolean operator for my addition: I realized that I just have to create 2 overlapping shapes and put them both in my OFF file. Sculpteo handles that just fine, and it ends up as one single plastic piece at the end.
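Here is a minimal sketch of that idea, with two primitives instead of my textured shapes (the translation amount is arbitrary): the two spheres overlap and end up as a single object in a single OFF file, no boolean union needed:

gmic -sphere3d 10 -sphere3d 10 -+3d[1] 8,0,0 -+3d -o overlap.off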

Ok, ok, let’s go!

Starting from a conventional spoon

I found a conventional spoon (thank you Hobbyman), downloaded it, painfully removed the handle in Blender, and converted the result to an OFF file with meshlab. Then I was ready for thousands of tests.

Now I have a spoon bowl!


Random pattern

I wanted my random pattern to come from the -stencil command because I like it.

stencil command on a cow (photo CC-BY Emilio Labrador): gmic cow.jpg -to_gray -stencil 2,1,10

I needed it to be seamless: since the texture is meant to be rolled, both ends must match nicely. For that, I duplicate the random noise and put the two copies side by side, so that the -stencil command sees the same noise on each side; after cropping the middle, both ends match. I am so proud of that trick.

gmic 200,200 -rand 0,255 [0] -a x -stencil 4 -crop 25%,75%


Actually, as I wanted smooth edges, I added some -blur to the process.

Earning the third D

Now that I have the texture, I have to make it 3D. I first tried the -elevate command, which transforms a 2D image into a 3D volumetric image. The idea was to do everything with volumetric images and to switch to 3D objects only at the end with -isosurface3d. But whatever I did, the final result always looked too pixelated.

So, I used the -elevation3d command to get directly a 3D object.

gmic macro.gmic -seamless_stencil 100,200,2 -elevation3d 0.1


But this means that, even though G’mic isn’t meant for it, I now have to *process* a 3D object.

Rolling

For example, I have to roll the texture to get a round stick that will end up being the spoon handle. Even if this is not convenient, everything is explained in the wiki: the -split3d command transforms the 3D object into 6 column vectors which contain all the information that constitutes it. Here, the idea is simply to modify the vertex coordinates to go from flat to round. For that, you have to play with the third column vector, the one that contains the vertex coordinates. Some -sin and -cos do the trick. I made a generic custom command for that: -roll3d.
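Concretely, for a flat texture of width L rolled onto a cylinder of radius R (the parameter of -roll3d), each vertex (x,y,z) is mapped to:

X = (z + R) * cos(2*pi*x/L)
Y = y
Z = (z + R) * sin(2*pi*x/L)

This is exactly what the -sin and -cos in the code at the end of this post compute.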

gmic macro.gmic -seamless_stencil 100,200,2 -elevation3d 0.1 -roll3d 100 -*3d 1,5,1


Sewing and closing

My 3D mesh will only mean something to a 3D printer if it is a closed surface. This means that I have to close the bottom and the top of the stick, but I also have to sew it along its length.
By sewing, I mean modifying the triangles on one side so that they use the vertices of the triangles on the other side. This can be done by manipulating the fourth column vector.
To close the bottom and the top, I chose to add triangles at both ends. All those new triangles use a vertex created on the stick axis and already existing vertices from the triangles at the ends. This requires manipulating almost every column vector, and it was a nightmare to get it working. I made a not-so-generic custom command for all that, called -close_tube3d.
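In other words, each end gets closed with a triangle fan. Roughly, if C is the new vertex created on the axis and V0 … Vn-1 are the existing vertices on the rim of that end, the added triangles are:

(V0, V1, C), (V1, V2, C), …, (Vn-2, Vn-1, C)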

The tube, sewn and closed

Spoon completion

I only have a handle so far. I need to make a single OFF file with the handle I just made and the bowl I stole from Hobbyman. The good news is that this part is trivial: just merge the two 3D objects with the command -add3d (shortcut -+3d). Well, I also had to move the handle and the bowl around to get a good spoon at the end.
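As a sketch, with hypothetical file names (the real version, including the moving around, is the -handle_and_spoon command at the end of this post):

gmic handle.off bowl.off -+3d -o spoon.off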

Good spoon, bad spoon

Glorious conclusion

Now I have a spoon! And since I chose a white plastic, I can even eat with it. OK, it isn’t as beautiful as Isaie Bloch’s, but my children like it.

Actually, I printed two of them, one in polished plastic to see the difference. Well, there isn’t much difference: they both give a granular feeling in the mouth. My children have tested them: they are quite solid and not stiff at all; you can bend them and they go back to their shape without breaking. The handle ends are a bit sharp; you cannot cut anything with them, but it makes the spoon a bit uncomfortable in the mouth. I could have done something about it, but I was fed up with pain, I wanted my spoon IRL at once. I doubt I will redo any spoon in the future, but it gave me some ideas for other things.

gmic macro.gmic -handle_and_spoon


The code

handle_and_spoon:
  -stencil_tube3d 50,100,50 -*3d 0.04,0.13,0.04   # build the handle and scale it
  cut_spoon.off                                   # load the bowl
  -c3d -rotate3d[0] 0,0,1,90                      # center both objects, orient the handle
  -+3d[0] -47                                     # shift the handle along x
  -+3d -rotate3d 1,0,0,180                        # merge and flip the whole spoon


# gmic macro.gmic -stencil_tube3d 100,200,100
# $1 : number of segments around the circumference
# $2 : number of segments along the length
# $3 : radius
stencil_tube3d:
  -seamless_stencil $1,$2,2 
  -elevation3d 0.2 -roll3d $3 
  -*3d 1,5,1 
  -close_tube3d $1,$2

# gmic macro.gmic -seamless_stencil 10,200,2 -elevation3d 0.1 -roll3d 100 -*3d 1,5,1 -close_tube3d 10,200
close_tube3d:
  -color3d 255,255,255  # I want only one color
  -split3d 1            # requires version 1.5.8.1 or later
  ymin=@{2,(0,1,0,0)}
  ymax=@{2,(0,{@{2,h}-2},0,0)}
  #sew tube longitudinally
  -e LONGI
    -local[3]
      -s y
      -e {5*($1-1)-1}
      --[{5*($1-1)-2}--1:{5*($1-1)}] {$1-1}
      --[{5*($1-1)-1}--1:{5*($1-1)}] {$1-1}
      -a y
    -endl
  #close tube at lower end:
  -e LOWER
    (1;{$1-1}) -+[1,-1]                         # counts: one new vertex, $1-1 new triangles
    (0;$ymin;0) -a[2,-1] y                      # add a vertex to the vertex data
    #add triangles in primitive properties: 
    -i 1,{$1-1},1,1,3
    -i 1,{$1-1},1,1,y
    --shift[-1] 0,1,0,0,2
    -i 1,{$1-1},1,1,{@{1,(0,0,0,0)}-1}
    -a[-4--1] x -r[-1] 1,{w*h},1,1,-1
    -a[3,-1] y
    -i 1,{{$1-1}*3},1,1,255 -a[4,-1] y          #new triangles are white
    -i 1,{$1-1},1,1,1 -a[5,-1] y                #new triangles opacity = 1
  #close tube at upper end:
  -e UPPER
    (1;{$1-1}) -+[1,-1]                         # counts: one new vertex, $1-1 new triangles
    (0;$ymax;0) -a[2,-1] y                      # add a vertex to the vertex data
    #add triangles in primitive properties: 
    -i 1,{$1-1},1,1,3
    -i 1,{$1-1},1,1,y+{$1*($2-1)}
    --shift[-1] 0,1,0,0,2
    -i 1,{$1-1},1,1,{@{1,(0,0,0,0)}-1}
    -a[-4--1] x -r[-1] 1,{w*h},1,1,-1
    -a[3,-1] y
    -i 1,{{$1-1}*3},1,1,255 -a[4,-1] y          #new triangles are white
    -i 1,{$1-1},1,1,1 -a[5,-1] y                #new triangles opacity = 1
  #back to 3d object
  -a y

# gmic macro.gmic -seamless_stencil 200,200,2
seamless_stencil: -skip ${3=0}
  $1,$2 -rand 0,255 [0] -a x    # random pattern repeated twice
  -stencil 1                    # apply the stencil
  -blur $3                      # blur before cropping to keep the result seamless
  -crop 25%,75%                 # keep only the middle

# gmic macro.gmic 200,200 -rand 0,255 --mirror x -a x -stencil 1 -blur 2 -elevation3d 0.1 -roll3d 100 -*3d 1,5,1 
# $1 : radius
roll3d:
  -split3d -local[2]              # work on the vertex coordinates only
    -r 3,33.3333333333%,1,1,-1    # reshape the column vector as 3 columns: x,y,z
    -s x
    L={@{0,M}-@{0,m}}             # L = width of the flat sheet
    -+[-1] $1                     # z + R: radius of the future vertices
    --*[0] {2*pi/$L} [-1]         # angle around the cylinder: 2*pi*x/L
    -cos[-2] -sin[-1]
    -*[-2,-1] [-3]                # (z+R)*cos and (z+R)*sin
    -rm[0,2]
    -mv[1] 0
    -a x                          # new coordinates X,Y,Z
    -r 1,300%,1,1,-1              # back to a single column vector
  -endl
  -a y

Averaging face photos: mouth alignment

Intro

In a previous post, the eyes of face photos were aligned to make the photo averaging as neat as possible around the eyes. Someone suggested that I could also align the mouths by stretching those faces a bit, for a neater averaging around the mouth. He was right, and it happens in this additional post. But it happens without Masha Fishman, because I intend to use colors in this post and she is in black and white.

Identify the mouth

First, all those mouths are at about the same place (under the nose!). It is thus possible to examine them with a simple -crop. Starting from the directory aligned_rotate_zoom, where the eyes are already aligned:

gmic * -crop 180,517,455,700

28 mouths

Looking at those mouths, one can see that they have very different shapes, but they all have a distinctive color: they are a bit more red. I didn’t find anything useful by looking at their red channel in the RGB space. But in the HSV space, the hue channel shows a different color for the lips. To study the hue channel, it is better to “rotate” it by 180°, since the colors we want to observe are close to 0 and thus also close to 360.
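To see it for yourself, the rotated hue channel can be displayed with the same operations that the masks below rely on (a sketch, run from aligned_rotate_zoom):

gmic * -crop 180,517,455,700 -rgb2hsv -channels 0 -+ 180 -mod 360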

Mouth segmentation

Let’s try to segment those mouths: let’s make masks by thresholding the rotated hue channel. After some tests, 192 feels like a reasonable value:

mkdir mouth_mask
cd aligned_rotate_zoom
for i in *png
do
  gmic $i -crop 180,517,455,700 -rgb2hsv -channels 0 -+ 180 -mod 360 -blur 5 -threshold 192 -negative -o ../mouth_mask/$i
done

28 mouth masks

To see what those masks are worth, we use them to highlight the mouths:

cd aligned_rotate_zoom
gmic * -crop 180,517,455,700 -append_tiles 14 ../mouth_mask/* -append_tiles[1--1] 14 -+[-1] 1 -* -n 0,255

The highlighted zone is the mouth mask



Well, it is far from a perfect segmentation, but our final goal is not segmentation: it is localization, so that the mouths can be correctly aligned. And with those masks they are localized; we actually only miss one lip, Harriet Sykes’ upper one.

Stretching

The idea is to stretch the face according to the mouth altitude. To estimate this altitude, we use -barycenter:

cd mouth_mask
gmic * -barycenter -rows 1 -a x

The command above displays a not-so-interesting row matrix, but it also prints the information we want in the terminal: the mean value of this matrix, which is the average altitude over all the masks; for me, it is 74.

So now, we are ready to stretch. We’ll stretch only the area between the eyes and the mouth, which for me lies roughly between altitudes 376 and 500.

For every image in the directory aligned_rotate_zoom, the script below computes the difference between 74 and the altitude of the mouth mask. It then splits the image into 3 horizontal bands and stretches the middle one to bring the mouth to the right altitude. Finally, it reassembles the 3 bands and resizes the result so that every stretched image ends up with the same definition.

mkdir stretched
cd aligned_rotate_zoom
for i in *png
do
  gmic $i ../mouth_mask/$i -barycenter[-1] -rows[-1] 1 --[-1] 74 \
  --rows[0] 0,375 --rows[0] 376,500 -resize[-1] '{w},{h-@{1,(0,0,0,0)}}' --rows[0] 501,100% \
  -a[-3--1] y -resize[-1] 644,766,1,3,0 -o[-1] ../stretched/$i
done

28 stretched faces

As you can see, no face is badly deformed. And the final averaging gives a much neater mouth than the one obtained in the previous post.

Averaging face photos: eye alignment

Intro

I stumbled upon a post by Patrick David where he takes front-view close-up photos of celebrities’ faces by Martin Schoeller and averages them with ImageMagick.

I do the same here, but the image alignment is done with a script, and more image blendings are proposed. All that with G’mic.

Stealing the close-ups

Martin Schoeller shows his close-ups in a Flash applet, so it is not possible to just save them as usual. Many screenshots and a big crop later, all 29 of them are in a directory called faces:

  gmic screenshots*.png -crop 231,112,874,877 -o faces/faces.png

29 faces

Automatic alignment using -phase_correlation

Patrick David aligned the images by hand using Gimp; he “tried to get eyes on the same level, and the same distance from the centers”. Let’s try to do it with a script.

The dumb way

“To estimate the relative translative offset between two similar images”, phase correlation is a good trick. G’mic has a command called -phase_correlation. Given two images, this command returns a single-pixel image with 3 channels, each one being the estimated offset in the X, Y or Z direction.
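A quick sanity check, as a sketch (image.jpg stands for any test image): shift a copy of an image by a known amount, then apply the estimated offset the same way the loop below does; the first image should then line up with its shifted copy:

gmic image.jpg --shift[0] 30,10 --phase_correlation[0,1] -shift[0] '@{-1,(0,0,0,0)},@{-1,(0,0,0,1)}'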

Now let’s try it on the faces: compare every image with the first one (Jack Nicholson) and shift it according to what -phase_correlation finds:

mkdir dumb_phase_correlation
cd faces
cp faces_000000.png ../dumb_phase_correlation
for i in $(ls|tail -n +2)
do
  gmic $i faces_000000.png --phase_correlation[-1,-2] -shift[0] '@{-1,(0,0,0,0)},@{-1,(0,0,0,1)}' -o[0] ../dumb_phase_correlation/$i
done

Obviously, this doesn’t work:

badly aligned

Aim at the eye

This failure just means that -phase_correlation should be used wisely.

There’s no way -phase_correlation can deal with different skin/hair colors, so we’ll use -gradient_norm to work on contours instead. And since we want to align eyes, we should ask -phase_correlation to focus on the eyes only, let’s say the right one; we’ll do that with a wise -crop. It gives:

mkdir aligned_right_eye
cd faces
cp faces_000000.png ../aligned_right_eye
for i in $(ls|tail -n +2)
do
  gmic $i faces_000000.png --gradient_norm -n[-1,-2] 0,1 -crop[-1,-2] 96,258,322,444 --phase_correlation[-1,-2] -shift[0] '@{-1,(0,0,0,0)},@{-1,(0,0,0,1)}' -o[0] ../aligned_right_eye/$i
done

It looks much better:

All 29 images aligned on the right eye.

And if we try to average them:

  gmic * -+ -normalize 0,255 -o aligned_right_eye.jpg,90

it looks ok … for the right eye:

the 29 photos averaged after being aligned on the right eye.


Align, rotate, zoom

The right eye was the first step; let’s try to align on both eyes. It won’t be easy, because some of them don’t have both eyes at the same altitude, and they all have a different eye pitch.

Fortunately, they all have 2 eyes. If we assume that the phase correlation focused on the right eye roughly gives the position of the right eye, we can do the same on the left eye and then shift, rotate and zoom to get a perfect alignment on both eyes.

Since this is a much more complex issue, we’ll use a macro file called macro.gmic containing a custom command called align_rotate_zoom. The custom command first aligns the image according to both eyes. Then it applies a rotation whose angle is computed from the eyes’ vertical shift, centered at mid-distance between Jack Nicholson’s eyes. The zoom factor given to -rotate is computed by comparing the image’s eye pitch with Jack Nicholson’s. In the end, the custom command is:

align_rotate_zoom:
  --gradient_norm -n[-1,-2] 0,1
  --crop[-1,-2] 322,258,548,444 -phase_correlation[-1,-2]  #left eye
  -crop[-2,-3] 96,258,322,444 -phase_correlation[-2,-3]  #right eye
#align
  vertical_shift={(@{-1,(0,0,0,1)}+@{-2,(0,0,0,1)})/2}  #average of the two dy
  horizontal_shift={(@{-1,(0,0,0,0)}+@{-2,(0,0,0,0)})/2}  #average of the two dx
  -shift[0] $horizontal_shift,$vertical_shift
#rotate & zoom
  eye_pitch=220 centerx=319 centery=337 
  angle={atan((@{-1,(0,0,0,1)}-@{-2,(0,0,0,1)})/$eye_pitch)/pi*180}
  zoom_factor={$eye_pitch/($eye_pitch-@{-1,(0,0,0,0)}+@{-2,(0,0,0,0)})}
  -rotate[0] $angle,0,2,$centerx,$centery,$zoom_factor
#clean a bit
  -cut[0] 0,255 -rm[-1,-2]

And the bash script to run becomes:

mkdir aligned_rotate_zoom
cd faces
cp faces_000000.png ../aligned_rotate_zoom
for i in $(ls|tail -n +2)
do
  gmic $i faces_000000.png ../macro.gmic -align_rotate_zoom -o[0] ../aligned_rotate_zoom/$i
done

It looks fine, even if imperfect; for example, Hillary Clinton is too big. But when the average is made over many photos, a few wrong ones are barely visible. And the averaging could also be done without Hillary.

All 29 images aligned, rotated and zoomed to fit both eyes.

And the final averaging gives:

the 29 photos averaged after being aligned on both eyes thanks to shifting, rotating and zooming.


Nice gaze, isn’t it?

Other blending

Averaging is only one way of blending many images together, there are zillions of other ways. Here below is one using -compose_edges, which gives priority to edges:

gmic * -compose_edges 0.8 -o blend_edges.jpg,90


I made some others based on image multiplication, averaging the inverse of the image values, minimum, maximum, and minimum or maximum on luminance only; a few of them are sketched below.
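For reference, sketches of some of those blendings, following the pattern of the averaging command above (-mul, -min and -max with no argument fold all the selected images into one; the normalization before -mul avoids overflowing the product, and the output file names are mine):

gmic * -n 0,1 -mul -n 0,255 -o blend_multiply.jpg,90
gmic * -min -o blend_min.jpg,90
gmic * -max -o blend_max.jpg,90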

Post Scriptum

In the next post, mouth alignment is added for even better averaging.

Finding grain orientation in the welding heat-affected zone with the Hough transform

The problem

The microstructure of an austenitic weld originates in the dendritic growth along the axis of the heat-flow direction. In some cases, this leads to elongated, oriented grains which can grow by an epitaxial process over several millimeters. Metallographic observation of a transversal section of an austenitic weld clearly reveals the columnar grain structure (see figure below). It shows a heterogeneous structure due to the weld geometry: on each side of the weld, the grains are perpendicular to the chamfer, while in the middle of the weld they are slightly tilted with respect to the vertical.

Several applications, for example Finite Element simulations, require determining the local orientation inside the weld.

Image from Bertrand Chassignole, CC-by.

The Hough transform

The Hough transform is a wonderful trick. With a voting system, it tells you where there are things that look like straight lines. It tells it in a graph with the orientation on the abscissa and the distance from the “origin” on the ordinate. All this is much clearer once you have tested an image with the G’mic command x_hough:

gmic image.jpg -x_hough

Image from Jean-Philippe Mathieu, CC-by.

The Hough transform the way I like it

The idea here is to cut the HAZ image into small square samples and to estimate the main orientation of each sample with a Hough transform. Since the distance from the origin is of no interest, the Hough transform is asked to be only one pixel high. The abscissa of the maximum value then gives the main orientation.

If the principle is applied to the ladybug picture seen above:

gmic ladybug.jpg -hough 360,1 -display_graph

Two peaks can be seen, one at about 115° and one at about 295° (180° more, which means the same orientation). They both correspond to the orientation of the grass.

The custom command

Starting from the 1D Hough transform, some processing is applied to get only one peak. The position of the maximum gives the main orientation, and it is even possible to build a confidence criterion based on the maximum value.

The custom command proposed here takes 2 parameters, one for the sample size and one for the accuracy. Setting a low accuracy is a way to cope with the high-frequency oscillation seen on the curve; there are probably better ways to handle that.
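One of those possibly better ways, sketched on the ladybug picture: smooth the 1D Hough transform before looking for its maximum:

gmic ladybug.jpg -hough 360,1 -blur 2 -display_graph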

In the end, the G’mic command below gives a quite acceptable result in less than one second:

gmic macro.gmic haz.png --haz_orientation 40,10 -compose_rgba[0,1] -keep[0]

Edit: from version 1.5.4.0 on, image blending has been rethought, so the command line above becomes:

gmic macro.gmic haz.png --haz_orientation 40,10 -blend[0,1] alpha -keep[0]

The future

Of course, it can be much improved. As already said, the custom command could be refined to be more accurate, for example by applying some gaussian blur to the 1D Hough transform. “Some people” will want an orientation estimate at each pixel; this should be easy to do. It would also be great to base the estimation only on information inside the HAZ, which will require some thinking and probably some semi-manual HAZ contouring. It will also quickly become important to handle pictures with small defects, because HAZ photographs cannot always be that clean. For that, I plan to inpaint zones whose values are too different from their neighborhood, but I face an awkward issue: how do I make G’mic understand that 0° and 179° are not distant numbers?

Time as the third dimension

Intro

Once upon a time, I stumbled upon the work Urban flow by Adam Magyar. He lets people pass in front of a camera that records one-pixel-wide images, and pastes them all together to create a weird panoramic effect.

From Adam Magyar

I told myself: “You could easily do that kind of thing with any video and G’mic: just keep a one-pixel-wide portion of each frame and paste them together. By the way, you have already done it.”

One pixel wide portion

Provided that you have enough memory in your computer, keeping “a one pixel wide portion of each image” to build another one is pretty easy. For example, to keep portion number 123:

gmic *.png -columns 123 -append x

One can conveniently select the portion to keep by using the G’mic 3D viewer. Build your 3D parallelepiped by appending everything in the Z direction:

gmic *.png -append z

The 3D viewer shows 3 plane cuts of the 3D image: XY plane at upper left, YZ at upper right and XZ at lower left.

Here below is what I have done in the garden.

And here is what I have done with Kylie Minogue’s video.

Click to see the whole image.

Going further

Once I had played with images in the YZ plane, I began to make other plane cuts. For example, to cut a stack of images at 45 degrees:
gmic *.png -a z -permute xzyc -rotate 45 -permute xzyc -slices 50%

Below is a video showing a plane cut rotating around the Z axis:

Here, the original video is rebuilt with a plane cut whose orientation varies randomly. As a result, at each instant some parts of the image belong to the future and some to the past.

The above video has been made with that ugly G’mic macro.

I have done other things with the same idea, but they are even worse.

How I play with videos

Intro

I sometimes play with videos. For that, I use G’mic.

Export images

G’mic is able to load videos directly, but my computer seldom has enough memory. So, before playing with the frames of a video, I create a directory called original and export every frame of that video using MPlayer:
mkdir original
cd original
mplayer ../video.avi -nosound -vo png

It gives me tons of files. I could export as JPEG if I wanted those files to take less space, but I usually prefer the lossless PNG format.

Macro

Then I make a macro file that I always call macro.gmic. It contains the custom commands that I need for my image processing. I make a very simple shell script which loops over every file, one at a time, and outputs the result in another directory. The script looks like this:
#!/bin/bash
mkdir -p processed
cd original
for i in *.png
do
  gmic macro.gmic $i -my_command -o ../processed/$i
done

Play video without video

Once enough files have been processed, I use MPlayer to play those files as if they were a video, to get a quick feeling of what it will look like in the end:
mplayer mf://*.png
I love that MPlayer feature.

One step, one directory

If my processing can be divided into different steps, I don’t hesitate to fill intermediate folders with half-processed images. This way, I can tune each processing stage separately, which saves me time. Of course, it always ends with a dozen folders, a dozen macros that have been modified so many times that I don’t remember which version of which one relates to which folder, and many thousands of files. Well, it is my hobby, contained inside my computer; the mess is allowed.

Make a real video

In the end, to make a video from all those files, there are dozens of solutions. But experience has led me to first make an MJPEG file with MEncoder:
mencoder mf://*.png -o my_video.avi -ovc lavc -lavcopts vcodec=mjpeg
If I want to add some sound:
mencoder mf://*.png -o my_video.avi -ovc lavc -lavcopts vcodec=mjpeg -audiofile music.mp3 -oac copy
I finally reopen that my_video.avi in Avidemux to re-encode it in H264.

Sound

If I have to play with sound, I use the command-line tool SoX, or Audacity if I want a graphical user interface.

Parallelization

The shell allows parallelization using the insanely complex command xargs. Parallelization is great, it can put all the cores and your hard drive at 100% use, but it changes a bit the way things should be done: you need a script file whose arguments are passed by xargs.
For example, your script file script.sh will contain a single line:
gmic macro.gmic $@ -my_command -o ../processed/$@
and your script will be called by the command:
ls *png|xargs -I{} -P 4 ./script.sh {}
You can replace the 4 by any number; don’t hesitate to exceed the number of cores you have.

I say ‘you’ because personally, I don’t really use that trick: it brings my computer to its knees and prevents me from properly using it for something else in parallel.