I made the above video by combining a static .png of myself with a Rayleigh-Taylor instability simulation performed in Athena++. In principle, any photo can be combined with any desired fluid simulation using the methods described here.

This process begins with the observation that the colors in a photo can be decomposed into red, green, and blue intensities. At the same time, Athena++ can simulate fluid dyes using passive scalars. By assigning three species of passive scalars corresponding to red, green, and blue dyes, with relative concentrations set by the input photo, any photo can be painted onto a simulated fluid; for example, a pure red pixel with RGB values (255, 0, 0) becomes initial concentrations of 1.0, 0.0, and 0.0 for the three dyes. The simulated fluid then does whatever you tell it to (in this case, undergo Rayleigh-Taylor instability), and the dyes mix as they would in reality.

1. Convert .png to RGB

I did this with a simple Python script that takes my 500x500-pixel photo and writes the RGB pixel values to three .txt files (one each for the red, green, and blue channels).

import numpy as np
from PIL import Image

# load the source photo and convert it to RGBA so getpixel() returns 4-tuples
file = '/Users/apbailey/Avery_Bailey_photo.jpeg'
im = Image.open(file)
rgb_im = im.convert('RGBA')

# one 500x500 array per color channel
r_array = np.zeros((500,500), dtype=np.uint8)
g_array = np.zeros((500,500), dtype=np.uint8)
b_array = np.zeros((500,500), dtype=np.uint8)

# getpixel takes (x, y) = (column, row), so store each value at [row, column]
for i in range(500):
    for j in range(500):
        r, g, b, a = rgb_im.getpixel((i, j))
        r_array[j,i] = r
        g_array[j,i] = g
        b_array[j,i] = b

# write each channel as a 500x500 grid of integer intensities (0-255)
np.savetxt('rdata.txt', r_array, fmt="%d")
np.savetxt('gdata.txt', g_array, fmt="%d")
np.savetxt('bdata.txt', b_array, fmt="%d")

2. Initialize passive scalars according to RGB colors

Initializing the passive scalars in Athena++ requires adding a code block that reads the .txt files and sets the scalar concentrations accordingly. This block goes in the MeshBlock::ProblemGenerator(ParameterInput *pin) function inside /src/pgen/rt.cpp (the Rayleigh-Taylor problem generator); note that std::ifstream requires #include <fstream>, which may need to be added at the top of that file. Here is what the code looks like for the red channel; it is easily extended to green and blue (see the sketch after the initialization loop below):

// read the red channel intensity data
std::string rfile;
rfile = pin->GetOrAddString("problem", "rdata", "none");
std::ifstream file;
file.open(rfile);
// 500x500 grid of intensities, stored [row][column] to match the layout written by np.savetxt
int rdata[500][500];
for (int i=0; i != 500; ++i) {
    for (int j=0; j != 500; ++j) {
        file >> rdata[i][j];
    }
}
file.close();

// initialize the scalar concentrations based on the intensities
for (int k=ks; k<=ke; k++) {
  for (int j=js; j<=je; j++) {
    for (int i=is; i<=ie; i++) {
      // map cell indices (offset by the ghost zones) to pixel indices, reversing
      // both axes; this assumes the full 500x500 domain is a single MeshBlock
      int iindex = 500-(i-NGHOST)-1;
      int jindex = 500-(j-NGHOST)-1;
      // scalar species 0 is the red dye; concentration = intensity/255
      pscalars->s(0,k,j,i) = (rdata[jindex][iindex]/255.0);
    }
  }
}
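
Extending this to green and blue is just a matter of repeating the read block for gdata.txt and bdata.txt and setting the remaining two scalar species inside the same triple loop. A minimal sketch, assuming the green and blue intensities were read into gdata and bdata arrays exactly as rdata was:

// green and blue dyes go into scalar species 1 and 2
pscalars->s(1,k,j,i) = (gdata[jindex][iindex]/255.0);
pscalars->s(2,k,j,i) = (bdata[jindex][iindex]/255.0);

These species indices line up with the r0, r1, and r2 fields that the frame-conversion script in step 4 reads from the .vtk output.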

The path to each .txt file is read from the Athena++ input file (athinput.rt2d) at runtime by adding a line such as rdata = rdata.txt to the <problem> block.
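
For reference, the additions to the <problem> block of athinput.rt2d might look like this (the gdata and bdata parameter names are my assumed counterparts of rdata for the green and blue channels):

<problem>
# ... existing Rayleigh-Taylor problem parameters ...
rdata = rdata.txt
gdata = gdata.txt
bdata = bdata.txt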

3. Simulate time evolution of passive scalars

With the above modifications, the simulation is configured with python configure.py --prob=rt --nscalars=3 and compiled with make. The compiled binary can then be placed in a directory with the relevant input file and the three RGB .txt files and run with ./athena -i athinput.rt2d.
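
Putting it together, the build-and-run sequence looks roughly like the following (a sketch, assuming a standard Athena++ checkout where make places the executable in bin/):

python configure.py --prob=rt --nscalars=3    # RT problem generator with 3 passive scalars
make                                          # compile; the binary ends up at bin/athena
# copy bin/athena, athinput.rt2d, and the three .txt files into a working directory, then
./athena -i athinput.rt2d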

4. For each frame, construct a .png based on the passive scalars in the simulation

With the simulation finished and 2000 frames written as Athena++ output files in .vtk format, the passive scalar concentrations can be read and converted back into .png images with a Python script like the following (athena_read is the reader module that ships with Athena++ under vis/python):

import numpy as np
import athena_read  # ships with Athena++ in vis/python
from PIL import Image

for i in range(2000):
    # read one .vtk output frame; data is a dict of cell-centered variables
    x, y, z, data = athena_read.vtk('rt.block0.out2.'+str(i).zfill(5)+'.vtk')
    # dye densities for the red, green, and blue scalars, reversed along both
    # axes to undo the index flip applied at initialization
    s0 = (data['rho']*data['r0'])[0,::-1,::-1]
    s1 = (data['rho']*data['r1'])[0,::-1,::-1]
    s2 = (data['rho']*data['r2'])[0,::-1,::-1]
    # clip anything above 1.0 so it maps to full intensity
    s0[s0 > 1.0] = 1.0
    s1[s1 > 1.0] = 1.0
    s2[s2 > 1.0] = 1.0
    # rescale to 8-bit intensities and stack into an RGB image
    rdata = (255.0*s0).astype(np.uint8)
    gdata = (255.0*s1).astype(np.uint8)
    bdata = (255.0*s2).astype(np.uint8)
    rgbArray = np.dstack((rdata,gdata,bdata))
    img = Image.fromarray(rgbArray)
    img.save('image.'+str(i)+'.png')

5. Combine .pngs into a video

Finally, the output .png files can be combined into a video with ffmpeg using a command like:

ffmpeg -f image2 -r 30 -i image.%d.png -vcodec libx264 -crf 18 -pix_fmt yuv420p profile-video.mp4

Note: I left out of this writeup some trouble I had with ICC profiles. The ICC profiles embedded in image formats can cause subtle color differences between the starting .png and the final video; in my case, the final video made me look pallid and desaturated by comparison. I was never able to match the colors perfectly on my machine, but with some hacks I was able to improve the result.