Amazon Web Services
Sergey Sokolov, 2019-04-07 18:45:36

Why does ffmpeg on Amazon Lambda crash with a SIGSEGV error?

I'm trying to figure out AWS Lambda under Node.js and run ffmpeg there to create a short test movie.
The function is created and launches. Standard commands like ls or cat /proc/cpuinfo work fine.
I put a static i686 build of ffmpeg into the package from here. When it is launched via child_process.spawn(), it crashes with a SIGSEGV signal. I did not find much discussion of this on the net. The @ffmpeg-installer/ffmpeg package recommended in the tutorials for deploying ffmpeg to AWS Lambda uses the same binaries.
I took this tutorial on resizing images; it worked for me as expected and created a resized picture in the bucket. I changed the code slightly and slipped in the ffmpeg binary.
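
For reference, the way those tutorials typically wire it up looks roughly like this (a sketch of the usual @ffmpeg-installer/ffmpeg usage, not code from my function; the package simply exposes the path to the bundled static binary):

const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path; // path to the bundled static build
const { spawn } = require('child_process');
spawn(ffmpegPath, ['-version']); // the same kind of spawn() call that crashes for me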

My index.js
const join = require('path').join;
const tmpdir = require('os').tmpdir;
const process = require('process');

const tempDir = process.env['TEMP'] || tmpdir();
const filename = join(tempDir, 'test.mp4');

const fs = require('fs');

const spawn = require('child_process').spawn;
const exec = require('child_process').exec;

const async = require('async');
const AWS = require('aws-sdk');
const util = require('util');

process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

const s3 = new AWS.S3();
 

exports.handler = function(event, context, callback) {
  // Read options from the event.
  console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
  var srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters.
  var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));  
  var dstBucket = srcBucket + "resized";
  var dstKey  = "render-test.mp4";

  // Sanity check: validate that source and destination are different buckets.
  if (srcBucket == dstBucket) {
    callback("Source and destination buckets are the same.");
    return;
  }


  // Download the image from S3, transform, and upload to a different S3 bucket.
  async.waterfall([
    function transform(next) {
      var args = [
        '-filter_complex',
        '"testsrc=r=25:s=640x480:d=3"',
        '-an',
        '-y',
        '-hide_banner',
        '-c:v', 'libx264',
        filename,
      ];
      
      console.log("Will launch ffmpeg");
      const childProcess = spawn('ffmpeg', args);
      
      childProcess.on('close', function(e) {
        console.log('ffmpeg close event: ' + JSON.stringify(arguments));
        next();
      });

      console.log("After launched ffmpeg");
    },
    
    function upload(next) {
      
      next();

      var fileStream = fs.createReadStream(filename);
      fileStream.on('error', function (err) {
        if (err) { throw err; }
      });  
      fileStream.on('open', function () {

        s3.putObject(
          {
            Bucket: dstBucket,
            Key: dstKey,
            Body: fileStream,
            ContentType: 'video/mp4',
          },
          next
        );

      });
    }
  ], function (err) {
      if (err) {
        console.error(
          'Unable to resize ' + srcBucket + '/' + srcKey +
          ' and upload to ' + dstBucket + '/' + dstKey +
          ' due to an error: ' + err
        );
      } else {
        console.log(
          'Successfully rendered ' + dstBucket + '/' + dstKey
        );
      }

      callback(null, "message");
    }
  );
};

On the close event of the spawned process comes { "0": null, "1": "SIGSEGV" }. Does this mean that you need to somehow compile ffmpeg yourself under the AMI amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2?

Upd. I ran the same static build of ffmpeg without problems on an EC2 instance from that same AMI, so it is probably not a compilation issue. I tried, as was suggested somewhere, to copy ffmpeg into /tmp already on the Lambda itself and do chmod 400 on it once more, and I confirmed via ls -lA /tmp/ffmpeg that it was there with the correct permissions. Even a simple ffmpeg --help run through child_process.execSync() does not work and returns Error: Command failed: /tmp/ffmpeg --help
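
For clarity, this is roughly the standalone check I ran (a reconstruction, not the exact handler; note that to launch the binary at all the execute bit must be set, so the sketch uses 0o755 instead of the chmod 400 mentioned above):

const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');

exports.handler = async () => {
  // copy the bundled static binary from the deployment package into /tmp
  const src = path.join(process.env['LAMBDA_TASK_ROOT'], 'ffmpeg');
  const dst = '/tmp/ffmpeg';
  fs.copyFileSync(src, dst);
  fs.chmodSync(dst, 0o755); // make sure the execute bit is set

  // the binary is in place with the expected permissions...
  console.log(execSync('ls -lA /tmp/ffmpeg').toString());

  // ...yet even this trivial invocation fails ("Command failed", SIGSEGV)
  console.log(execSync('/tmp/ffmpeg --help').toString());
};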

1 answer(s)
Sergey Sokolov, 2019-04-08
@sergiks

In the end I won by compiling ffmpeg on an AWS EC2 t2.micro from the same image that Lambda runs on.
Although the static build from the JohnVanSickle site launches fine on such a machine, for some reason it stubbornly crashes on Lambda with an error. So, alas, the only option is to compile it yourself.
I used the markus-perl/ffmpeg-build-script script. It wasn't all smooth either: compilation crashed with an error in the aom/av1 codec, which stubbornly reported its version as 0.1.0 instead of the required 1.0.0. I created a ticket. This little thing was cured by dropping the aom codec: on line 376 of the build-ffmpeg script, --enable-libaom was replaced with --disable-libaom.
The build on a weak t2.micro took several long hours, but the final binary worked in AWS Lambda!
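
Roughly, the whole procedure looked like this (a sketch, not the exact commands; the prerequisites line and the output path are assumptions and may differ by script version):

# on an EC2 t2.micro started from the same Amazon Linux AMI that Lambda uses
sudo yum groupinstall -y "Development Tools"   # assumption: typical build prerequisites

git clone https://github.com/markus-perl/ffmpeg-build-script.git
cd ffmpeg-build-script

# avoid the aom/av1 version problem by disabling the codec
# (same effect as the manual edit of line 376 described above)
sed -i 's/--enable-libaom/--disable-libaom/' build-ffmpeg

./build-ffmpeg --build

# package the resulting static binary with the Lambda function
# (assumption: the script puts it under workspace/bin/)
ls -lh workspace/bin/ffmpeg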
