GStreamer is a library for constructing [...] graphs of media-handling components. The use cases it covers range from simple Ogg/Vorbis playback and audio/video streaming to complex audio (mixing) and video (non-linear editing) processing. I've found GStreamer to be remarkably flexible and useful for a variety of audio/video applications. One of the trickier things I had to figure out was how to play 8 channels of audio in one GStreamer pipeline.
These examples require the following packages:
- JACK Audio Connection Kit libraries
- gstreamer
- gst-plugins-base
- gst-plugins-good
- gst-plugins-bad
These modules should be installed in the above order. Personally, I use the CVS head of each, which you can get by doing:
$ cvs -d:pserver:firstname.lastname@example.org:/cvs/gstreamer co modulename
where modulename is gstreamer, gst-plugins-base, gst-plugins-good, and gst-plugins-bad respectively.
GStreamer includes a command-line utility, gst-launch, that lets you quickly build a GStreamer pipeline from a simple text description. For example:
$ gst-launch audiotestsrc ! jackaudiosink
will play a sine wave, provided you already have a JACK server running. I generally run a JACK server using the qjackctl application.
Users should be aware of this warning from the gst-launch man page:
gst-launch is primarily a debugging tool for developers and users. You should not build applications on top of it. For applications, use the gst_parse_launch() function of the GStreamer API as an easy way to construct pipelines from pipeline descriptions.
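Following the man page's advice, here is a minimal C sketch of building that same sine-wave pipeline with gst_parse_launch(). The three-second run time and the error handling are my own choices, not anything prescribed by GStreamer:

```c
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* Build the pipeline from the same text description gst-launch uses */
  pipeline = gst_parse_launch ("audiotestsrc ! jackaudiosink", &error);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    g_error_free (error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_usleep (3 * G_USEC_PER_SEC);  /* let the sine wave play for ~3 seconds */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```

Note that this still requires a running JACK server, just like the gst-launch version.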
It is possible to run a simple multichannel audio example with the following launch line:
gst-launch-0.10 -v interleave name=i ! audioconvert ! audioresample ! queue ! jackaudiosink \
audiotestsrc volume=0.125 freq=200 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=300 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=400 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=500 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=600 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=700 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=800 ! audioconvert ! queue ! i. \
audiotestsrc volume=0.125 freq=900 ! audioconvert ! queue ! i.
This pipeline consists of 8 audiotestsrc elements, which generate sine tones of increasing frequency. The audioconvert element converts each audiotestsrc's output to the sample format that the downstream interleave element expects. The queue element is a simple data buffer, to which the audioconvert element writes and from which the interleave element reads. The interleave element combines multiple channels of audio into one interleaved "frame" of audio. For example, if we had 2 independent channels of audio like so:
channel 1: 0 0 0 0
channel 2: 1 1 1 1
where channel 1 outputs only 0's, and channel 2 only 1's, the interleaved frame would look like:
0 1 0 1 0 1 0 1
The complete example program is multiChannel.c, which initializes interleave appropriately. It can be compiled with this Makefile. The relevant section in multiChannel.c is the function set_channel_layout(GstElement *interleave). This function is passed our interleave element and sets its channel-positions property to an array of valid spatial positions.
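As a rough sketch of what such a function might look like: in GStreamer 0.10, interleave's channel-positions property takes a GValueArray of GstAudioChannelPosition enum values, one per input channel. The particular 8-position layout below is an assumption for illustration, not necessarily the layout multiChannel.c uses; substitute whatever positions match your speaker setup:

```c
#include <gst/gst.h>
#include <gst/audio/multichannel.h>

/* Sketch: assign one valid spatial position to each of the 8 input
 * channels of the interleave element.  The specific layout chosen
 * here is an assumption -- adjust to taste. */
static void
set_channel_layout (GstElement *interleave)
{
  GValueArray *arr;
  GValue val = { 0, };
  gint i;
  GstAudioChannelPosition positions[8] = {
    GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT,
    GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT,
    GST_AUDIO_CHANNEL_POSITION_REAR_LEFT,
    GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT,
    GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT_OF_CENTER,
    GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT_OF_CENTER,
    GST_AUDIO_CHANNEL_POSITION_LFE,
    GST_AUDIO_CHANNEL_POSITION_FRONT_CENTER
  };

  arr = g_value_array_new (8);
  g_value_init (&val, GST_TYPE_AUDIO_CHANNEL_POSITION);
  for (i = 0; i < 8; i++) {
    g_value_set_enum (&val, positions[i]);
    g_value_array_append (arr, &val);  /* appends a copy of val */
  }
  g_value_unset (&val);

  g_object_set (interleave, "channel-positions", arr, NULL);
  g_value_array_free (arr);
}
```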
GStreamer uses GLib's object system (GObject) heavily, so the example program above may be a little tricky to follow for programmers used to plain C. Check out GStreamer's application development manual for further examples of GStreamer usage in C.