Tristan's werke
Code && sound && pictures && music.

Ctrl+Z (2015-05-28)

<div dir="ltr" style="text-align: left;" trbidi="on">
So after a lengthy hiatus, I should probably send this blog to the farm upstate. I'll leave it up in case any of the information posted is helpful, although it may be woefully out-of-date by now.<br />
<br />
I'm still working on multimedia free software, but mostly <a href="https://www.videolan.org/">VLC Media Player</a> and the <a href="https://xiph.org/daala">Daala</a> video codec these days.<br />
<br />
Oh and we <a href="https://girlarm.bandcamp.com/">made a record</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<img alt="" border="0" height="320" src="https://f1.bcbits.com/img/a1363061880_10.jpg" title="GIRL ARM - Trading Cities EP" width="320" /></div>
<br />
Thanks for reading!</div>

Looping playback with GStreamer (2010-10-27)

Recently on the GStreamer mailing list, someone asked how to loop playback using the playbin element. Since this seemed pretty straightforward, I thought I'd post it here. To clarify, playbin is a higher-level element that greatly simplifies typical playback scenarios. The code posted here is derived from <a href="http://www.gstreamer.net/data/doc/gstreamer/head/manual/html/chapter-components.html#section-components-playbin">this playbin example</a>. The important addition is the bus callback, which simply listens for an end of stream (EOS) message and, upon receiving it, seeks to the beginning of the stream. This restarts playback.<br />
<!-- weird hack to avoid angle bracket nonsense in include directives --><br />
<pre class="brush: c">#include <gst/gst.h>
gboolean bus_callback(GstBus *bus, GstMessage *msg, gpointer data)
{
    GstElement *play = GST_ELEMENT(data);

    switch (GST_MESSAGE_TYPE(msg))
    {
        case GST_MESSAGE_EOS:
            /* restart playback if at end */
            if (!gst_element_seek(play,
                        1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
                        GST_SEEK_TYPE_SET, 0,
                        GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE)) {
                g_print("Seek failed!\n");
            }
            break;
        default:
            break;
    }
    return TRUE;
}

gint
main (gint argc,
      gchar *argv[])
{
    GMainLoop *loop;
    GstElement *play;
    GstBus *bus;

    /* init GStreamer */
    gst_init (&argc, &argv);
    loop = g_main_loop_new (NULL, FALSE);

    /* make sure we have a URI */
    if (argc != 2) {
        g_print ("Usage: %s <URI>\n", argv[0]);
        return -1;
    }

    /* set up */
    play = gst_element_factory_make ("playbin", "play");
    g_object_set (G_OBJECT (play), "uri", argv[1], NULL);

    bus = gst_pipeline_get_bus (GST_PIPELINE (play));
    gst_bus_add_watch (bus, bus_callback, play);
    gst_object_unref (bus);

    gst_element_set_state (play, GST_STATE_PLAYING);

    /* now run */
    g_main_loop_run (loop);

    /* also clean up */
    gst_element_set_state (play, GST_STATE_NULL);
    gst_object_unref (GST_OBJECT (play));

    return 0;
}
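</pre>
To build and run it, assuming a GStreamer 0.10-era install with pkg-config available, something like the following should do (the URI can point at any local media file):<br />
<code>gcc loop.c -o loop `pkg-config --cflags --libs gstreamer-0.10` && ./loop file:///path/to/video.ogg</code><br />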

New Hey Predator! EP (apologies to the technical readers) (2010-10-07)

Since I do have the word <i>music</i> in the subtitle of this blog, I thought I should share some of ours for a change. My band <a href="http://heypredator.com">Hey Predator!</a> just finished a 5-song EP. <br />
Enjoy: <br />
<a href="http://heypredator.bandcamp.com/album/foxholes-and-atheists-and-so-forth">http://heypredator.bandcamp.com/album/foxholes-and-atheists-and-so-forth</a>Tristan Matthewshttp://www.blogger.com/profile/00636142362637536578noreply@blogger.com0tag:blogger.com,1999:blog-2187247257118921242.post-30701747502166454782010-05-21T15:47:00.000-07:002010-05-21T16:07:15.404-07:00Building the GStreamer VP8 pluginsBy now it's old news that Google has launched the <a href="http://www.webmproject.org/">WebM project</a>. Despite <a href="http://x264dev.multimedia.cx/?p=377">its shortcomings</a>, this is very exciting as at the very least, we are that much closer to a high quality open and free video format for the web...well, <a href="http://www.h-online.com/open/news/item/Report-MPEG-LA-planning-patent-pool-for-VP8-1005252.html">maybe</a>. <br />
In any case, I just wanted to quickly document the steps required to use the new VP8 plugins in GStreamer (thanks also to David Schleef for his feedback). VP8's build system leaves a bit to be desired, so bear with me:<br />
<br />
-Get the latest tarball of vp8 <a href="http://code.google.com/p/webm/downloads/list">here</a>.<br />
-Configure with your target architecture, in my case:<br />
<code>./configure --target=x86-linux-gcc</code><br />
-Build with the <code>make</code> command.<br />
-Do an install (not as root):<br />
<code>make install</code><br />
The resulting install directory will be called something like vpx-vp8-nodocs-x86-linux-v0.9.0<br />
It will contain the following:<br />
<code>bin build include lib md5sums.txt src</code><br />
-Go into this new directory <br />
<code>cd vpx-vp8-nodocs-x86-linux-v0.9.0</code> <br />
-Copy include to /usr/local/include/vpx (notice the renaming):<br />
<code>sudo cp -rf include /usr/local/include/vpx</code><br />
-Copy the library to /usr/local/lib<br />
<code>sudo cp lib/libvpx.a /usr/local/lib</code><br />
-Build gst-plugins-bad (git version or <a href="http://blogs.gnome.org/uraeus/2010/05/19/webm-and-gstreamer/">patch the latest releases</a>).<br />
<br />
Note that I use an uninstalled GStreamer source tree from git, which I highly recommend. Refer to <a href="http://svn.sat.qc.ca/trac/scenic/wiki/GstUninstalled">http://svn.sat.qc.ca/trac/scenic/wiki/GstUninstalled</a> for a brief overview of how to get that going.<br />
Make sure that after running autogen.sh or configure, the vp8 plugins are listed among the plugins that will be built. If all has gone well, you should be able to get a plugin description from <code>gst-inspect vp8</code>.<br />
<br />
Now to test with your favourite GStreamer pipeline. Run something like:<br />
<code>gst-launch -v v4l2src ! video/x-raw-yuv, width=320, height=240 \<br />
! ffmpegcolorspace ! vp8enc max-latency=1 ! vp8dec \<br />
! xvimagesink sync=false</code><br />
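<br />
Once that works, you can swap the tail of the pipeline to write an actual WebM file instead of displaying the video. Something along these lines should do it, assuming the webmmux element was built alongside the VP8 plugins (num-buffers just limits the capture to a few seconds):<br />
<code>gst-launch -v v4l2src num-buffers=300 ! video/x-raw-yuv, width=320, height=240 \<br />
! ffmpegcolorspace ! vp8enc ! webmmux ! filesink location=test.webm</code><br />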
<br />
(hopefully) Success! My thanks go out to the GStreamer developers for yet another great contribution.

Making your own one-step build in Ubuntu (2009-05-20)

It's been a while, so I thought I could at least share something quick that <a href="http://identi.ca/koya">Koya Charles</a> and I came up with yesterday. Koya's done a lot of work to make sure that <a href="https://github.com/tmatth/scenic">Scenic</a>'s autoconf/build setup stays sane. The one important feature we were missing was a "one button build". Typically, I'd do something like:<br /><code><br />cd PATH_TO_TRUNK<br />./autogen.sh && ./configure && make -j4 && sudo make install<br /></code><br />Note the -jn flag, which tells make to run up to n jobs in parallel, useful if you have a multicore machine (see <a href="http://www.codinghorror.com/blog/archives/000655.html">Jeff Atwood's</a> argument on the quad-core vs. dual-core debate, particularly the 'Comments' section for relevant discussion).<br /><br />So Koya and I made a shell script to do this:<br /><code><br />cd "`dirname $BASH_SOURCE`/../../trunk"<br />notify-send -t 2000 "Building scenic..."<br />make -j4 && gksu make install && notify-send -t 10000 "Done building scenic"<br /><br /></code><br />First, the variable <span style="font-weight: bold;">$BASH_SOURCE</span> evaluates to the location of the build script in question. We use this to get the path to our source tree (as both are in our repository). The script then calls notify-send (requires libnotify-bin) to show a popup telling us that the build is being attempted. Next it compiles and, since this is intended to be used via a hotkey rather than from a terminal, calls gksu instead of plain sudo for the make install. Provided the make and make install were successful, notify-send posts one more popup saying the build is done.
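One small refinement, an untested sketch, is to also report failures, so that a build that died doesn't leave you waiting for a popup that never comes:<br /><code><br />cd "`dirname $BASH_SOURCE`/../../trunk" || exit 1<br />notify-send -t 2000 "Building scenic..."<br />if make -j4 && gksu make install; then<br />    notify-send -t 10000 "Done building scenic"<br />else<br />    notify-send -t 10000 "Build of scenic FAILED"<br />fi<br /></code><br />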
Whichever version you use, make the script executable with<br /><code><br />chmod u+x your_build_script.sh<br /></code><br />The script can be called from the command line, but to be even more useful I made a hotkey for it.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRX_RXbzD4Wms94M9Gul5KQvb8HbdA46evZmArkqmx1E66UDczvyGZPftycM_hGvEbAAKfok0qh4Lu_3n2j2c9VXXAnl6Y1V5hsFTwP6w-f7ZPMaG7niXPu7bOROMkEHwRJY4NUeUtg9xP/s1600-h/Screenshot-Configuration+Editor+-+global_keybindings.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 320px; height: 261px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRX_RXbzD4Wms94M9Gul5KQvb8HbdA46evZmArkqmx1E66UDczvyGZPftycM_hGvEbAAKfok0qh4Lu_3n2j2c9VXXAnl6Y1V5hsFTwP6w-f7ZPMaG7niXPu7bOROMkEHwRJY4NUeUtg9xP/s320/Screenshot-Configuration+Editor+-+global_keybindings.png" alt="" id="BLOGGER_PHOTO_ID_5338014222877985378" border="0" /></a><br />Running gconf-editor, you can edit the global keybindings for your workspace with the following steps:<br /><ol><li>run gconf-editor from a terminal (or just Alt-F2, then type in <span style="font-weight: bold;">gconf-editor</span> in the "Run Application" window that appears).<br /></li><li>click on the entry for <span style="font-weight: bold;">apps</span>, under <span style="font-weight: bold;">/</span></li><li>click on the entry for <span style="font-weight: bold;">metacity</span>, under <span style="font-weight: bold;">apps</span></li><li>click on <span style="font-weight: bold;">global_keybindings</span></li><li>in the adjacent window, right-click on <span style="font-weight: bold;">run_command_1</span> (assuming it's disabled, otherwise use the first run_command_x listed as disabled) and click <span style="font-weight: bold;">Edit Key</span>.<br /></li><li>I use F5 (as in the F5 key) for my binding, but it can be whatever you choose. Just type 'F' and '5' in the <span style="font-weight: bold;">Value:</span> field, NOT the F5 key itself, to get this binding.</li><li>Next, click on the entry <span style="font-weight: bold;">keybinding_commands</span>, right below <span style="font-weight: bold;">global_keybindings</span>, and right-click on command_1 (or whichever you chose) and click <span style="font-weight: bold;">Edit Key</span>.</li><li>In the field labelled <span style="font-weight: bold;">Value:</span>, enter the path to your build script, for example: /home/tristan/devel/scenic/inhouse/misc/build_scenic.sh</li></ol>Now I can build my project no matter what window is in focus, without opening a terminal, just by hitting the F5 key.

Snapshots from a live video source in gstreamer (2009-02-09)

<b>EDIT</b>: A helpful reader pointed out that using videorate in the pipeline should do the trick, something like:<br />
<pre class="brush: c">gst-launch -v v4l2src ! tee name=t ! queue ! xvimagesink t.
! queue ! videorate ! video/x-raw-yuv, width=640, height=480, framerate=1/1 ! jpegenc !
filesink location=test.jpeg</pre>I'm fairly sure I tried this before with no success, but it works fine now. There have been some major bugfixes to videorate since I wrote this. In any case, I'll leave the rest of the article up in the hope that it is still useful.<br />
<hr>As part of the <a href="https://svn.sat.qc.ca/trac/miville">propulse[ART]</a> project, we wanted to add a "preview" feature to the existing audio/video engine. This involves writing a frame of video every second to a jpeg file, while displaying the video at a full 30 fps in a window. I was surprised to discover that this feature was not already <a href="http://bugzilla.gnome.org/show_bug.cgi?id=464630">implemented in gstreamer</a> for live sources (to the best of my knowledge).<br />
This should probably be implemented by a real gstreamer element, but for our purposes a relatively straightforward hack prototyped in <a href="https://svn.sat.qc.ca/trac/miville/browser/inhouse/prototypes/gstreamer/cpp/streamshots/streamshot.cpp?rev=1988">streamshot.cpp</a> was sufficient. The video pipeline consists of a video4linux source, a capsfilter to enforce some properties on the video, an ffmpegcolorspace element, and an xvimagesink. We attach a callback to the source pad of the video4linux source that is called every time our source has a new buffer of data (i.e. a frame). Since timing is not a big concern, the cb_have_data callback knows to only write a file every second by checking a boolean value that is periodically set to true by a separate callback. The cb_have_data function makes a copy of the buffer, swaps its red bytes with its blue bytes (swapping the red_mask and blue_mask in the caps did not work for some reason) and writes the modified buffer to a jpeg, line-by-line. The jpeg writing code was based on <a href="http://andrewewhite.net/lifetype/blog.php/the-white-blog/programming/2008/09/02/very-simple-jpeg-writer-in-c-c">Andrew White's Xlib screenshot example</a>. Thanks also to <a href="http://koya.notreplace.org/">Koya Charles</a> for help debugging this example, as well as the byte swapping.<br />
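In outline, the probe setup looks something like the following sketch, using the GStreamer 0.10 buffer-probe API (simplified here; see streamshot.cpp for the actual code):<br />
<pre class="brush: c">#include <gst/gst.h>

static gboolean snapshot_due = FALSE;   /* set to TRUE once per second by a separate timeout callback */

/* called for every buffer (i.e. frame) that leaves the source pad */
static gboolean
cb_have_data (GstPad *pad, GstBuffer *buffer, gpointer user_data)
{
    if (snapshot_due) {
        snapshot_due = FALSE;
        /* copy the buffer, swap red/blue bytes and write the jpeg here */
    }
    return TRUE;    /* TRUE lets the buffer continue downstream */
}

static void
attach_snapshot_probe (GstElement *v4lsrc)
{
    GstPad *pad = gst_element_get_static_pad (v4lsrc, "src");
    gst_pad_add_buffer_probe (pad, G_CALLBACK (cb_have_data), NULL);
    gst_object_unref (pad);
}</pre>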
The final version of the preview feature will probably run entirely in a separate thread and not involve file-writing; rather, the frames will be streamed directly, as they will be part of our web interface.<br />
<br />
<br />
<span style="font-weight: bold;">Update</span>: To make up for this entry's lack of flashiness, enjoy this <a href="http://code.google.com/p/codeswarm/">code_swarm</a> video of the propulse[ART] software's (codename <span style="font-style: italic;">miville</span>) development so far (code_swarm instructions courtesy of <a href="http://amix.dk/blog/viewEntry/19350">amix.dk</a>):<br />
<br />
<object width="400" height="300"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=3156303&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=3156303&server=vimeo.com&show_title=1&show_byline=1&show_portrait=0&color=&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object>Tristan Matthewshttp://www.blogger.com/profile/00636142362637536578noreply@blogger.com5tag:blogger.com,1999:blog-2187247257118921242.post-29835382891079410422008-12-24T10:40:00.000-08:002014-09-15T09:07:11.521-07:00Computer Vision on OS X with Python and OpenCV<div dir="ltr" style="text-align: left;" trbidi="on">
After my MacBook's sudden demise and spontaneous, inexplicable regeneration, I've decided to try to port some things I've worked on in GNU/Linux to OS X. Also, having a built-in microphone and camera on a portable computer is pretty amazing when you work primarily with audio and video and need to test things.<br />
<br />
I was really happy with how my previous forays in <a href="http://pygame.org/">pygame</a> had been going, so over the holidays I thought I'd try to make a version of my <a href="http://tristanswork.blogspot.com/2008/12/live-animation-with-pygame-and.html">previous camLoops prototype</a>, but for OS X.<br />
Also, at <a href="http://alexandre.quessy.net/">Alexandre Quessy</a>'s invitation, I've added <a href="https://www.gitorious.org/blog-examples/blog-examples/source/30997a929de0be711fc14a4a180ad1bd1ee83fad:cam_loop/cam_loop.py">cam_loop.py</a> to the <a href="http://code.google.com/p/toonloop/">ToonLoop</a> project. More on that in the coming weeks, so stay "tooned" (if not put off by that terrible pun).<br />
<br />
*<span style="font-weight: bold;">UPDATE</span> The initial opencv version of camLoops <a href="http://code.google.com/p/toonloop/source/browse/trunk/py/camLoopOpenCV.py">is now available</a>. I've only tested it on a MacBook running OS X so far, but it should also work in GNU/Linux.<br />
<br />
I did some research online and there did not seem to be much in the way of camera frame-grabbing modules for OS X with Python bindings (granted, it is a pretty niche area). The most complete options are Apple's <a href="http://developer.apple.com/quicktime/qtkit.html">QTKit</a>, a Cocoa framework for QuickTime (which has Python bindings), and <a href="http://opencv.willowgarage.com/wiki/Welcome">OpenCV</a>, or Open Source Computer Vision, a cross-platform, BSD-licensed library written in C with Python bindings. Since I have no interest in wasting time using a Mac-only, proprietary framework like QTKit, the choice was pretty obvious. The installation, however, was not.<br />
<br />
Given my heavy use of fink packages (when I should really just give in and roll GNU/Linux on my laptop), I am used to having to fight with cross-platform libraries/frameworks. I was even pleased to find an OS X-specific <a href="http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port">build instruction page</a> on the OpenCV wiki. It's quite likely that my approach is not the best solution for building, but it's what worked for me.<br />
<br />
I decided, for better or worse, to get the current <a href="http://www.python.org/download/releases/2.6.1/">production version of Python</a>, which is 2.6.1 at the time of this writing. I grabbed the disk image and ran the installer (and breathed a sigh of relief). For pygame to work, I had to get <a href="http://pyobjc.sourceforge.net/">PyObjC</a>, which is used to build Cocoa apps for OS X in Python. This is not required for OpenCV, so skip ahead if you're not using pygame. This also required an svn <a href="http://svn.red-bean.com/pyobjc/branches/pyobjc-1.4-branch">checkout of the 1.4 branch</a> of pyobjc, which works on Tiger, as no slick disk images with installers are available from the website at present. To grab the branch, do:<br />
<br />
<code>svn co http://svn.red-bean.com/pyobjc/branches/pyobjc-1.4-branch pyobjc</code><br />
<br />
Fortunately, the PyObjC people know their target audience; once in the pyobjc directory, all it took to make a nice installer was:<br />
<code><br />python setup.py bdist_mpkg --open</code><br />
<br />
*<span style="font-weight: bold;">UPDATE</span> Previously OpenCV was using CVS for version control, they have since migrated to Subversion. Check out a clean version with:<br />
<br />
<code>svn co https://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary opencvlibrary</code><span style="font-family: Georgia,serif;"><br />In the INSTALL file, it is suggested that you do <code>autoreconf -i --force</code><br />This failed for me (even though my autotools are up to date), so I used the pre-existing configuration files and left autoconf alone.</span><span style="font-family: Georgia,serif;"><br /><br />In the <code>opencv</code> directory, create a <code>build</code> directory and enter it with <code>mkdir build; cd build</code></span><br />
<br />
From the <code>build</code> directory, run configure with a few flags set (replace <code>/sw</code> with <code>/opt/local</code> if you are using darwinports instead of fink):<br />
<br />
<code>../configure CPPFLAGS="-I/sw/include" LDFLAGS="-L/sw/lib" --with-python</code><br />
<br />
Now compile and install with <code>make && sudo make install</code>. You will be prompted for your password.<br />
<br />
Edit your ~/.profile to include the following lines, which will be different if you set the <code>--prefix</code> option on your configure script to something other than the default /usr/local:<br />
<br />
<code>LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}<br />export LD_LIBRARY_PATH<br />PYTHONPATH=/usr/local/lib/python2.6/site-packages:${PYTHONPATH}<br />export PYTHONPATH</code><br />
To test that everything worked, run your Python interpreter and try <code>import opencv</code>. Provided that there are no errors such as 'No module named opencv', you can start trying the Python examples in the samples directory.<br />
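For a slightly more thorough smoke test, the SWIG-era bindings expose the highgui capture API; something like the following sketch (untested, and the module layout may differ between OpenCV revisions) should grab a single frame from the built-in camera:<br />
<pre class="brush: python">from opencv import highgui

# open the default camera (index 0) and pull one frame
capture = highgui.cvCreateCameraCapture(0)
frame = highgui.cvQueryFrame(capture)
print frame.width, frame.height</pre>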
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLQ2NTBK_wz_wG3S165OrkZMvrYrg9uRuTDBsR9dDwrstzmG2NXOTrkJ1WY3qm7eQK8XKslDKVuhocZoT4oW-38milrzKSBNAAiSh6tl-uUJiUD03PVP38dDdgCWPGzgmsesgddbkfKKA2/s1600-h/facegrab.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLQ2NTBK_wz_wG3S165OrkZMvrYrg9uRuTDBsR9dDwrstzmG2NXOTrkJ1WY3qm7eQK8XKslDKVuhocZoT4oW-38milrzKSBNAAiSh6tl-uUJiUD03PVP38dDdgCWPGzgmsesgddbkfKKA2/s320/facegrab.png" id="BLOGGER_PHOTO_ID_5283455276743118210" style="cursor: pointer; display: block; height: 249px; margin: 0px auto 10px; text-align: center; width: 320px;" /></a><br />
I ran <code>python facedetect.py 0</code>, where 0 is the index of the camera you want the program to use. The program then draws a red square around what it thinks is your face. Extra points for being beard-proof, and hats off to <a href="http://opencv.willowgarage.com/wiki/MarkAsbach">Mark Asbach</a> for his work on OS X support for OpenCV.</div>

Live animation with pygame and video4linux (2008-12-15)

<div dir="ltr" style="text-align: left;" trbidi="on">
<i>EDIT: I've uploaded the code (and merged it into one file) on <a href="https://www.gitorious.org/blog-examples/blog-examples/blobs/master/cam_loop/cam_loop.py">gitorious</a>.</i><br />
<br />
Last Friday, the SAT saw the premiere of a new work by <a href="http://alexandre.quessy.net/">Alexandre Quessy</a> and <a href="http://www.isabellecaron.com/">Isabelle Caron</a> called <a href="http://alexandre.quessy.net/?q=motifsurbains">Motifs Urbains</a>. The work was part of the experimentation phase of the <a href="http://propulseart.sat.qc.ca/">propulse[art]</a> project. The demo involved Alexandre generating live, multichannel audio and looping it, while at the same time in a separate room Isabelle did live stop-motion animation and looped it using Alexandre's <a href="http://www.processing.org/">Processing</a>-based software, <a href="http://alexandre.quessy.net/?q=toonloop">ToonLoop</a>. Both rooms featured simultaneous playback of the audio and animation.<br />
Inspired by the demo and my recent tinkering with <a href="http://www.pygame.org/">pygame</a>, I decided to try to write a prototype for a simple stop-motion animation looper like ToonLoop. The result is <a href="https://www.gitorious.org/blog-examples/blog-examples/blobs/master/cam_loop/cam_loop.py">cam_loop.py</a>. Thanks to <a href="http://eclecti.cc/">Nirav Patel</a>'s 2008 Google Summer of Code project, pygame (subversion revision 1744 or later) now supports video4linux camera input. The camLoop program shows the live camera playback in the left-hand region of the window; the user can grab frames live, and the accumulated frames are played back in the right-hand region of the window:<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.blogger.com/video.g?token=AD6v5dyD0Ju9O6aYuIoDVpzcMmPnetdk3sGpJ5dmm3e6EOg2ULbhdXwCcgtOwulLX3sro0X0ygeIJSoZi_7ji7BU5g' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
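The camera access itself only takes a few lines with the new pygame.camera module; a minimal sketch (the device path is an assumption, video4linux devices usually start at /dev/video0) looks something like:<br />
<pre class="brush: python">import pygame
import pygame.camera

pygame.init()
pygame.camera.init()

# open the first video4linux device at a 320x240 capture size
cam = pygame.camera.Camera("/dev/video0", (320, 240))
cam.start()
surface = cam.get_image()   # one frame, as a pygame.Surface
cam.stop()</pre>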
If you run the program during the winter months, you'll be subjected to a nauseating holiday theme overlaid on the incoming video. This example was simply to demonstrate how easily arbitrary objects can be drawn on live video. From <a href="http://eclecti.cc/olpc/v4l2-camera-module-now-in-pygame-svn">Nirav's blog</a>, it would appear that CoreVideo support is also forthcoming.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIoo3-d38-yHBEdCiq4tuMxgBIazRA02bYTJDXUi4faUTmNRjicrwNxxGY2WEdXBqnAJbY5s7B4aPzEeNNp66Ki5POjIlhJAEZTq6R7htTJMFqj_Xnt_YKZE_nTvUCoQK7kpC1UbojN2gq/s1600-h/Screenshot-Snow+Cam.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5280125231281656802" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIoo3-d38-yHBEdCiq4tuMxgBIazRA02bYTJDXUi4faUTmNRjicrwNxxGY2WEdXBqnAJbY5s7B4aPzEeNNp66Ki5POjIlhJAEZTq6R7htTJMFqj_Xnt_YKZE_nTvUCoQK7kpC1UbojN2gq/s320/Screenshot-Snow+Cam.png" style="cursor: pointer; display: block; height: 250px; margin: 0px auto 10px; text-align: center; width: 320px;" /></a></div>

OpenGL and Gtk+ with GtkGlExt (2008-10-27)

<div dir="ltr" style="text-align: left;" trbidi="on">
<i><b>UPDATE</b>: source files now on <a href="https://gitorious.org/blog-examples/blog-examples/trees/master/gtk_gl_ext">gitorious</a></i> <br />
<br />
Lately I've been really impressed with the performance and simplicity of <a href="http://www.gtk.org/">Gtk+</a>. After using it to get <a href="http://tristanswork.blogspot.com/2008/09/fullscreen-video-in-gstreamer-with-gtk.html">fullscreen video</a> with <a href="http://www.gstreamer.net/">Gstreamer</a>, I thought I would check out using Gtk+ in place of <a href="http://www.opengl.org/resources/libraries/glut/">GLUT</a> to write <a href="http://www.opengl.org/">OpenGL</a> programs.<br />
<br />
The program <a href="https://gitorious.org/blog-examples/blog-examples/blobs/master/gtk_gl_ext/teapot.c">teapot.c</a> is a heavily commented example that creates a window and draws a mesh teapot. The accompanying <a href="https://gitorious.org/blog-examples/blog-examples/blobs/master/gtk_gl_ext/Makefile">Makefile</a> will compile it under GNU/Linux provided the following packages (in <a href="http://www.ubuntu.com/">Ubuntu</a>) are installed:<br />
<span class="s">gtk+-2.0 gtkglext-1.0 gtkglext-x11-1.0<br /><br /><a href="http://www.k-3d.org/gtkglext/Main_Page">GtkGlExt</a> is an API extension to the Gtk+ API that allows developers to use OpenGL calls on standard Gtk+ widgets or on new custom widgets. There is also a C++ API, </span>gtkglextmm,<span class="s"> for Gtk+'s C++ counterpart, <a href="http://www.gtkmm.org/">gtkmm</a>.<br /><br />The program is broken down into initialization functions to set up the window and OpenGL context, as well as callbacks that are used to do the heavy-lifting. The callbacks are registered in the initialization functions, which means they are attached to specific events (or signals) and are triggered when these events are fired off asynchronously.<br /><br />The expose callback (<span style="font-weight: bold;">expose_cb</span>) is where we do our OpenGL drawing. It will be called in response to <span style="font-weight: bold;">"expose-event" </span>signals, fired whenever the window needs to be redrawn, i.e. if it is resized, exposed (where regions become visible that previously were not), or moved.<br /><br />The idle callback (<span style="font-weight: bold;">idle_cb</span>) is the only callback that is not attached to an event. It is registered using <span style="font-weight: bold;">g_timeout_add</span>, meaning that it will be called at a fixed interval. Its job is to flag the drawable area of our <span style="font-weight: bold;">GtkWindow</span> as needing to be redrawn. It is also where we would update any control or data parameters that the expose callback needs to take into account. For example, say we were stretching our teapot, we could have the scaling factor incremented in our idle callback. It's important to note, however, that <span style="font-weight: bold;">g_timeout_add</span> does not guarantee that the timeout interval will be respected, as explained in the <a href="http://library.gnome.org/devel/glib/unstable/glib-The-Main-Event-Loop.html">GLib documentation</a>:</span><br />
<blockquote>
Note that timeout functions may be delayed, due to the processing of other event sources. Thus they should not be relied on for precise timing. After each call to the timeout function, the time of the next timeout is recalculated based on the current time and the given interval (it does not try to 'catch up' time lost in delays).</blockquote>
The constraints of <a href="http://en.wikipedia.org/wiki/Real_time_computing">real-time computing</a> are beyond the scope of this entry, but the animation subsection in <a href="http://fly.srk.fer.hr/%7Eunreal/theredbook/chapter01.html">Chapter 1</a> of the <a href="http://fly.srk.fer.hr/%7Eunreal/theredbook/">OpenGL Red Book</a> offers a good overview.<br />
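Concretely, the timeout-driven redraw amounts to something like this sketch (the names are assumed to match the callbacks in teapot.c):<br />
<pre class="brush: c">/* registered with g_timeout_add (interval_ms, idle_cb, drawing_area) */
static gboolean
idle_cb (gpointer data)
{
    GtkWidget *drawing_area = GTK_WIDGET (data);

    /* update animation state here, e.g. increment a scaling factor */

    /* flag the widget as dirty; gtk will emit an "expose-event",
     * which in turn triggers expose_cb and our OpenGL drawing */
    gtk_widget_queue_draw (drawing_area);

    return TRUE;    /* returning FALSE would remove the timeout */
}</pre>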
<br />
But lo and behold, the wonder of the <a href="http://en.wikipedia.org/wiki/Utah_teapot">Utah Teapot</a>:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpFEtjpT5_xc_GjV1tN0YuzlImV7ROeImgmhfrhZQktsUjWxE1KpxVggfA3Vr0RI9GYaD3yt0hbC1hzeIQHzgD29tl4qwwf9qN0k3IztuUQoVuVzj5BjzpD9MvJZfQvcPdRcLl8s1DGch0/s1600-h/Screenshot-teapot.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5261939030816415442" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpFEtjpT5_xc_GjV1tN0YuzlImV7ROeImgmhfrhZQktsUjWxE1KpxVggfA3Vr0RI9GYaD3yt0hbC1hzeIQHzgD29tl4qwwf9qN0k3IztuUQoVuVzj5BjzpD9MvJZfQvcPdRcLl8s1DGch0/s320/Screenshot-teapot.png" style="cursor: pointer; display: block; height: 320px; margin: 0px auto 10px; text-align: center; width: 306px;" /></a></div>Tristan Matthewshttp://www.blogger.com/profile/00636142362637536578noreply@blogger.com13tag:blogger.com,1999:blog-2187247257118921242.post-370225118630489422008-09-26T08:06:00.000-07:002012-01-11T20:53:03.550-08:00Fullscreen video in gstreamer with gtk+<p><i><b>UPDATE:</b> </i><i>Source files are now up on <a href="https://gitorious.org/blog-examples/blog-examples/trees/master/fullscreen_video_with_gst_gtk">gitorious</a>.</i></p>
In a similar vein to my <a href="http://tristanswork.blogspot.com/2008/08/multichannel-audio-with-gstreamer.html">previous entry</a>, I'd like to explain how to get full screen video with <a href="http://www.gstreamer.net/">gstreamer</a> and <a href="http://www.gtk.org/">gtk</a>. Here again I'm using the latest CVS head of gstreamer, as well as the gtk+ and gdk development files.<br /><br />I was very impressed with gtk's <a href="http://library.gnome.org/devel/gtk/stable/">well documented API</a> and <a href="http://library.gnome.org/devel/gtk-tutorial/stable/">in-depth tutorial</a>. Also, since both gstreamer and gtk depend on <a href="http://library.gnome.org/devel/gobject/stable/">GObject</a>, combining functionality from both frameworks is quite seamless.<br /><br />The source file <a href="https://gitorious.org/blog-examples/blog-examples/blobs/master/fullscreen_video_with_gst_gtk/fullscreen.c">fullscreen.c</a> and accompanying <a href="https://gitorious.org/blog-examples/blog-examples/blobs/master/fullscreen_video_with_gst_gtk/Makefile">Makefile</a> are used for this example.<br /><br />To test that your build environment is properly set up, I would recommend making a simple C file with the following:<br /><code><br />#include <gst/gst.h><br />#include <gtk/gtk.h><br />#include <gst/interfaces/xoverlay.h><br />#include <gdk/gdk.h><br />#include <gdk/gdkx.h><br /><br />gint main(gint argc, gchar *argv[])<br />{<br />gst_init(&argc, &argv);<br />gtk_init(&argc, &argv);<br />return 0;<br />}<br /></code><br /><br />and try compiling. Make sure that you have all the necessary development files installed and that your environment can find them. The <code>gst_init</code> and <code>gtk_init</code> calls must each be called once by any program using gstreamer and gtk. Note that if for some reason they are called more than once, the extra calls have no effect.<br /><br />To be able to process key-events and to keep the pipeline rolling, we need to use glib's <a href="http://library.gnome.org/devel/glib/unstable/glib-The-Main-Event-Loop.html">mainloop</a>. It may be possible to achieve the same results with some other event loop mechanism; this is just the one most often used in gtk and gstreamer applications.<br /><br /><code>loop = g_main_loop_new (NULL, FALSE);</code><br /><br />The simple pipeline here consists of a <a href="http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-videotestsrc.html">videotestsrc</a> going to an <a href="http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-xvimagesink.html">xvimagesink</a>. We set the <code>"force-aspect-ratio"</code> property of the xvimagesink to <code>TRUE</code> so that when the size is changed, the image's proportions are not distorted.<br /><br />We build our gtk window:<br /><code> window = gtk_window_new(GTK_WINDOW_TOPLEVEL);<br />g_signal_connect(G_OBJECT(window), "expose-event", G_CALLBACK(expose_cb), sink);<br /></code><br />and attach the "expose-event" signal to the expose callback function. This function will be called whenever our window needs to be redrawn, for instance when it is brought to the foreground. The expose callback overlays xvimagesink's video on our gtk window.<br />
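<br />That overlay step essentially boils down to one call; a sketch of the callback (with names assumed to match fullscreen.c) might look like:<br /><code><br />/* hand our gtk window's X window id to the video sink, so that<br />xvimagesink renders into our window instead of creating its own */<br />static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event, gpointer data)<br />{<br />gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data), GDK_WINDOW_XWINDOW(widget->window));<br />return FALSE; /* let gtk continue handling the event */<br />}<br /></code><br />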
<br />A common feature in video players is to assign a hot key to switch from windowed to full screen viewing. This is possible using:<br /><code><br />gtk_widget_set_events(window, GDK_KEY_PRESS_MASK);<br />g_signal_connect(G_OBJECT(window), "key-press-event", G_CALLBACK(key_press_event_cb), sink);<br /></code><br />which connects our key_press_event_cb function to the "key-press-event" signal emitted by the gtk window. The call to gtk_widget_set_events makes sure that the key-press event is propagated all the way up to our top-level window. This is important for determining which level of a window hierarchy is intended to handle such an event.<br /><br />Lastly, we need to make the window black initially, as otherwise the background will sometimes be white when switching from fullscreen to windowed mode.<br />We call <a href="http://library.gnome.org/devel/gtk/unstable/GtkWidget.html">gtk_widget_show_all()</a> on our window to "recursively show a widget, and any child widgets (if the widget is a container)". The pipeline is then set to playing, and the mainloop is run.<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPUoaiwgTQOZsSFVsP_DFUl9z-S4sum0xqFlm9cxFQSSkp2VyX4zIOp_iOhsAjl1Z__WuO_LuHIyByANagTTCliO85xWG4hHduFgrjGrXxq2Il_P5KNWdiE3itOhcWBB6Lf9jJu_HzwJXS/s1600-h/testImage.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPUoaiwgTQOZsSFVsP_DFUl9z-S4sum0xqFlm9cxFQSSkp2VyX4zIOp_iOhsAjl1Z__WuO_LuHIyByANagTTCliO85xWG4hHduFgrjGrXxq2Il_P5KNWdiE3itOhcWBB6Lf9jJu_HzwJXS/s320/testImage.jpg" alt="gstreamer test video" title="gstreamer test video" id="BLOGGER_PHOTO_ID_5250396412509640754" border="0" /></a>

Multichannel audio with gstreamer (2008-08-21)

Over the past few months, I have spent a lot of time working with the <a href="http://www.gstreamer.net/">gstreamer</a> multimedia framework. From their site:<br /><blockquote>GStreamer is a library for constructing [...] graphs of media-handling components. The use cases it covers range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.<br /></blockquote>I've found gstreamer to be remarkably flexible and useful for a variety of audio-video applications. One of the trickier things I had to figure out was how to have 8 channels of audio playing in one gstreamer pipeline.<br /><br />These examples require the following packages:<br /><ol><li><a href="http://jackaudio.org/">JACK Audio Connection Kit libraries</a></li><li><a href="http://gstreamer.freedesktop.org/releases/gst-plugins-base/0.10.20.html">gst-plugins-base-0.10.20</a><br /></li><li><a href="http://gstreamer.freedesktop.org/releases/gst-plugins-good/0.10.9.html">gst-plugins-good-0.10.9</a><br /></li><li><a href="http://gstreamer.freedesktop.org/releases/gst-plugins-bad/0.10.8.html">gst-plugins-bad-0.10.8</a><br /></li></ol><br />These modules should be installed in the above order.
Personally, I use the CVS head for all of the above, which you can get by doing:<br /><span style="font-family:arial;">$ cvs -d:pserver:anoncvs@anoncvs.freedesktop.org:/cvs/gstreamer co </span><span style="font-style: italic;font-family:arial;" >modulename</span><br />where <span style="font-style: italic;">modulename</span> is gstreamer, gst-plugins-base, gst-plugins-good, and gst-plugins-bad respectively.<br /><br />Gstreamer includes a command-line utility, gst-launch, that allows a user to quickly build a gstreamer pipeline with a simple text description. For example:<br /><p><code><br />$ gst-launch audiotestsrc ! jackaudiosink<br /></code><br />will play a sine wave, provided you are already rolling a jack server. I generally run a jack server using the <a href="http://qjackctl.sourceforge.net/">qjackctl</a> application.<br /></p>Users should be aware of this warning from the gst-launch man page:<blockquote><p> gst-launch is primarily a debugging tool for developers and users. You should not build applications on top of it. For applications, use the gst_parse_launch() function of the GStreamer API as an easy way to construct pipelines from pipeline descriptions.</p></blockquote><p>It is possible to run a simple multichannel audio example with the following launch line:<br /><code><br />gst-launch-0.10 -v interleave name=i ! audioconvert ! audioresample ! queue ! jackaudiosink \<br />audiotestsrc volume=0.125 freq=200 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=300 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=400 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=500 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=600 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=700 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=800 ! audioconvert ! queue ! i. \<br />audiotestsrc volume=0.125 freq=900 ! audioconvert ! queue ! i.<br /></code><br />This pipeline consists of 8 audiotestsrc elements, which generate sine tones of increasing frequency. Each audioconvert element converts its audiotestsrc's output to the numeric data type expected downstream, here by the interleave element. The <a href="http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-threads.html">queue element</a> is a simple data buffer, to which the audioconvert element writes, and from which the interleave element reads. The interleave element combines multiple channels of audio into one interleaved "frame" of audio.
For example, if we had 2 independent channels of audio like so:<br /><br />Channel1: <span style="color: rgb(255, 102, 102); font-weight: bold;">00000</span>...<br />Channel2: <span style="font-weight: bold; color: rgb(51, 102, 255);">11111</span>...<br /><br />where channel 1 outputs only 0's, and channel 2 only 1's, the interleaved frame would look like:<br /><br /><span style="color: rgb(255, 102, 102); font-weight: bold;">0</span><span style="color: rgb(51, 102, 255); font-weight: bold;">1</span><span style="color: rgb(255, 102, 102); font-weight: bold;">0</span><span style="color: rgb(51, 102, 255); font-weight: bold;">1</span><span style="color: rgb(255, 102, 102); font-weight: bold;">0</span><span style="font-weight: bold; color: rgb(51, 102, 255);">1</span><span style="color: rgb(255, 102, 102); font-weight: bold;">0</span><span style="font-weight: bold; color: rgb(51, 102, 255);">1</span><span style="color: rgb(255, 102, 102); font-weight: bold;">0</span><span style="font-weight: bold; color: rgb(51, 102, 255);">1</span>...<br /></p>The interleaved audio again needs to go through an audioconvert and an audioresample element in case the audio from our pipeline differs in datatype or sample rate from the jack server. Finally, the audio is output by the jackaudiosink element, which writes audio from our pipeline into corresponding jack ports.<br /><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYaERKZ44jQ9bHn8nbZMNuhHUx7_CwJPDSdmV4tbUTjU6wB8QLrSz28Jw6gm_G-PHTxk7wpYV0QgaRK2a4AsXT_0aLtA80WIiulTU7sn5rCjgNlaBV5DcHiN8GfPwGpZ-Qf__rSTNNBaxG/s1600-h/jackaudioConnections.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYaERKZ44jQ9bHn8nbZMNuhHUx7_CwJPDSdmV4tbUTjU6wB8QLrSz28Jw6gm_G-PHTxk7wpYV0QgaRK2a4AsXT_0aLtA80WIiulTU7sn5rCjgNlaBV5DcHiN8GfPwGpZ-Qf__rSTNNBaxG/s400/jackaudioConnections.jpg" alt="" id="BLOGGER_PHOTO_ID_5237060956255077122" title="Connections between gstreamer's output and soundcard in jack" border="0" /></a></p>Many plugins require that the interleave element explicitly specify each channel's spatial position. Unfortunately, this cannot be done with gst-launch. I've created an example C program, <a href="https://svn.sat.qc.ca/trac/miville/browser/inhouse/prototypes/gstreamer/cpp/multichannel/multiChannel.c?rev=1259">multiChannel.c</a>, which initializes interleave appropriately. It can be compiled with <a href="https://svn.sat.qc.ca/trac/miville/browser/inhouse/prototypes/gstreamer/cpp/multichannel/Makefile?rev=1259">this Makefile</a>. The relevant section in multiChannel.c is the function set_channel_layout(GstElement *interleave). This function is passed our interleave element, and sets its <span style="font-style: italic;">channel-positions</span> property to an array of valid spatial positions.<br /><br />Gstreamer uses Glib's object system (<a href="http://en.wikipedia.org/wiki/GObject">GObject</a>) heavily, and as a result the above example program might be a little tricky to follow for programmers used to straight C.
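The core of it, though, is short; it looks something like this condensed sketch (two channels shown for brevity, and details may differ from the actual multiChannel.c, which sets all eight positions):<br /><pre class="brush: c">#include <gst/gst.h>
#include <gst/audio/multichannel.h>

static void
set_channel_layout (GstElement *interleave)
{
    /* the "channel-positions" property takes a GValueArray of
     * GstAudioChannelPosition enums, one per input channel */
    GValueArray *positions = g_value_array_new (2);
    GValue val = { 0, };

    g_value_init (&val, GST_TYPE_AUDIO_CHANNEL_POSITION);

    g_value_set_enum (&val, GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT);
    g_value_array_append (positions, &val);

    g_value_set_enum (&val, GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT);
    g_value_array_append (positions, &val);

    g_value_unset (&val);
    g_object_set (interleave, "channel-positions", positions, NULL);
    g_value_array_free (positions);
}</pre>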
Check out gstreamer's <a href="http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html">application development manual</a> for further examples of gstreamer usage in C.

Metronome project (2008-02-25)

The ViMic software I worked on will be used in a piece on March 4th. I will post some notes about the current state of the project and how it was used after this concert.<br /><br />Besides ViMic, I have been working on a customizable polymetric metronome built using the <a href="http://ccrma.stanford.edu/software/stk/">STK</a> for a few months now, as a personal project. After looking at several GUI toolkits, I have decided to develop the interface for this program using <a href="http://www.wxwidgets.org/">wxWidgets</a>. Since I have only used <a href="http://trolltech.com/products/qt">Qt</a> before (and only a little at that), I was unsure for quite a while about which toolkit would work for my project. I was mainly looking for something that was easy to develop cross-platform, in "straightforward" C/C++ (i.e. it does not feel like I'm learning another language). I was persuaded by a friend to try wxWidgets, and was more or less sold by Jeff Cogswell's <a href="http://www.informit.com/articles/article.aspx?p=606222">wxWidgets tutorial</a>. The free <a href="http://www.wxwidgets.org/docs/book/">wxWidgets ebook</a> didn't hurt either.<br /><br />I won't say too much more about the metronome at this point; I want to keep the feature list short for the time being. A few important points:<br /><ul><li>It will be licensed under the GPL version 3.<br /></li><li>It will have customizable ticking sounds (pitch, resonance, loudness, etc.).<br /></li><li>It will allow for multiple metres at the same time.<br /></li><li>It will be easy to use.<br /></li></ul><br />The last feature is really the most important, but out of habit the GPL notice always comes first. Besides, mentioning ease of use last makes it more memorable (for me at least) than if I had listed it second.

Source directivity (2008-01-09)

Sound source directivity patterns are the newest (and possibly last, for a while) feature to be added to the ViMic project. The user will be able to load or draw a directivity pattern in a Max table object. The ViMicMax~ external will use this table to obtain a directivity gain value. It remains to be seen whether the source object should have its own copy of the table, or alternatively use a built-in function to read individual values from the table.

Room model filtering (2008-01-07)

So I'm now working on a spatial audio project called ViMic, in C/C++ and Max/MSP. I will post a more comprehensive overview at some point, but I think I should write about the current issue I am working on. The problem is as follows:<br /><ul>
<li>To avoid the Doppler effect when moving a sound source rapidly (i.e. a sudden change in delay time resulting in a pitch shift), the program crossfades between the previous delay time and the new delay time.</li><li>This approach is good for avoiding the Doppler effect, but has led to discontinuities in the signal. These result from reading from the wrong part of the delay line.</li><li>Adding filtering to simulate the walls' absorption of the sound seems to worsen the discontinuities apparent in the crossfading case.</li></ul><br />A potential solution we are working on right now is to have one set of filters for the signal that is faded out, and another for the signal that is faded in. These filter sets are swapped to avoid discontinuities. Moving the filter swap to the start of the crossfade, rather than the end, seems to have resolved the issue.
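For illustration, the delay-tap crossfade amounts to something like the following (a hypothetical sketch, not the actual ViMic code; it assumes a power-of-two-sized delay buffer):<br /><pre class="brush: c">/* read one output sample by crossfading between the old and new delay
 * taps; fade ramps from 0.0 to 1.0 over the length of the crossfade */
static float
crossfaded_tap (const float *delay_line, unsigned mask, unsigned write_pos,
                unsigned old_delay, unsigned new_delay, float fade)
{
    float old_sample = delay_line[(write_pos - old_delay) & mask];
    float new_sample = delay_line[(write_pos - new_delay) & mask];
    return (1.0f - fade) * old_sample + fade * new_sample;
}</pre>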