
Real-time video processing in Linux with an IIDC camera: Lab


Summary: This module provides example code for displaying captured IIDC (Instrumentation and Industrial Digital Camera) video in Linux via the camwire, libdc1394, and SDL (Simple DirectMedia Layer) libraries. The reader is directed to modify the example to apply simple video effects and display the results. A Unibrain Fire-i RAW Color Digital OEM Board Camera is used, but the code should apply to similar IIDC cameras.

Download, compile, and run the example application

  • From your home directory, download the tar file to the (backed-up) w_drive subdirectory.
  • Open a terminal and type: cd w_drive
  • Extract the files with tar -xvzf projectlab_linuxvideo.tar.gz. Typing a few letters of the filename followed by the Tab key will automatically fill in the file name.
  • cd projectlab_linuxvideo
  • Compile the application by running ./makescript. Open this file in a text editor to see what commands are executed.
  • Plug in a Unibrain Fire-i RAW Color Digital OEM Board Camera and run the application: ./videoexample
Video from the camera should now be duplicated in the four quadrants of the screen, as well as some text, a line drawing of a moving sine wave, and a colored box. Press the Esc key to exit the application, or type Ctrl-C at the terminal.
The file videoexample.c contains comments and code illustrating how the video is transferred and displayed. Some additional comments on how waveforms are plotted are contained in screengraphics.h. The keyboard, mouse, and display are handled by the SDL library; documentation is available on the SDL website.

Copy and modify the example

Make a copy of the example application: cp videoexample.c myprojectlab.c; cp makescript mymakescript. Modify mymakescript to compile myprojectlab instead of videoexample. Modify the Bayer downsampling code to use two green input pixels, one red input pixel, and one blue input pixel to compute each pixel in the color output image. Currently, only one green pixel is used. In other words, each non-overlapping 2x2 square of single-color pixels in the input image will map to one color pixel in the output image. Can you think of and implement a method that more efficiently indexes (no multiplications, few additions) the raw-input and downsampled-output images?
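One possible efficient-indexing scheme is sketched below. It assumes an RGGB mosaic layout and a packed 24-bit RGB output buffer; the actual Bayer pattern and buffer layout depend on the camera and on how videoexample.c stores frames, so treat the function and its arguments as illustrative, not as the lab's actual code. The idea is to walk two raw-row pointers in lockstep so that all indexing is done with small pointer increments rather than per-pixel multiplications.

```c
#include <stdint.h>

/* Hypothetical 2x2 Bayer downsampling sketch (assumes an RGGB mosaic:
 *   R G
 *   G B
 * per 2x2 block). Each block becomes one RGB output pixel, and the two
 * green samples are averaged. Indexing uses only pointer increments:
 * no multiplications, and few additions per pixel. */
void bayer_downsample_rggb(const uint8_t *raw, uint8_t *rgb,
                           int raw_width, int raw_height)
{
    const uint8_t *row0 = raw;              /* even row: R G R G ... */
    const uint8_t *row1 = raw + raw_width;  /* odd row:  G B G B ... */
    uint8_t *out = rgb;
    int x, y;

    for (y = 0; y < raw_height; y += 2) {
        for (x = 0; x < raw_width; x += 2) {
            *out++ = row0[0];                             /* R       */
            *out++ = (uint8_t)((row0[1] + row1[0]) >> 1); /* avg 2 G */
            *out++ = row1[1];                             /* B       */
            row0 += 2;
            row1 += 2;
        }
        row0 += raw_width;  /* skip over the odd row just consumed */
        row1 += raw_width;
    }
}
```

Averaging with a right shift (`>> 1`) keeps the inner loop free of division; this is the kind of operation-counting the question above is asking you to think about.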
Now modify myprojectlab.c so that it displays three different sets of processed video, accessible via the 1-3 keys, with 1 being the default. Make the contents of each screen as follows:

Screen 1: simple processing

  • Top left: original downsampled video
  • Top right: horizontally-flipped video
  • Bottom left: vertically-flipped video
  • Bottom right: inverted video - black becomes white and white becomes black, etc. Note that each pixel has a max value of 255.

Screen 2: RGB display

  • Top left: original downsampled video
  • Top right: red channel
  • Bottom left: green channel
  • Bottom right: blue channel
  • When the user presses the g key, toggle between displaying the channels as shades of gray or the channel colors. Which channel best approximates a grayscale version of the image?
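A single helper can cover all three channel quadrants and the g-key toggle. As before, the packed RGB layout and the function name are assumptions for illustration: `channel` selects R (0), G (1), or B (2), and `as_gray` chooses between a grayscale rendering and a single-color rendering.

```c
#include <stdint.h>

/* Display one RGB channel of a packed 24-bit RGB image.
 * as_gray != 0: copy the channel value into all three output
 *               components (shades of gray).
 * as_gray == 0: keep only the channel's own component and zero the
 *               other two (shades of that channel's color). */
void show_channel(const uint8_t *src, uint8_t *dst, int npixels,
                  int channel, int as_gray)
{
    int i;
    for (i = 0; i < npixels; i++) {
        uint8_t v = src[3 * i + channel];
        if (as_gray) {
            dst[3 * i] = dst[3 * i + 1] = dst[3 * i + 2] = v;
        } else {
            dst[3 * i] = dst[3 * i + 1] = dst[3 * i + 2] = 0;
            dst[3 * i + channel] = v;
        }
    }
}
```

Your g-key handler then only needs to flip the `as_gray` flag shared by the three channel quadrants.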

Screen 3: YUV conversion and display

  • Top left: original downsampled video
  • Top right: Y channel
  • Bottom left: U channel
  • Bottom right: V channel
  • When the user presses the a key, toggle between displaying the channels in their default ranges and auto-contrasting the channels (with a single offset and single scale factor per quadrant image) so that the maximum RGB-color component of each quadrant image is 255 and the minimum 0. Which channel contains the grayscale image?
  • Note that displaying one of the YUV channels in RGB form requires conversion to YUV, selecting a channel, and converting back to RGB. Think carefully about how to do the conversions efficiently before programming them; eliminate redundant and/or unnecessary operations. In other words, six matrix transformations per video frame are not required to display the YUV channels. In fact, three matrix transformations are not required. An internet search should provide many references to RGB and YUV conversion.
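To make the efficiency point above concrete: displaying one YUV channel as a gray image needs only one dot product per pixel, because you can compute the channel directly from RGB and copy it into all three output components; no inverse transform is required. The sketch below uses full-range BT.601-style weights in fixed point (scaled by 256). The exact coefficients are an assumption, so match them to whichever RGB/YUV definition you adopt from your own reference.

```c
#include <stdint.h>

static uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Display one YUV channel of a packed 24-bit RGB image as gray.
 * channel: 0 = Y, 1 = U, 2 = V. U and V are shown with a +128 offset
 * so their signed range maps into 0..255. Coefficients are the
 * RGB->YUV matrix rows scaled by 256 (fixed point, no floats). */
void show_yuv_channel(const uint8_t *rgb, uint8_t *dst, int npixels,
                      int channel)
{
    static const int coef[3][3] = {
        {  77,  150,  29 },  /* Y =  0.299 R + 0.587 G + 0.114 B */
        { -43,  -85, 128 },  /* U = -0.169 R - 0.331 G + 0.500 B */
        { 128, -107, -21 },  /* V =  0.500 R - 0.419 G - 0.081 B */
    };
    int offset = (channel == 0) ? 0 : 128;
    int i;
    for (i = 0; i < npixels; i++) {
        int v = (coef[channel][0] * rgb[3 * i] +
                 coef[channel][1] * rgb[3 * i + 1] +
                 coef[channel][2] * rgb[3 * i + 2]) >> 8;
        dst[3 * i] = dst[3 * i + 1] = dst[3 * i + 2] = clamp255(v + offset);
    }
}
```

The a-key auto-contrast can then be layered on top: find the min and max of the computed channel over the quadrant, and apply one offset and one scale factor when writing the output.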
Make your code as modular as possible so that it is easy to follow and debug; create functions, inline functions, and/or macros for operations that will be repeatedly performed, and group similar functions/macros into separate files.
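As a small illustration of that advice (names here are illustrative, not part of the lab's codebase), a repeated pixel-indexing computation can be factored into a macro or an inline function that every effect shares:

```c
#include <stdint.h>

/* Byte index of the first component of pixel (x, y) in a packed
 * 24-bit RGB buffer of the given width. */
#define RGB_INDEX(x, y, width) (3 * ((y) * (width) + (x)))

/* Inline-function equivalent: type-checked, and just as fast once
 * the compiler inlines it. */
static inline uint8_t *rgb_pixel(uint8_t *img, int x, int y, int width)
{
    return img + RGB_INDEX(x, y, width);
}
```

Collecting such helpers in a shared header (as the example does with screengraphics.h) keeps each of your three screen implementations short and readable.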
