Thursday, October 17, 2013

EmguCV Image Process: Process Video Sequences part 4



This post follows Chapter 10 of OpenCV 2 Computer Vision Application Programming Cookbook.

The topics covered are:

Reading video sequences

Processing the video frames

Writing video sequences

Tracking feature points in video

Extracting the foreground objects in video

The previous post showed how to write video files programmatically.

This post picks up the topic of Chapter 8,

the extraction of image feature points,

and applies it to video,

where it becomes the tracking of feature points across frames.

Tracking plays an important role in video processing:

many intelligent video-analysis applications

start from feature-point tracking.

Building on the VideoProcessor class from the previous post,

we now implement a separate FeatureTracker class:

public class FeatureTracker
{
    //current gray-level image
    Image<Gray, Byte> gray;
    //previous gray-level image
    Image<Gray, Byte> gray_prev;

    //maximum number of features to detect
    int max_count;
    //quality level for feature detection
    double qlevel;
    //min distance between two points
    double minDist;
    //tracked point positions from frame 0 to frame 1
    PointF[] points0;
    PointF[] points1;
    //initial position of tracked points
    PointF[] initial;
    //detected features
    PointF[][] features;
    //status of tracked features
    byte[] status;
    //error in tracking
    float[] err;
    //the number of tracked points
    int acceptTrackedNumber;

    public FeatureTracker()
    {
        this.max_count = 500;
        this.qlevel = 0.01;
        this.minDist = 10;
    }
...
}

The class holds the following member fields:

gray stores the current frame.

gray_prev stores the previous frame.

max_count is the maximum number of feature points to detect, initialized to 500.

qlevel is the quality-level parameter for feature detection, set to 0.01.

minDist is the minimum allowed distance between two feature points, set to 10.

points0 holds the feature-point positions from the previous frame.

points1 holds the tracked positions in the current frame.

initial holds the initial positions of the tracked points.

features holds the newly detected feature points.

status holds the tracking status of each point.

err holds the tracking error of each point.

acceptTrackedNumber is the number of points kept for further tracking.

public IImage Process(IImage image)
{
    Image<Gray,Byte> output; 
    if (image.NumberOfChannels != 1)
    {
        //convert to gray-level image
        this.gray = ((Image<Bgr, Byte>)image).Convert<Gray, Byte>();
    }else
    {
        this.gray = ((Image<Gray, Byte>)image).Copy();
    }
    output = this.gray.Copy();

    //1. if new feature points must be added
    if (AddNewPoints())
    {
        //detect feature points
        DetectFeaturePoints();
        //use the newly detected features as the
        //points to track from now on
        this.points0 = this.features[0];
        this.initial = this.features[0];
    }
    //for first image of the sequence
    if (this.gray_prev == null)
    {
        this.gray_prev = this.gray.Copy();
    }
        
    //2. track features
    OpticalFlow.PyrLK(
        this.gray_prev, this.gray,  //consecutive images
        this.points0,  //input point positions in first image
        new Size(15, 15), //size of the search window
        3,  //Maximal pyramid level number
        //Specifies when the iteration process of finding the flow 
        //for each point on each pyramid level should be stopped
        new MCvTermCriteria(20, 0.03D), 
        out this.points1, //output point positions in the 2nd image
        out this.status, //tracking success
        out this.err);   //tracking error

    //3. loop over the tracked points to reject some
    this.acceptTrackedNumber = 0;
    for (int i = 0; i < this.points1.Length; i++)
    { 
        //do we keep this point?
        if (AcceptTrackedPoint(i))
        {
            //keep this point in vector
            this.initial[this.acceptTrackedNumber] = this.initial[i];
            this.points1[this.acceptTrackedNumber++] = this.points1[i];
        }
    }

    //4. handle the accepted tracked points
    HandleTrackedPoints(image, output);

    //5. current points and image become previous ones
    this.points0 = this.points1;
    this.gray_prev = this.gray;

    return output;
}
This is the tracker's public processing method.

It has to match the Func<IImage, IImage> signature of the process callback,

so that it can be passed into the VideoProcessor for execution.

The method first checks the color space (number of channels) of the input image:

a color frame is converted to gray level,

and the result is stored in the gray field.

It then checks whether new feature points have to be detected
(when the tracked points have been lost, or too few of them remain).

If gray_prev holds no value yet,

the current frame gray is copied into the previous frame gray_prev.

Feature tracking is then performed with OpticalFlow.PyrLK,

passing in the configured parameters.

Next, the distance each feature point moved between the two frames decides whether it should stay tracked
(points that barely moved are dropped).

The accepted points are then processed,

and drawn onto the output image.

Finally, the tracked positions points1

are stored as the starting positions points0 for the next call,

and the current frame gray is stored as the previous frame gray_prev.
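The loop in Process that rejects points and compacts the survivors to the front of the arrays can be tried on its own, without EmguCV. This is a minimal sketch with made-up sample data; CompactAccepted is a hypothetical helper that mirrors the loop (here a point is accepted simply when its status is 1):

```csharp
using System;
using System.Drawing;

class CompactionDemo
{
    // Mirrors the reject loop in Process: every accepted point is
    // copied down to the front of the array, and the return value
    // plays the role of acceptTrackedNumber.
    public static int CompactAccepted(PointF[] points, byte[] status)
    {
        int accepted = 0;
        for (int i = 0; i < points.Length; i++)
        {
            if (status[i] == 1)
            {
                points[accepted++] = points[i];
            }
        }
        return accepted;
    }

    static void Main()
    {
        PointF[] pts = { new PointF(1, 1), new PointF(2, 2), new PointF(3, 3) };
        byte[] status = { 1, 0, 1 }; // the middle point was lost by the tracker
        int n = CompactAccepted(pts, status);
        Console.WriteLine(n);        // 2 points survive
        Console.WriteLine(pts[1]);   // the third point moved down to slot 1
    }
}
```

Note that entries beyond the returned count are stale copies, which is why the class keeps acceptTrackedNumber alongside the arrays instead of resizing them.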

/// <summary>
/// determine which tracked point should be accepted
/// </summary>
/// <param name="i">the feature point index</param>
private bool AcceptTrackedPoint(int i)
{
    return (this.status[i]==1 &&
        //if point has moved
        (Math.Abs(this.points0[i].X - this.points1[i].X) +
            Math.Abs(this.points0[i].Y - this.points1[i].Y)) > 2);
}

/// <summary>
/// handle the currently tracked points
/// </summary>
/// <param name="image">input image</param>
/// <param name="output">output image</param>
private void HandleTrackedPoints(IImage image, Image<Gray, byte> output)
{
    //for all tracked points
    for (int i = 0; i < this.acceptTrackedNumber; i++)
    { 
        LineSegment2DF line = new LineSegment2DF(
            this.initial[i], 
            this.points1[i]);
        //draw line and circle
        output.Draw(line, new Gray(255), 1);
        CircleF circle = new CircleF(
            this.points1[i], 3);
        output.Draw(circle, new Gray(255), -1);
    }
}

/// <summary>
/// feature point detection
/// </summary>
private void DetectFeaturePoints()
{
    //detect the features
    this.features = this.gray.GoodFeaturesToTrack(
        this.max_count, //the maximum number of feature
        this.qlevel,    //quality level
        this.minDist,   //min distance between two features
        3);
}

/// <summary>
/// determine if new points should be added
/// </summary>
private bool AddNewPoints()
{
    //if too few points
    return (this.points0 == null || this.acceptTrackedNumber <= 10);
}

The private methods called inside Process are the following.

The method that decides whether the feature point at index i should stay tracked:

bool AcceptTrackedPoint(int i)

A point is kept only if its tracking status is 1 and it moved more than 2 pixels between the two frames.
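The motion test can be exercised standalone with plain System.Drawing.PointF values. A minimal sketch; the name HasMoved is made up here, and the status check is left out:

```csharp
using System;
using System.Drawing;

class MotionCheckDemo
{
    // Same criterion as AcceptTrackedPoint: the Manhattan (L1)
    // displacement between the previous and current position
    // must exceed 2 pixels for the point to be kept.
    public static bool HasMoved(PointF prev, PointF curr)
    {
        return Math.Abs(prev.X - curr.X) + Math.Abs(prev.Y - curr.Y) > 2;
    }

    static void Main()
    {
        // total displacement 3 pixels -> kept
        Console.WriteLine(HasMoved(new PointF(10, 10), new PointF(12, 11)));
        // total displacement 1 pixel -> treated as static and dropped
        Console.WriteLine(HasMoved(new PointF(10, 10), new PointF(10, 11)));
    }
}
```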


The method that handles the accepted points:

void HandleTrackedPoints(IImage image, Image<Gray, byte> output)

It draws a line from each point's initial position (where it was first detected)

to its currently tracked position, plus a filled circle at the current position.


The feature-detection method:

void DetectFeaturePoints()

It detects feature points with the GoodFeaturesToTrack method,

passing in the required parameters.


The method that decides whether feature points must be re-detected:

bool AddNewPoints()

It returns true when no points exist yet or the number of tracked points has dropped to 10 or fewer.


Finally, running the tracker is much like what was shown in part 2:

//Create video processor instance
VideoProcessor processor = new VideoProcessor();
//Create feature tracker instance
FeatureTracker tracker = new FeatureTracker();
//Open the video file
processor.SetInput(@"tracking.avi");
//Declare a window to display the video
processor.DisplayInput("Current frame");
processor.DisplayOutput("Output frame");
//Play the video at the original frame rate
processor.SetDelay((int)(1000 / processor.GetFrameRate()));
//Set the frame processor callback function
processor.SetFrameProcessor(tracker.Process);
//Start the process
processor.Run();

As before, we first create a VideoProcessor instance,

then a FeatureTracker instance,

and pass in the path of the video.

Note that SetFrameProcessor is given tracker.Process as the callback.

Then we call processor.Run();

The execution result is shown below:

(screenshots of the tracking output omitted)
In the frames above you can see the effect of feature-point tracking:

each point is connected by a line from its initially detected position to its currently tracked position.

During tracking you will notice the number of feature points gradually decreasing,

because lost tracks (tracking failures) are unavoidable;

that is exactly why the features must be re-detected

once too few of them remain!


Doesn't it look fun!!

The last post of this series will cover extracting foreground objects.
