PGRs meeting and Research Presentations – Oct. 2015

The monthly PGRs Research Presentations session was held on Wed. 14th October, 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Modelling LGMD2 Visual Neuron System”
By: Qinbing Fu

Title: “Compressed video matching: Frame-to-frame revisited”
By: Saddam Bekhet

Abstract: Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 has been successfully used in robot navigation to avoid impending collisions. LGMD2 also responds to looming stimuli in depth and shares most of the same properties with LGMD1; however, LGMD2 shows its own specific collision-selective responses when dealing with different visual stimuli. Therefore, in this paper, we propose a novel way to model LGMD2, in order to emulate its predicted bio-functions and, moreover, to solve some defects of previous LGMD1 computational models. The mechanisms of ON and OFF cells, as well as bio-inspired nonlinear functions, are introduced in our model to achieve LGMD2’s collision selectivity. Our model has been tested on a miniature mobile robot in real time. The results suggest that the model performs well, in both software and hardware, for collision recognition.

Abstract: This presentation is about an improved frame-to-frame (F-2-F) compressed video matching technique based on local features extracted from reduced-size images, in contrast with previous F-2-F techniques that utilized global features extracted from full-size frames. The revised technique addresses both the accuracy and the computational cost of the traditional F-2-F approach. Accuracy is improved through using local features, while the computational cost is reduced by extracting those local features from reduced-size images. For compressed videos, the DC-image sequence is used, without full decompression. Utilizing such small images (DC-images) as a base for the proposed work is important, as it pushes the traditional F-2-F approach from off-line to real-time operational mode. The proposed technique involves addressing an important problem: namely, the extraction of enough local features from such small images to achieve robust matching. The relevant arguments and supporting evidence for the proposed technique are presented. Experimental results and evaluation, on multiple challenging datasets, show considerable computational time improvements for the proposed technique, accompanied by comparable or higher accuracy relative to state-of-the-art related techniques.
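As a toy illustration of the local-feature matching idea behind the talk (this is not the speaker's implementation; the descriptors below are random stand-ins for real SIFT vectors), frame-to-frame similarity can be sketched as nearest-neighbour descriptor matching with a ratio test:

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Count ratio-test matches between two sets of local descriptors.

    d1, d2: (n, k) arrays, one row per local feature descriptor.
    A match is kept when the nearest neighbour in d2 is clearly
    closer than the second nearest (Lowe's ratio test).
    """
    matches = 0
    for v in d1:
        dists = np.linalg.norm(d2 - v, axis=1)
        if len(dists) < 2:
            continue
        first, second = np.partition(dists, 1)[:2]
        if first < ratio * second:
            matches += 1
    return matches

def frame_similarity(d1, d2):
    """Frame-to-frame score in [0, 1]: fraction of matched features."""
    denom = min(len(d1), len(d2))
    if denom == 0:
        return 0.0
    return match_descriptors(d1, d2) / denom

rng = np.random.default_rng(0)
frame_a = rng.normal(size=(40, 16))                         # toy descriptors, frame A
frame_b = frame_a + rng.normal(scale=0.01, size=(40, 16))   # near-duplicate frame
frame_c = rng.normal(size=(40, 16))                         # unrelated frame

print(frame_similarity(frame_a, frame_b))  # high: near-duplicate frames
print(frame_similarity(frame_a, frame_c))  # low: unrelated frames
```

With real DC-images the descriptors would come from a local feature detector run on the tiny decoded DC frames, which is what keeps the matching within a real-time budget.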



  • Then our usual catch-up agenda:
    • PGR Month (as per the GS email).
    • Reminder of the StatsDay session (28th Oct, Lab B).
    • Expected deadline (for Progress Panel): very early Nov.
    • Announce @ “App Fest”: ~5 supervisors required (PGR/MComp).






New Conference Paper Accepted to the “World Congress on Engineering 2013”

A new conference paper has been accepted for publication in the “World Congress on Engineering 2013”.

The paper is titled “Video Matching Using DC-image and Local Features”.


This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, especially utilising small-size images. Two experiments are carried out to support the above. The first compares the DC-image and the I-frame, in terms of matching performance and the corresponding computational complexity. The second experiment compares using local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Also, using SIFT as a local feature outperforms most of the standard global features in precision. On the other hand, its computational complexity is relatively higher, but it is still within the real-time margin, and various optimisations can be made to improve it further.
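For readers unfamiliar with the DC-image, the sketch below is purely illustrative: in real MPEG streams the DC coefficients are read straight from the bitstream without decoding, whereas here we emulate the result from raw pixels by collapsing each 8×8 block of a frame to its mean:

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate the DC-image of a frame by averaging 8x8 blocks.

    Each 8x8 pixel block collapses to its mean value, shrinking the
    frame 64-fold while keeping its coarse visual layout.
    """
    h, w = frame.shape
    h, w = h - h % block, w - w % block   # drop ragged edges, if any
    f = frame[:h, :w].reshape(h // block, block, w // block, block)
    return f.mean(axis=(1, 3))            # one mean per 8x8 block

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
dc = dc_image(frame)
print(dc.shape)   # (8, 8): 1/64th the pixels of the original frame
```

This 64-fold reduction is what makes matching on DC-images so much cheaper than matching on full I-frames.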

Well done and congratulations to Saddam Bekhet.

PGRs Research Presentations – March 2013

The March PGRs Research Presentations session was held on Wed. 13th March, 2pm, in Meeting Room MC3108 (3rd floor).

This session we had the following presentations:

Title: “A primal-dual fixed point algorithm with nonnegative constraint for CT image reconstruction”
By: Yuchao Tang

Title: “Video Similarity in Compressed Domain”
By: Saddam Bekhet

Abstract: Computed tomography (CT) image reconstruction problems can often be solved by finding the minimizer of a suitable objective function, which usually consists of a data fidelity term and a regularization term, subject to a convex constraint set $C$. In the unconstrained case, an efficient algorithm called the primal-dual fixed point algorithm (PDFP$^{2}$O) has recently been developed for this problem, when the data fidelity term is differentiable with Lipschitz-continuous gradient and the regularization term is composed of a simple convex function (possibly non-smooth) with a linear transformation. In this paper, we propose a modification of PDFP$^{2}$O which allows us to deal with the constrained minimization problem. We further propose accelerated algorithms based on Nesterov’s accelerated method. Numerical experiments on an image reconstruction benchmark problem show that the proposed algorithms can produce better reconstructed images, in terms of signal-to-noise ratio, than the original PDFP$^{2}$O and state-of-the-art methods, with fewer iterations. The accelerated algorithms exhibit the fastest performance compared with all the other algorithms.
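As a rough, simplified illustration of the acceleration idea in the talk (this is not PDFP$^{2}$O itself; it drops the regularization term and the primal-dual structure, keeping only the convex constraint and the Nesterov momentum step), a projected gradient method for nonnegativity-constrained least squares can be sketched as:

```python
import numpy as np

def accel_projected_gradient(A, b, iters=500):
    """Nesterov-accelerated projected gradient for
        min_{x >= 0} 0.5 * ||A x - b||^2.

    A toy stand-in for the constrained, accelerated schemes in the
    talk: gradient step, projection onto the constraint set (here
    x >= 0), then a momentum extrapolation.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = np.maximum(y - grad / L, 0.0)          # projection onto x >= 0
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # Nesterov momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.maximum(rng.normal(size=10), 0.0)   # nonnegative ground truth
b = A @ x_true
x_hat = accel_projected_gradient(A, b)
print(np.max(np.abs(x_hat - x_true)))           # should be small after convergence
```

In CT reconstruction the constraint set and the objective are far richer, but the pattern of interleaving a projection with an accelerated gradient step is the same.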
Abstract: The volume of video data is rapidly increasing: more than 4 billion hours of video are watched each month on YouTube, more than 72 hours of video are uploaded to YouTube every minute, and the counters are still running fast. A key aspect of benefiting from all that volume of data is the ability to annotate and index videos, so that they can be searched and retrieved. The annotation process is time consuming, and automating it at a semantically acceptable level is a challenging task. The majority of available video data exists in compressed formats: MPEG-1, MPEG-2 and MPEG-4. Extraction of low-level features, directly from the compressed domain without decompression, is the first step towards efficient video content retrieval. Such an approach avoids the expensive computation and memory requirements involved in decoding compressed videos, which is the tradition in most approaches. Working on compressed videos is beneficial because they are rich in additional, pre-computed features such as DCT coefficients, motion vectors and macro-block types.

The DC-image is a thumbnail version that retains most of the visual features of its original full-size image. Taking advantage of its tiny size, negligible reconstruction cost and richness of visual content, the DC-image could be employed effectively, alone or in conjunction with other compressed-domain features (e.g. AC coefficients, macro-block types and motion vectors), to represent video clips (as a signature) and to detect similarity between videos for various purposes, such as automated annotation, copy detection, or any other higher layer built upon similarity between videos.
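A minimal sketch of how such a DC-image clip signature and similarity measure might look (entirely hypothetical function names; plain per-frame correlation stands in for the richer compressed-domain features mentioned above):

```python
import numpy as np

def clip_signature(dc_frames):
    """Stack a clip's DC-images into one signature matrix.

    dc_frames: list of small 2-D arrays (one DC-image per frame).
    Each DC-image is flattened and zero-mean normalised, so the
    comparison below reduces to a per-frame correlation score.
    """
    sig = np.array([f.ravel() for f in dc_frames], dtype=float)
    sig -= sig.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(sig, axis=1, keepdims=True)
    return sig / np.where(norms == 0, 1, norms)

def clip_similarity(sig_a, sig_b):
    """Mean per-frame correlation between two equal-length signatures."""
    return float(np.mean(np.sum(sig_a * sig_b, axis=1)))

rng = np.random.default_rng(2)
clip = [rng.random((9, 12)) for _ in range(5)]                       # five 9x12 DC-images
noisy = [f + rng.normal(scale=0.01, size=f.shape) for f in clip]     # re-encoded copy
other = [rng.random((9, 12)) for _ in range(5)]                      # unrelated clip

print(clip_similarity(clip_signature(clip), clip_signature(noisy)))  # near 1
print(clip_similarity(clip_signature(clip), clip_signature(other)))  # near 0
```

A real signature would also handle clips of different lengths and temporal alignment; this sketch only shows why tiny DC-images make whole-clip comparison cheap.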

The Q/A was followed by a demonstration of the PGRs blog and a discussion with the PGRs (and attending staff) about the blog, the BB community, etc.