PGRs meeting and Research Presentations – Feb. 2016

The monthly PGRs Research Presentations session was held on Wed. 10th Feb., 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Event-based Continuous STDP Learning using HMAX Model for Visual Pattern Recognition”

By: Daqi Liu

Abstract: The ventral stream within the visual cortex plays an important role in form recognition and object representation. Understanding and modelling the processing mechanism of the ventral stream is significant and necessary for visual pattern recognition applications. In our research, an event-based continuous spike-timing-dependent plasticity (STDP) learning method (ECS) using the HMAX model has been proposed for visual pattern recognition. Through the proposed spiking encoding scheme, a spatiotemporal spiking pattern is generated from the high-level features extracted by the HMAX model, and this spatiotemporal information conveys a unique and distinguishing selectivity for each input visual stimulus. Selectivity to the input visual stimuli emerges after continuous learning with the proposed event-based STDP method. By incorporating background neural noise and time jitter into the input visual stimuli of the proposed method, while adding nothing to the classic SVM algorithm, cross-validated experimental results on the MNIST handwritten digit database show that the proposed ECS method still achieves better performance even under such harsh conditions.
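As a loose illustration of the learning rule named in the abstract, a standard pair-based STDP weight update can be sketched as below. This is not the authors' exact ECS formulation; the amplitudes and time constants are illustrative placeholders.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms). A positive delta_t means the presynaptic
    spike preceded the postsynaptic spike, so the synapse is potentiated;
    a negative delta_t means it followed, so the synapse is depressed.
    """
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)   # potentiation (LTP)
    return -a_minus * np.exp(delta_t / tau_minus)     # depression (LTD)
```

The exponential window makes spike pairs with small timing differences dominate learning, which is what lets the spatiotemporal spike pattern encode stimulus selectivity.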


PGRs meeting and Research Presentations – Oct. 2015

The monthly PGRs Research Presentations session was held on Wed. 14th October, 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Modelling LGMD2 Visual Neuron System”

By: Qinbing Fu

Abstract: Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 has been successfully used in robot navigation to avoid impending collisions. LGMD2 also responds to looming stimuli in depth and shares most of the same properties with LGMD1; however, LGMD2 has its own specific collision-selective responses when dealing with different visual stimuli. In this paper, we therefore propose a novel way to model LGMD2, in order to emulate its predicted bio-functions and to address some defects of previous LGMD1 computational models. The mechanism of ON and OFF cells, as well as bio-inspired nonlinear functions, is introduced in our model to achieve LGMD2’s collision selectivity. Our model has been tested on a miniature mobile robot in real time. The results suggest that the model performs well in both software and hardware for collision recognition.

Title: “Compressed Video Matching: Frame-to-Frame Revisited”

By: Saddam Bekhet

Abstract: This presentation is about an improved frame-to-frame (F-2-F) compressed-video matching technique based on local features extracted from reduced-size images, in contrast with previous F-2-F techniques that utilized global features extracted from full-size frames. The revised technique addresses both the accuracy and the computational-cost issues of the traditional F-2-F approach. Accuracy is improved by using local features, while computational cost is reduced by extracting those local features from reduced-size images. For compressed videos, the DC-image sequence is used without full decompression. Utilizing such small images (DC-images) as a base for the proposed work is important, as it pushes the traditional F-2-F approach from off-line to real-time operational mode. The proposed technique involves addressing an important problem: extracting enough local features from such small images to achieve robust matching. The relevant arguments and supporting evidence for the proposed technique are presented. Experimental results and evaluation on multiple challenging datasets show considerable computational-time improvements for the proposed technique, accompanied by comparable or higher accuracy than state-of-the-art related techniques.
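As background to the DC-image idea above: in DCT-coded video, each 8×8 block's DC coefficient is proportional to the block's mean intensity, so a DC-image is roughly a 1/8-scale block-mean image. A minimal sketch follows; it operates on an already-decoded frame purely for illustration, whereas the presented work reads the DC sequence from the compressed stream without full decompression.

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate the DC-image of a frame by 8x8 block averaging.

    Each 8x8 block of pixels collapses to one value (its mean), giving a
    reduced-size image analogous to the DC coefficients of a DCT-coded frame.
    """
    h, w = frame.shape[:2]
    h, w = h - h % block, w - w % block            # drop partial edge blocks
    f = frame[:h, :w].astype(np.float64)
    return f.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
```

The payoff is size: a 352×288 frame shrinks to 44×36, which is why feature extraction and matching on DC-images can run in real time.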

  • Then our usual catch-up agenda:
  • PGR Month (as per the GS email).
  • Reminder of the StatsDay session (28th Oct, Lab B).
  • Expected deadline (for Progress Panel): very early Nov.
  • Announce @ “App Fest”:
    • ~5 supervisors required (PGR/MComp).

PGRs meeting and Research Presentations – Sept. 2015

The monthly PGRs Research Presentations session resumed (after the Summer Break) and was held on Wed. 9th September, 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Affordable Mobile Robotic Platforms for Teaching Computer Science at African Universities”

By: Ernest Gyebi

Abstract: Educational robotics can play a key role in addressing some of the challenges faced by higher education in Africa. One of the major obstacles preventing a wider adoption of initiatives involving educational robotics in this part of the world is the lack of robots affordable to African institutions. In this paper, we present a survey and analysis of currently available affordable mobile robots and their suitability for teaching computer science at African universities. To this end, we propose a set of assessment criteria and review a number of platforms costing an order of magnitude less than the existing popular educational robots. Our analysis identifies suitable candidates offering contrasting features and benefits. We also discuss potential issues and promising directions, which can be considered both by educators in Africa and by designers and manufacturers of future robot platforms.

Title: “Exploring the Dynamics of Social Interaction in Massive Open Online Courses”

By: Kwamena Appiah-Kubi

Abstract: MOOCs (Massive Open Online Courses) make educational resources from participating universities, spanning a wide range of courses, freely and easily accessible. These learning resources are often structured and delivered to mimic a brick-and-mortar classroom. The courses usually attract a large number of participants, who have to collaborate within the time frame of the course to facilitate their learning as well as to socialize. Such a large number of participants collaborating within such a short time span presents a new context in which to investigate the dynamics of social interaction within a group.

  • Then our usual catch-up agenda: New regulations and PGRs forms.

PGRs meeting and Research Presentations – July 2015

The monthly PGRs Research Presentations session was held on Wed. 8th July, 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Facilitating Individualised Collaboration with Robots (FInCoR)“.

By: Peter Lightbody

Abstract: Enabling a robot to seamlessly collaborate with a human counterpart on a joint task requires not only the ability to identify human preferences, but also the capacity to act upon this information when planning and scheduling tasks. This presentation provides a review of the current state-of-the-art techniques used in human-robot collaboration; techniques which will be utilised to combine the detection of human preferences with real-time task scheduling. This system will thus allow the collaborator to subconsciously influence the planning and scheduling of the system, eventually creating a seamless and less disruptive collaboration experience. This review is followed by a brief overview of the subsequent stages of research, with a probabilistic model introduced to allow the robot to dynamically adapt to changes during the completion of a task.  
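As a toy illustration of the kind of probabilistic preference model the abstract mentions: this is not the FInCoR model itself, and the Dirichlet-style counting scheme below is an assumption made purely for illustration.

```python
class PreferenceModel:
    """Toy preference model over a human collaborator's next sub-task.

    Keeps Dirichlet-style counts of which sub-task the human tends to perform
    next, and exposes a probability distribution the planner could use to
    schedule the robot's complementary action.
    """

    def __init__(self, subtasks, prior=1.0):
        # A uniform pseudo-count prior avoids zero probabilities early on.
        self.counts = {s: prior for s in subtasks}

    def observe(self, subtask):
        # Update after watching the human choose a sub-task.
        self.counts[subtask] += 1

    def predict(self):
        # Normalise counts into a probability distribution.
        total = sum(self.counts.values())
        return {s: c / total for s, c in self.counts.items()}
```

Because the counts update online, the predicted distribution drifts toward the individual's observed habits, which is the "dynamically adapt during the task" behaviour the presentation describes, in miniature.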

PGRs meeting and Research Presentations – June 2015

The monthly PGRs Research Presentations session was held on Wed. 10th June, 2pm, in Room MC3108.

This session we had the following presentations:

Title: “Effects of Environmental Changes on Aggregation with Robot Swarm”

By: Farshad Arvin

Abstract: Aggregation is one of the most fundamental behaviors studied in swarm robotics research for more than two decades. Studies in biology have revealed that the environment is a very important factor, especially in cue-based aggregation, in which a heterogeneity in the environment, such as a heat or light source, acts as a cue indicating an optimal aggregation zone. In swarm robotics, studies on cue-based aggregation have mainly focused on different methods of aggregation and on different parameters such as population size. Although of utmost importance, the effects of environmental factors have not been studied extensively. In this work, we study the effect of different environmental factors, such as the size and texture of aggregation cues and the speed of changes in a dynamic environment, using real robots. We used aggregation time and size of the aggregate as the two metrics and evaluated the performance of swarm aggregation in static and dynamic environments. The results of the performed experiments illustrate how environmental changes influence the performance of swarm aggregation.

Title: “Computer-aided Liver Lesion Detection and Classification”

By: Hussein Alahmer

Abstract: Liver cancer is one of the major causes of death in the world. Transplantation and tumor resection are the two main therapies in common clinical practice. Both tasks need image-assisted planning and quantitative evaluation, which in turn require an efficient and effective automatic liver segmentation. Computed Tomography (CT) is highly accurate for liver cancer diagnosis, but manual identification of hepatic lesions by trained physicians is a time-consuming task and can be subjective, depending on the skill, expertise and experience of the physician.

Computer-aided classification of liver tumors from abdominal CT images requires segmentation and analysis of the tumor. Automatic segmentation of the tumor from CT images is difficult due to its size, shape and position, and the presence of other objects of the same intensity in the image.

The proposed system automatically segments the liver from abdominal CT scans, detects hepatic lesions, and then classifies each lesion as benign or malignant. The method uses Fuzzy C-Means (FCM) clustering and a region-growing technique. The effectiveness of the algorithm is evaluated by comparing the automatic segmentation results to manual segmentation. Quantitative comparison shows a close correlation between the automatic and manual results, as well as high spatial overlap between the regions of interest (ROIs) generated by the expert radiologist and by the proposed system.
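For readers unfamiliar with the clustering step named above, here is a minimal Fuzzy C-Means sketch. This is generic textbook FCM, not the paper's full segmentation pipeline, and the fuzziness exponent and iteration count are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-Means clustering.

    X: (n, d) data points (e.g. CT voxel intensities), c: number of clusters,
    m: fuzziness exponent (> 1). Returns (centers, memberships), where
    memberships[i, j] is the degree to which sample i belongs to cluster j.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # membership rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]         # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))         # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

Unlike hard k-means, each voxel keeps a graded membership in every cluster, which is useful at fuzzy lesion boundaries before the region-growing step refines the segmentation.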