Risalah Lokakarya Komputasi dalam Sains dan Teknologi Nuklir XVII, Agustus 2006 (157-171)

MOTION DETECTION USING IMAGE SUBTRACTION AND EDGE DETECTION

R. B. Wahyu1, Tati R. Mengko**, Bambang Pharmasetiawan**, and Andryan B. Suksmono**

ABSTRAK

MOTION IDENTIFICATION USING IMAGE SUBTRACTION AND EDGE DETECTION. This paper discusses a challenge in the field of video surveillance systems in the form of video data, one of the more active research topics in computer vision and surveillance systems. The report concentrates on the core activity of an active video surveillance system, namely motion detection techniques. The first technique compares the picture elements (pixels) of consecutive frames. The second technique finds the edges of a frame using a localization/neighbourhood method and compares them with the edges of the next frame. Both techniques are prepared as the foundation for building a video surveillance system, and both have been implemented on video data offline as well as in real time. The next step of this research is to classify objects from the video data stream, followed by monitoring of object activities. Finally, object analysis is performed to decide whether the activity of an object in an area is suspicious or not.

Keywords: Image Subtraction, Edges, Computer-Vision-Based Motion Identification, Activity Analysis, Surveillance.

ABSTRACT

MOTION DETECTION USING IMAGE SUBTRACTION AND EDGE DETECTION. This report describes a challenge in active video surveillance, which has become an active research area in computer vision and surveillance systems. The report concentrates on the main activity of an active video surveillance system: motion detection. The first technique compares image pixels frame by frame in the video stream. The second, edge-based object detection, finds object edges using a neighborhood/localization method. These two techniques are the prerequisite for an active video surveillance system and have been implemented on video data streams both offline and in real time. The next stage of this research is to classify objects in the video stream and then monitor their activities. Finally, the analysis is used to predict whether those activities might be suspicious or hazardous, or safe otherwise.

Keywords: Image Subtraction, Edge-Based Motion Detection, Activity Analysis, Computer Vision, Surveillance.

1 Centre for Nuclear Informatics Development – BATAN, Email: [email protected]

** Imaging and Image Processing Research Group (I2PRG), Technical School of Electronics and Informatics, Bandung Institute of Technology


INTRODUCTION

Object detection in a video sequence is still a big challenge in image processing and computer vision. Many researchers have explored methods to improve their effectiveness in this field, but the difficulties vary so widely that no perfect system exists that solves the object detection problem. There are many reasons why the challenge remains an open research problem, such as lighting and illumination. Moving objects in a video sequence raise further problems, such as the camera's limited field of view and the speed of object movement. The type of object also contributes to the complexity of object detection, so the detection method applied is still case or application-domain dependent.

One of the impressive projects in computer vision is the Video Surveillance and Monitoring (VSAM) project led by the Robotics Institute, Carnegie Mellon University [3]. The project contributed to many areas of computer vision, both in basic research and in applications, and attempted to put computer vision technology into action in battle situations. Many researchers have since explored the technologies developed by the VSAM project; some are already at the mature stage while others are still at the research stage. The mature ones are ready for production, although they still have many limitations and weaknesses. One research area carried out by the VSAM team is human body motion analysis using image skeletonization. We are going to explore this methodology and apply it to recognize objects and their activities in an area. The objects can be humans, animals and vehicles. For instance, people's activities can be walking, running, sitting down, lying down, jumping, and crawling.

Furthermore, we concentrate on extending the analysis of objects and their activities by adding other features related to security information, such as whether a person carries dangerous things, for example a gun or a knife. In addition, we explore other activities that deserve security attention, for instance objects entering a security or restricted area, throwing objects, or moving around an area several times. In this report, the described methodology has been implemented for two kinds of object motion detection, i.e. image subtraction and edge-based object motion detection. The results show that the methodology is one of the promising methods for an active video surveillance system. In a parking lot, the methodology is used to identify and track human and car activities, while in a restricted facility it is used to identify and track human and animal activities. Different kinds of objects and their activities, from entering the restricted area to walking, running and crawling, can be easily classified. Furthermore, introducing significant features into the detection methodology improves the surveillance system. Finally,


the spirit of the smart camera/video surveillance system is a real goal that will be made possible by the computer vision approach.

METHODOLOGY

This research was developed as an Active Video Surveillance System, shown schematically in Figure 1.

Figure 1. An Active Video Surveillance System.

The system captures video data from the environment (surrounding area). The video data is then processed by the system, which in turn produces security information, i.e. a safe or unsafe condition. A safe condition means there are no possibly suspicious objects or activities, while an unsafe condition means a suspicious object or activity might be present. Video data is captured in the form of frames and stored in computer memory. These frames are processed to find the motion of objects within the video stream. The first method follows the image subtraction methodology, i.e. comparing the image pixels of consecutive frames. The second method follows edge-based object motion: each frame is analyzed with an edge-based method. The motion detection techniques applied here have been tested on both real-time and offline video streams.
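The capture, process, classify loop described above can be sketched in a few lines. This is only an illustrative skeleton under our own naming; the function names, the stand-in integer "frames" and the binary safe/unsafe rule are assumptions, not the authors' implementation (which used Matlab and later Microsoft C#):

```python
def surveillance_loop(frames, detect_motion, classify):
    """Minimal active-surveillance pipeline: take frames in order, detect
    motion between each pair of consecutive frames, and emit a safe/unsafe
    status for every frame after the first."""
    prev = None
    for curr in frames:
        if prev is not None:
            motion = detect_motion(curr, prev)   # e.g. image subtraction
            yield classify(motion)               # "safe" or "unsafe"
        prev = curr

# Toy stand-ins: frames are plain intensities; any change counts as "unsafe".
frames = [10, 10, 80, 80]
statuses = list(surveillance_loop(
    frames,
    detect_motion=lambda c, p: abs(c - p),
    classify=lambda m: "unsafe" if m > 0 else "safe",
))
print(statuses)   # ['safe', 'unsafe', 'safe']
```

The two `detect_motion` strategies discussed in this paper, pixel subtraction and edge comparison, plug into the same loop.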


Image Subtraction

Image subtraction is one of the popular techniques in image processing and computer vision. Basically, image subtraction can be represented as:

∆I(i,j) = ICurr(i,j) – IPrev(i,j)    (1)

where ∆I(i,j) is the image intensity difference between two consecutive frames, and ICurr(i,j) and IPrev(i,j) represent the image intensities of the current and previous frames respectively.

In this report, image subtraction is applied in three techniques. The first simply compares the image pixels of two consecutive frames: if a value differs between the two frames, the pixel represents motion. The differences between the two frames are treated as the moving parts of the scene. The displayed destination frame is the combination of the moving regions (which can be shown in red) and the background. For example, the pixels of the current and previous frames are laid out as in Figure 2. Suppose the image has size 7x7 and the difference between the previous and current frame is the circle; then the current frame equals the previous frame plus the circle.

Figure 2. Image Pixel and the Motion. (The figure is a 7×7 grid of pixel coordinates, from (1,11)–(1,17) in the first row to (7,11)–(7,17) in the last.)
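Equation (1) and the Figure 2 example can be reproduced directly with array arithmetic. The following is a minimal NumPy sketch under our own naming (the paper gives no code); the 7x7 frames and the "circle" stand-in are illustrative:

```python
import numpy as np

def frame_difference(curr, prev):
    """Eq. (1): per-pixel intensity difference of two consecutive frames.
    curr, prev: 2-D arrays of grayscale intensities with the same shape.
    Returns the signed difference dI(i,j) = ICurr(i,j) - IPrev(i,j)."""
    return curr.astype(int) - prev.astype(int)

def motion_mask(curr, prev):
    """First technique: every pixel whose intensity changed is motion."""
    return frame_difference(curr, prev) != 0

# A 7x7 example in the spirit of Figure 2: the previous frame is empty,
# the current frame contains a small bright object.
prev = np.zeros((7, 7), dtype=np.uint8)
curr = prev.copy()
curr[2:5, 2:5] = 200          # the moving object (a 3x3 patch)

mask = motion_mask(curr, prev)
print(mask.sum())             # number of changed pixels -> 9
```

The `mask` array marks exactly the moving region; rendering it red over the background gives the destination frame described in the text.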


The second technique introduces a threshold. The threshold is chosen so that an image intensity difference ∆I(i,j) does not automatically count as motion unless the difference is greater than the threshold. In this technique each frame is scanned twice: first left to right, row after row, and then top to bottom, column by column. For example, in Figure 2 the pixels of the two frames are scanned (1,11) to (1,17), (2,11) to (2,17), (3,11) to (3,17), (4,11) to (4,17), (5,11) to (5,17), (6,11) to (6,17) and (7,11) to (7,17). As each row is scanned, two conditions decide whether a pixel belongs to the boundary of an object. First, the difference between the two corresponding pixels (same location (i,j)) of the two frames must be greater than THRESH: if ABS[ICurr(1,11) – IPrev(1,11)] > THRESH, the pixel is assigned BLACK or RED. A pixel is considered motion only if it also satisfies the second condition: the status of two adjacent pixels differs. For example, in Figure 2 the seven pixels of the first row might come out as (1,11) black, (1,12) black, (1,13) red, (1,14) black, (1,15) red, (1,16) black and (1,17) black; the two pixels (1,13) and (1,15) are then considered motion.

The third technique introduces noise suppression. Here the image frame is masked with a 5x5 window. If several pixels do not belong to any object but form a group that looks like an object, the Noise Erase algorithm compares the size of this group with the size of the mask: whenever the mask is bigger than the group, the group's pixels are deleted.

Figure 3. Motion Detection. (The figure is a 7×7 grid of pixel coordinates, from (1,11)–(1,17) to (7,11)–(7,17).)
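The second and third techniques, thresholded differencing followed by the Noise Erase step, can be sketched as below. This is our own illustrative implementation: the THRESH and MIN_AREA values are assumed (the paper does not fix them), and a 4-connected flood fill stands in for whatever grouping the authors used to compare pixel groups against the 5x5 mask:

```python
import numpy as np
from collections import deque

THRESH = 30      # assumed threshold value
MIN_AREA = 25    # groups smaller than the 5x5 mask (25 pixels) are noise

def thresholded_motion(curr, prev, thresh=THRESH):
    """Second technique: a pixel is candidate motion only when
    |ICurr - IPrev| exceeds the threshold."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def erase_noise(mask, min_area=MIN_AREA):
    """Third technique (Noise Erase): delete 4-connected groups of motion
    pixels whose area is smaller than the 5x5 mask."""
    mask = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                group, q = [], deque([(i, j)])   # flood-fill one group
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(group) < min_area:        # smaller than the mask: noise
                    for y, x in group:
                        mask[y, x] = False
    return mask

prev = np.zeros((20, 20), dtype=np.uint8)
curr = prev.copy()
curr[2:9, 2:9] = 200      # a real 7x7 object (49 pixels)
curr[15, 15] = 200        # a lone noisy pixel

mask = erase_noise(thresholded_motion(curr, prev))
print(mask.sum())          # the 49-pixel object survives, the noise pixel is erased
```

The adjacency test on scanned rows (the second condition in the text) would be a further pass over `mask`; it is omitted here for brevity.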


Edges Detection

The second technique implemented in this research is edge-based object detection, which is then applied to object motion. The technique can be described as follows. First, the image frames captured by the camera are processed by an edge detection method. In this method, the edges of an object are the areas where the difference between the pixels inside the object boundary and the pixels outside the object is greater than a threshold:

If I(i,j) < THRESH Then I(i,j) = BLACK Else I(i,j) = RED

Furthermore, in this implementation the edge detection is carried out both vertically and horizontally, i.e. by column and by row. Figure 4 shows the method used in this experiment.

Figure 4. Edges Detection. (The figure is a 7×7 grid of pixel coordinates, from (1,11)–(1,17) to (7,11)–(7,17), with a dashed 3×3 mask centred on (4,14).)

The edge detection follows the cross detection method illustrated by Figure 4. The dashed box represents the 3x3 mask used to process the image. The intensity of the centre pixel is chosen by finding the maximum difference of intensity between the image pixels in the mask. For example, the 3x3 dashed square consists of (3,13), (3,14), (3,15) in the first row; (4,13), (4,14), (4,15) in the second row; and (5,13), (5,14), (5,15) in the third row. In this


experiment the difference D is calculated crosswise as D1 = I(3,14) – I(5,14), D2 = I(3,13) – I(5,15), D3 = I(3,15) – I(5,13), D4 = I(4,13) – I(4,15). The middle pixel is then assigned the maximum of the cross differences:

Max = Max(D1, D2, D3, D4)    (2)

If Max < THRESH Then I(i,j) = BLACK Else I(i,j) = RED.

With this procedure the image is converted into BLACK and RED, which in turn represents the object edges: BLACK represents background while RED represents object edges. After edge detection has been applied to the image frames, the images of two successive frames are compared to find the motion. In this context, an object is moving if its edges differ from frame to frame. The situation is different if the object is in a static position, i.e. its edges stay in the same place; consequently, static edges are deleted from the motion image.
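The cross differences D1..D4 and the binarization of equation (2) can be sketched as follows. This is a minimal Python rendering under our own naming; the THRESH value is an assumption, and plain loops are used for clarity rather than speed:

```python
import numpy as np

THRESH = 40   # assumed value; the paper does not fix it

def cross_edges(img, thresh=THRESH):
    """Cross detection inside a 3x3 mask: for each interior pixel, take the
    maximum of the four cross differences D1..D4 (eq. 2) and binarize.
    Returns True where an edge (RED) is detected, False for background (BLACK)."""
    img = img.astype(int)
    edges = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            d1 = img[i - 1, j] - img[i + 1, j]          # vertical cross
            d2 = img[i - 1, j - 1] - img[i + 1, j + 1]  # main diagonal
            d3 = img[i - 1, j + 1] - img[i + 1, j - 1]  # anti-diagonal
            d4 = img[i, j - 1] - img[i, j + 1]          # horizontal cross
            edges[i, j] = max(d1, d2, d3, d4) >= thresh
    return edges

# A flat background with a bright square: edges appear along its boundary,
# while the flat interior of the square stays BLACK.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
print(cross_edges(img).any())   # -> True (boundary pixels are detected)
```

Note that the maximum is taken over the signed differences, as in the text; an implementation could equally use absolute values to treat both edge polarities symmetrically.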

IMPLEMENTATION

Motion by Image Subtraction

As described in the previous section, object motion is detected using the image subtraction method. The method is very straightforward: once the video data has been captured as image frames, the motion between two successive frames is processed directly. The processing calculates the intensity differences between the previous and current images at the same position, as shown by equation (1). The differences show where objects and motion exist. The object and motion detection algorithm can be described as follows:


% Capture the images
While not end of the frames
Begin
    % For 2 consecutive images, calculate the differences and compare to the threshold
    For i = 0 to MaxHeight
        For j = 0 to MaxWidth
        Begin
            % Calculate the difference
            ∆I(i,j) = ICurr(i,j) – IPrev(i,j)
            If |∆I(i,j)| < Threshold Then
                I(i,j) = BLACK
            Else
                I(i,j) = RED
            End If
            % Plot the motion into the image
            WRITE I(i,j)
        End
End

The above algorithm has been applied to several cases, one of which is a parking lot. The following figures show the parking scenario: figures 5a and 5b show two consecutive image frames, while figure 6 shows the resulting motion image computed from the two.

Figure 5 (a & b). Two consecutive image frames.


Figure 6. The image motion

Figure 6 shows the application of frame-by-frame differencing: the video source is the currently captured image frame, processing step 1 applies the frame difference between the current and previous images, and the after-detection view applies image subtraction together with noise suppression.

Motion by Edges Detection

The second method implemented, as described in the previous section, detects object motion using edge-based object motion. The processing is carried out in several steps. First, every captured frame is processed to determine the edges of the objects. Secondly, for two consecutive images the object edges are compared to find any motion. Finally, if an object is static, it is deleted, as it is considered a static object. Algorithmically, the object and motion detection can be described as follows:


% Capture the images
While not end of the frames
Begin
    % For each image, find the edges of the objects
    For i = 0 to MaxHeight
        For j = 0 to MaxWidth
        Begin
            % Set a 3 x 3 mask and find the maximum of the cross differences
            Max = Max(D1, D2, D3, D4)
            If Max < Threshold Then
                I(i,j) = BLACK
            Else
                I(i,j) = RED
            End If
            % Plot the edges into the image
            WRITE I(i,j)
            % Compare the edges of the current image with the previous one
            If ICurr(i,j) = IPrev(i,j) Then
                I(i,j) = BLACK
            Else
                I(i,j) = RED
            End If
        End
End
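The final comparison step, keeping only edges that moved between frames, reduces to an element-wise inequality of two edge maps. A minimal sketch, with our own names and toy edge maps standing in for real detector output:

```python
import numpy as np

def edge_motion(curr_edges, prev_edges):
    """Edge-based motion: an edge pixel counts as motion only where the edge
    maps of two consecutive frames disagree; edges that stay in place
    (static objects) are thereby deleted from the motion image."""
    return curr_edges != prev_edges

# Edge map of a small object, then the same object shifted one column right.
prev_edges = np.zeros((5, 7), dtype=bool)
prev_edges[1:4, 1] = True      # object edge in the previous frame
curr_edges = np.zeros((5, 7), dtype=bool)
curr_edges[1:4, 2] = True      # the object moved, so its edge moved too

motion = edge_motion(curr_edges, prev_edges)
print(motion.sum())            # old and new edge positions both differ -> 6
```

A perfectly static object produces identical edge maps, so `edge_motion` returns an all-False image and nothing is drawn, exactly the deletion behaviour described in the text.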


Figure 7. The Image Object Edges Detection

Figure 8. The Image Motion represented by Object Edges

Figures 7 and 8 show the edge-based object motion detection. Figure 7 shows the detected edges of the object, while figure 8 shows the image motion represented by its edges. In figure 8, the video source is the currently captured image, processing step 1 shows the object edges, processing step 2 applies the edge-based motion detection, and the after-detection view applies the motion detection to the source object.


RESULTS AND DISCUSSION

We have described our proposed method for an active video surveillance system. The proposed system is an effort to relieve security officers from a boring job, as the system can inform them about objects in their environment. Moreover, the information can be classified as safe or unsafe. In this research we have shown that the method can recognize motion using two different techniques, both based on a computer vision approach. We are in the process of defining the classifications and their measurement parameters for objects and their possible activities. The described techniques, together with the optic flow method and the skeletonization technique, will be used to bring an active video surveillance system into the real world by exploring computer vision techniques more comprehensively. The active video surveillance system can be applied in many fields, such as parking areas, military areas and nuclear power plants.

FUTURE WORKS

In this research we have implemented some of the requirements of an active video surveillance system. We will explore and implement the remaining challenges so that an active computer vision system can be realized.

REFERENCES

1. ANDREW KIRILLOV, Image Processing, http://www.codeproject.com/cs/media/Image_Processing_Lab, 2005.

2. ANTHONY R. DICK, MICHAEL J. BROOKS, Issues in Automated Surveillance, School of Computer Science, University of Adelaide, Adelaide, Australia, SA 5005, 2003.

3. COHN A. G., MAGEE D. R., GALATA A., HOGG D. C., HAZARIKA S. M., Towards an Architecture for Cognitive Vision Using Qualitative Spatio-Temporal Representations and Abduction, Proceedings of Spatial Cognition 2002, Tutzing Castle, Lake Starnberg, May 2002.

4. COLLINS ROBERT T., et al., A System for Video Surveillance and Monitoring, CMU-RI-TR-00-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh PA / The Sarnoff Corporation, Princeton, NJ, 2000.


5. FABLET RONAN, BLACK MICHAEL J., Automatic Detection and Tracking of Human Motion with a View-Based Representation, European Conference on Computer Vision 2002, Copenhagen, May 2002.

6. FUJIYOSHI HIRONOBU, LIPTON ALAN J., KANADE TAKEO, Real-time Human Motion Analysis by Image Skeletonization, IEICE Transactions on Information and Systems, Vol. E87-D, No. 1, January 2004.

7. HORN B. K. P., SCHUNCK B. G., Determining Optical Flow, Artificial Intelligence, vol. 17 (1981) 185-203.

8. GAVRILA D. M., The Analysis of Human Motion and Its Application for Visual Surveillance, Proceedings of the 2nd International Workshop on Visual Surveillance, Fort Collins, USA, 1999.

9. HONGENG SOMBOON, BRÉMOND FRANCOIS, NEVATIA RAMAKANT, Representation and Optimal Recognition of Human Activities, Computer Vision and Pattern Recognition, 2000.

10. LIU YANXI, COLLINS ROBERT T., TSIN YANGHAI, Gait Sequence Analysis Using Frieze Patterns, CMU-RI-TR-01-38, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, 2001.

11. R. B. WAHYU, TATI R. MENGKO, BAMBANG PHARMASETIAWAN, ANDRYAN B. SUKSMONO, Objects Detection and Activity Analysis Using Computer Vision Approach, International Conference on Instrumentation, Communication and Information Technology (ICICI) 2005 Proc., August 3rd-5th, 2005, Bandung, Indonesia.

12. R. B. WAHYU, IGN A. JUNAEDI, BAMBANG PHARMASETIAWAN, Identifikasi Objek dalam rangka Aktif Video Surveilan (Object Identification for Active Video Surveillance), Lokakarya Komputasi Sains dan Teknologi Nuklir, Badan Tenaga Nuklir Nasional, Jakarta, 9 August 2005.

13. SINGH MONA, SINGH SAMEER, Minerva Scene Analysis Benchmark, Proc. 7th Australian and New Zealand Intelligent Information Systems Conference, Perth (18-21 November 2001) 231-235.

14. SINGH SAMEER, MARKOU MARKOS, HADDON JOHN, Detection of New Image Objects in Video Sequences Using Neural Networks, SPIE


Conference on Application of ANN in Image Processing, Electronic Imaging 2000, San Jose, 23-28 January (2000) 204-213.

15. SINGH SAMEER, HADDON JOHN, MARKOU MARKOS, Nearest-Neighbour Classifiers in Natural Scene Analysis, Pattern Recognition 34 (2001).

16. ZHAO TAO, NEVATIA RAM, LV FENGJUN, Segmentation and Tracking of Multiple Humans in Complex Situations, Computer Vision and Pattern Recognition, 2001.

17. WREN CHRISTOPHER R., CLARKSON BRIAN P., PENTLAND ALEX P., Understanding Purposeful Human Motion, MIT Media Laboratory Perceptual Computing Section, Technical Report No. 485, appears in Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.

18. ZHAO TAO, NEVATIA RAM, 3D Tracking of Human Locomotion: A Tracking as Recognition Approach, International Conference on Pattern Recognition, 2002.

DISCUSSION

SLAMET
1. What software was used to develop this system?
2. Which of the two methods used is better?

R. B. WAHYU
1. In the early stage of the experiments the software used was Matlab. However, the experiments showed that the response time of the system was poor (slow). To overcome this, the software now used is Microsoft C#.
2. Of the two methods, in general the object edge method gives better results. One reason is that in the image subtraction method, motion detection is performed directly on two consecutive frames, whereas the object edge method uses the cross method, which compares a boundary pixel with all surrounding pixels, so its accuracy is higher, although it is slightly slower.


CURRICULUM VITAE

1. Name: R. B. Wahyu
2. Place/Date of Birth: Jakarta, 8 July 1959
3. Institution: Pusat Pengembangan Informatika Nuklir (Centre for Nuclear Informatics Development), BATAN
4. Position: Computing Staff
5. Education (after high school to present):
   • S1 (Bachelor) in Physics, FMIPA, Universitas Indonesia, 1985
   • Diploma in Computer Science, Concordia University, Montreal, Canada
   • Master in Computer Science, Curtin University of Technology, Perth, Western Australia
   • PhD student, Sekolah Teknik Elektronika dan Informatika, Institut Teknologi Bandung
6. Work Experience:
   • 1982 – 1985: Pusdiklat BATAN
   • 1985 – 1988: Biro Bina Program BATAN
   • 1988 – present: Pusat Pengembangan Informatika Nuklir, BATAN
7. Organizations:
   • INAHIA (Indonesian Health Informatics Association: Founder & Member)
8. Published papers:
   • LKSTN BATAN
   • International Conference on Instrumentation, Communication and Information Technology (ICICI) 2005 Proc., August 3rd-5th, 2005, Bandung, Indonesia
