Unsupervised Classification of Vineyard Parcels Using SPOT5 Images by Utilizing Spectral and Textural Features


Senturk S., TAŞDEMİR K., Kaya S., Sertel E.

2nd International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Virginia, United States, 12-16 August 2013, pp. 61-65

  • Publication Type: Conference Paper / Full-Text Paper
  • City of Publication: Virginia
  • Country of Publication: United States
  • Page Numbers: pp. 61-65
  • Affiliated with Istanbul Technical University: Yes

Abstract

To support the agricultural management of vineyards, high spatial resolution remote sensing images (finer than 1 meter) enable a textural representation of their periodic plantation pattern, which helps with parcel delineation. Even though such texture analysis may provide highly accurate delineation of vineyards, it may be infeasible at the national scale due to the computational complexity of texture extraction. In addition, particularly in Turkey, plantation practices for vineyards deviate from the common periodic pattern, which can render those textures insufficient. In this study, we used SPOT5 images to explore their capabilities for the delineation of vineyard parcels without any a priori parcel information. Because the inter-row distance and the spacing between individual vine plants are smaller than the 2.5 m panchromatic resolution used (generated from 2 x 5 m nadir scenes; the multispectral bands have 10 m nadir resolution), the currently used periodicity-based (Fourier) texture analysis may be ambiguous. Therefore, we used Gabor textures (at different scales and orientations) to define texture characteristics at this relatively coarse resolution, and we integrated these textures with the image bands (visible, near-infrared, and shortwave-infrared), which can spectrally distinguish vine plants from the remaining crops. For the recognition of vineyard parcels, we classified the extracted features with a recent hierarchical clustering method based on self-organizing neural networks. We compared the performance of this proposed method to object-based image analysis (using eCognition), which relies on multi-scale image segmentation and user-defined decision rules with corresponding thresholds.
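To illustrate the texture-extraction step described above, the sketch below builds a small Gabor filter bank over a few scales (frequencies) and orientations and stacks the filter-response magnitudes as per-pixel texture features. The kernel sizes, frequencies, orientations, and the synthetic test image are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from scipy import ndimage


def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel at the given frequency and orientation.

    Parameters here (sigma, size) are illustrative assumptions.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to the filter orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * frequency * xr)


def gabor_features(image, frequencies=(0.1, 0.2),
                   thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack one response band per (scale, orientation) pair."""
    feats = []
    for f in frequencies:
        for t in thetas:
            resp = ndimage.convolve(image.astype(float), gabor_kernel(f, t))
            feats.append(np.abs(resp))  # magnitude-like texture response
    return np.stack(feats, axis=-1)


# Synthetic "panchromatic" patch with a periodic row-like pattern,
# standing in for a vineyard parcel in a SPOT5 image.
img = np.sin(np.linspace(0.0, 20.0 * np.pi, 64))[None, :] * np.ones((64, 1))
stack = gabor_features(img)
print(stack.shape)  # (64, 64, 8): 2 scales x 4 orientations
```

In a full pipeline, these texture bands would be concatenated with the spectral bands (visible, near-infrared, shortwave-infrared) to form the per-pixel feature vectors that are then clustered.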