Geoinformation Service Research Team

publication

2021.02.18

Post-arrival calibration of Hayabusa2's optical navigation cameras (ONCs): Severe effects from touchdown events

Toru Kouyama, Eri Tatsumi, Yasuhiro Yokota, Koki Yumoto, Manabu Yamada, Rie Honda, Shingo Kameda, Hidehiko Suzuki, Naoya Sakatani, Masahiko Hayakawa, Tomokatsu Morota, Moe Matsuoka, Yuichiro Cho, Chikatoshi Honda, Hirotaka Sawada, Kazuo Yoshioka and Seiji Sugita

<Abstract>

Accurate measurements of surface brightness and its spectrophotometric properties are essential for obtaining reliable observations of the physical and material properties of planetary bodies. To measure the surface brightness of Ryugu accurately, we calibrated the optical navigation cameras (ONCs) of Hayabusa2 using both standard stars and Ryugu itself during the rendezvous phase, which included two touchdown operations for sampling. These calibration results showed that the nadir-viewing telescopic camera (ONC-T) and the nadir-viewing wide-angle camera (ONC-W1) experienced substantial variation in sensitivity. In particular, ONC-W1 showed significant sensitivity degradation (~60%) after the first touchdown operation. We attribute these degradations to contamination of the front lenses by fine-grained material lifted from the Ryugu surface by thruster gas during the ascent maneuvers and by the sampler projectile impacts at touchdown. Although ONC-T is located very close to W1 on the spacecraft, its sensitivity degradation was only ~15% over the entire rendezvous phase. If dust is indeed the main cause of the degradation, this lighter damage likely resulted from the protection provided by the long hood attached to ONC-T. However, because large variations in the absolute sensitivity occurred after the touchdown events, presumably due to dust effects, the uncertainty in the absolute sensitivity was rather large (3-4%). In contrast, the change in the relative spectral responsivity (i.e., 0.55-μm-band normalized responsivity) of ONC-T was small (1%). The variation in relative responsivity during the proximity phase has been calibrated well enough to leave only a small uncertainty (< 1%). Furthermore, the degradation (i.e., increase) in the full width at half maximum of the point spread function of ONC-T and W1 was almost negligible, although a blurring effect due to dust scattering was confirmed in W1.
These optical degradations due to the touchdown events were carefully monitored as a function of time, along with other time-dependent deteriorations such as the dark-current level and hot pixels. We also conducted a new calibration of the flat-field change as a function of detector temperature by observing the onboard flat-field lamp and validating the results with Ryugu's disk images. These calibrations show that, with the updated calibration parameters, ONC-T and W1 maintained their scientific performance.
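The sensitivity monitoring described above reduces, at its core, to tracking the ratio of a standard star's measured signal to a pre-event reference. The sketch below illustrates that calculation with made-up numbers; the function name and values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of tracking relative sensitivity from standard-star
# observations. All names and numbers here are illustrative only.

def relative_sensitivity(measured_flux_dn, reference_flux_dn):
    """Ratio of the measured star signal to a pre-touchdown reference."""
    return measured_flux_dn / reference_flux_dn

# Example: a star measured at 4000 DN before an event and 1600 DN after it
pre, post = 4000.0, 1600.0
ratio = relative_sensitivity(post, pre)
degradation_pct = (1.0 - ratio) * 100.0
print(f"sensitivity ratio: {ratio:.2f}, degradation: {degradation_pct:.0f}%")
# A ~60% degradation, as reported for ONC-W1, corresponds to a ratio of ~0.40
```

In practice such ratios would be derived from calibrated stellar photometry and tracked over the mission timeline, alongside dark current and hot-pixel monitoring.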

2020.11.04

Transfer Learning With CNNs for Segmentation of PALSAR-2 Power Decomposition Components

Poliyapram Vinayaraj, Ryu Sugimoto, Ryosuke Nakamura and Yoshio Yamaguchi

<Abstract>
Water/ice/land region segmentation is an important task in remote sensing, as it enables analysis of the occurrence of water and ice on the Earth's surface. Many previous deep learning studies have effectively utilized multispectral satellite images for highly accurate water/ice/land region segmentation. However, deep-learning-based segmentation of synthetic aperture radar images remains challenging due to the scarcity of labeled data. To overcome this issue, we designed a two-step deep-learning-based transfer learning model that requires only a very limited number of labeled samples. The proposed approach consists of two models. The first is a deep encoder-decoder 6SD-to-Landsat-8 multispectral translation model (DTF) that translates fully polarimetric PALSAR-2 6SD data into six new features. The second model (transfer learning) uses the DTF features to fine-tune a model pretrained on Landsat-8 multispectral data for water/ice/land segmentation. Hereinafter, the proposed two-step model is referred to as DTF-TL. A qualitative and quantitative analysis was carried out to evaluate the performance of DTF-TL and compare it with various transfer learning methods. Overall, the DTF-TL model outperformed the other models, yielding consistent and reliable water/ice/land segmentation results in terms of recall (0.980), precision (0.981), F1-score (0.981), mean intersection over union (0.962), and accuracy (0.989).
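The evaluation metrics quoted above are all standard quantities computed from a confusion matrix. The sketch below shows their definitions for a simplified binary case (e.g., water vs. non-water) with made-up counts; the actual study evaluates three classes (water/ice/land), typically by averaging per-class scores.

```python
# Illustrative sketch of the reported evaluation metrics for a binary
# segmentation task, computed from confusion-matrix counts.
# The counts below are made up and not from the paper.

def segmentation_metrics(tp, fp, fn, tn):
    """Return (recall, precision, F1, mean IoU, accuracy) for two classes."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    iou_pos = tp / (tp + fp + fn)          # IoU of the positive class
    iou_neg = tn / (tn + fp + fn)          # IoU of the negative class
    miou = (iou_pos + iou_neg) / 2.0       # mean intersection over union
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return recall, precision, f1, miou, accuracy

r, p, f1, miou, acc = segmentation_metrics(tp=98, fp=2, fn=2, tn=98)
print(f"recall={r:.3f} precision={p:.3f} F1={f1:.3f} mIoU={miou:.3f} acc={acc:.3f}")
```

Note that mean IoU is stricter than F1 for the same predictions, which is consistent with the mIoU (0.962) being lower than the F1-score (0.981) in the results above.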

2020.04.01

Canopy Averaged Chlorophyll Content Prediction of Pear Trees using Convolutional Auto-Encoder on Hyperspectral Data

Subir Paul, Vinayaraj Poliyapram, Nevrez İmamoğlu, Kuniaki Uto, Ryosuke Nakamura, D. Nagesh Kumar

<Abstract>
Chlorophyll content is one of the essential parameters for assessing the growth of fruit trees. The present study developed a model for estimating the canopy-averaged chlorophyll content (CACC) of pear trees using convolutional auto-encoder (CAE) features of hyperspectral data. The study also inspected anomalies among the trees by applying multi-dimensional scaling (MDS) to the CAE features and detected outlier trees prior to fitting nonlinear regression models. These outlier trees were excluded from further experiments, which helped improve the prediction performance for CACC. Gaussian process regression (GPR) and support vector regression (SVR) were investigated as nonlinear regression models for predicting CACC. The CAE features provided better CACC predictions than the direct use of hyperspectral bands or vegetation indices as predictors. The CACC prediction performance improved when the outlier trees were excluded during training of the regression models. The experiments showed that GPR predicted CACC more accurately than SVR. In addition, we evaluated the reliability of the tree canopy masks used to average the feature values for each tree.
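The outlier-screening step can be illustrated with a simple distance-based stand-in: flag trees whose feature vectors lie unusually far from the rest before fitting any regression model. The paper applies MDS to the CAE features; the sketch below instead uses mean pairwise distance in feature space on synthetic data, purely for illustration, and all names and thresholds are assumptions.

```python
import numpy as np

# Hedged sketch of outlier screening in feature space (a stand-in for the
# MDS-based inspection described in the abstract). Synthetic data only.

def flag_outliers(features, z_thresh=2.0):
    """features: (n_trees, n_features) array. Returns a boolean outlier mask."""
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))        # pairwise distances
    mean_dist = dists.sum(axis=1) / (len(features) - 1)
    z = (mean_dist - mean_dist.mean()) / mean_dist.std()
    return z > z_thresh                               # far-from-everything trees

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(20, 8))                # 20 trees, 8 features
X[3] += 10.0                                          # inject one anomalous tree
mask = flag_outliers(X)
print(mask[3], int(mask.sum()))                       # the injected tree is flagged
```

Trees flagged this way would be excluded before training the GPR or SVR models, mirroring the exclusion step that improved CACC prediction in the study.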