Geoinformation Service Research Team

Team Outline

Geoinformation service as a bridge between Cyber and Physical space

The Geoinformation Service Research Team develops intelligent and effective analysis engines to handle rapidly growing geoinformation, such as satellite imagery, aerial photographs, and terrestrial lidar data.

Information

2024.04.24 3DDB Viewer and Landbrowser are out of service due to an electric power outage
2024.04.22 3DDB Viewer and Landbrowser Planned maintenance
2024.04.18 3DDB Viewer and Landbrowser planned maintenance completed
2024.04.16 3DDB Viewer and Landbrowser Maintenance Schedule Extension
2024.03.14 3DDB Viewer and Landbrowser Planned maintenance
2023.06.01 3DDB Viewer and Landbrowser Planned maintenance
2023.03.27 3DDB Viewer and Landbrowser Planned maintenance
2022.12.06 3DDB Viewer and Landbrowser Planned maintenance
2022.08.24 3DDB Viewer and Landbrowser Planned maintenance
2022.08.16 3DDB Viewer and Landbrowser Planned maintenance
2022.07.20 3DDB Viewer and Landbrowser planned maintenance completed
2022.07.20 3DDB Viewer and Landbrowser Planned maintenance
2022.07.15 New 3D data of Ryusenji in Osaka and Science Square Tsukuba in AIST are now available on TDV.
2022.06.21 3DDB Viewer and Landbrowser planned maintenance completed
2022.06.10 3DDB Viewer and Landbrowser Planned maintenance
2022.06.01 3DDB Viewer Planned maintenance
2022.05.11 InSARBrowser was released on 2022/4/27
2022.05.10 3DDB Viewer and Landbrowser maintenance
2022.04.07 3DDB Viewer and Landbrowser planned maintenance completed
2022.01.04 3DDB Viewer planned maintenance completed
2021.12.28 3DDB Viewer and Landbrowser Planned maintenance
2021.12.06 3DDB Viewer and Landbrowser Planned maintenance
2021.08.04 We have received a technical award from DAPCON
2021.04.20 New data have been added on 3DDB Viewer
2021.04.16 COG files of PALSAR L2.1 and L2.1PD are available
2020.03.04 PolSAR Browser is available

Publication

2021.02.18

Post-arrival calibration of Hayabusa2's optical navigation cameras (ONCs): Severe effects from touchdown events

Toru Kouyama, Eri Tatsumi, Yasuhiro Yokota, Koki Yumoto, Manabu Yamada, Rie Honda, Shingo Kameda, Hidehiko Suzuki, Naoya Sakatani, Masahiko Hayakawa, Tomokatsu Morota, Moe Matsuoka, Yuichiro Cho, Chikatoshi Honda, Hirotaka Sawada, Kazuo Yoshioka and Seiji Sugita

<Abstract>

Accurate measurements of the surface brightness and its spectrophotometric properties are essential for obtaining reliable observations of the physical and material properties of planetary bodies. To measure the surface brightness of Ryugu accurately, we calibrated the optical navigation cameras (ONCs) of Hayabusa2 using both standard stars and Ryugu itself during the rendezvous phase, including two touchdown operations for sampling. These calibration results showed that the nadir-viewing telescopic camera (ONC-T) and nadir-viewing wide-angle camera (ONC-W1) experienced substantial variation in sensitivity. In particular, ONC-W1 showed significant sensitivity degradation (~60%) after the first touchdown operation. We estimated that the degradation was caused by contamination of the front lens by fine-grained material lifted from the Ryugu surface by thruster gas during the ascent maneuvers and by the sampler projectile impacts at touchdown. Although ONC-T is located very close to ONC-W1 on the spacecraft, its sensitivity degradation was only ~15% over the entire rendezvous phase. If dust is indeed the main cause of the degradation, this lighter damage likely resulted from the dust protection provided by the long hood attached to ONC-T. However, because large variations in the absolute sensitivity occurred after the touchdown events, most likely due to dust effects, the uncertainty in the absolute sensitivity was rather large (3-4%). On the other hand, the change in relative spectral responsivity (i.e., 0.55-μm-band normalized responsivity) of ONC-T was small (1%). The variation in relative responsivity during the proximity phase has been calibrated well enough to leave only a small uncertainty (< 1%). Furthermore, the degradation (i.e., increase) in the full width at half maximum of the point spread function of ONC-T and W1 was almost negligible, although a blurring effect due to dust scattering was confirmed in W1.
These optical degradations due to the touchdown events were carefully monitored as a function of time, along with other time-dependent deteriorations such as the dark current level and hot pixels. We also conducted a new calibration of the flat-field change as a function of the detector temperature by observing the onboard flat-field lamp and validating the result with Ryugu's disk images. These calibrations showed that ONC-T and W1 maintained their scientific performance once the calibration parameters were updated.
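The corrections discussed in the abstract (dark-current subtraction, a flat-field division, and a time-varying sensitivity factor) combine into a standard radiometric-correction chain. The sketch below illustrates that chain on an invented frame; none of the arrays or coefficients are actual ONC calibration values, and the function name `correct` is ours.

```python
import numpy as np

def correct(raw, dark, flat, sensitivity):
    """Return calibrated brightness: subtract dark current, divide by the
    flat field, then rescale by the sensitivity factor."""
    return (raw - dark) / flat / sensitivity

rng = np.random.default_rng(2)
truth = rng.uniform(100.0, 200.0, size=(8, 8))   # "true" scene brightness
dark = 5.0 * np.ones((8, 8))                     # illustrative dark-current level
flat = rng.uniform(0.9, 1.1, size=(8, 8))        # pixel-to-pixel response variation
sensitivity = 0.4                                # e.g. a post-touchdown sensitivity drop

raw = truth * flat * sensitivity + dark          # simulate a raw detector frame
recovered = correct(raw, dark, flat, sensitivity)
```

Because the simulated frame applies exactly the inverse operations, the corrected frame recovers the input brightness; on real data each of the three calibration terms carries the uncertainties quantified in the abstract.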

2020.11.04

Transfer Learning With CNNs for Segmentation of PALSAR-2 Power Decomposition Components

Poliyapram Vinayaraj, Ryu Sugimoto, Ryosuke Nakamura and Yoshio Yamaguchi

<Abstract>
Water/ice/land segmentation is an important task in remote sensing, as it reveals the occurrence of water or ice on the Earth's surface. Many previous deep-learning studies have effectively utilized multispectral satellite images for highly accurate water/ice/land segmentation. However, deep-learning-based segmentation of synthetic aperture radar images remains challenging due to the scarcity of labeled data. To overcome this issue, we designed a two-step deep-learning-based transfer learning model that requires only a very limited number of labeled samples. The proposed approach consists of two models. The first is a deep encoder-decoder 6SD-to-Landsat-8 multispectral translation model (DTF) that translates fully polarimetric PALSAR-2 6SD data into six new features. The second model (transfer learning) uses the DTF features to fine-tune a Landsat-8 multispectral pretrained model for water/ice/land segmentation. Hereinafter, the proposed two-step model is referred to as DTF-TL. A qualitative and quantitative analysis was carried out to evaluate the performance of the proposed model and compare it with various transfer learning methods. Overall, the DTF-TL model outperformed the other models, giving consistent and reliable water/ice/land segmentation results in terms of recall (0.980), precision (0.981), F1-score (0.981), mean intersection over union (0.962), and accuracy (0.989).
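The two-step idea above (translate SAR features into optical-like features, then reuse a classifier trained on optical data) can be sketched with toy numpy stand-ins. Everything below is an illustrative assumption rather than the paper's implementation: a least-squares linear map stands in for the deep encoder-decoder DTF, a nearest-centroid rule for the pretrained segmentation CNN, and the data are random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 500 pixels, 6 SAR decomposition features, 6 optical-like bands.
n_pixels = 500
sar = rng.normal(size=(n_pixels, 6))
true_map = rng.normal(size=(6, 6))
optical = sar @ true_map + 0.01 * rng.normal(size=(n_pixels, 6))

# Step 1 (the "DTF" idea, simplified): learn a translation from SAR features
# to optical-like features by least squares.
W, *_ = np.linalg.lstsq(sar, optical, rcond=None)
translated = sar @ W  # translated "DTF" features

# Step 2 (the transfer idea, simplified): a classifier built on optical-style
# features is reused on the translated SAR features.
labels = np.digitize(optical[:, 0], bins=[-0.7, 0.7])  # toy 3-class labels
centroids = np.stack([optical[labels == k].mean(axis=0) for k in range(3)])

def predict(x):
    # assign each pixel to the nearest class centroid
    d2 = ((x[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Because the translated features approximate the optical ones, the
# optical-trained classifier transfers to the SAR side almost unchanged.
agreement = (predict(translated) == predict(optical)).mean()
```

The design point the sketch captures is that the expensive labeled supervision lives entirely on the optical side; the SAR side only has to learn the feature translation.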

2020.04.01

Canopy Averaged Chlorophyll Content Prediction of Pear Trees using Convolutional Auto-Encoder on Hyperspectral Data

Subir Paul, Vinayaraj Poliyapram, Nevrez İmamoğlu, Kuniaki Uto, Ryosuke Nakamura, D. Nagesh Kumar

<Abstract>
Chlorophyll content is one of the essential parameters for assessing the growth of fruit trees. This study developed a model for estimating the canopy averaged chlorophyll content (CACC) of pear trees using convolutional auto-encoder (CAE) features of hyperspectral data. The study also demonstrated the inspection of anomalies among the trees by applying multi-dimensional scaling (MDS) to the CAE features, detecting outlier trees prior to fitting nonlinear regression models. These outlier trees were excluded from further experiments, which improved the prediction performance for CACC. Gaussian process regression (GPR) and support vector regression (SVR) were investigated as nonlinear regression models for predicting CACC. The CAE features proved to provide better prediction of CACC than the direct use of hyperspectral bands or vegetation indices as predictors, and the prediction performance improved when the outlier trees were excluded during training of the regression models. The experiments showed that GPR predicted CACC with better accuracy than SVR. In addition, the reliability of the tree canopy masks, which were used to average the feature values for each tree, was also evaluated.
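The pipeline described above — compress spectra to a few features, flag outlier trees in feature space, then fit a nonlinear regressor on the remaining trees — can be sketched in numpy. All of the following are illustrative stand-ins, not the paper's methods or data: PCA replaces the convolutional auto-encoder, a centroid-distance rule replaces the MDS-based inspection, and RBF kernel ridge regression replaces GPR.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 40 "trees", each with a 50-band mean canopy spectrum driven by
# 3 hidden factors; the chlorophyll target depends on the first factor.
n_trees, n_bands = 40, 50
latent = rng.normal(size=(n_trees, 3))
spectra = latent @ rng.normal(size=(3, n_bands)) + 0.05 * rng.normal(size=(n_trees, n_bands))
cacc = 30 + 5 * latent[:, 0] + rng.normal(scale=0.2, size=n_trees)

# 1) Compress spectra to a few features (auto-encoder stand-in: PCA via SVD).
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
feats = centered @ vt[:3].T

# 2) Flag outlier trees by distance from the feature-space centroid
#    (stand-in for the MDS-based inspection) and exclude them.
dist = np.linalg.norm(feats - feats.mean(axis=0), axis=1)
keep = dist < dist.mean() + 2 * dist.std()

# 3) Fit an RBF kernel ridge regressor (GPR stand-in) on the kept trees.
def rbf(a, b, gamma=0.1):
    d2 = ((a[:, None, :] - b[None]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

K = rbf(feats[keep], feats[keep])
alpha = np.linalg.solve(K + 1e-3 * np.eye(K.shape[0]), cacc[keep])
pred = rbf(feats, feats[keep]) @ alpha

rmse = np.sqrt(np.mean((pred[keep] - cacc[keep]) ** 2))
```

The exclusion step mirrors the paper's finding: trees far from the bulk of the feature distribution are removed before training so they cannot distort the regressor.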


Researcher Profile

Toru Kouyama
Team Leader

Remote sensing, Planetary Meteorology
t.kouyama[at]aist.go.jp
Nevrez IMAMOGLU
Senior Researcher

nevrez.imamoglu[at]aist.go.jp
Hirokazu Yamamoto
Chief Senior Researcher

hirokazu.yamamoto[at]aist.go.jp
Yuri Nishikawa
Senior Researcher

nishikawa.yuri[at]aist.go.jp
https://yurinishikawa.github.io/index.html
Ali Caglayan
Senior Researcher

Computer Vision, Artificial Intelligence, Deep Learning, Robotics.
ali.caglayan[at]aist.go.jp
Atsushi Oda
Invited Researcher

x-oda[at]aist.go.jp
Ryosuke Nakamura
Principal Research Manager

Planetary Science, Satellite remote sensing
r.nakamura[at]aist.go.jp
Chiaki Tsutsumi
Principal Research Manager

Geoinformation service
tsutsumi.chiaki[at]aist.go.jp
Soushi Kato
Specified Concentrated Research Specialist

Remote Sensing
kato.soushi[at]aist.go.jp
Ryu Sugimoto
Specified Concentrated Research Specialist

sugimoto.ryu[at]aist.go.jp
Yosuke Ikeda
Specified Concentrated Research Specialist

yosuke.ikeda[at]aist.go.jp
Ryo Ito
Specified Concentrated Research Specialist

itou.ryo[at]aist.go.jp
Yuya Arima
AIST Postdoctoral Researcher

y-arima[at]aist.go.jp 
Yusuke Kobayashi
Specified Concentrated Research Specialist

kobayashi.yusuke[at]aist.go.jp