Continuum Computing Architecture Research Team

Team Outline

The Continuum Computing Architecture Research Team conducts middleware research for edge-to-cloud services and infrastructure, which DigiARC calls the Continuum Computing core, toward the realization of the various digital services envisioned in Society 5.0. Specifically, we focus on 1) acceleration technologies at the edge for advanced, high-performance continuum computing applications, and 2) cloud technologies that can deliver massive AI computing and data processing power to end services through tighter integration with the edge infrastructure.

Research Topics
    • Data management and use in continuum computing
      • Offloading and data reduction at edge for scalability and energy saving, Data pipeline optimization
      • DataOps automation, balanced with effective human interaction
      • Data traceability in sharing and integration
    • Management of continuum computing application services
      • Methods for developing and building high-performance, robust continuum computing applications for geo-distributed, unstable, and heterogeneous environments
      • Zero-touch service deployment, Dynamic QoS control, Offloading decision (see the illustrative sketch below the figure), Client-mobility support
      • Low-latency service-to-service interaction, Autonomous service orchestration
    • High-performance cloud technologies tightly connected with the edge
      • Large-scale computing with accelerators
      • High performance AI, AI resource hub
      • Low-latency and scalable connection services with the edge, Efficient resource management
[Figure: overview-edge2cloud.png (edge-to-cloud overview)]
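
As a concrete illustration of the "Offloading decision" topic above, the following is a minimal sketch of how an edge node might choose between processing a task locally and offloading it to the cloud by comparing rough latency estimates. It is only a hypothetical example under simplified assumptions; the Task and Node classes, the latency model, and all parameter values are illustrative and do not describe the team's actual methods.

```python
"""Minimal, illustrative sketch of an edge-vs-cloud offloading decision.

All class names, parameters, and numbers below are hypothetical assumptions
introduced only to illustrate the trade-off; they do not describe the
team's actual methods or infrastructure.
"""
from dataclasses import dataclass


@dataclass
class Task:
    input_bytes: int      # size of the data the task must read
    compute_ops: float    # estimated number of operations to execute


@dataclass
class Node:
    flops: float          # sustained compute throughput (operations/s)
    uplink_bps: float     # network bandwidth from this node toward the cloud


def offload_decision(task: Task, edge: Node, cloud: Node,
                     rtt_s: float = 0.02) -> str:
    """Return 'edge' or 'cloud' based on rough end-to-end latency estimates."""
    # Local execution: only the edge compute time matters.
    t_edge = task.compute_ops / edge.flops

    # Offloading: pay data transfer over the edge uplink, a network round
    # trip, and the (usually faster) cloud compute time.
    t_cloud = (task.input_bytes * 8 / edge.uplink_bps
               + rtt_s
               + task.compute_ops / cloud.flops)

    return "edge" if t_edge <= t_cloud else "cloud"


if __name__ == "__main__":
    # Example: a 10 MB inference task on a modest edge box with a 100 Mbps
    # uplink versus a much faster cloud accelerator.
    task = Task(input_bytes=10_000_000, compute_ops=2e11)
    edge = Node(flops=5e10, uplink_bps=1e8)
    cloud = Node(flops=5e12, uplink_bps=1e10)
    print(offload_decision(task, edge, cloud))  # -> 'cloud' in this setting
```

A real continuum scheduler would additionally weigh energy consumption, queueing at the edge, data reduction before upload, and client mobility, which correspond to the other items in the research topics list above.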

Information

Publications

2021.09.07 Ricardo Macedo, Cláudia Correia, Marco Dantas, Cláudia Brito, Weijia Xu, Yusuke Tanimura, Jason Haga, João Paulo, The Case for Storage Optimization Decoupling in Deep Learning Frameworks, Workshop on Re-envisioning Extreme-Scale I/O for Emerging Hybrid HPC Workloads (REX-IO'21), in conjunction with IEEE Cluster.
2021.08.11 Jun Li, Minjun Li, Zhigang Cai, Francois Trahay, Mohamed Wahib, Balazs Gerofi, Zhiming Liu, Jianwei Liao, Intra-page Cache Update in SLC Mode with Partial Programming in High Density SSDs, The 50th International Conference on Parallel Processing (ICPP 2021).
2021.08.01 Fareed Mohammad Qararyah, Mohamed Wahib, Didem Unat, ParDNN: A Generic and Deterministic Method to Partition Graphs of Memory-Constrained DNNs, Elsevier Journal of Parallel Computing (PARCO). (Accepted: to appear in August 2021)
2021.07.21 Shinichiro Takizawa, Yusuke Tanimura, Hidemoto Nakada, Ryousei Takano, Hirotaka Ogawa, ABCI 2.0: Advances in Open AI Computing Infrastructure at AIST, IPSJ SIGHPC-180 (SWoPP2021).
2021.06.23 Albert Njoroge Kahira, Truong Thao Nguyen, Leonardo Bautista Gomez, Ryousei Takano, Rosa M. Badia, An Oracle for Guiding Large-Scale Model/Hybrid Parallel Training of Convolutional Neural Networks, ACM Symposium on High-Performance Parallel and Distributed Computing 2021 (HPDC'21).
2021.06.17 Peng Chen, Mohamed Wahib, Xiao Wang, Shinichiro Takizawa, Takahiro Hirofuchi, Hirotaka Ogawa, Satoshi Matsuoka, Performance Portable Back-projection Algorithms on CPUs: Agnostic Data Locality and Vectorization Optimizations, International Conference on Supercomputing 2021 (ICS21).
2021.06.01 Martin Schlueter, Mehdi Neshat, Mohamed Wahib, Masaharu Munetomo, Markus Wagner, GTOPX space mission benchmarks, Elsevier SoftwareX Volume 14, June 2021. [paper]
2021.05.20 Jens Domke, Emil Vatai, Aleksandr Drozd, Peng Chen, Yosuke Oyama, Lingqi Zhang, Shweta Salaria, Daichi Mukunoki, Artur Podobas, Mohamed Wahib, Satoshi Matsuoka, Matrix Engines for High Performance Computing: A Paragon of Performance or Grasping at Straws?, The 35th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2021), pp.121-132.
2021.05.10 Truong Thao Nguyen, Mohamed Wahib, An Allreduce Algorithm and Network Co-Design for Large-Scale Training of Distributed Deep Learning, The 21st IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing 2021 (CCGrid 2021). [video]

Researcher Profile

Akihisa Sakabe
Specified Concentrated Research Specialist

Tadashi Sugita
Technical Staff

Satoshi Nagai
Specified Concentrated Research Specialist

Peng Chen
Researcher
High Performance Computing, Image Processing
https://researchmap.jp/pengchen

Shinichiro Takizawa
Senior Researcher
High Performance Computing, Big Data, System Management
https://stakizawa.github.io/

Truong Thao Nguyen
Researcher
High Performance Computing, Interconnection Network
https://researchmap.jp/NguyenTT

Attia Mohamed Wahib
Senior Researcher
High Performance Computing, Parallel Programming, Large-scale AI

Yusuke Tanimura
Team Leader
Parallel and Distributed Storage, Large-scale Data Processing, High Performance Computing, Continuum Computing

Jason Haga
Chief Senior Researcher
Immersive Visualization and Analytics, UX/UI, Applied AI, Edge Computing

Hidemoto Nakada
Chief Senior Researcher
Parallel/Distributed Computing, Machine Learning, Programming Languages
https://sites.google.com/site/hidemotonakada