  1. Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. Ishida, T., Yamane, I., Charoenphakdee, N., Niu, G., and Sugiyama, M. In Proceedings of the 11th International Conference on Learning Representations (ICLR2023). [link] [arXiv]
  2. Towards Universal Neural Network Potential for Material Discovery Applicable to Arbitrary Combination of 45 Elements. Takamoto, S., Shinagawa, C., Motoki, D., Nakago, K., Li, W., Kurata, I., Watanabe, T., Yayama, Y., Iriguchi, H., Asano, Y., Onodera, T., Ishii, T., Kudo, T., Ono, H., Sawada, R., Ishitani, R., Ong, M., Yamaguchi, T., Kataoka, T., Hayashi, A., Charoenphakdee, N., and Ibuka, T. Nature Communications, 2022. [link] [arXiv]
  3. Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph. Chairatanakul, N., Sriwatanasakdi, N., Charoenphakdee, N., Liu, X., and Murata, T. Findings of EMNLP, 2021. [link] [arXiv] [code]
  4. Semi-Supervised Ordinal Regression Based on Empirical Risk Minimization. Tsuchiya, T., Charoenphakdee, N., Sato, I., and Sugiyama, M. Neural Computation, 2021. [link] [arXiv]
  5. Classification with Rejection Based on Cost-sensitive Classification. Charoenphakdee, N., Cui, Z., Zhang, Y., and Sugiyama, M. In Proceedings of the 38th International Conference on Machine Learning (ICML2021). [link] [arXiv] [poster] [slides]
  6. On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective. Charoenphakdee, N.*, Vongkulbhisal, J.*, Chairatanakul, N., and Sugiyama, M. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2021). [link] [arXiv] [poster] [slides]
  7. Robust Imitation Learning from Noisy Demonstrations. Tangkaratt, V., Charoenphakdee, N., and Sugiyama, M. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS2021). [link] [arXiv]
  8. Learning from Aggregate Observations. Zhang, Y., Charoenphakdee, N., Wu, Z., and Sugiyama, M. In Advances in Neural Information Processing Systems 33 (NeurIPS2020). [link] [arXiv] [poster] [code]
  9. Classification from Triplet Comparison Data. Cui, Z., Charoenphakdee, N., Sato, I., and Sugiyama, M. Neural Computation, 2020. [link] [arXiv] [code]
  10. On the Calibration of Multiclass Classification with Rejection. Ni, C., Charoenphakdee, N., Honda, J., and Sugiyama, M. In Advances in Neural Information Processing Systems 32 (NeurIPS2019), Vancouver, Canada, Dec. 8-14, 2019. [link] [arXiv] [poster]
  11. Learning Only from Relevant Keywords and Unlabeled Documents. Charoenphakdee, N., Lee, J., Jin, Y., Wanvarie, D., and Sugiyama, M. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019), Hong Kong, Nov. 3-7, 2019. [link] [arXiv] [poster] [code]
  12. Imitation Learning from Imperfect Demonstration. Wu, Y., Charoenphakdee, N., Bao, H., Tangkaratt, V., and Sugiyama, M. In Proceedings of the 36th International Conference on Machine Learning (ICML2019), Long Beach, California, USA, Jun. 9-15, 2019. [link] [arXiv] [poster] [slides] [code]
  13. On Symmetric Losses for Learning from Corrupted Labels. Charoenphakdee, N., Lee, J., and Sugiyama, M. In Proceedings of the 36th International Conference on Machine Learning (ICML2019), Long Beach, California, USA, Jun. 9-15, 2019. [link] [arXiv] [poster] [slides] [code]
  14. Positive-Unlabeled Classification under Class Prior Shift and Asymmetric Error. Charoenphakdee, N., and Sugiyama, M. In Proceedings of the SIAM International Conference on Data Mining (SDM2019), Calgary, Alberta, Canada, May 2-4, 2019. [link] [arXiv] [poster] [slides]
  15. Unsupervised Domain Adaptation Based on Source-guided Discrepancy. Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., and Sugiyama, M. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI2019), Honolulu, Hawaii, USA, Jan. 27-Feb. 1, 2019. [link] [arXiv] [poster] [slides]

Preprints and Workshop Articles

  1. Diffusion Models for Missing Value Imputation in Tabular Data. Zheng, S., and Charoenphakdee, N. NeurIPS Table Representation Learning (TRL) Workshop, 2022. [link] [arXiv]
  2. Do Better QM9 Models Extrapolate as Better Quantum Chemical Property Predictors? Zhang, Y., Charoenphakdee, N., and Takamoto, S. NeurIPS Machine Learning and the Physical Sciences Workshop, 2022. [link]
  3. Towards Creating Benchmark Datasets of Universal Neural Network Potential for Material Discovery. Takamoto, S., Charoenphakdee, N., and Shinagawa, C. NeurIPS Machine Learning and the Physical Sciences Workshop, 2022. [link]
  4. A Symmetric Loss Perspective of Reliable Machine Learning. Charoenphakdee, N., Lee, J., and Sugiyama, M. arXiv preprint arXiv:2101.01366. [arXiv]
  5. Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time. Imamura, H., Charoenphakdee, N., Futami, F., Sato, I., Honda, J., and Sugiyama, M. arXiv preprint arXiv:2003.04691. [arXiv]
  6. Learning from Indirect Observations. Zhang, Y., Charoenphakdee, N., and Sugiyama, M. arXiv preprint arXiv:1910.04394. [arXiv] [code]
  7. Domain Discrepancy Measure Using Complex Models in Unsupervised Domain Adaptation. Lee, J., Charoenphakdee, N., Kuroki, S., and Sugiyama, M. Symbolic-Neural Learning Workshop, 2019. [arXiv]