Intro
Hirokatsu KATAOKA is a Research Scientist in the Computer Vision Research Group at the National Institute of Advanced Industrial Science and Technology (AIST). He also collaborates with Keio University, the University of Tsukuba, and Tokyo Denki University, and leads cvpaper.challenge, a comprehensive survey project in the field of computer vision and pattern recognition.

I am looking for post-docs and Ph.D. students to submit to top conferences (e.g., CVPR/ICCV/ICRA/ACL). Please send me your CV.


What's new?
May 16, 2018: Our paper is accepted to CVPR 2018 Workshop on Sight and Sound (WSS)
May 8, 2018: Our paper is accepted to CVPR 2018 Workshop on Language and Vision
Apr 19, 2018: We opened the cvpaper.challenge webpage
Apr 11, 2018: 2 papers are accepted to ICPR 2018
Feb 19, 2018: 2 papers are accepted to CVPR 2018
Feb 8, 2018: 1 paper is accepted to Sensors (Open Access Journal)
Jan 12, 2018: 1 paper is accepted to ICRA 2018
Nov 28, 2017: A new paper is on arXiv
Aug 26, 2017: Spatiotemporal 3D ResNets are released!
Aug 22, 2017: 1 paper is accepted to ICCV 2017 Workshop
Jul 4, 2017: Our paper is accepted to ICDAR 2017
May 10, 2017: 4 papers are accepted to CVPR 2017 Workshop
Feb 14, 2017: 2 papers are accepted to IAPR MVA 2017
Oct 8, 2016: Oral, Brave New Idea at ECCV 2016 BNMW
Sep 8, 2016: 2 papers are accepted to ECCV 2016 Workshop
Jul 15, 2016: 1 paper is accepted to BMVC2016


Education
Ph.D. in Engineering, Keio University (April 2011 - March 2014)
Adviser: Prof. Yoshimitsu Aoki
Keio University, Graduate School of Science and Technology; Doctor of Engineering. Adviser: Assoc. Prof. Yoshimitsu Aoki

M.E. Keio University (April 2009 - March 2011)
Adviser: Prof. Yoshimitsu Aoki
Keio University, Graduate School of Science and Technology; Master's program. Adviser: Assoc. Prof. Yoshimitsu Aoki

B.E. Shibaura Institute of Technology (April 2005 - March 2009)
Shibaura Institute of Technology, College of Engineering; Bachelor's program. Adviser: Prof. Kazuo Ohzeki

Midorioka High School (April 2002 - March 2005)
Ibaraki Prefectural Midorioka High School, General Course


Selected Projects

cvpaper.challenge
[Webpage] [Twitter] [SlideShare] [PDF1] [PDF2]
Hirokatsu Kataoka (AIST) et al.
Top conferences
We are looking for collaborators to read and write sophisticated papers!
cvpaper.challenge focuses on reading top-conference papers in the fields of computer vision, image processing, pattern recognition, and machine learning. In this challenge, we read papers and simultaneously create documents that make top-conference papers easy to understand. The first challenge was to completely read the CVPR 2015 papers. We currently read 1,000+ papers per year and submit 10+ papers per year. More details are on the Twitter, SlideShare, and web pages.

Neural Joking Machine: Humorous image captioning
[PDF] [Oral] [Poster]
Kota Yoshida (TDU), Munetaka Minoguchi (TDU/AIST), Kenichiro Wani, Akio Nakamura (TDU), Hirokatsu Kataoka (AIST)
CVPR 2018 WS
Now we are joking!
What is an effective expression that draws laughter from human beings? In the present paper, in order to consider this question from an academic standpoint, we generate image captions that draw a "laugh" by a computer. We construct a system that outputs funny captions by building on image captioning methods proposed in the computer vision field. Moreover, we propose the Funny Score, which flexibly assigns weights according to an evaluation database; the Funny Score more effectively brings out "laughter" when optimizing a model. In addition, we build a self-collected BoketeDB, which contains themes (images) and funny captions (text) posted on "Bokete", an image Ogiri website. In our experiments, we use BoketeDB to verify the effectiveness of the proposed method by comparing its results against MS COCO pre-trained CNN+LSTM as the baseline and against funny captions created by humans. We refer to the proposed method, which uses the BoketeDB pre-trained model, as the Neural Joking Machine (NJM).
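The abstract does not spell out how the Funny Score enters training, so the following is only a minimal PyTorch sketch of one plausible reading: a per-caption cross-entropy loss re-weighted by a funniness rating drawn from the evaluation database. The function name and the eval_scores input are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def funny_weighted_caption_loss(logits, targets, eval_scores, pad_id=0):
    """Hypothetical Funny-Score-style loss: caption cross-entropy
    re-weighted per sample by a funniness rating.

    logits:      (B, T, V) decoder outputs for a batch of captions
    targets:     (B, T) ground-truth token ids
    eval_scores: (B,) ratings from the evaluation database
                 (e.g. Bokete star counts); higher = funnier
    """
    B, T, V = logits.shape
    token_loss = F.cross_entropy(
        logits.reshape(B * T, V), targets.reshape(B * T),
        ignore_index=pad_id, reduction="none",
    ).reshape(B, T)
    mask = (targets != pad_id).float()
    per_caption = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    # Assumed weighting: funnier captions contribute more to the gradient.
    weights = eval_scores.float() / eval_scores.float().mean().clamp(min=1e-6)
    return (weights * per_caption).mean()
```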

Drive Video Analysis for the Detection of Traffic Near-Miss Incidents
[PDF] [Poster] [Digest]
Hirokatsu Kataoka (AIST), Teppei Suzuki (Keio/AIST), Shoko Oikawa, Yasuhiro Matsui (NTSEL), Yutaka Satoh (AIST)
ICRA 2018, CVPR 2018
We have collected a large-scale traffic near-miss incident database!
We present a novel traffic database that contains information on a large number of traffic near-miss incidents. The study makes the following two main contributions: (i) in order to assist automated systems in detecting near-miss incidents based on database instances, we created a large-scale traffic near-miss incident database (NIDB) that consists of video clips of dangerous events captured by monocular driving recorders; (ii) to illustrate the applicability of traffic near-miss incidents, we provide two primary database-related improvements: parameter fine-tuning using various near-miss scenes from NIDB, and foreground/background separation into motion representation. Using our new database in conjunction with a monocular driving recorder, we developed a near-miss recognition method that provides automated systems with a performance level comparable to a human-level understanding of near-miss incidents (64.5% vs. 68.4% at near-miss recognition, 61.3% vs. 78.7% at near-miss detection).
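The abstract names foreground/background separation as one ingredient of the motion representation without detailing the method. Purely as an illustration, here is a generic background-subtraction sketch using OpenCV's Gaussian-mixture model; the paper's actual separation procedure may well differ.

```python
import cv2

def foreground_motion_masks(video_path):
    """Yield (frame, foreground mask) pairs from a driving-recorder clip.

    Generic stand-in for foreground/background separation: a
    Gaussian-mixture background model flags moving regions.
    Mask values: 0 = background, 255 = foreground, 127 = shadow.
    """
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)  # suppress sensor noise
        yield frame, mask
    cap.release()
```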

Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?
[PDF] [GitHub (700 stars!)]
Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh (AIST)
CVPR 2018
3D convolution is ready to be used in various video applications!
The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs, from relatively shallow to very deep, on current video datasets. The Kinetics dataset has sufficient data for training deep 3D CNNs, and enables training of ResNets with up to 152 layers, interestingly similar to 2D ResNets on ImageNet. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet.
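For readers unfamiliar with spatio-temporal kernels, the sketch below shows a basic 3D-ResNet-style residual block in PyTorch: 3x3x3 convolutions slide over time as well as space, and an identity shortcut is added as in 2D ResNets. This is a minimal illustration, not the released code; see the GitHub repository for the actual models.

```python
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """Minimal 3D residual block: two 3x3x3 convolutions over (T, H, W)
    plus an identity shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                 # x: (N, C, T, H, W)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)         # residual connection

clip = torch.randn(1, 64, 16, 56, 56)     # 16-frame feature map
print(BasicBlock3D(64)(clip).shape)       # torch.Size([1, 64, 16, 56, 56])
```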

Dynamic Fashion Cultures
[PDF] [Poster] [Presen]
Kaori Abe (TDU/AIST), Teppei Suzuki (Keio/AIST), Shunya Ueta (Tsukuba/AIST), Akio Nakamura (TDU), Yutaka Satoh, Hirokatsu Kataoka (AIST)
arXiv Pre-print, MIRU 2017 (Oral: 28.0%, Best Student Paper)
Now we can start a worldwide fashion analysis!
The paper presents a novel concept that analyzes and visualizes worldwide fashion styles. Our goal is to reveal web-based viral fashion styles. To achieve this fashion-based analysis, we created the Fashion Culture Database (FCDB), which consists of 76 million geo-tagged images from 16 cosmopolitan cities. To grasp trends in mixed fashion styles, the paper also proposes an unsupervised fashion trend descriptor (FTD) that combines a local descriptor, codeword vectors, and temporal subtraction. As the result of large-scale data collection and an unsupervised analyzer, we achieve world-level fashion visualization.
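The abstract describes the FTD as codeword vectors combined with temporal subtraction. The snippet below is a minimal NumPy illustration under that assumed reading: per-period codeword histograms differenced over time so that rising and falling styles stand out. The function name and normalization are hypothetical.

```python
import numpy as np

def fashion_trend_descriptor(codeword_ids_per_period, num_codewords):
    """Assumed FTD-style computation for one city.

    codeword_ids_per_period: list of 1-D int arrays; element t holds the
    codeword id assigned to each image observed in period t.
    """
    # Codeword vector per period: normalized histogram of assignments.
    hists = np.stack([
        np.bincount(ids, minlength=num_codewords) / max(len(ids), 1)
        for ids in codeword_ids_per_period
    ])                                # (T, num_codewords)
    # Temporal subtraction: per-codeword change between periods.
    return np.diff(hists, axis=0)     # (T-1, num_codewords)

# Toy example: 3 periods, 4 codewords.
periods = [np.array([0, 0, 1]), np.array([1, 1, 2]), np.array([2, 3, 3])]
print(fashion_trend_descriptor(periods, 4))
```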

Generated Motion Maps
[PDF]
Yuta Matsuzaki, Kazushige Okayasu, Akio Nakamura (TDU), Hirokatsu Kataoka (AIST)
CVPR 2017 Workshop
Motion maps are directly generated with a generative model.
The paper presents a concept, generated motion maps, for directly generating a human-specific modality, such as a human pose heatmap or stacked optical flow, from only one RGB image. Although conventional approaches have achieved such complicated estimation with discriminative models, we find a solution with a recent generative model. The two primary contributions of this paper are as follows: (i) the proposed approach directly generates a human pose heatmap and stacked optical flow from an RGB image; (ii) we have collected a database that contains image pairs of an RGB channel and an image modality (pose-based heatmap and stacked optical flow). The experimental results clearly show the effectiveness of our generative model, as well as its ability to generate motion maps.
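As a rough illustration of mapping one RGB image to a dense motion modality, here is a toy encoder-decoder in PyTorch. The paper's generative model is not specified above, so this stand-in is only a sketch; the architecture and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class MotionMapGenerator(nn.Module):
    """Toy generator: RGB image -> K-channel motion map (e.g. one channel
    per joint for pose heatmaps, or 2*L channels for stacked flow)."""

    def __init__(self, out_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, rgb):               # rgb: (N, 3, H, W)
        return self.decoder(self.encoder(rgb))

heatmaps = MotionMapGenerator(out_channels=14)(torch.randn(1, 3, 128, 128))
print(heatmaps.shape)                     # torch.Size([1, 14, 128, 128])
```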

Human Action Recognition without Human
[PDF] [Slide] [Poster]
Yun He, Soma Shirakabe (Tsukuba/AIST), Yutaka Satoh, Hirokatsu Kataoka (AIST)
ECCV 2016 Workshop (Oral, Brave New Idea), MIRU 2017 (Oral: 28.0%)
Human action recognition can be done without a human!
The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis named “human action recognition without human”. An experiment clearly shows the effect of background sequences on understanding an action label.
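The core of the evaluation is classifying a video after the actor has been removed. Below is a minimal sketch of such background-only preprocessing, assuming person boxes come from dataset annotations or any off-the-shelf detector; the paper's exact masking protocol is not given above.

```python
import numpy as np

def mask_out_humans(frames, person_boxes):
    """Zero out human regions so only the background sequence remains.

    frames:       (T, H, W, 3) uint8 video clip
    person_boxes: list over frames of (x1, y1, x2, y2) pixel boxes
    """
    masked = frames.copy()
    for t, boxes in enumerate(person_boxes):
        for (x1, y1, x2, y2) in boxes:
            masked[t, y1:y2, x1:x2, :] = 0   # remove the actor, keep the scene
    return masked

clip = np.random.randint(0, 255, size=(2, 120, 160, 3), dtype=np.uint8)
background_only = mask_out_humans(clip, [[(40, 20, 80, 100)], []])
```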

Transitional Action Recognition for Short-term Action Prediction
[PDF] [Abstract] [Poster] [Slide] [Video]
Hirokatsu Kataoka (AIST), Yudai Miyashita (TDU/AIST), Masaki Hayashi (Liquid/Keio), Kenji Iwata, Yutaka Satoh (AIST)
BMVC 2016
We defined the "Transitional Action" to make human action prediction easier.
Herein, we address the transitional action class, a class between actions. Transitional actions should be useful for producing short-term action predictions while an action is in transition. However, transitional action recognition is difficult because actions and transitional actions partially overlap each other. To deal with this issue, we propose a subtle motion descriptor (SMD) that identifies the sensitive differences between actions and transitional actions. The two primary contributions of this paper are as follows: (i) defining transitional actions for short-term action predictions that permit earlier predictions than early action recognition, and (ii) utilizing a convolutional neural network (CNN) based SMD to present a clear distinction between actions and transitional actions. Using three different datasets, we show that our proposed approach produces better results than other state-of-the-art models. The experimental results clearly show the recognition effectiveness of our proposed model, as well as its ability to comprehend temporal motion in transitional actions.
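The abstract describes the SMD only as a CNN-based descriptor that captures sensitive differences between actions and transitional actions. The snippet below is a hedged PyTorch sketch of one assumed reading: frame-to-frame differences of per-frame CNN features, pooled over time into a clip-level descriptor. The function name and pooling choices are hypothetical.

```python
import torch

def subtle_motion_descriptor(features):
    """Assumed SMD-style descriptor from per-frame CNN features.

    features: (T, D) features of T consecutive frames
    """
    diffs = features[1:] - features[:-1]   # subtle temporal changes
    # Pool the differences over time into one clip-level vector.
    return torch.cat([diffs.mean(dim=0), diffs.abs().max(dim=0).values])

clip_features = torch.randn(8, 512)        # e.g. 8 frames of fc features
print(subtle_motion_descriptor(clip_features).shape)  # torch.Size([1024])
```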



Publications
(#) indicates a top-ranked conference/journal in Google Scholar Metrics.
----2018----
- Shintaro Yamamoto, Yoshihiro Fukuhara, Ryota Suzuki, Shigeo Morishima, Hirokatsu Kataoka, "Automatic Paper Summary Generation from Visual and Textual Information", International Conference on Machine Vision (ICMV) 2018
- Kaori Abe, Munetaka Minoguchi, Teppei Suzuki, Tomoyuki Suzuki, Naofumi Akimoto, Yue Qiu, Ryota Suzuki, Kenji Iwata, Yutaka Satoh, Hirokatsu Kataoka, "Fashion Culture Database: Construction of Database for World-wide Fashion Analysis", IEEE ICARCV 2018. [PDF]
- Tomoyuki Suzuki, Munetaka Minoguchi, Ryota Suzuki, Akio Nakamura, Kenji Iwata, Yutaka Satoh, Hirokatsu Kataoka, "Semantic Change Detection", IEEE ICARCV 2018. [PDF]
- Tomoyuki Suzuki, Takahiro Itazuri, Kensho Hara, Hirokatsu Kataoka, "Learning Spatiotemporal 3D Convolution with Video Order Self-Supervision", ECCV 2018 Workshop on Person in Context (PIC).
- Ryota Natsume*, Kazuki Inoue*, Shintaro Yamamoto, Yoshihiro Fukuhara, Shigeo Morishima, Hirokatsu Kataoka, "Understanding Fake-Faces", ECCV 2018 Workshop on Brain-Driven Computer Vision (BDCV), 2018. [PDF]
- (#) Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh, "Towards Good Practice for Action Recognition with Spatiotemporal 3D Convolutions", International Conference on Pattern Recognition (ICPR), 2018. [PDF]
- (#) Hirokatsu Kataoka, Shuhei Ohki, Kenji Iwata, Yutaka Satoh, "Occlusion Handling Human Detection with Refocused Images", International Conference on Pattern Recognition (ICPR), 2018. [PDF]
- Tomoyuki Suzuki, Kensho Hara, Takahiro Itazuri, Hirokatsu Kataoka, “Self-supervised Learning for Spatiotemporal 3DCNN Towards Effective Motion Feature”, in MIRU 2018.
- Hirokatsu Kataoka, Yuchen Zhang, Yutaka Satoh, “Weakly Supervised Out-of-context Action Understanding”, in MIRU 2018.
- (#) Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh, "AIST Submission to ActivityNet Challenge 2018", ActivityNet Large Scale Activity Recognition Challenge in Conjunction with CVPR 2018. [PDF]
- (#) Tenga Wakamiya, Takumu Ikeya, Akio Nakamura, Kensho Hara, Hirokatsu Kataoka, "TDU&AIST Submission for ActivityNet Challenge 2018 in Video Caption Task", ActivityNet Large Scale Activity Recognition Challenge in Conjunction with CVPR 2018. [PDF]
- (#) Yue Qiu, Fangge Chen, Shuhei Oki, Hirokatsu Kataoka, “Image generation associated with music data”, in CVPR 2018 Workshop on Sight and Sound (WSS). [PDF]
- (#) Kota Yoshida, Munetaka Minoguchi, Kenichiro Wani, Akio Nakamura, Hirokatsu Kataoka, "Neural Joking Machine: Humorous image captioning", CVPR 2018 Language and Vision Workshop. [PDF] [Oral] [Poster]
- (#) Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh, "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?", CVPR, 2018. [PDF] [Poster] [GitHub] (Acceptance rate: 29.6%; 1st place in Computer Vision at Google Scholar Metrics)
- (#) Tomoyuki Suzuki*, Hirokatsu Kataoka*, Yoshimitsu Aoki, Yutaka Satoh, "Anticipating Traffic Accidents with Adaptive Loss and Large-scale Incident DB", CVPR, 2018. [PDF] [Poster] (Acceptance rate: 29.6%; 1st place in Computer Vision at Google Scholar Metrics; * indicates equal contribution)
- (#) Hirokatsu Kataoka, Yutaka Satoh, Yoshimitsu Aoki, Shoko Oikawa, Yasuhiro Matsui, "Temporal and Fine-grained Pedestrian Action Recognition on Driving Recorder Database", Sensors, 2018. [PDF] (Impact Factor: 2.67)
- (#) Hirokatsu Kataoka, Teppei Suzuki, Shoko Oikawa, Yasuhiro Matsui, Yutaka Satoh, "Drive Video Analysis for the Detection of Traffic Near-Miss Incidents", IEEE International Conference on Robotics and Automation (ICRA), May 2018. [PDF] [Poster] [Digest] (Acceptance rate: 40.6%; 1st place in Robotics at Google Scholar Metrics)

----2017----
- (#) Fangge Chen, Hirokatsu Kataoka, Yutaka Satoh, "Text Detection in Traffic Informatory Signs Using Synthetic Data", International Conference on Document Analysis and Recognition (ICDAR 2017), Nov. 2017.
- (#) Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh, "Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition", ICCV 2017 Workshop on ChaLearn Looking at People, Oct. 2017. [PDF] [GitHub]
- Hirokatsu Kataoka, Yun He, Soma Shirakabe, Yutaka Satoh, "Humanless Human Action Recognition: An evaluation of background effects and pure human motions", in MIRU 2017, Aug. 2017. (Acceptance rate (oral): 28.0%)
- Kaori Abe, Teppei Suzuki*, Shunya Ueta, Akio Nakamura, Yutaka Satoh, Hirokatsu Kataoka, "Dynamic Fashion Cultures", in MIRU 2017, Aug. 2017. (* indicates equal contribution) [PDF] [Poster] [Presen] (Acceptance rate (oral): 28.0%, Best Student Award)
- Teppei Suzuki, Yoshimitsu Aoki, Hirokatsu Kataoka, "Pedestrian Risk Analysis with Temporal Convolutional Neural Networks", in MIRU 2017, Aug. 2017.
- Kensho Hara, Hirokatsu Kataoka, Yusuke Goutsu, Yutaka Satoh, Ryozo Yamashita, "NIAD Submission to ActivityNet Challenge 2017", in ActivityNet Large Scale Activity Recognition Challenge, in conjunction with CVPR, Jul. 2017. [PDF] [Result]
- Hirokatsu Kataoka, Soma Shirakabe, Yun He, Shunya Ueta, Teppei Suzuki, Kaori Abe, Asako Kanezaki, Shin'ichiro Morita, Toshiyuki Yabe, Yoshihiro Kanehara, Hiroya Yatsuyanagi, Shinya Maruyama, Ryosuke Takasawa, Masataka Fuchida, Yudai Miyashita, Kazushige Okayasu, Yuta Matsuzaki, "cvpaper.challenge in 2016: Futuristic Computer Vision through 1,600 Papers Survey", arXiv pre-print, 1707.06436, Jul. 2017. [PDF] [PDF2]
- (#) Yuta Matsuzaki, Kazushige Okayasu, Akio Nakamura, Hirokatsu Kataoka, "Generated Motion Maps", in submission to CVPR 2017 Workshop on Brave New Ideas for Motion Representations in Videos (BNMW), Jul. 2017. [PDF]
- (#) Yue Qiu, Yutaka Satoh, Ryota Suzuki, Hirokatsu Kataoka, "Sensing and recognition of typical indoor family scenes using an RGB-D camera", in CVPR 2017 Workshop, WiCV, Jul. 2017.
- (#) Kaori Abe, Hirokatsu Kataoka, Akio Nakamura, "Weighted Feature Integration for Person Re-identification", in CVPR 2017 Workshop, WiCV, Jul. 2017.
- Hirokatsu Kataoka, Kaori Abe, Akio Nakamura, Yutaka Satoh, "Collaborative Descriptors: Convolutional Maps for Preprocessing", in arXiv pre-print:1705.03595, May 2017. [PDF]
- Yuta Matsuzaki*, Kazushige Okayasu*, Takaaki Imanari, Naomichi Kobayashi, Yoshihiro Kanehara, Ryousuke Takasawa, Akio Nakamura, Hirokatsu Kataoka, "Could you guess an interesting movie from the posters?: An evaluation of vision-based features on movie poster database", IAPR Conference on Machine Vision Applications (MVA2017), May 2017. (* equal contribution) [PDF] [Poster]
- Teppei Suzuki, Yoshimitsu Aoki, Hirokatsu Kataoka, "Pedestrian Near-Miss Analysis on Vehicle-Mounted Driving Recorders", IAPR Conference on Machine Vision Applications (MVA2017), May 2017.
- Yudai Miyashita, Hirokatsu Kataoka, Akio Nakamura, "Analyzing Fine Motion Considering Individual Habits for Appearance-based Proficiency Evaluation", IEICE Transactions on Information and Systems, vol.E100-D, no.1, Jan. 2017. [PDF]

----2016----
- Yun He, Soma Shirakabe, Yutaka Satoh, Hirokatsu Kataoka, "Human Action Recognition without Human", ECCV 2016 Workshop on Brave New Ideas for Motion Representations in Videos (BNMW), Oct. 2016. (Oral, Best Paper) [PDF] [Slide] [Poster]
- Hirokatsu Kataoka, Yun He, Soma Shirakabe, Yutaka Satoh, "Motion Representation with Acceleration Images", ECCV 2016 Workshop on Brave New Ideas for Motion Representations in Videos (BNMW), Oct. 2016. [PDF] [Poster]
- (#) Hirokatsu Kataoka, Yudai Miyashita, Masaki Hayashi, Kenji Iwata, Yutaka Satoh, "Recognition of Transitional Action for Short-Term Action Prediction using Discriminative Temporal CNN Feature", British Machine Vision Conference (BMVC), Sep. 2016. (Acceptance rate: 39.4%) [PDF] [Abstract] [Poster] [Slide] [Video]
- Hirokatsu Kataoka, Yudai Miyashita, Tomoaki Yamabe, Soma Shirakabe, Shin'ichi Sato, Hironori Hoshino, Ryo Kato, Kaori Abe, Takaaki Imanari, Naomichi Kobayashi, Shinichiro Morita, Akio Nakamura, "cvpaper.challenge in 2015 - A review of CVPR2015 and DeepSurvey", arXiv pre-print 1605.08247, May. 2016. [PDF]
- (#) Hirokatsu Kataoka, Masaki Hayashi, Kenji Iwata, Yutaka Satoh, Yoshimitsu Aoki, Slobodan Ilic, "Dominant Codewords Selection with Topic Model for Action Recognition", IEEE CVPR 2016 Workshop on ChaLearn Looking at People, Jul. 2016. [PDF] [Slide]
- Hirokatsu Kataoka, Soma Shirakabe, Yudai Miyashita, Akio Nakamura, Kenji Iwata, Yutaka Satoh, "Semantic Change Detection with Hypermaps", arXiv pre-print arXiv:1604.07513, Apr. 2016. [PDF]
- Hirokatsu Kataoka, Yoshimitsu Aoki, Kenji Iwata, Yutaka Satoh, "Activity Prediction Using a Space-Time CNN and Bayesian Framework", 11th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Feb. 2016. [PDF] [Slide]

----2015----
- Hirokatsu Kataoka, Yudai Miyashita, Tomoaki Yamabe, Soma Shirakabe, Shin'ichi Sato, Hironori Hoshino, Ryo Kato, Kaori Abe, Takaaki Imanari, Naomichi Kobayashi, Shinichiro Morita, Akio Nakamura, "cvpaper.challenge in CVPR2015 - A review of CVPR2015", Special Talk at Pattern Recognition and Media Understanding (PRMU), Dec. 2015. [PDF] [Slide]
- Hirokatsu Kataoka, Kenji Iwata, Yoshimitsu Aoki, Yutaka Satoh, "Evaluation of Vision-based Human Activity Recognition in Dense Trajectory Framework", 11th International Symposium on Visual Computing (ISVC), Dec. 2015. [PDF] [Slide]
- Tomoaki Yamabe, Yudai Miyashita, Shin'ichi Sato, Yudai Yamamoto, Akio Nakamura, Hirokatsu Kataoka, "What is an Effective Feature for a Detection Problem? -Feature Evaluation in Multiple Scenes-", IEEE International Conference on Systems, Man, and Cybernetics (SMC), Oct. 2015.
- Hirokatsu Kataoka, Kenji Iwata, Yutaka Satoh, "Feature Evaluation of Deep Convolutional Neural Networks for Object Recognition and Detection", arXiv preprint arXiv:1509.07627, Sep. 2015. [PDF] [Slide]
- (#) Hirokatsu Kataoka, Yoshimitsu Aoki, Yutaka Satoh, Shoko Oikawa, Yasuhiro Matsui, "Fine-grained Walking Activity Recognition via Driving Recorder Dataset", IEEE Intelligent Transportation Systems Conference (ITSC), Sep. 2015. [PDF] [Slide]
- Yudai Miyashita, Hirokatsu Kataoka, Akio Nakamura, "Appearance-based Proficiency Evaluation of Micro-operation Skill in Removing Individual Habit", Chinese Control Conference and SICE annual Conference (CCC&SICE), Jul. 2015.
- (#) Masamichi Shimosaka, Kentaro Nishi, Junichi Sato, Hirokatsu Kataoka, "Predicting Driving Behavior Using Inverse Reinforcement Learning with Multiple Reward Functions towards Environmental Diversity", IEEE Intelligent Vehicles Symposium (IV), May 2015. (Acceptance rate (oral): 6.6%) [PDF]
- Tomoaki Yamabe, Hirokatsu Kataoka, Akio Nakamura, "Quantized Feature with Angular Displacement for Activity Recognition", IEEJ Trans. information and systems, Vol.135, No.4, pp.372-380, Apr. 2015. [PDF]
- Junji Kurano, Masaki Hayashi, Taiki Yamamoto, Hirokatsu Kataoka, Masamoto Tanabiki, Junko Furuyama, Yoshimitsu Aoki, "Ball Trajectory Extraction in Team Sports Videos by Focusing on Ball Holder Candidates for a Play Search and 3D Virtual Display System", Journal of Signal Processing, Vol.19, No.4, pp.147-150, 2015. [Link]

----2014----
- (#) Hirokatsu Kataoka, Kiyoshi Hashimoto, Kenji Iwata, Yutaka Satoh, Nassir Navab, Slobodan Ilic, Yoshimitsu Aoki, "Extended Co-occurrence HOG with Dense Trajectories for Fine-grained Activity Recognition", Asian Conference on Computer Vision (ACCV), Nov. 2014. (Acceptance rate: 27.8%) [PDF] [Slide]
- Hirokatsu Kataoka, Kiyoshi Hashimoto, Yoshimitsu Aoki, "Feature Integration with Random Forests for Real-time Human Activity Recognition", International Conference on Machine Vision (ICMV), Nov. 2014. (Acceptance rate: 40.0%) [PDF]
- Hirokatsu Kataoka, Yoshimitsu Aoki, "An Analysis of Edge Orientation and Magnitude in Co-occurrence Feature Descriptor", SICE Annual Conference, Sep. 2014. [PDF]
- Tomoaki Yamabe, Hirokatsu Kataoka, Akio Nakamura, "A Study on Features for Early Recognition of Human Activities", SICE Annual Conference, Sep. 2014.
- Hirokatsu Kataoka, Kimimasa Tamura, Kenji Iwata, Yutaka Satoh, Yasuhiro Matsui, Yoshimitsu Aoki, "Extended Feature Descriptor and Vehicle Motion Model with Tracking-by-detection for Pedestrian Active Safety", IEICE Transactions on Information and Systems, Vol.E97-D, No.2, 2014. [PDF]
- Junji Kurano, Taiki Yamamoto, Hirokatsu Kataoka, Masaki Hayashi, Yoshimitsu Aoki, "Ball Tracking in Team Sports by Focusing on Ball Holder Candidates", International Workshop on Advanced Image Technology (IWAIT2014), Jan. 2014.

Please see my Google Scholar / Full List for other publications.


Awards
- YANS Award, YANS2018, Aug. 2018.
- Best Student Award, MIRU2017, Aug. 2017.
- Fujiwara Award, Faculty of Science and Technology, Keio University, Mar. 2014.
- IEEJ Best Journal Award 2012, IEEJ Transactions on Electrical and Electronic Engineering, Sep. 2013.
- IEEE IECON2012 Best Presentation Award, IEEE IECON2012, Oct. 2012.
- Global COE Excellent Achievement Award, GCOE Symposium, Feb. 2012.
- Odawara Award, ViEW2011, Dec. 2011.
- Graduate School Award of Society of Automotive Engineers of Japan, Mar. 2011.
- Best Master's Student Award, School of Integrated Design Engineering, Mar. 2011.
- Best Paper Award of SICE pattern analysis symposium, Dec. 2010.
- Encouragement Award of Dynamic Image Processing for Real Application Workshop 2010 (DIA2010), Mar. 2010.
- Best Presentation Award of School of Integrated Design Engineering, Feb. 2010.
- Best Presentation Award of Summer Seminar 2009, Aug. 2009.


Invited Talks
- STAIR Lab., May 2018. [Slide]
- OITDA, Apr. 2018. [Slide]
- Waseda University, Apr. 2018. [Slide]
- A Company, Jan. 2018.
- Nagoya CV/PRML, Dec. 2017. [Slide]
- teamLab, Dec. 2017. [Slide]
- Tokyo Denki University, Nov. 2017. [Slide]
- Abeja Innovation Meetup, Sep. 2017. [Slide]
- Waseda University, Sep. 2017. [Slide]
- Chukyo University, Feb. 2017. [Slide]
- Keio University, Jan. 2017.
- STAIR Lab., Jan. 2017. [Slide]
- The University of Tokyo, Sep. 2016. [Slide]
- IAIP, Jul. 2016. [Slide]
- Pattern Recognition and Media Understanding (PRMU), Dec. 2015. [Slide]
- SSII2015, Jun. 2015. [Slide]
- Max Planck Institute for Informatics (MPII), Oct. 2014. [Slide]
- The University of Tokyo, Jun. 2012.


Most-viewed Slides
- CVPR 2017 Report (20,000+ views)
- Dense Trajectories Tutorial (19,000+ views)
- CVPR 2015 Survey (11,000+ views)
- ICCV 2017 Report (7,000+ views)
- The Future of Computer Vision Technology (cvpaper.challenge in PRMU Grand Challenge 2016) (7,000+ views)

Hirokatsu Kataoka