Workshop photos in the gallery
We have uploaded photos taken during the presentations.
Room announcement for W1
Our workshop (W1) will be held in the Emerald Room.
The program is now finalized.
We have published the workshop schedule.
The spread of IoT devices for smart cities raises novel questions about how to take advantage of various kinds of urban big data. Realizing urban intelligence from such sensing data is clearly important for the development of sustainable cities and the improvement of quality of life. The workshop aims to address a number of practical and theoretical issues in real-world stream data analysis and exploitation, from sensor networks for monitoring urban environments to emerging urban applications built on IoT data. In particular, a special session will be organized to demonstrate data analysis and share multidisciplinary knowledge around the KISTI urban sensing dataset. We also welcome and solicit research papers on urban data science, technologies, and applications to discuss new challenges related to urban big data.
KISTI Urban Sensing Dataset: KISTI operates an urban sensing platform that collects various urban environmental data
through taxi-based sensors and provides the dataset for public use. We encourage researchers to utilize the dataset
for interdisciplinary research and to share interesting ideas with others in this workshop. For details about the dataset,
please contact us by sending an email to Lee.Ryong@gmail.com.
Details will be announced soon.
(but not limited to)
|9:00||Chair (Ryong Lee)||Opening|
|9:10||Emmanuel Pietriga||Monitoring Air Quality in Korea’s Metropolises on Ultra-High Resolution Wall-Sized Displays|
Emmanuel Pietriga received a PhD degree in computer science from INPG (France) in 2002. He worked for INRIA and Xerox Research Centre Europe, did his postdoctoral research at MIT (USA) as a team member of the World Wide Web Consortium (W3C). He is now working for INRIA in France as a senior research scientist, heading project-team ILDA (Interacting with Large Data), after two years (2012-2014) spent in Chile where he worked for INRIA Chile and the ALMA radio-observatory, focusing on advanced visualization for operations monitoring and control. His research interests include interaction techniques for multi-scale user interfaces, ultra-high-resolution wall-sized displays, visualization techniques for massive datasets, and user interaction with novel forms of data, as enabled by technologies around the Web of Data. http://pages.saclay.inria.fr/emmanuel.pietriga/
Over the last decade, wall-sized displays have evolved from experimental setups to sophisticated arrays of LED panels. Such displays, called ultra-high-resolution wall-sized displays, typically accommodate several hundred megapixels and are driven by clusters of computers. As an example, WILD, the first wall display we set up in our lab, uses 32+1 graphics processing units in 16+1 computers to display 20480×6400 = 131 megapixels on a 5.5m×1.8m surface. Ultra-high-resolution wall displays enable the collaborative visualization of very large datasets. They can represent the data with a high level of detail while retaining context, and enable the juxtaposition of heterogeneous data in various forms. They can be used in many application domains, including command and control centers, geographical information systems, scientific visualization, collaborative design and public information displays. In the context of a collaborative research project between INRIA and KISTI, we are investigating the potential of ultra-high-resolution wall-sized displays for the visualization of streamed IoT data in the field of air quality monitoring in large and dense urban areas in Korea. We have designed and implemented an interactive multi-scale visualization of streamed data collected from vehicles (taxis) equipped with a battery of sensors and geolocation devices. Research conducted in this project focuses on the design of effective visualizations that take advantage of the specific characteristics of large surfaces featuring a very high pixel density (in this particular case a set of 4K displays tiled together), and on how to handle streams of IoT data. We will present the main results obtained so far.
Authors: Emmanuel Pietriga and Olivier Chapuis
|9:30||Yee Leung||Sensing the Urban Environment - Towards an Approach to the Integration of Station-Based and Mobile-Sensor-Based Data|
Yee Leung is currently Research Professor in the Department of Geography and Resource Management; Director of the Institute of Future Cities; and Group Leader of the Climate Change and Big Data Program of the Big Data Decision Analytics Center of The Chinese University of Hong Kong. He pioneers research in geographical analysis and planning under fuzziness, and generalizes uncertainty analysis to various types of uncertainties. He has done innovative research in the statistical approach to uncertainty analysis in general, and in uncertainty analysis and propagation in geographical information systems in particular. He also engages in novel theoretical and applied research in intelligent spatial decision support systems, spatial data mining and knowledge discovery, remote sensing and GIS, climate variability, air and water pollution, as well as urban and regional analysis. He has published 6 research monographs and over 180 international journal papers. He is the recipient of the Second Class Award of the 2007 State Natural Science Award, P.R. China, and the First Class Award of the 2007 Natural Science Award of the Ministry of Education, P.R. China. He is an editorial board member of 9 international journals; former Chair of the Commission on Modeling Geographical Systems, International Geographical Union; and Chair of the Commission on Mathematical and Computational Geography, The Chinese Geographical Society.
Effective and efficient monitoring of the urban environment is of fundamental importance to decision making for smart cities. There are generally two modes of monitoring for a variety of city-related problems; air pollution is a typical case. On the one hand, measurement data can come from ground-based stations, which are stationary and sparse in space. We usually have only a finite number of measurement points/stations, which are severely insufficient to provide complete spatial coverage. These data are, however, dense in time, because they are real-time or near-real-time measurements taken at each monitoring station. To make up for the deficiency in spatial coverage, mobile sensors can be deployed to take measurements over a city. With the deployment of a large number of mobile sensors, measurements can be dense in space. This type of data is, however, non-stationary in space: we only have data taken along the trajectory of a mobile sensor. Therefore, capitalizing on the complementary advantages of both types of sensing data requires the development of reliable methods so that data which are sparse in space but dense in time, and data which are dense in space but sparse in time, can be effectively integrated to construct a coverage that is dense in both space and time. This talk will introduce a two-step local smoothing method for such integration. We will apply the method to the KISTI urban sensing data to demonstrate the effectiveness of the proposed methodology.
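The abstract does not detail the two-step method itself, but the integration idea it describes can be illustrated with a generic kernel-smoothing sketch: first smooth the spatially dense mobile readings around a target location, then blend in the temporally dense station readings, weighting each source by its local kernel mass. All function and parameter names here are hypothetical, chosen only for illustration.

```python
import math

def gaussian_weight(d, bandwidth):
    """Gaussian kernel weight for a distance (or time lag) d."""
    return math.exp(-0.5 * (d / bandwidth) ** 2)

def estimate(target_xy, t, mobile, station, space_bw=1.0, time_bw=1.0):
    """Illustrative two-step local smoothing (not the speaker's exact method).

    mobile:  list of (x, y, t, value) tuples -- dense in space, sparse in time
    station: list of (t, value) tuples from a nearby fixed station --
             sparse in space, dense in time
    """
    # Step 1: spatiotemporal smoothing of mobile-sensor readings
    num = den = 0.0
    for x, y, ts, v in mobile:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        w = gaussian_weight(d, space_bw) * gaussian_weight(t - ts, time_bw)
        num += w * v
        den += w
    mobile_est, mobile_mass = (num / den if den else 0.0), den

    # Step 2: temporal smoothing of the station's dense time series
    num = den = 0.0
    for ts, v in station:
        w = gaussian_weight(t - ts, time_bw)
        num += w * v
        den += w
    station_est, station_mass = (num / den if den else 0.0), den

    # Blend by kernel mass so the locally denser source dominates
    total = mobile_mass + station_mass
    if total == 0:
        return None
    return (mobile_mass * mobile_est + station_mass * station_est) / total
```

At a point far from any station the mobile readings carry most of the kernel mass, and vice versa, which is one simple way to realize the complementarity described above.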
Author: Yee Leung
|9:50||Wonjun Hwang||Deep Learning-based Fiducial Object Extraction for Efficient Urban Data Analysis|
Wonjun Hwang received B.S. and M.S. degrees from the Department of Electronics Engineering, Korea University, Korea, in 1999 and 2001, respectively, and a Ph.D. degree from the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2016. From 2001 to 2008, he was a research staff member at the Samsung Advanced Institute of Technology (SAIT), Korea. In 2004, he contributed to the promotion of the Advanced Face Descriptor, a Samsung and NEC joint proposal, to MPEG-7 international standardization. In 2006, he proposed the SAIT face recognition engine, which achieved the best results under uncontrolled illumination at the Face Recognition Grand Challenge (FRGC) and Face Recognition Vendor Test (FRVT). In the same year, he developed the real-time face recognition engine for the Samsung cellular phone SGH-V920. From 2009 to 2011, he was a senior engineer at Samsung Electronics, Korea, where he developed face and gesture recognition modules for the Samsung humanoid robot, a.k.a. RoboRay. In 2011, he rejoined SAIT as a research staff member, and from 2011 to 2014 he worked on 3D medical image processing for the Samsung surgical robot. From 2014 to 2016, he developed a deep learning-based face recognition method for the Samsung Galaxy series. He is now an assistant professor at Ajou University. His research interests are in face recognition, computer vision, pattern recognition, and deep learning.
In urban data analysis, a huge amount of data is collected from mobile urban sensing devices mounted on vehicles. Among these data, the sequential images collected from camera devices are much larger in size than the others. Since image data contain much redundant information, it is necessary to extract the fiducial information; in urban data analysis this need is amplified because the data are collected over several days. To address this issue, we study how to extract fiducial information from image data for efficient urban data analysis. Recently, many deep learning-based image analysis methods have been proposed for understanding the current situation using only image data. The You Only Look Once (YOLO) method is one of the well-known Convolutional Neural Network (CNN)-based object detection methods. In this study, we propose a YOLO-based fiducial object extraction method and a definition of the fiducial object for urban data analysis. Although YOLO achieves good results in terms of both accuracy and speed, it is designed for a general object detection task, not for analyzing urban data. We first validate the YOLO method on real urban image data. Its overall performance was good, but lower than expected in certain realistic conditions that occur in real urban situations, such as blurred images, low illumination in tunnels, and serious occlusions. Moreover, to count the population of fiducial objects in the urban data, we propose an object counting method that compares CNN features to determine object identities. For this purpose, we modify YOLO with a multi-task learning scheme. Extensive experimental results on real urban image data will be presented.
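The counting step described above, comparing CNN features to decide whether two detections are the same object, can be sketched with a simple similarity rule: a detection whose feature embedding is close to an already-counted object is treated as a re-sighting. This is a minimal illustration of the idea, not the authors' method; the threshold and feature vectors are hypothetical.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def count_unique(detections, threshold=0.9):
    """Count unique objects among detections (each a feature vector).

    A detection similar (cosine >= threshold) to an already-kept object
    is considered the same object seen again and is not counted twice.
    """
    kept = []
    for feat in detections:
        if all(cosine_sim(feat, k) < threshold for k in kept):
            kept.append(feat)
    return len(kept)
```

In practice the embeddings would come from the CNN's feature layer and the threshold would be tuned on labeled re-identification pairs.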
Author: Wonjun Hwang
|10:10||Kyoung-Sook Kim||Space-Time Visual Analytics of OGC Moving Features with Cesium|
Kyoung-Sook Kim is the team leader of the Data Platform Research Team at the Artificial Intelligence Research Center (AIRC) of AIST in Japan. She served as a researcher at the National Institute of Information and Communications Technology (NICT) in Japan from Nov. 2007 to Mar. 2014. She received her B.S., M.S., and Ph.D. degrees in Computer Science from Pusan National University in Korea in 1998, 2001, and 2007, respectively. She is also serving as a co-chair of the OGC Moving Features Standard Working Group. Her research interests include geo-enabled computing frameworks based on GIS, location-based services, spatiotemporal databases, big data analysis, and cyber-physical cloud computing.
With the development of position tracking technologies and the increasing usage of mobile devices, various applications have taken a growing interest in the movements of objects and phenomena, such as pedestrians, vehicles, drones, and hurricanes. The historical location data of moving objects have brought new capabilities to free and open-source software for managing and analyzing big spatiotemporal data. In this talk, we introduce a new data format (MF-JSON) and an interactive visualization tool (Stinuum) for handling moving-object data, covering not only spatiotemporal geometries but also dynamic thematic properties, by using Cesium, an open-source geovisualization platform. First, MF-JSON provides an alternative JSON format for encoding moving objects based on the OGC Moving Features Encoding standards. In particular, MF-JSON covers the movements of 0-dimensional Points, 1-dimensional curve LineStrings, and 2-dimensional surface Polygons, based on application requirements such as disaster risk management, traffic information services, and geo-fencing services. Second, the Stinuum (Spatio-temporal continua on Cesium) visualization tool customizes the perspective view of Cesium for the continuum representation of spatiotemporal geometries in a space-time 3D cube whose x- and y-axes represent geographic space and whose orthogonal z-axis (height) represents time. Compared with a static 2D map with timelines and animated maps, the space-time cube visualization technique has advantages for analyzing topological relationships among multiple moving objects. We will show how Stinuum can support a holistic analysis that reduces the cognitive workload needed to understand the space, time, and thematic properties of moving objects.
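The core of the space-time cube idea is the coordinate mapping: geographic position stays on the x/y plane while time is mapped linearly onto the z-axis. A minimal sketch of that mapping (function name, cube height, and time bounds are illustrative, not taken from Stinuum's API):

```python
def to_cube(lon, lat, t, t0, t1, height=100.0):
    """Map a spatiotemporal point (lon, lat, t) to space-time-cube
    coordinates: x/y stay geographic, z encodes time linearly so that
    t0 lands at the cube floor (z=0) and t1 at the top (z=height)."""
    z = height * (t - t0) / (t1 - t0)
    return (lon, lat, z)
```

Applying this mapping to every vertex of a trajectory turns it into a 3D polyline whose slope directly shows speed, and crossings between two polylines reveal spatiotemporal co-location.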
Author: Kyoung-Sook Kim
|10:50||Kayo Osako||Evaluation of Taxi Driver Based on Urban Sensing Data|
Kayo Osako received her B.E. degree in computer science from Nagoya Institute of Technology in 2016. She is now a research assistant at the National Institute of Advanced Industrial Science and Technology (AIST) and a master's student in the Department of Computer Science at Tokyo Institute of Technology. Her research interests include human-agent interaction (HAI).
Measuring the quality of driving is an important task and still an open problem in the analysis of driver behavior data. Driver behavior data are generally collected via attached sensors, including video cameras and a receiver of controller area network (CAN) signals. Even though part of these data is becoming accessible in the recent IoT era, detailed behavior data such as pedal use and steering angle in the CAN data are still confidential and not straightforward to open. Thus, it is challenging to elucidate and analyze the actions and intentions of drivers through simple sensor data such as vehicle locations and acceleration. In this study, we consider how to analyze the behavior of taxi drivers based on the KISTI mobile urban dataset, and propose some criteria to evaluate the drivers. While passengers should be taken to their destinations as rapidly as possible, trip time may not be the only requirement; there are other possible merits of a taxi service, such as comfort and environmentally friendly driving. Our method of analysis and our criteria will be useful for discovering new aspects of the service.
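One common proxy for ride comfort that can be computed from exactly the kind of simple sensor data mentioned above (location-derived speed, no CAN access) is jerk, the rate of change of acceleration. The abstract does not specify the authors' criteria; this is only an illustrative baseline metric.

```python
def mean_abs_jerk(speeds, dt=1.0):
    """Illustrative comfort proxy (not the authors' criteria):
    mean absolute jerk from a uniformly sampled speed series.

    speeds: speed samples taken every dt seconds
    Lower values indicate smoother, more comfortable driving.
    """
    accel = [(speeds[i + 1] - speeds[i]) / dt for i in range(len(speeds) - 1)]
    jerk = [abs((accel[i + 1] - accel[i]) / dt) for i in range(len(accel) - 1)]
    return sum(jerk) / len(jerk) if jerk else 0.0
```

A driver who accelerates and brakes steadily scores near zero, while abrupt speed changes inflate the metric, which is why jerk-style measures are often combined with trip time when ranking drivers.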
Authors: Kayo Osako, Takuya Ishihara, Keisuke Yamazaki and Kyoung-Sook Kim
|11:10||Michael Färber||When Entrepreneurs Ask for Success Stories and Tourists Ask for Travel Plans: Toward an Interconnected World|
Michael Färber received a JSPS fellowship for performing research at the Department of Social Informatics at Kyoto University, Japan. Before that, he worked at the University of Freiburg, Germany, and obtained his PhD in February 2017 at the Institute AIFB at Karlsruhe Institute of Technology (KIT), Germany. His research interests lie at the intersection of Natural Language Processing, Semantic Web, and Machine Learning. He wrote an often-cited survey about knowledge graphs and has served as a reviewer and PC member for major NLP and Semantic Web conferences and journals, such as AAAI, ECMLPKDD, ESWC, IJCAI, ISWC, JWS, and SWJ.
In the first part of the talk, we present Crunchbase, a database about startups and technology companies. We show how we created a wrapper around the Crunchbase REST API that provides the data as Linked Data. The wrapper provides both schema-level and instance-level links to other data sources. Furthermore, we outline how we harvested Crunchbase RDF data to allow processing and querying beyond the access facilities of the Crunchbase API. The second part of the talk deals with assisting users in planning trips. Tourism recommender systems are complex systems with many sub-tasks, such as selecting relevant points of interest (POIs), the order in which they should be visited, and the start and end times of the visits. Providing tourists with an additional travel plan that explains how to reach those POIs using public transportation is a feature in which such recommender systems fall short. We developed novel approaches to generate realistic visit plans and their corresponding travel plans. Our experiments on a real-world dataset from the city of Izmir show that our approaches not only outperform the state of the art in terms of recommendation quality, but also provide both visit plans and travel plans in real time and are robust in case of delays.
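To make the visit-ordering sub-task concrete, here is a greedy nearest-neighbour baseline for ordering POIs, a deliberately simple stand-in, since the abstract does not describe the authors' actual planning algorithm. POIs are reduced to 2D coordinates; real systems would also account for opening hours, visit durations, and transit schedules.

```python
import math

def order_pois(start, pois):
    """Greedy nearest-neighbour ordering of POIs (illustrative baseline,
    not the authors' approach): repeatedly visit the closest
    not-yet-visited POI from the current position."""
    remaining = list(pois)
    route, cur = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(cur, p))
        route.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return route
```

Such a baseline is fast but myopic; the robustness-to-delays result mentioned above is precisely where more sophisticated planners pay off.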
Author: Michael Färber
|11:30||Tetsuo Ogino||Visualization of airflow using movable sensors around urban cities|
Tetsuo Ogino received his M.S. degree in Informatics from the Graduate School of Informatics, Kyoto University, Japan, in 2003, and graduated in 2006. He received an award at the ACM International Collegiate Programming Contest World Finals in 2000. Until 2016, he researched learning systems, such as LMS for educational information, at the Information Science and Technology Center, Kobe University, Japan. He now works at Kwansei Gakuin University, Japan. He is interested in databases and programming.
The purpose of our research is to construct a system that visualizes airflow from an environmental dataset on the atmosphere collected by sensors. This system aims to be useful for measures against various urban problems, such as the heat island effect in urban areas, health problems due to heat stroke and air pollution, and safety problems such as typhoons and tornadoes. Existing methods for visualizing atmospheric flow include direct observation using anemometers and prediction by simulation using meteorological observation data. In this research, however, we try to visualize the flow of air directly from changes in the concentrations of various substances contained in the atmosphere. In this way, we will be able to visualize the flow much more densely than before, which we believe will be useful for more precise urban planning. Specifically, we attach sensors to taxis and similar vehicles circulating in the city, and collect concentration data for measurable substances, such as nitrogen oxides, carbon dioxide, and PM2.5, in real time. Although some substances are generated or removed along the way, we estimate the flow of the atmosphere by analyzing the concentration changes of multiple substances.
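One simple way to infer flow from concentration changes, offered here as an illustration only, since the talk does not specify its estimation method, is to treat a pollutant plume as a tracer and find how far a 1-D concentration profile has shifted between two observation passes, by maximizing cross-correlation over integer cell shifts.

```python
def estimate_shift(profile_t0, profile_t1):
    """Estimate how many grid cells a 1-D concentration profile moved
    between two observations (illustrative tracer-advection sketch).

    Returns the integer shift maximizing the cross-correlation
    sum of profile_t0[i] * profile_t1[i + shift]; a positive shift
    means the plume moved toward higher indices.
    """
    n = len(profile_t0)
    best_shift, best_score = 0, float("-inf")
    for shift in range(-(n - 1), n):
        score = sum(profile_t0[i] * profile_t1[i + shift]
                    for i in range(n) if 0 <= i + shift < n)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Dividing the shift (times the cell size) by the elapsed time gives a wind-speed estimate along that transect; combining several substances, as the abstract proposes, helps when one tracer is locally generated or removed.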
Authors: Tetsuo Ogino and Kazutoshi Sumiya
|11:50||Linda Achilles||Urban Elster: An Urban Data Analysis Tool for Heterogeneous Spatio-temporal Urban Sensing Data|
Linda Achilles received her B.A. degree in information science from the University of Hildesheim, Germany, in 2016. She is now a joint-degree Master's student at Pai Chai University, Korea, and an intern at the Korea Institute of Science and Technology Information (KISTI). Her research interests include human-computer interaction (HCI), the 4th industrial revolution, and IoT.
The IoT Data Research group at KISTI has developed testbeds for collecting urban sensing data, examining a variety of issues from sensors to IoT data analyses, and looking for novel applications that take advantage of this largely uncharted urban data. In particular, our datasets come from mobile urban sensor networks that equip taxis and rented cars with a variety of physical sensors to monitor real-time urban environments, such as air quality, traffic, living conditions, and the floating population of major metropolitan cities in Korea. In order to facilitate understanding of the datasets we have collected, and to support interactive data analytics online, we are developing an online data analytics tool. We expect our system to provide a user interface for easily accessing the data being streamed and accumulated on the data server, so that data analysts can get a better picture of spatio-temporal events. In practice, the user can perform various analysis tasks with a web-based interactive interface, on which one can combine heterogeneous types of sensing data and look into real-time urban sensing data. In this talk we will demonstrate the IoT data browser and show how spatio-temporal data analysis can be realized.
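Combining heterogeneous sensing streams, as described above, usually starts with normalizing each source into a common record schema before visualization. The field names and sources below are hypothetical; the actual KISTI stream formats are not described in the abstract.

```python
def normalize(source, record):
    """Map records from heterogeneous sensors to a common
    (timestamp, lat, lon, metric, value) tuple.

    Field names are hypothetical examples, not the real KISTI schema.
    """
    if source == "air":
        return (record["time"], record["lat"], record["lon"],
                "pm25", record["pm25"])
    if source == "traffic":
        return (record["ts"], record["latitude"], record["longitude"],
                "speed", record["speed_kmh"])
    raise ValueError(f"unknown source: {source}")
```

Once every stream yields the same tuple shape, a map-based browser can filter, overlay, and aggregate them uniformly by time window and bounding box.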
Authors: Linda Achilles, Lea Wöbbekind, Ryong Lee, Hanmin Jung and Dowan Kim