Novice programmers are increasingly relying on Large Language Models (LLMs) to generate code as they learn programming concepts. However, this interaction can lead to superficial engagement, giving learners an illusion of learning and hindering skill development. To address this issue, we conducted a systematic design exploration to develop seven cognitive engagement techniques aimed at promoting deeper engagement with AI-generated code. In this paper, we describe our design process, the initial seven techniques, and results from a between-subjects study (N=82). We then iteratively refined the top techniques and further evaluated them through a within-subjects study (N=42). We evaluate the friction each technique introduces, their effectiveness in helping learners apply concepts to isomorphic tasks without AI assistance, and their success in aligning learners’ perceived and actual coding abilities. Ultimately, our results highlight the most effective technique: guiding learners through the step-by-step problem-solving process, in which they engage in an interactive dialog with the AI, articulating what needs to be done at each stage before the corresponding code is revealed.
UIST 2024 Workshop
Dynamic Abstractions: Building the Next Generation of Cognitive Tools and Interfaces
Sangho Suh, Hai Dang, Ryan Yen, Josh M. Pollock, Ian Arawjo, Rubaiat Habib Kazi, Hariharan Subramonyam, Jingyi Li, Nazmus Saquib, and Arvind Satyanarayan
This workshop provides a forum to discuss, brainstorm, and prototype the next generation of interfaces that leverage the dynamic experiences enabled by recent advances in AI and the generative capabilities of foundation models. These models simplify complex tasks by generating outputs in various representations (e.g., text, images, videos) through diverse input modalities like natural language, voice, and sketch. They interpret user intent to generate and transform representations, potentially changing how we interact with information and express ideas. Inspired by this potential, technologists, theorists, and researchers are exploring new forms of interaction by building demos and communities dedicated to concretizing and advancing the vision of working with dynamic abstractions. This UIST workshop provides a timely space to discuss AI’s impact on how we might design and use cognitive tools (e.g., languages, notations, diagrams). We will explore the challenges, critiques, and opportunities of this space by thinking through and prototyping use cases across various domains.
SOUPS 2024 Workshop on Creating Engaging Security and Privacy Educational Interfaces for Educators and Families
A Comic Authoring Tool for Enhancing Privacy and Security Lessons Through Informal Stories
Research shows that many people learn about privacy and security risks from anecdotal stories shared by others, which influence their perceptions and actions. However, no such studies have been conducted with children, who may also gain significant knowledge about security and privacy from peers, trusted adults, and social networks. To promote the creation and sharing of privacy stories, we created PrivacyToon, a concept-driven storytelling tool that facilitates the visual production of privacy stories and visualizations. The tool provides users with creative and technical drawing support: a comic story can be created, downloaded, and shared, encouraging reflection on privacy issues along the way. As a comic authoring tool, it centers users in the creation process, letting them tell their own narratives that express their lived digital experiences or lessons drawn from others’ stories. We discuss our ongoing research on PrivacyToon and its potential as a security and privacy learning interface for children.
CHI 2024
Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation
Thanks to their generative capabilities, large language models (LLMs) have become an invaluable tool for creative processes. These models have the capacity to produce hundreds, even thousands, of visual and textual outputs, offering abundant inspiration for creative endeavors. But are we harnessing their full potential? We argue that current interaction paradigms fall short, guiding users towards rapid convergence on a limited set of ideas rather than empowering them to explore the vast latent design space in generative models. To address this limitation, we propose a framework that facilitates the structured generation of a design space in which users can seamlessly explore, evaluate, and synthesize a multitude of responses. We demonstrate the feasibility and usefulness of this framework through the design and development of an interactive system, Luminate, and a user study with 14 professional writers. Our work advances how we interact with LLMs for creative tasks, introducing a way to harness the creative potential of LLMs.
UIST 2024
CoLadder: Manipulating Code Generation with Multi-Level Blocks
Ryan Yen, Jiawen Zhu, Sangho Suh, Haijun Xia, and Jian Zhao
Programmers increasingly rely on Large Language Models (LLMs) for code generation. However, current LLM-driven code assistants lack sufficient scaffolding to help programmers construct intentions from their overarching goals, translate these intentions into natural language prompts, and further refine these intentions by editing prompts or code. To address this gap, we adopted an iterative design process to gain insights into programmers’ strategies when using LLMs for programming. Building on our findings, we created CoLadder, a system that supports programmers by facilitating hierarchical task decomposition, direct code segment manipulation, and result evaluation during prompt authoring. A user study with 12 experienced programmers showed that CoLadder is effective in helping programmers externalize their problem-solving intentions flexibly, improving their ability to evaluate and modify code across various abstraction levels, from goal to final code implementation.
UIST 2023 Demonstration
🏅 Best Demo Honorable Mention
Demonstration of Masonview: Content-Driven Viewport Management
Bryan Min, Matthew T Beaudouin-Lafon, Sangho Suh, and Haijun Xia
Comics is emerging as a popular medium for providing visual explanations of programming concepts and procedures. Recent research into this medium has opened the door to new opportunities and tools for advancing teaching and learning in computing. For instance, recent research on coding strip, a form of comic strip with its corresponding code, led to a new visual programming environment that generates comics from code and an experience report detailing various ways coding strips can be used to benefit students’ learning. However, how comics can be designed and used to teach programming has not yet been documented in a concise, accessible format to ease their adoption. To fill this gap, we developed a cheat sheet that summarizes the pedagogical techniques and designs teachers can use in their teaching. To develop this cheat sheet, we analyzed prior work on coding strip, including 26 coding strips and 30 coding strip design patterns. We also formulated a concept-language-procedure framework to delineate how comics can facilitate teaching in programming. To evaluate our cheat sheet, we presented it to 11 high school CS teachers at an annual conference for computer studies educators and asked them to rate its readability, usefulness, and organization, as well as their interest in using it for their teaching. Our analysis suggests that this cheat sheet is easy to read/understand, useful, and well-structured, and that teachers are interested in further exploring how they can incorporate comics into their teaching.
UIST 2023
Sensecape: Enabling Multilevel Exploration and Sensemaking with Large Language Models
Sangho Suh, Bryan Min, Srishti Palani, and Haijun Xia
People are increasingly turning to large language models (LLMs) for complex information tasks like academic research or planning a move to another city. However, while such tasks often require working in a nonlinear manner (e.g., arranging information spatially to organize and make sense of it), current interfaces for interacting with LLMs are generally linear, built around conversational interaction. To address this limitation and explore how we can support LLM-powered exploration and sensemaking, we developed Sensecape, an interactive system designed to support complex information tasks with an LLM by enabling users to (1) manage the complexity of information through multilevel abstraction and (2) switch seamlessly between foraging and sensemaking. Our within-subjects user study reveals that Sensecape empowers users to explore more topics and structure their knowledge hierarchically, thanks to the externalization of levels of abstraction. We contribute implications for LLM-based workflows and interfaces for information tasks.
DIS 2023
Metaphorian: Leveraging Large Language Models to Support Extended Metaphor Creation for Science Writing
Jeongyeon Kim, Sangho Suh, Lydia Chilton, and Haijun Xia
Science writers commonly use extended metaphors to communicate unfamiliar concepts in a more accessible way to a wider audience. However, creating metaphors for science writing is challenging even for professional writers; according to our formative study (n=6), finding inspiration and extending metaphors with coherent structures were critical yet significantly challenging tasks for them. We contribute Metaphorian, a system that supports science writers with the creation of scientific metaphors by facilitating the search, extension, and iterative revision of metaphors. Metaphorian uses a large language model-based workflow inspired by the heuristic rules revealed from a study with six professional writers. A user study (n=16) revealed that Metaphorian significantly enhances satisfaction, confidence, and inspiration in metaphor writing without decreasing writers’ sense of agency. We discuss design implications for creativity support for figurative writing in science.
SIGCSE 2023 Poster
Reference Guide for Teaching Programming with Comics
Comics is emerging as a popular, promising medium for teaching programming. Recent research on coding strip, a form of comic strip with a direct correspondence to code, led to a novel visual programming environment that generates comics from code and an experience report demonstrating various ways coding strips can be used to benefit students’ learning and their experiences. However, how teachers can use and design comics has not yet been documented in a concise, accessible format to ease their adoption. To fill this gap, we developed a quick reference guide that summarizes pedagogical techniques and designs teachers can use in their teaching. To develop this guide, we analyzed prior work on coding strip, including 26 coding strips and 30 coding strip design patterns. We also formulated a concept-language-procedure framework to delineate how comics can facilitate teaching in programming. To evaluate our reference guide, we presented it to seven high school CS teachers at an annual conference for computer studies educators and asked them to rate its readability, usefulness, and organization, as well as their interest in using it for their teaching. Our analysis suggests this reference guide is easy to read/understand, useful, and well-structured, and that teachers are interested in further exploring how they can incorporate comics into their teaching.
SIGCSE 2023 Poster
Developing Comic-based Learning Toolkits for Teaching Computing to Elementary School Learners
Francisco Castro, Sangho Suh, Jane E, Weena Naowaprateep, and Yang Shi
Our work explores the idea of teaching computing by having learners create, arrange, and design comic panels. We designed comic-based learning toolkits, guided by the following research question: How can we support the informal learning of basic computing concepts for elementary school learners through a physical comic-based learning toolkit? This question emerged as a result of our partnership with a community organization that teaches art to elementary school learners through the production and distribution of art subscription boxes. Subscription boxes contain art materials and instruction manuals that learners can use to create artistic artifacts at home. The organization was interested in teaching computing through art activities. This led us to design a special subscription box containing materials that include paper comic panels, coloring pens, and magnets, and an activity manual for a comic creation activity. Our learning toolkits guide learners to use computational concepts in the story-crafting process: decomposing a narrative within each comic panel, sequencing comic panels to create a narrative flow, using if-else for character decision-making within the story, and iterating or refining the comic to create a cohesive narrative flow.
UIST 2022
CodeToon: Story Ideation, Auto Comic Generation, and Structure Mapping for Code-Driven Storytelling
Recent work demonstrated how we can design and use coding strips, a form of comic strips with corresponding code, to enhance teaching and learning in programming. However, creating coding strips is a creative, time-consuming process. Creators have to generate stories from code (code->story) and design comics from stories (story->comic). We contribute CodeToon, a comic authoring tool that facilitates this code-driven storytelling process with two mechanisms: (1) story ideation from code using metaphor and (2) automatic comic generation from the story. We conducted a two-part user study that evaluates the tool and the comics generated by participants to test whether CodeToon facilitates the authoring process and helps generate quality comics. Our results show that CodeToon helps users create accurate, informative, and useful coding strips in a significantly shorter time. Overall, this work contributes methods and design guidelines for code-driven storytelling and opens up opportunities for using art to support computer science education.
DIS 2022
🏅 Best Paper Honorable Mention
PrivacyToon: Concept-driven Storytelling with Creativity Support for Privacy Concepts
Sangho Suh, Sydney Lamorea, Edith Law, and Leah Zhang-Kennedy
With privacy-related concepts often abstract and difficult to define, comics can be an effective visual storytelling medium for explaining and raising awareness about privacy. However, existing privacy and security educational comics do not support content creation. To address this, we contribute PrivacyToon, a comic-based authoring tool that leverages concept-driven storytelling and ideation cards to help users create customizable privacy-related visual content. Our exploratory user study with 23 students and teachers shows PrivacyToon’s potential as a creative tool for communicating privacy concepts and stories. Our results show that a wide range of creativity preferences and contexts must be considered when designing systems that integrate ideation card-based design processes.
PhD Thesis
Coding Strip: A Tool for Supporting Interplay within Abstraction Ladder for Computational Thinking
As technologies advance and play an increasingly larger role in our lives, computational thinking—the ability to understand computing concepts and procedures and their role in the tools we use—has become an important part of our training and education in the 21st century. Thus, initiatives to improve people’s technical literacy have become a top priority in many countries around the world, with programming classes becoming a mandatory part of many K-12 curricula and increasingly available online.
Unfortunately, many find programming intimidating and difficult to learn because it requires learning abstract concepts, languages, and procedures. Programming concepts and languages employ abstract terms and unfamiliar syntax and conventions, yet how they relate to what we already know (e.g., real-life situations and grounding metaphors) is not always explicated in learning materials or instructions. Learning programming procedures is also difficult due to their abstract nature. For instance, learners generally need explicit training before they can trace (or abstract) execution steps, yet how computer programs are executed and what happens (e.g., in memory) at each step are often omitted or presented as abstractions (e.g., a loop), obscuring the process for novice learners before they have mastered the ability to step through a procedure on their own. As a result, they are often forced to mechanically memorize arbitrary conventions, procedures, and rules in programming without forming any intuition about them.
These difficulties can be attributed to dead-level abstracting, a phenomenon where information is stuck at certain levels of abstraction. Whether those levels are high or low, the absence of interplay between abstraction levels makes it challenging to understand new information in a meaningful, efficient, and effective way. Although the ability to "rapidly change levels of abstraction" has been recognized as a key characteristic of computational thinking and its importance has been stressed countless times, instruction in computing education tends to be mired at abstract levels and to lack opportunities for students to develop this ability to move up and down the ladder of abstraction.
This thesis aims to address this problem by proposing a model in which learners can move between levels of abstraction. Specifically, I introduce coding strip, a form of comic strip that has a direct correspondence to code, as a tool for teaching and learning programming concepts, languages, and procedures. By using comics that directly correspond to code, coding strip instantiates a framework for computational thinking: learners can move between concrete and abstract levels of abstraction to develop a way of thinking about programming concepts, languages, and procedures in terms of real-life situations and objects. To support its use, this thesis contributes methods, tools, and empirical studies to facilitate the design, creation, and use of coding strips.
IUI 2022 Demonstration
Leveraging Generative Conversational AI to Develop a Creative Learning Environment for Computational Thinking
We explore how generative conversational AI can assist students’ learning, creative, and sensemaking processes in a visual programming environment where users can create comics from code. The process of visualizing code as comics involves mapping programming language (code) to natural language (story) and then to the visual language of comics. While this process requires users to brainstorm code examples, metaphors, and story ideas, recent developments in generative models introduce an exciting opportunity for learners to harness their creative superpower and for researchers to advance our understanding of how generative conversational AI can augment our intelligence in creative learning contexts. We provide an overview of our system and discuss interaction scenarios to demonstrate ways we can partner with generative conversational AI in the context of learning computer programming.
SIGCSE 2022 Demonstration
CodeToon: A New Visual Programming Environment Using Comics for Teaching and Learning Programming
Visual programming environments such as Scratch and Alice contributed to lowering the barrier to computer programming. They use virtual characters that users can manipulate with code, enabling creative activities, such as creating animations and interactive games and stories. We introduce CodeToon, a new visual programming environment that uses comics to visualize code. In the environment, users can generate stories from code and produce comics. We showcase the environment, the authoring process, how teachers and students can use it to teach and learn programming, and the benefits of using comics as the visual representation.
SIGCSE 2021
Using Comics to Introduce and Reinforce Programming Concepts in CS1
Sangho Suh, Celine Latulipe, Ken Jen Lee, Bernadette Cheng, and Edith Law
Recent work investigated the potential of comics to support the teaching and learning of programming concepts and suggested several ways coding strips, a form of comic strip with its corresponding code, can be used. Building on this work, we tested the recommended use cases of coding strips in an undergraduate introductory computer science course at a large comprehensive university. At the end of the course, we surveyed students to assess their experience and found they benefited in various ways. Our work contributes a demonstration of the various ways comics can be used in introductory CS courses and an initial understanding of the benefits and challenges of using comics in computing education, gleaned from an analysis of students’ survey responses and code submissions.
arXiv 2021
Exploring Individual and Collaborative Storytelling in an Introductory Creative Coding Class
Sangho Suh, Ken Jen Lee, Celine Latulipe, Jian Zhao, and Edith Law
Teaching programming through storytelling is a popular pedagogical approach and an active area of research. However, most previous work in this area focused on K-12 students using block-based programming. Little, if any, work has examined the approach with university students using text-based programming. This experience report fills this gap. Specifically, we report our experience administering three storytelling assignments—two individual and one collaborative—in an introductory computer science class with 49 undergraduate students using p5.js, a text-based programming library for creative coding. Our work contributes an understanding of students’ experiences with the three authoring processes and a set of recommendations to improve the administration of and experience with individual and collaborative storytelling with text-based programming.
SIGCSE Demonstration
CodingToon: Using Authoring Tool to Create Concept-driven Comics for Programming Concepts
One of the main challenges with teaching and learning programming is the abstract nature of its concepts and procedures. To address this, recent work suggested using comics. Specifically, it proposed coding strip, a form of comic strip accompanied by its corresponding code, and demonstrated how it can be designed and used in classrooms. A subsequent in-class study revealed that students enjoy learning with coding strips and benefit from them in various ways. Unfortunately, creating comics can be a time-consuming and challenging task, which can compromise the usefulness and adoption of coding strips. Thus, in this demo, we introduce CodingToon, an authoring tool for coding strips, and provide a brief tutorial to help interested instructors use coding strips in their classrooms. The authoring process shown in the demo follows a concept-driven storytelling process that can be extended to support the creation of explanations for concepts beyond programming. A laptop or tablet is recommended but not required.
VL/HCC 2020
Coding Strip: A Pedagogical Tool for Teaching and Learning Programming Concepts through Comics
Sangho Suh, Martinet Lee, Gracie Xia, and Edith Law
The abstract nature of programming makes learning to code a daunting undertaking for many novice learners. In this work, we advocate the use of comics—a medium capable of presenting abstract ideas in a concrete, familiar way—for introducing programming concepts. Particularly, we propose a design process and related tools to help students and teachers create coding strips, a form of comic strips that are associated with a piece of code. We conducted two design workshops with students and high school computer science teachers to evaluate our design process and tools. We find that our design process and tools are effective at supporting the design of coding strips and that both students and teachers are excited about using coding strip as a tool for learning and teaching programming concepts.
CHI 2020 Late-Breaking Work
Curiosity Notebook: A Platform for Learning by Teaching Conversational Agents
Edith Law, Parastoo Baghaei Ravari, Nalin Chhibber, Dana Kulic, Stephanie Lin, Kevin D Pantasdo, Jessy Ceha, Sangho Suh, and Nicole Dillen
Learning by teaching is an established pedagogical technique; however, the exact process through which learning happens remains difficult to assess, in part due to the variability in the tutor-tutee pairing and interaction. Prior research proposed the use of teachable agents acting as students, in order to facilitate more controlled studies of the learning by teaching phenomenon. In this work, we introduce a learning by teaching platform, Curiosity Notebook, which allows students to work individually or in groups to teach a conversational agent a classification task in a variety of subject topics. We conducted a 4-week exploratory study with 12 fourth and fifth grade elementary school children, who taught a conversational robot how to classify animals, rocks/minerals and paintings. This paper outlines the architecture of our system, describes the lessons learned from the study, and contributes design considerations on how to design conversational agents and applications for learning by teaching scenarios.
IDC 2020
How Do We Design for Concreteness Fading? Survey, General Framework, and Design Dimensions
Over the years, concreteness fading has been used to design learning materials and educational tools for children. Unfortunately, it remains an underspecified technique without a clear guideline on how to design it, resulting in varying forms of concreteness fading and conflicting results due to design inconsistencies. To our knowledge, no research has analyzed the existing designs of concreteness fading implemented across different settings, formulated a generic framework, or explained the design dimensions of the technique. This poses several problems for future research, such as the lack of a shared vocabulary for reference and comparison, as well as barriers to researchers interested in learning and using this technique. Thus, to inform and support future research, we conducted a systematic literature review and contribute: (1) an overview of the technique, (2) a discussion of various design dimensions and challenges, and (3) a synthesis of key findings about each dimension. We open-source our dataset to invite other researchers to contribute to the corpus, supporting future research and discussion on concreteness fading.
VL/HCC 2020 Doctoral Consortium
Promoting Meaningful Learning by Supporting Interplay within Abstraction Ladder
How can we express programming concepts in a more accessible form and manner? To address this question, my research explores ways to design, create, and use coding strip, a form of comic strip that offers corresponding code so learners can understand programming concepts in both concrete and abstract contexts. The motivation that drives this research is my belief that the key to efficient and effective learning lies in enabling dynamic interplay between high-level and low-level abstractions. Coding strip is proposed as the first step towards the goal of understanding how to design, create, and use tools that support such interplay.
ICER 2019 Doctoral Consortium
Using Concreteness Fading to Model & Design Learning Process
Concreteness fading is a technique for teaching abstract concepts, where a given concept is re-introduced in three stages with decreasing levels of concreteness. Over the years, its effectiveness has been empirically and theoretically supported in mathematics and science education, encouraging the recent adoption of the technique in computing education research. My research aims to advance our understanding of this technique and use it to support learning in computing education. The motivation that drives this research is my belief that the concreteness fading approach can have a significant impact on how we model and design learning processes in computing education, with broad implications for how we design learning interfaces and systems.
KAIS 2018
Localized User-driven Topic Discovery via Boosted Ensemble of Nonnegative Matrix Factorization
Sangho Suh, Sungbok Shin, Joonseok Lee, Chandan K Reddy, and Jaegul Choo
Nonnegative matrix factorization (NMF) has been widely used in topic modeling of a large-scale document corpus, where a set of underlying topics is extracted via a low-rank factor matrix from NMF. However, the resulting topics often convey only general, thus redundant, information about the documents rather than minor but potentially meaningful information to users. To address this problem, we present a novel ensemble method of nonnegative matrix factorization that discovers meaningful local topics. Our method leverages the idea of an ensemble model, which has shown advantages in supervised learning, in an unsupervised topic modeling context. That is, our model successively performs NMF given a residual matrix obtained from previous stages and generates a sequence of topic sets. Our algorithm for updating the input matrix has novelty in two aspects: the first lies in utilizing the residual matrix, inspired by a state-of-the-art gradient boosting model, and the second stems from applying a sophisticated local weighting scheme to the given matrix to enhance the locality of topics, which in turn delivers high-quality, focused topics of interest to users. We extend this ensemble model further with keyword- and document-based user interaction to introduce user-driven topic discovery.
IJCAI 2017
Local Topic Discovery via Boosted Ensemble of Nonnegative Matrix Factorization
Sangho Suh, Jaegul Choo, Joonseok Lee, and Chandan K Reddy
Nonnegative matrix factorization (NMF) has become increasingly popular for topic modeling of large-scale documents. However, the resulting topics often represent only general, thus redundant, information about the data rather than minor but potentially meaningful information to users. To tackle this problem, we propose a novel ensemble model of nonnegative matrix factorization for discovering high-quality local topics. Our method leverages the idea of an ensemble model to successively perform NMF given a residual matrix obtained from previous stages and generates a sequence of topic sets. The novelty of our method lies in the fact that it utilizes the residual matrix, inspired by a state-of-the-art gradient boosting model, and applies a sophisticated local weighting scheme to the given matrix to enhance the locality of topics, which in turn delivers high-quality, focused topics of interest to users.
NIPS 2016 Workshop
Re-VACNN: Steering Convolutional Neural Network via Real-time Visual Analytics
Sunghyo Chung, Cheonbok Park, Sangho Suh, Kyeongpil Kang, Jaegul Choo, and Bum Chul Kwon
Recently, deep learning has become exceptionally popular due to its outstanding performance in various machine learning and artificial intelligence applications. Convolutional neural networks (CNNs), a representative model of deep learning, have been successfully applied to computationally demanding tasks in areas such as computer vision. Despite this capability, training a CNN model properly is time-consuming and prone to overfitting and/or bad local minima. To address these issues, this study aims at improving the interpretability of the training process and using it for subsequent human intervention, specifically steering the training of a CNN model in real time. In this paper, we present ReVACNN, a real-time visual analytics system for CNNs, where (1) the overall training process (e.g., the amount of activation and change each filter/layer exhibits at a particular iteration/epoch) is visualized in a network view and (2) a 2D embedding of the trained filters within each layer is visualized to show the relationships between filters as well as layers. In particular, ReVACNN allows users to perform several interactions in real time: (1) skipping the gradient-descent update on a sub-part of the CNN model to reduce subsequent training time and (2) steering filters interactively in the 2D embedding view to avoid bad local minima. Finally, we present several use cases that demonstrate the benefits users can gain from ReVACNN.
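To make the filter-level 2D embedding idea concrete, the sketch below flattens each convolutional filter into a vector and projects it to two dimensions, so filters with similar weights land near each other in the view. This is our own minimal illustration, not the ReVACNN implementation; the system may use a different projection method, and the function name and array shapes here are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

def embed_filters_2d(conv_weights: np.ndarray) -> np.ndarray:
    """Project each convolutional filter to a 2D point so that nearby
    points correspond to filters with similar weights.

    conv_weights: array of shape (num_filters, in_channels, k, k),
    e.g., a trained layer's weight tensor exported to NumPy.
    """
    flat = conv_weights.reshape(conv_weights.shape[0], -1)   # one row per filter
    return PCA(n_components=2).fit_transform(flat)           # (num_filters, 2)

# Example with random stand-in weights; a real use would pass trained weights.
points = embed_filters_2d(np.random.rand(64, 3, 11, 11))
print(points.shape)  # (64, 2)
```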
KDD 2016 Workshop
ReVACNN Real-time Visual Analytics for Convolutional Neural Network
Sunghyo Chung, Sangho Suh, Cheonbok Park, Kyeongpil Kang, Jaegul Choo, and Bum Chul Kwon
Recently, deep learning has gained exceptional popularity due to its outstanding performance in many machine learning and artificial intelligence applications. Among various deep learning models, the convolutional neural network (CNN) is one of the representative models; it has solved various complex tasks in computer vision since AlexNet, a widely used CNN model, won the ImageNet challenge in 2012. Even with such remarkable success, how it handles the underlying complexity of data so well has not been thoroughly investigated, as much of the effort has been concentrated on pushing its performance to new limits. The growing popularity of and attention to CNNs across academia and industry therefore demand a clearer, more detailed exposition of their inner workings. To this end, we introduce ReVACNN, an interactive visualization system that makes two major contributions: 1) a network visualization module for monitoring the underlying process of a convolutional neural network using a filter-level 2D embedding view and 2) an interactive module that enables real-time steering of a model. We present several use cases demonstrating the benefits users can gain from our approach.
ICDM 2016
🏆 Best Paper
L-EnsNMF: Boosted Local Topic Discovery via Ensemble of Nonnegative Matrix Factorization
Sangho Suh, Jaegul Choo, Joonseok Lee, and Chandan K Reddy
Nonnegative matrix factorization (NMF) has been widely applied in many domains. In document analysis, it has been increasingly used in topic modeling applications, where a set of underlying topics is revealed by a low-rank factor matrix from NMF. However, it is often the case that the resulting topics convey only general information about the data, which tends not to be very informative. To tackle this problem, we propose a novel ensemble model of nonnegative matrix factorization for discovering high-quality local topics. Our method leverages the idea of an ensemble model, which has been successful in supervised learning, in an unsupervised topic modeling context. That is, our model successively performs NMF given a residual matrix obtained from previous stages and generates a sequence of topic sets. Our algorithm for updating the input matrix has novelty in two aspects: the first lies in utilizing the residual matrix, inspired by a state-of-the-art gradient boosting model, and the second stems from applying a sophisticated local weighting scheme to the given matrix to enhance the locality of topics, which in turn delivers high-quality, focused topics of interest to users. We evaluate our proposed method against other topic modeling methods, such as several variants of NMF and latent Dirichlet allocation, in terms of various evaluation measures representing topic coherence, diversity, coverage, computing time, and so on. We also present a qualitative evaluation of the topics discovered by our method using several real-world datasets.
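To illustrate the stage-wise (boosted) idea at the core of this method, the sketch below repeatedly factorizes the nonnegative residual left over from earlier stages, so later stages pick up more local structure. This is a rough illustration using scikit-learn's NMF under our own assumptions; it omits the paper's localized weighting scheme and anchor sampling, and the function and parameter names are ours, not the authors'.

```python
import numpy as np
from sklearn.decomposition import NMF

def boosted_ensemble_nmf(X, n_stages=5, topics_per_stage=2, seed=0):
    """Stage-wise NMF sketch: each stage factorizes the nonnegative residual
    left unexplained by previous stages, yielding a sequence of topic sets."""
    residual = np.asarray(X, dtype=float).copy()
    topic_sets = []
    for stage in range(n_stages):
        model = NMF(n_components=topics_per_stage, init="nndsvda",
                    random_state=seed + stage, max_iter=400)
        W = model.fit_transform(residual)   # document-topic weights
        H = model.components_               # topic-term weights
        topic_sets.append(H)
        # Subtract this stage's reconstruction and clip at zero so the
        # residual stays nonnegative; later stages then focus on the
        # minor, more local structure that remains.
        residual = np.maximum(residual - W @ H, 0.0)
    return topic_sets

# Example: 100 documents x 500 terms of toy counts.
topics = boosted_ensemble_nmf(np.random.poisson(1.0, size=(100, 500)))
```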
This paper followed the CRISP-DM development cycle to build classification models for two different datasets: the “student performance” dataset, consisting of 649 instances and 33 attributes, and the “Turkiye Student Evaluation” dataset, consisting of 5,820 instances and 33 attributes. To avoid confusion, the paper is organized into two parts (Part A and Part B), with the analysis of each dataset presented separately; the general flow of the paper follows the steps shown in its Table of Contents.
HCI Korea 2016
EYEscort: Beacon-driven Navigation Service for People with Visual Impairment
We propose a beacon-driven navigation service for people with visual impairment. EYEscort interacts with beacons to help its users navigate their way around the city. We conducted focus group interviews with people with visual impairment and reflected their needs, stories, and feedback to refine our model.