A Trip to the West World -- Attending VIS 2017

Egg-like Cactus

This year, VIS was held in Phoenix, Arizona. As the name implies, Phoenix is a hot and dry city located in the Sonoran Desert.

This is my second year attending VIS. Unlike last year, I was able to present a conference paper at VAST this year, which was really a great experience for me. More importantly, it’s really nice to have the opportunity to learn what others in this community are doing.

My Presentation

Let’s start with some statistics. This year, VIS accepted 39/170 papers in InfoVis, 37/173 papers in TVCG-track VAST, 15 papers in conference-track VAST (where mine was published), and 23/120 papers in SciVis. Simple math shows that the acceptance rate for journal-track papers is 99/463 ≈ 21.4%. Counting the conference-track VAST papers as well, the rate is a little higher (114/463 ≈ 24.6%). Though I felt a bit odd about this conference track, being able to present my work in front of the community seems more important and more “impactful” than just a journal publication.
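Just to double-check the arithmetic, here is a tiny snippet reproducing the rates above (assuming the conference-track VAST papers are added only to the accepted count, not to the submission count):

```python
# Acceptance-rate arithmetic for VIS 2017, using the numbers listed above.
accepted = {"InfoVis": 39, "VAST (TVCG)": 37, "SciVis": 23}
submitted = {"InfoVis": 170, "VAST (TVCG)": 173, "SciVis": 120}
vast_conf_track = 15  # conference-track VAST papers, counted on top

journal_accepted = sum(accepted.values())   # 99
total_submitted = sum(submitted.values())   # 463

print(f"Journal-track rate: {journal_accepted / total_submitted:.1%}")  # 21.4%
print(f"Including conference track: "
      f"{(journal_accepted + vast_conf_track) / total_submitted:.1%}")  # 24.6%
```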


VIS+ML

Time is always limited. I have to admit I was biased towards this topic during the conference, considering that I have more or less fixed my research direction on it for the next few years.

Some said this year’s VIS was a milestone for VIS+ML. We had a VIS+ML tutorial and a Visual Analytics for Deep Learning (VADL) workshop. We had three sessions in VAST and one session in InfoVis with “ML” in the title, and a panel discussing the influence of ML on our field. Indeed, the great advances in ML have influenced many other fields, including databases, graphics, networking, and many non-CS fields. Some even say that computer vision is now somewhat like a sub-field of ML (no offense).

Now, this tide is also coming into our field of visualization. In some respects, such a huge surge of interest in ML in a community like ours is not healthy: it crowds out the research and development of many other important topics (see below). I totally agree with this point. But in my mind, a research field develops and grows as we start to work on promising new interdisciplinary areas. With such powerful techniques coming out, it is not wise to close our doors and claim we already have everything we need at home (isolationism, or 闭关锁国 as we say in Chinese). Are these VIS+ML papers really useful and inspiring? Is VIS+ML really a significant and promising direction? We don’t know yet. But I think refusing the new is definitely the wrong mindset. Research is about working on new and innovative things that are potentially useful; the things that are definitely useful have already been taken up by more resourceful organizations like companies. I believe we will find out whether this direction is nonsense or not in the next few years. Of course, VIS is a conference about visualization; we don’t want it to be flooded by “ML” papers (though I have contributed to the flood myself).

Now back to the conference. A few related events that I attended are listed below.

Visualization: The Secret Weapon For Machine Learning (A keynote in VDS)

The keynote was given by Fernanda Viegas and Martin Wattenberg from Google.

They presented several works from their Big Picture group at Google Brain and talked a bit about the “People + AI” initiative at Google. Actually, I very much agree with their idea of bringing humans into the loop of AI, the so-called “human-in-the-loop” AI. Autonomous systems have indeed improved the efficiency of many applications, but for the foreseeable future, many more applications cannot be fully automated and require humans to be in the loop. That is why we need VIS and HCI. Interestingly, the closing capstone of the conference was titled “Data Humanism”.

One interesting point is how many insights can be found by simply visualizing the datasets. They presented Facets, the tool they recently released, though I think scalability is still an issue for it.

A few applications where Vis can be used in ML:

  • debugging datasets

  • debugging/understanding models

  • education (teaching how ML models work)

  • fairness

VIS+ML: Symbiosis of Visualization and Machine Learning (Tutorial)

Something that caught my eye: an in-depth survey on how visualization can be used to augment embeddings, presented by Yang Wang from Uber research.

Visual Analysis for Deep Learning (Workshop)

Shixia Liu from Tsinghua University gave a talk about their recent work on this topic.

She organized the work around three possible applications of visualization for deep learning: visual understanding, visual diagnosis, and model refinement. This taxonomy is actually the same as the one presented in their previously published survey paper.

In my mind, all three categories target experts with DL knowledge. One significant part missing from this taxonomy is visualization for non-expert end users. Non-expert users of DL techniques can actually be a much larger population, and I believe visualization can make a larger impact there.

A research question that came to my mind during the talk:

How can we remove the information in a region of an image more wisely? Current methods: 1) simply fill the region with gray/white; 2) replace the region with random noise (uniform distribution).
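For my own reference, here is a minimal sketch of these two baseline occlusion methods (my own illustration with NumPy, assuming an HxWx3 image with values in [0, 1]; the function names are made up):

```python
import numpy as np

def occlude_gray(image, y, x, h, w, value=0.5):
    """Baseline 1: fill the region with a constant gray (use 1.0 for white)."""
    out = image.copy()
    out[y:y + h, x:x + w, :] = value
    return out

def occlude_noise(image, y, x, h, w, rng=None):
    """Baseline 2: replace the region with uniform random noise."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    out[y:y + h, x:x + w, :] = rng.uniform(0.0, 1.0, size=(h, w, image.shape[2]))
    return out

# Example: occlude a 32x32 patch in a dummy 224x224 RGB image.
img = np.random.rand(224, 224, 3)
grayed = occlude_gray(img, 96, 96, 32, 32)
noised = occlude_noise(img, 96, 96, 32, 32)
```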

VIS Panel: How do Recent ML Advances Impact the Data Visualization Research Agenda?

Q: Endangering the Human-in-the-loop Paradigm?

Q: Extending the Visualization Research Agenda

Some interesting points raised by the panelists:

Min Chen: The space of Machine Learning

I really appreciate his research on the foundations of visualization.

His discussion of the computation of machine learning in terms of the scope of a Universal Turing Machine was really inspiring, though I am not sure whether the results he presented have any theoretical proofs.

Articulating what current machines cannot do is a much more solid argument for why we want to include humans in the loop.

  • Visually exploring the space of ML? I cannot remember what this was about…

Visually supporting software engineering with ML

I am not sure who discussed this idea (maybe Alexandru Telea). In today’s IT industry, ML is still within the scope of software engineering, so visualization for supporting software engineering can naturally be extended to support ML, for example in quality assurance and testing…

Ross Maciejewski: Why Open the Box?

Humans always have cognitive biases, so why would we want to add bias to our systems? Sometimes humans actually worsen the model’s predictions.

He also talked about extending the concept of algorithm aversion to this topic.

He actually raised a very important (in my mind, the most important) question for the Vis for ML topic.

Before we add humans into the loop, we have to consider clearly why we want to add VIS for ML. Can VIS really be helpful, or will humans make things worse?

I don’t have an answer to this. I do agree that human interventions sometimes make things worse, but in many other scenarios visualizations do help humans gain insights. Let’s see.


Sunflowers in the botanical garden

Important and Unsolved Problems in VAST:

A list summarized from Prof. Qu’s notes and other random discussions:

  1. How to quantify visual complexity and cognitive load?

  2. How to formalize a framework of visual reasoning in VA?

  3. How to make the process of visualization design more efficient (e.g., using AI to automate and recommend visualizations)?

  4. How to better integrate research results from InfoVis and cognitive science into our design process using more autonomous systems? (credit to Dongyu)

  5. *How to scale visualization to support really large datasets (GPU acceleration, distributed systems)?


Presenting at a Conference

Presenting your own work at conferences is a good way to make an impact. The day before my talk, I suddenly felt very nervous while practicing for the last time with Qiaomu and Wei. This is kind of normal: in the Chinese education environment, shy students like me may not be sufficiently trained in public speaking. So I practiced about another four times to make sure I could go through my presentation fluently. I have to say, this was my first time giving a talk in front of so many people at such a formal event; I believe there were more than 150 people at my talk. Considering that “deep learning” is such a buzzword nowadays, this is not a surprise. Fortunately, thanks to my labmates and Prof. Qu, I gave a talk that at least satisfied myself. Cheers!

Some tips that I learned from conference talks:

  • A conference talk is usually a good way to discuss your work with others. Try to make the most of it.
  • Don’t fill the presentation with the full content of the paper. Presenting the most critical 30% of the paper clearly leaves the audience with a deeper impression.
  • Before the talk, practice at least a few times (for me, three) and make sure your talk and slides work together fluently.
  • Even if you already rehearsed several days before, practice one more time the day before the talk to keep your memory fresh.
  • Time matters. Keeping to the schedule is basic respect at a conference.
  • When giving the talk, make eye contact with the audience (you can pick a few people who make you feel comfortable and look at them).
  • Remind yourself to slow down. We (non-native speakers) may slur our words when speaking too fast, which can increase nervousness.

Other Random Notes

Following are some random notes that I took during the conference. They may be irrelevant to the main idea of this blog.

VAHC: Visual Analytics in Healthcare

This year, the first event I attended was VAHC. The keynote was presented by Hadi Kharrazi, a professor of public health at Johns Hopkins University. He mainly talked about the need for visualization in the field of population health. In a country like the U.S., a large amount of funding (close to 20% of GDP) goes to healthcare; electronic health records, for example, are now pervasive in hospitals, but the data is not well utilized. Visualization and visual analytics in this field are more practical and domain-specific and can be used directly by, or presented to, decision makers. Some key points are summarized as follows:

  • Feature reduction
  • Temporal data (how to deal with zero fills?)
  • Cognitive issues
  • Comparing models (models across different sub-populations)
  • Data quality (how quality issues affect the analysis results and visualization)
  • Variety and volume of the data

People in the healthcare domain may prefer simple charts (e.g., bar charts) because they don’t understand what VA can do.

Different people (clinicians, policy makers, medical researchers) have different needs for visualization.

VDS: Visualization in Data Science

Professor Vasant Dhar of NYU gave a keynote talk about decision making with autonomous systems, with a focus on financial data.

Autonomous systems are increasingly used in many areas. In critical areas where human knowledge is required in the decision-making process (e.g., finance), one crucial question is whether we should trust a machine. In finance, dealers or traders may prefer a model that is more predictable (or stable) even with lower returns. Visualization can be helpful in model evaluation and model monitoring.

VAST sequence

Yuanzhe’s paper

  • Locality sensitive hashing (see the sketch after this list)
  • Presentation tips: embed videos inside ppt.
  • A good research problem should be general enough to have a broad range of applications.
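Since locality-sensitive hashing came up in the notes above, here is a minimal sketch of one standard variant, random-hyperplane LSH (SimHash) for cosine similarity. This is only my own illustration of the general technique, not the method used in the paper; the dimensions and parameters are made up:

```python
import numpy as np
from collections import defaultdict

# Random-hyperplane LSH: vectors whose bit signatures collide are likely
# to have a small angle between them (high cosine similarity).
rng = np.random.default_rng(0)
dim, n_bits = 128, 16                      # made-up sizes for illustration
planes = rng.normal(size=(n_bits, dim))    # one random hyperplane per bit

def lsh_key(v):
    """Signature of v: one bit per hyperplane, recording which side v falls on."""
    key = 0
    for b in (planes @ v) > 0:
        key = (key << 1) | int(b)
    return key

# Index some random vectors into hash buckets.
data = rng.normal(size=(1000, dim))
buckets = defaultdict(list)
for i, v in enumerate(data):
    buckets[lsh_key(v)].append(i)

# Query with a slightly perturbed copy of vector 0: with high probability it
# lands in the same bucket, so only that bucket needs to be searched.
query = data[0] + 0.05 * rng.normal(size=dim)
candidates = buckets[lsh_key(query)]
print(len(candidates), 0 in candidates)
```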