
Budig, B., van Dijk, T.C., Wolff, A.: Matching Labels and Markers in Historical Maps: an Algorithm with Interactive Postprocessing. ACM Transactions on Spatial Algorithms and Systems. (2016).

van Dijk, T.C., Fink, M., Fischer, N., Lipp, F., Markfelder, P., Ravsky, A., Suri, S., Wolff, A.: Block Crossings in Storyline Visualizations. Proc. 24th Int. Sympos. Graph Drawing. (2016).

Löffler, A., van Dijk, T.C., Wolff, A.: Snapping Graph Drawings to the Grid Optimally. Proc. 24th Int. Sympos. Graph Drawing. (2016).

Budig, B., van Dijk, T.C., Kirchner, F.: Glyph Miner: A System for Efficiently Extracting Glyphs from Early Prints in the Context of OCR. In: Adam, N.R., Cassel, L., Yesha, Y., Furuta, R., and Weigle, M.C. (eds.) Proceedings of the 16th ACM/IEEE-CS Joint Conference on Digital Libraries. pp. 31–34. ACM (2016).
While off-the-shelf OCR systems work well on many modern documents, the heterogeneity of early prints poses a significant challenge. To achieve good recognition quality, existing software must be "trained" specifically for each particular corpus. This is a tedious process that involves significant user effort. In this paper we demonstrate a system that generically replaces a common part of the training pipeline with a more efficient workflow: given a set of scanned pages of a historical document, our system uses efficient user interaction to semi-automatically extract large numbers of occurrences of glyphs indicated by the user. In a preliminary case study, we evaluate the effectiveness of our approach by embedding our system into the workflow at the University Library Würzburg.

Beckmann, L., Budig, B., van Dijk, T.C., Schamel, J.: There and Back Again: Using Fréchet-Distance Diagrams to Find Trajectory Turning Points. Proceedings of the 23rd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2015). pp. 238–241. ACM (2015).
There is much interest in the automatic analysis of GPS trajectories, motivated among others by animal tracking, traffic modelling and tourism studies. Core questions include, for example, detecting stopping points and flocking behaviour, and developing routing models. In this paper we consider turning points. We develop a formal definition of "turning point" based on analysing the structure of free-space diagrams for the Fréchet distance. We give an efficient algorithm for computing an optimal set of turning points under this criterion. Our method is evaluated in the context of a current study on tourist preferences at Berchtesgaden National Park, Germany. We evaluate the suitability of our definition and the efficiency of our algorithm on a large set of real-world GPS trajectories collected by outdoor recreationists. A ground truth of turning points was established by hand for the complete data set. Experiments show that the runtime and quality of our method are suitable for practical applications.
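The free-space construction the abstract builds on can be illustrated with the discrete Fréchet distance. This is a minimal sketch, not the paper's method: the paper analyses the continuous free-space diagram and extracts turning points from its structure, which this fragment does not attempt.

```python
import math

def free_space(P, Q, eps):
    """Boolean free-space matrix for the discrete Fréchet distance:
    cell (i, j) is free iff points P[i] and Q[j] are within eps."""
    return [[math.dist(p, q) <= eps for q in Q] for p in P]

def reachable(F):
    """Decision version: is there a monotone path of free cells from
    (0, 0) to (m-1, n-1)? Standard dynamic program over the matrix."""
    m, n = len(F), len(F[0])
    R = [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if not F[i][j]:
                continue
            if i == 0 and j == 0:
                R[i][j] = True
            else:
                R[i][j] = (i > 0 and R[i-1][j]) or (j > 0 and R[i][j-1]) \
                          or (i > 0 and j > 0 and R[i-1][j-1])
    return R[m-1][n-1]
```

The smallest eps for which `reachable` succeeds is the discrete Fréchet distance between the two point sequences.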

Budig, B., van Dijk, T.C.: Active Learning for Classifying Template Matches in Historical Maps. In: Japkowicz, N. and Matwin, S. (eds.) Discovery Science. pp. 33–47. Springer International Publishing (2015).
Historical maps are important sources of information for scholars of various disciplines. Many libraries are digitising their map collections as bitmap images, but for these collections to be most useful, there is a need for searchable metadata. Due to the heterogeneity of the images, metadata are mostly extracted by hand – if at all: many collections are so large that it is important to minimise the effort spent on individual maps. We propose an active-learning approach to one of the practical problems in automatic metadata extraction from historical maps: locating occurrences of image elements such as text or place markers. For that, we combine template matching (to locate possible occurrences) with active learning (to efficiently determine a classification). Using this approach, we design a human–computer interaction in which large numbers of elements on a map can be located reliably using little user effort. We experimentally demonstrate the effectiveness of this approach on real-world data.
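The interaction loop described above – template matching produces scored candidates, active learning decides which ones the user should label – can be sketched with simple uncertainty sampling over a score threshold. The dict layout, the `oracle` callback and the threshold-update rule are illustrative assumptions, not the classifier used in the paper.

```python
def most_uncertain(candidates, threshold):
    """Pick the template-match candidate whose score lies closest to
    the current decision threshold (uncertainty sampling)."""
    return min(candidates, key=lambda c: abs(c["score"] - threshold))

def active_learning_loop(candidates, oracle, threshold=0.5, budget=10):
    """Repeatedly query the user (oracle) about the most uncertain
    candidate and move the threshold into the gap between the
    labelled negatives and positives."""
    pos, neg = [], []
    unlabeled = list(candidates)
    for _ in range(min(budget, len(unlabeled))):
        c = most_uncertain(unlabeled, threshold)
        unlabeled.remove(c)
        (pos if oracle(c) else neg).append(c["score"])
        lo = max(neg, default=0.0)   # highest score labelled "no"
        hi = min(pos, default=1.0)   # lowest score labelled "yes"
        threshold = (lo + hi) / 2
    return threshold
```

All remaining candidates above the returned threshold would then be accepted without further user effort.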

van Dijk, T.C., Haunert, J.H.: Interactive focus maps using least-squares optimization. International Journal of Geographical Information Science. 28, 2052–2075 (2014).

Chimani, M., van Dijk, T.C., Haunert, J.H.: How to Eat a Graph: Computing Selection Sequences for the Continuous Generalization of Road Networks. Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. pp. 243–252 (2014).
In a connected weighted graph, consider deleting the edges one at a time, in some order, such that after every deletion the remaining edges are still connected. We study the problem of finding such a deletion sequence that maximizes the sum of the weights of the edges in all the distinct graphs generated: the weight of an edge is counted in every graph that it is in. This effectively asks for the high-weight edges to remain in the graph as long as possible, subject to connectivity. We apply this to road network generalization in order to generate a sequence of successively more generalized maps of a road network so that these maps go well together, instead of considering each level of generalization independently. In particular, we look at the problem of making a road segment selection that is consistent across zoom levels. We show that the problem is NP-hard and give an integer linear program (ILP) that solves it optimally. Solving this ILP is only feasible for small instances. Next we develop constant-factor approximation algorithms and heuristics. We experimentally demonstrate that these heuristics perform well on real-world instances.
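A minimal sketch of the greedy idea behind such heuristics: repeatedly delete the lightest edge whose removal keeps the graph connected, so that heavy edges survive into as many snapshots as possible. This is a plain illustrative heuristic, not the paper's approximation algorithms or its ILP; it stops once only a spanning tree remains.

```python
from collections import defaultdict

def connected(nodes, edges):
    """DFS connectivity check on an undirected weighted edge list."""
    if not nodes:
        return True
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == set(nodes)

def greedy_deletion_sequence(nodes, edges):
    """Delete, in each round, the minimum-weight edge that is not a
    bridge; return the deletion order."""
    remaining = list(edges)
    order = []
    while True:
        candidates = [e for e in sorted(remaining, key=lambda e: e[2])
                      if connected(nodes, [f for f in remaining if f is not e])]
        if not candidates:
            return order  # only a spanning tree is left
        e = candidates[0]
        remaining.remove(e)
        order.append(e)
```

Each prefix of the returned order corresponds to one map in the generalization sequence.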

Budig, B., van Dijk, T.C., Wolff, A.: Matching Labels and Markers in Historical Maps: an Algorithm with Interactive Postprocessing. Proceedings of the 2nd ACM SIGSPATIAL International Workshop on MapInteraction. (2014).
In this paper we present an algorithmic system for determining the proper correspondence between place markers and their labels in historical maps. We assume that the locations of place markers (usually pictographs) and labels (pieces of text) have already been determined – either algorithmically or by hand – and want to match the labels to the markers. This time-consuming step in the digitization process of historical maps is nontrivial even for humans, but provides valuable metadata (for example when subsequently georeferencing the map). In order to speed up this process, we model the problem in terms of combinatorial optimization, solve that problem efficiently, and show how user interaction can be used to improve the quality of the results. We test the algorithm on a manually extracted ground truth for two historical maps with a combined total of over 4000 markers and labels. The algorithm correctly matches 99% of the labels and is robust against noisy input. It furthermore performs a sensitivity analysis and in this way computes a measure of confidence for each of the assignments. We use this as the basis of an interactive system, where the user's effort is directed to checking the parts of the map where the algorithm is unsure; any corrections the user makes can be propagated by the algorithm. We discuss an early prototype of this system and statistically confirm that it successfully locates the areas on the map where the algorithm needs help.
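The combinatorial core – assign each label to a marker so that total label-to-marker distance is minimised, with the gap between the best and second-best assignment as a crude confidence measure – can be sketched by brute force. The paper solves the problem efficiently at scale and uses a more refined sensitivity analysis; this exhaustive version is only viable for a handful of labels.

```python
from itertools import permutations
import math

def match_labels_to_markers(labels, markers):
    """Brute-force minimum-total-distance assignment: returns, for each
    label index, the index of its matched marker, plus the total cost."""
    best_cost, best = float("inf"), None
    for perm in permutations(range(len(markers))):
        cost = sum(math.dist(labels[i], markers[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best = cost, perm
    return list(best), best_cost

def confidence(labels, markers):
    """Gap between the best and second-best assignment: a simple
    stand-in for the sensitivity analysis described in the paper."""
    costs = sorted(sum(math.dist(labels[i], markers[j])
                       for i, j in enumerate(p))
                   for p in permutations(range(len(markers))))
    return costs[1] - costs[0] if len(costs) > 1 else float("inf")
```

A small gap flags an ambiguous region of the map where user effort is best spent.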

van Dijk, T.C., van Goethem, A., Haunert, J.H., Meulemans, W., Speckmann, B.: Map Schematization with Circular Arcs. Geographic Information Science. LNCS 8728, pp. 1–17 (2014).
We present an algorithm to compute schematic maps with circular arcs. Our algorithm iteratively replaces two consecutive arcs with a single arc to reduce the complexity of the output map and thus to increase its level of abstraction. Our main contribution is a method for replacing arcs that meet at high-degree vertices. This allows us to greatly reduce the output complexity, even for dense networks. We experimentally evaluate the effectiveness of our algorithm in three scenarios: territorial outlines, road networks, and metro maps. For the latter, we combine our approach with an algorithm to more evenly distribute stations. Our experiments show that our algorithm produces high-quality results for territorial outlines and metro maps. However, the lack of caricature (exaggeration of typical features) makes it less useful for road networks.

van Dijk, T.C., van Goethem, A., Haunert, J.H., Meulemans, W., Speckmann, B.: An Automated Method for CircularArc Metro Maps. Schematic Mapping. (2014).

Nederlof, J., van Rooij, J.M.M., van Dijk, T.C.: Inclusion/Exclusion Meets Measure and Conquer. Algorithmica. 69, 685–740 (2014).
Inclusion/exclusion and measure and conquer are two central techniques from the field of exact exponential-time algorithms that recently received a lot of attention. In this paper, we show that both techniques can be used in a single algorithm. This is done by looking at the principle of inclusion/exclusion as a branching rule. This inclusion/exclusion-based branching rule can be combined in a branch-and-reduce algorithm with traditional branching rules and reduction rules. The resulting algorithms can be analysed using measure and conquer, allowing us to obtain good upper bounds on their running times.

Bekos, M.A., van Dijk, T.C., Fink, M., Kindermann, P., Kobourov, S.G., Pupyrev, S., Spoerhase, J., Wolff, A.: Improved Approximation Algorithms for Box Contact Representations. Proc. 22nd Annual European Symposium on Algorithms (ESA'14). pp. 87–99 (2014).
We study the following geometric representation problem: Given a graph whose vertices correspond to axis-aligned rectangles with fixed dimensions, arrange the rectangles without overlaps in the plane such that two rectangles touch if the graph contains an edge between them. This problem is called Contact Representation of Word Networks (Crown) since it formalizes the geometric problem behind drawing word clouds in which semantically related words are close to each other. Crown is known to be NP-hard, and there are approximation algorithms for certain graph classes for the optimization version, MaxCrown, in which realizing each desired adjacency yields a certain profit. We show that the problem is APX-complete on bipartite graphs of bounded maximum degree. We present the first O(1)-approximation algorithm for the general case, when the input is a complete weighted graph, and for the bipartite case. Since the subgraph of realized adjacencies is necessarily planar, we consider several planar graph classes (stars, trees, outerplanar, and planar graphs), improving upon the known results. For some graph classes, we also describe improvements in the unweighted case, where each adjacency yields the same profit.

van Dijk, T.C., van Goethem, A., Haunert, J.H., Meulemans, W., Speckmann, B.: Accentuating Focus Maps via Partial Schematization. Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. pp. 428–431. ACM, Orlando, Florida (2013).
We present an algorithm for schematized focus maps. Focus maps integrate a highly detailed, enlarged focus region continuously into a given base map. Recent methods integrate both with such low distortion that the focus region becomes hard to identify. We combine focus maps with partial schematization to display distortion of the context and to emphasize the focus region. Schematization visually conveys geographical inaccuracy, while not increasing map complexity. We extend the focus-map algorithm to incorporate geometric proximity relationships and show how to combine focus maps with schematization in order to cater to different use cases.

van Dijk, T.C., Haunert, J.H.: A Probabilistic Model for Road Selection in Mobile Maps. In: Liang, S.H.L., Wang, X., and Claramunt, C. (eds.) Web and Wireless Geographical Information Systems. pp. 214–222. Springer Berlin Heidelberg (2013).
Mobile devices provide an interesting context for map drawing. This paper presents a novel roadselection algorithm based on PageRank, the algorithm famously used by Google to rank web pages by importance. Underlying the PageRank calculation is a probabilistic model of user behavior. We provide suitable generalizations of this model to road networks. Our implementation of the proposed algorithm handles a sizable map in approximately a tenth of a second on a desktop PC. Therefore, our methods should be feasible on modern mobile devices.
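For illustration, a plain power-iteration PageRank over an adjacency dict. The paper's contribution is adapting the underlying random-surfer model to road networks; this generic sketch does not include that adaptation.

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank on a graph given as {node: [neighbours]}.
    Dangling nodes spread their rank evenly over all nodes."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in out:
                    new[v] += damping * rank[u] / len(out)
        rank = new
    return rank
```

In a road-selection setting, the nodes would be road segments and the scores would drive which segments are kept on screen.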

van Dijk, T.C., Fleszar, K., Haunert, J.H., Spoerhase, J.: Road Segment Selection with Strokes and Stability. Proceedings of the 1st ACM SIGSPATIAL International Workshop on MapInteraction. pp. 72–77. ACM, Orlando, Florida (2013).
In order to visualize a road network without producing visual clutter, a subset of all road segments needs to be selected. Many algorithms for road segment selection are based on a relevance score for edges in a network (for example, betweenness centrality) and proceed by taking a greedy selection based on these weights. This can give unsatisfactory results. In order to improve readability, we introduce a stroke-based constraint and provide an efficient dynamic program that makes an optimal selection given this constraint. Next, we consider the computation of animated road selections from changing edge weights (for example, a focus area that follows a moving user). Handling each time step of the animation individually can lead to distracting flickering effects. Here we introduce an optimization objective to achieve a more stable selection and provide a polynomial-time algorithm for solving it. While each is separately solvable in polynomial time, we show that the combination of the stroke constraints and stability optimization is NP-hard.