Speakers

The participating students and abstracts.


Multiclass pattern recognition is concerned with accurately mapping an input feature space to an output space of more than two classes. An example of a multiclass classification problem in medical diagnosis would be the distinction between different types of tumors. Binary classifiers, i.e. systems that distinguish between two classes, solve a special case of the multiclass classification problem, and most research is done on these binary classifiers. Within the medical diagnosis example, a binary classifier would only be able to differentiate between two types of tumors. How to extend these classifiers to a system that discriminates between more than two classes is still an ongoing research issue.

In general, methods that consider more than two classes at once are computationally more expensive or perform worse than binary classifiers. An alternative solution to the multiclass problem is to combine several binary classifiers into a system that can differentiate multiple classes.

In our research we compare schemes for combining binary classifiers into one multiclass classifier. As the binary classifier we use the support vector machine (SVM). The SVMs are combined into multiclass classifiers using the following schemes: ‘one-against-one’ (OAO), ‘one-against-all’ (OAA), ‘half-against-half’ (HAH), and ‘directed acyclic graph’ (DAG). We compare these schemes on classification accuracy, interpretability, and computational and spatial complexity. We use two real-world datasets of comparable size with the same dimensionality. The patterns in one of the sets are distributed uniformly over the classes, whereas this distribution is strongly skewed towards one class in the other dataset. Our research is based on both results reported in the literature and our own implementation of the mentioned schemes.
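As an illustration of the two most widely used schemes, the sketch below combines binary SVMs into multiclass classifiers with scikit-learn’s one-against-one and one-against-all wrappers. The dataset, kernel, and parameters are placeholders rather than those used in our experiments, and HAH and DAG have no off-the-shelf wrapper here.

```python
# Minimal sketch: combining binary SVMs into multiclass classifiers using the
# one-against-one (OAO) and one-against-all (OAA) schemes with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                # stand-in multiclass dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

schemes = {
    "OAO": OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")),
    "OAA": OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
}
for name, clf in schemes.items():
    clf.fit(X_tr, y_tr)                            # one binary SVM per class pair / per class
    print(name, "accuracy:", clf.score(X_te, y_te))
```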

We expect that all multiclass classifiers perform better on the data that are distributed uniformly over the classes. Furthermore, we think it likely that training OAA has a higher time complexity than OAO. We also expect that the classifiers following the HAH and DAG schemes have a lower space, but higher time complexity than those trained according to the OAA and OAO schemes.

Object detection is one of the most prominent applications in computer vision. Keypoint detection is the initial step in many object detection algorithms. Challenges for most existing methods include the wide range of shapes and textures, sensitivity to illumination, the presence of noise, and object repetitions. This is the reason for extensive ongoing research in this field.

In our research we compare two recently developed methods in object detection and pattern recognition: Combination Of Shifted FIlter REsponses (COSFIRE) filters and deep learning algorithms, specifically convolutional neural networks. Both methods are based on the mechanisms of neurons in the human visual cortex. The comparison considers experiments on several pattern recognition tasks, including traffic sign and handwritten character recognition.
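As a rough illustration of the deep learning side of the comparison, the sketch below defines a small convolutional network of the kind typically applied to handwritten character recognition on 28x28 grayscale images; the architecture is a generic assumption, not the network evaluated in this work.

```python
# Illustrative sketch (not the authors' model): a small CNN for
# handwritten character recognition on 28x28 grayscale inputs.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 7, 7) for 28x28 input
        return self.classifier(x.flatten(1))  # class scores

logits = SmallCNN()(torch.randn(4, 1, 28, 28))   # dummy batch of 4 images
print(logits.shape)                              # torch.Size([4, 10])
```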

The goal of this paper is to determine which method is more suitable for which problem, or whether one method is clearly better than the other. As there is a broad range of keypoint detection and pattern recognition problems, we will look at a subset of problems for which the methods in question are most frequently used, such as handwritten character and traffic sign recognition. We will not only look at the performance of the methods, but also study their inner workings.

In recent years, optical-flow-aided position measurement solutions have been used in both commercial and academic applications. Movement in camera images is detected and converted to real-world position change. These systems are used for navigating unmanned aerial vehicles (UAVs) in GPS-deprived environments. Multiple approaches have been suggested, ranging from using an optical mouse sensor to using a stereo camera setup. Our research focuses on single-camera solutions.

Previous research has used a variety of optical flow algorithms for single-camera solutions. This paper presents a comparison of three algorithms to check whether cherry-picking algorithms can enhance flow estimation quality or reduce CPU time usage. This paper also provides insight into the general theory behind using single-camera optical flow for UAV navigation. The compared algorithms are the Lucas–Kanade method, Gunnar Farneback’s algorithm, and block matching. A testing framework and custom indoor and outdoor datasets were created to measure the algorithms’ flow estimation quality and computation time.
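For illustration, the sketch below shows how two of the compared algorithms, Farneback’s dense flow and the pyramidal Lucas–Kanade method, are typically invoked through OpenCV; the frame files and parameter values are hypothetical and need not match the implementation used in this paper.

```python
# Illustrative sketch: frame-to-frame optical flow with two of the compared
# algorithms via OpenCV; file names and parameters are placeholders.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense flow (Farneback): pyr_scale, levels, winsize, iterations, poly_n,
# poly_sigma, flags -> one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mean_motion = flow.reshape(-1, 2).mean(axis=0)           # crude ego-motion estimate

# Sparse flow (pyramidal Lucas-Kanade) on corners found in the previous frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
displacement = (nxt - pts)[status.ravel() == 1].reshape(-1, 2)
print(mean_motion, displacement.mean(axis=0))
```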

Results show that cherry-picking pays off. Distinguishable differences were found between the algorithms’ performance in both computation time and quality. Moreover, different algorithms came out on top for different test sets in terms of estimation quality.

In machine learning, learning algorithms are often trained by minimizing a given cost function using function optimization algorithms, such as gradient descent. There are two leading classes of gradient descent: batch gradient descent (BGD), which considers every example at each time step, and stochastic gradient descent (SGD), which only looks at one example at a time.

Most optimization algorithms, including BGD and SGD, require multiple “hyperparameters” that are not immediately obvious from the data. In practice, this requires researchers to manually tune these hyperparameters depending on the data set and desired results.
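A minimal sketch of the two classes on a least-squares problem is given below; the learning rate and number of epochs are exactly the kind of hyperparameters referred to above, and the values used here are arbitrary.

```python
# Sketch: batch vs. stochastic gradient descent for least-squares regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)        # synthetic data

def bgd(X, y, lr=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)          # gradient over the whole batch
        w -= lr * grad
    return w

def sgd(X, y, lr=0.01, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):          # one example at a time
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

print(bgd(X, y), sgd(X, y))                        # both approach w_true
```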

Recent proposals have suggested modifications to the BGD and SGD algorithms aiming to improve performance and reduce the dependence upon hyperparameters.

In this paper, we investigate the waypoint averaging and vSGD algorithms in addition to standard BGD and standard SGD. We evaluate each of these algorithms with respect to its convergence properties, number of hyperparameters, and resulting final cost values.

In image processing, connected filters are used to remove ‘unwanted’ details without distorting the ‘wanted’ structures. This is done by keeping or discarding ‘connected components’ based on some attribute of the component. A connected component is a connected subset of an image that cannot be extended whilst remaining connected. Whether a set is connected or not depends on a ‘connectivity’ that is chosen. A restriction is that the connected components partition the image; no point can lie in two different connected components.

Hyperconnectivity is an extension of normal connectivity that relaxes this condition. A point may simultaneously lie in two different hyperconnected components. Hyperconnected filters are then defined analogously to connected filters, either keeping or discarding hyperconnected components. The relaxed condition better represents the continuities present in reality. One hopes this makes the resulting filter more robust.

We will look at different methods of using hyperconnected filters for segmentation and detail extraction. Because this is an extension of the concept of connectivity, we will be using connected filters as our baseline in comparisons. Meanwhile, we will also pay attention to the theoretical basis and general properties of hyperconnected systems.
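For reference, a very simple connected filter of the kind used as our baseline can be sketched as follows, assuming area as the component attribute and 4-connectivity as the chosen connectivity.

```python
# Sketch: an area filter — keep connected components of a binary image whose
# area exceeds a threshold, discard the rest.
import numpy as np
from scipy import ndimage

image = np.zeros((8, 8), dtype=bool)
image[1:3, 1:3] = True                   # small component (area 4)
image[4:8, 3:8] = True                   # large component (area 20)

# 4-connectivity: pixels connect to their horizontal/vertical neighbours only.
structure = ndimage.generate_binary_structure(2, 1)
labels, n = ndimage.label(image, structure=structure)
areas = ndimage.sum(image, labels, index=np.arange(1, n + 1))

min_area = 10
keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
print(keep.astype(int))                  # only the large component survives
```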

With the advent of social networks such as Facebook and Twitter, graph data has taken on an even more important role in the world. Visualizing graph data is important as it may lead to new insights into the data, e.g. hidden relations or bottlenecks in a system. At the same time, it can be a difficult task, especially when the graph is of substantial size.

Naive methods of graph drawing, e.g. random placement of vertices, typically produce poor results in which edges have many crossings and nodes are placed too close to each other. This is not aesthetically pleasing and thus may not produce the prospected insights. Force-directed graph drawing may provide a solution to this problem. By applying constraints to the locations of vertices, better results can be obtained. These constraints include dynamic forces inspired by physics, such as gravity, spring forces, and electrostatic forces. The weights of these forces can be influenced by vertex measures, such as a vertex’s degree, its closeness to other vertices, or more complex measures.
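As a minimal illustration of this idea, the sketch below computes a spring (Fruchterman–Reingold) layout with networkx, in which edge weights influence the attractive forces and vertex degree serves as a simple vertex measure; the graph and parameter values are only examples, not our proposed measures.

```python
# Sketch: a force-directed (spring) layout using networkx's
# Fruchterman-Reingold implementation on a weighted example graph.
import networkx as nx

G = nx.les_miserables_graph()                 # example weighted graph
pos = nx.spring_layout(G, weight="weight", k=0.3, iterations=100, seed=42)

# A simple vertex measure: degree, which could scale node size or the forces.
degree = dict(G.degree())
print(sorted(degree, key=degree.get, reverse=True)[:5])   # five highest-degree vertices
print(pos["Valjean"])                                      # 2D coordinates of one vertex
```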

In this paper, we compare several approaches to creating a force-directed graph layout. We compare the approaches in terms of running time, compactness, and the number of edge crossings, and compare the various results in terms of shape and comprehensibility. These comparisons will mostly be qualitative. Moreover, we try to adapt the approaches so that they are also applicable to weighted graphs, and propose a new vertex measure and edge weight inspired by the methods described.

This paper examines the use of graph theory in the natural sciences. Several studies that have used graph theory are highlighted, and this paper presents an in-depth view of how these studies have applied graph theory within their application domains. We found that the application of graph theory within biology is varied. Specifically, we found studies that use graph theory to examine the characteristics of termite nests in terms of connectivity and topology; landscape connectivity, where graph theory was used to determine the connectivity of animal habitats within the spatial domain; and food webs, where graph theory was used to examine the trophic interactions of species. Lastly, we found that graph theory has been used to examine protein structure, including protein folding, identification, etc.

Research has shown that comorbidity, a phenomenon where a patient exhibits multiple mental disorders simultaneously, is very common. In the scientific field of psychopathology, the scientific study of mental disorders, the causes of comorbidity are currently a hot research topic. In order to discover the causes of comorbidity we may use psychological features.

Psychological features, or psychological variables, are measures of some state of mind at a given moment. Examples include cheerfulness, irritability, and fatigue, but also the degree of worrying. These features can be modeled using a graph, in which the features are represented by nodes and the causal relationships between the features are represented by edges. The benefit of modeling the features with a graph is that many algorithms exist to perform analysis on graphs, giving way to a large variety of possible analysis techniques to be used on this data.

An example of such an analysis technique is comparing two graphs. Being able to compare these graphs is valuable as it provides insight into the similarity of two people. In this paper we provide an overview of existing graph comparison methods, while also proposing two new techniques to determine the similarity between two graphs. Our proposed techniques are based on methods that determine node similarity by working under the assumption that “two nodes are similar if their neighbours are similar”.
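The sketch below is not one of our proposed techniques; it merely illustrates the quoted assumption with a SimRank-style similarity on a small, hypothetical feature graph.

```python
# Illustrative sketch: neighbour-based node similarity ("two nodes are similar
# if their neighbours are similar") on a hypothetical feature graph.
import networkx as nx

# Nodes are psychological variables; edges stand in for causal relationships.
G = nx.Graph([("cheerfulness", "fatigue"), ("fatigue", "worrying"),
              ("worrying", "irritability"), ("irritability", "cheerfulness")])

sim = nx.simrank_similarity(G, importance_factor=0.8)
print(sim["cheerfulness"]["worrying"])   # similarity of two non-adjacent features
```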

Nowadays, software may consist of millions of lines of source code, which makes maintaining the code a difficult, expensive and time-consuming process. This is why the need to gain insight into and understand these huge structures is higher than ever before. A great way of tackling this problem is through visualization. By gaining insight, we can answer questions such as which files should be modified and what the impact of these modifications is.

In order to visualize these structures, a measurement of the source code has to be defined. This paper aims to give a global overview of the different software structures that exist, and how to measure them. Acquiring these structures alone is not very helpful, since gaining insight from a huge amount of numbers is not intuitive for humans; therefore, visualization is necessary. In the visualization part, we zoom in on call dependency relations. Some techniques, and their corresponding tools, for visualizing these relations are analyzed.
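As a small illustration of the kind of measurement involved, the sketch below extracts a rough call dependency relation from Python source using the standard-library ast module; the source snippet is hypothetical, and real tools handle far more cases (methods, imports, dynamic calls).

```python
# Sketch: extracting a rough call-dependency relation from Python source.
import ast

source = """
def parse(text): return text.split()
def count(text): return len(parse(text))
def report(text): print(count(text))
"""

tree = ast.parse(source)
calls = {}                                    # caller -> set of called names
for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    called = {c.func.id for c in ast.walk(func)
              if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
    calls[func.name] = called
print(calls)   # e.g. {'count': {'len', 'parse'}, 'report': {'print', 'count'}, ...}
```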

Finally, these tools will be used on test data with the purpose of comparing the different visualization techniques. We will present the drawbacks as well as the benefits of each technique, and then evaluate the tools used with respect to computational speed and functionality. With this we aim to guide future users in selecting the most suitable technique, and its corresponding tool, for specific situations.

Smart spaces can make autonomous decisions in order to minimize the interactions the user has to make with their environment by adapting the environment to the user’s needs and preferences. By controlling certain parts of the environment that people live in, smart spaces try to optimize the comfort and safety of the inhabitants, while at the same time trying to minimize costs like power consumption and user interaction. This way, the productivity of the users will be increased, because they can focus on tasks that are more important to them.

In our research, we discuss the expectations users have of smart spaces. We then look at several methods and techniques that have been devised to automate control in smart spaces. These methods range from simple rule-based systems to advanced planners and machine learning. The user expectations are used to compare the different techniques. We also evaluate the pros and cons of the different techniques regarding the production, integration and interactivity of these systems. Finally, we look at some notable implementations of smart spaces to see how they behave in practice, and to provide insight into the current state of this field.
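To make the simplest end of that spectrum concrete, the sketch below shows a toy rule-based controller that maps a sensed state to actions; the state fields, thresholds, and action names are hypothetical.

```python
# Sketch: a toy rule-based controller for a smart space.
from dataclasses import dataclass

@dataclass
class State:
    occupancy: bool        # is anyone in the room?
    temperature_c: float   # current room temperature
    lux: float             # current light level

def decide(state: State) -> list:
    actions = []
    if state.occupancy and state.lux < 150:
        actions.append("lights_on")
    if not state.occupancy:
        actions.append("lights_off")
    if state.occupancy and state.temperature_c < 19.0:
        actions.append("heating_on")
    return actions

print(decide(State(occupancy=True, temperature_c=18.0, lux=90)))
# ['lights_on', 'heating_on']
```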

Cloud computing services offer several advantages when applied to e-learning systems, for example increased cost savings and improved efficiency and convenience of educational services. Furthermore, e-learning services can be made smarter and more efficient using context-aware technologies, since context-aware services are based on the user’s behavior. To implement these technologies in current e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is only situation-aware, into an environment that also understands context. The rationale behind this paper is to study existing approaches, or the lack thereof, regarding the implementation of cloud computing services in smart learning systems. This is done by surveying the state of the art in the area and illustrating the requirements of a context-aware smart learning system with regard to several important factors: context-awareness, security, ontology, multi-device support, and flexibility. This paper aims to help investigate the work that has been done on cloud computing services in smart learning systems and to show possible requirements for future smart learning systems.

This paper compares four smart city architectures (Khan, Anjum, and Kiani 2013; Villanueva et al. 2013; Ye et al. 2013; Girtelschmid et al. 2013) by analyzing the design principles, technologies, and tools used in each architecture. The paper compares these architectures to show the strengths of these characteristics for each architecture, leading to a conclusion that offers a brief review of the material discussed.

In city landscapes, investment in Information and Communication Technology (ICT) for enhanced governance is becoming ever more present in urban environments. These technologies provide the basis of sustainability for the smart cities of the future. A smart city can be seen as a set of advanced services to create citizen-friendly, efficient and sustainable cities. The ICT tools for a smart city are often applied in different domains such as transport, energy, and environment management. The systems controlling these smart cities have to deal with scalability, heterogeneity, geolocation information, privacy issues, and the large amounts of continuously incoming data from the city environment. This is where big data comes into play. Zikopoulos, Eaton, et al. (2011) defined big data as a large amount of data that increases exponentially over time.