Many pattern recognition problems deal with high-dimensional data. For instance, measurements from biomolecular technologies often contain more than a thousand features. Classification using non-statistical distance measures, such as the Euclidean distance, is often not accurate enough: datasets frequently contain many features that are of little relevance to classification, and data may combine features from different domains, which makes a non-statistical distance an arbitrary measure. By using distance measures based on statistical regularities in the data, classification can be significantly improved. In this project we will compare two recent methods for deriving distance metrics for use in distance-based classifiers: Suvrel and GML. Both methods construct a quadratic distance measure that weights individual dimensions and pairs of dimensions. This distance measure can be used to scale and rotate the feature space so that classification becomes more accurate. That is, both methods find metric tensors that minimize a cost function which promotes small intraclass distances and penalizes small interclass distances.
In GML the distance metric is obtained by minimizing a cost function using gradient descent. The Suvrel method specifies an algorithm to explicitly obtain a distance metric, given the observed statistical properties of the data.
We will discuss and judge the distance metrics obtained by both methods. We will compare which features receive the highest weights, as well as classification results, robustness, and overall usability. To this end, both methods will be implemented and used in combination with k-nearest-neighbour classification. We will evaluate both methods on several datasets from the open UCI machine learning repository.
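The exact construction of the Suvrel and GML metrics is not reproduced here, but the role a metric tensor plays in a distance-based classifier can be sketched as follows. This is a minimal NumPy illustration, assuming a symmetric positive semi-definite matrix M supplied by some metric-learning step; with M equal to the identity it reduces to ordinary Euclidean k-NN:

```python
import numpy as np

def quadratic_distance(x, y, M):
    """Distance induced by a metric tensor M: d(x, y)^2 = (x - y)^T M (x - y)."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def knn_predict(X_train, y_train, x, M, k=3):
    """Classify x by majority vote among its k nearest training points under M."""
    dists = [quadratic_distance(x, xi, M) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Scaling a feature's weight in M up or down changes how strongly that feature influences which neighbours are "near", which is exactly the lever both Suvrel and GML use.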
It has been 15 years since the Agile Manifesto was first introduced, and many developers shift between the traditional, plan-driven approach of software architecture and flexible agile development. Several comparisons and reviews of these two methods have been published in recent years, showing that both have advantages and disadvantages.
The complexity of a project, in terms of the number of stakeholders, the number of different systems involved and the size of the working team, is a major factor in developers’ choice between these two methods. Combining the advantages of the two methods, rather than focusing on one, would let developers benefit from both and broaden their perspectives. The question we will try to answer is how to combine these two methods.
We will perform a systematic review of current approaches to combining software architecture and agile development methods. Based on recent research papers, we will then describe how to combine the two. We will also discuss the advantages of the current methods of combining software architecture and agile development.
Nowadays, there is no doubt that social networks are among the most visited websites on the Internet. Most of them have seen exponential growth in their number of users and, as a result, have attracted the interest of many hackers and scammers, who want to steal and exploit the personal data stored in social network databases in order to generate profit.
Our paper will focus on some popular forms of attack, emphasizing their usability, feasibility and the consequences they have for the victims. Concretely, we will describe how social networks like Facebook, LinkedIn and Foursquare, as well as popular websites such as those used for Internet banking, can be used for stealing information. We will show how those web services can be vulnerable to attack techniques like phishing, de-anonymization, location cheating and automated identity theft. Moreover, we will discuss existing control and prevention methods, highlighting if and how people can defend themselves against such cyber-attacks.
Furthermore, our research will review the improvements made to those techniques in the past years, presenting the latest dangers and outcomes in this area. In addition, we will describe some possible attack scenarios based on combinations of the techniques mentioned above. Thus, we will create a general view of more complex and complete methods used for cyber-attacks, highlighting the steps involved and their impact on people and on social networks.
Time for coffee
Over the last few years there has been an increase in the power consumption of Information and Communication Technology (ICT) facilities. A large fraction of this consumption can be ascribed to user devices such as personal computers (PCs) and displays. Previous research has shown that a lot of energy is wasted due to inefficient use of these devices: people tend to leave them on even when they are not actively being used. To counter this waste of energy, several systems and tools have been developed. These systems use different techniques and sensors to reduce wasted energy, with varied results.
Our research covers PoliSave, E-Net-Manager and Gicomp, systems that are deployed in large environments such as office spaces. We consider these systems from the viewpoint of a user who does not want to experience any negative consequences from using them. We also look at the systems from a management perspective, since management wants to reduce energy use and costs.
We will discuss and judge these methods based on their intrusiveness for users, the resources the system requires and the results obtained. To do this, we will look into the complexity of each system, the sensors used, the interfaces used to communicate with the system and, finally, the reduction in energy consumption. Furthermore, we will discuss the future of this field by looking into experimental research that is currently being conducted.
The power grid is an enormous network for transporting energy from suppliers to customers. The foundations of the power grid were laid in the 1960s; it is a large, centralized grid with few energy sources. Increasing environmental awareness and the depletion of fossil fuels require a transition to renewable energy sources, such as solar and wind energy. To reduce transmission losses, the network topology must be changed into a network of smaller transmission links. Because the energy production of renewable sources is not constant, the power grid must transform into a smart grid. In such a smart grid, energy usage and production can be monitored more efficiently, and energy flow is no longer unidirectional but bidirectional, as customers can generate energy themselves.
Recent research discusses challenges and new techniques for the power grid. For instance, plug-in hybrid electric vehicles (PHEVs) will consume more and more energy from the power grid, mostly at night. Such new, high loads on the power grid introduce difficulties that were not anticipated when the current grid was designed.
We will select and discuss some of the challenges and possible implementations with the most impact, and try to identify which new techniques for the power grid are able to counteract these challenges. We will focus on information and communication technology rather than on physical techniques.
Sustainability is a goal that many individuals and companies strive to achieve to reduce their ecological footprint and to support economic and social development. One of the ways to support sustainability is to look at ways in which households and offices use energy and other resources, and to determine if and how resources can be saved in these environments and feed this information back to users.
This is the main purpose of so-called resource management systems. In the context of smart homes and offices, resource management refers to the efficient and effective use of often limited resources, such as energy, water and heat, to fulfil user tasks. Resource management has been found to be an important tool in improving sustainability and stimulating sustainable behaviour by users.
Classical examples of resource management systems include in-home displays that allow users to view real-time energy or water consumption. The idea behind these kinds of interfaces is that users will become aware of their excessive consumption or wasteful behaviour and will use this information to make their environment and behaviour more sustainable.
However, practice shows that the way in which current resource management systems operate is not an optimal strategy for stimulating sustainable behaviour. In this paper, we review various studies that discuss case studies of resource management systems to uncover limitations in the design of these systems.
We will take these limitations as a starting point for our research and formulate a set of design guidelines that resource management systems should meet in order to maximise their effectiveness in improving sustainability and stimulating sustainable behaviour by users.
In our presentation, we will explain the basis of resource management systems, the current and emerging resource systems and grids, the limitations of existing resource management systems, and will finally conclude with a detailed list of guidelines that may govern the design of such systems.
Saving energy in buildings has become, and remains, a major issue for the planet. Over the last decade, eco-feedback systems have been developed to provide consumers with information about their electricity consumption. Research has shown that the type of information displayed and the techniques used to present it have an impact on users' energy savings. This raises the question of how to display the information to the consumer in a comprehensive, attractive and non-intrusive way.
In this paper we compare and discuss various methods of visualizing energy usage for consumers. Some design components of user interfaces, such as historical comparisons and the presentation of costs, are more likely to help consumers understand their energy usage and change their behavior. We will extract the most effective methods from research and surveys.
The comparison of the different methods is based on the reduction in energy usage achieved by consumers using such eco-feedback systems, and on whether consumers keep using these systems over longer periods of time. Additionally, the results of interviews with users of such eco-feedback systems will be taken into account. We expect to find the most effective methods of visualizing energy consumption data for future eco-feedback systems.
Time for some food
Join me in this session to see how to use the Go language to program small devices such as the Raspberry Pi. I’ll discuss why it may be a good choice for your IoT projects, and on what hardware Go will run (and why on some it won’t). As not everyone may be familiar with Go, I’ll give a short introduction to the language. Next, I will show Go code that reads sensor data, communicates over the network using MQTT, and controls a small water pump. Last but not least, you will see a live demo of how I managed to make plants survive the harsh environment of my home! This talk focuses on how to actually build something like this, so it will include live coding and some DIY components.
Barycentric coordinates are a system in which a point in a simplex (a triangle, tetrahedron, etc.) is specified as the center of mass of weights attached to the vertices of the simplex. Barycentric coordinates are uniquely defined on triangles, but not on polygons with more vertices, such as quadrilaterals or pentagons. They are particularly useful for interpolation over simplices and are therefore widely used in computer graphics. Generalised barycentric coordinates make it possible to use barycentric coordinates on such polygons. Several techniques define barycentric coordinates over polygons, but they do not all lead to useful results in every case. As with regular barycentric coordinates, the idea is to calculate a weight for each vertex of the polygon with respect to a point on the surface of the polygon.
In this paper, we compare three kinds of generalised barycentric coordinates: Wachspress coordinates, mean-value coordinates and discrete harmonic coordinates. All three are members of a one-parameter family called three-point coordinates.
We will compare the outcomes of the techniques by looking at contour plots that describe the influence of individual barycentric coordinates on the surface of the polygon. Here, two types of polygons are of interest: convex and non-convex polygons. In addition to this, we will compare the effects of color interpolation over the polygon using these different types of coordinates.
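To give a flavour of these constructions, here is a minimal NumPy sketch of mean-value coordinates, one member of the three-point family. It assumes the polygon vertices are given counter-clockwise and the query point lies strictly inside the polygon; the resulting weights sum to one and reproduce the point as a weighted average of the vertices:

```python
import numpy as np

def mean_value_coordinates(poly, p):
    """Mean-value coordinates of point p with respect to the vertices of
    `poly` (an n x 2 array, counter-clockwise). Assumes p lies strictly
    inside the polygon."""
    v = poly - p                          # vectors from p to each vertex
    r = np.linalg.norm(v, axis=1)         # distances from p to each vertex
    n = len(poly)
    nxt = np.roll(np.arange(n), -1)       # index of the next vertex
    # Signed angle alpha_i spanned at p by vertices i and i+1.
    cross = v[:, 0] * v[nxt, 1] - v[:, 1] * v[nxt, 0]
    dot = (v * v[nxt]).sum(axis=1)
    alpha = np.arctan2(cross, dot)
    # Mean-value weights: (tan(a_{i-1}/2) + tan(a_i/2)) / r_i, normalised.
    w = (np.tan(alpha / 2) + np.tan(np.roll(alpha, 1) / 2)) / r
    return w / w.sum()
```

For the centre of a square all four weights come out equal, and interpolating vertex colours with these weights is exactly the colour interpolation compared in the paper.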
Time for some coffee again.
Nowadays, research into smart and sustainable buildings is attracting more attention, since such buildings conserve energy and improve user comfort. Room occupancy detection plays a vital role here: knowing about room occupancy allows for fine-grained automatic control of heating, ventilation and air conditioning (HVAC) systems.
We review three different approaches to room occupancy detection. The first approach uses only stationary sensors, approximating the occupancy of a room from their data. The second approach uses the sensors in users' smartphones, resulting in an infrastructure-less occupancy detection method. The third is a hybrid approach combining the previous two: smartphones detect beacons that identify the different rooms. All these methods use different machine learning techniques to detect occupancy.
We expect that there is no clear answer as to which method works best in the general case, since some approaches are better suited to certain problems than others. We will therefore discuss the trade-offs between the methods.
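The simplest stationary-sensor variant can be sketched in a few lines. This is a hypothetical illustration, not one of the reviewed systems: given labelled readings from a single sensor (here, invented CO2 values in ppm), learn the cut-off that best separates occupied from empty rooms on the training data:

```python
import numpy as np

def fit_threshold(feature, labels):
    """Pick the cut-off on a single sensor feature that best separates
    occupied (1) from empty (0) rooms on labelled training data."""
    best_t, best_acc = feature[0], 0.0
    for t in np.sort(feature):
        acc = np.mean((feature >= t) == labels)   # training accuracy of this cut
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical CO2 readings (ppm) with occupancy labels for six time slots.
co2 = np.array([400., 420., 450., 800., 950., 1100.])
occupied = np.array([0, 0, 0, 1, 1, 1])
threshold = fit_threshold(co2, occupied)
```

Real systems replace this one-feature rule with multi-sensor classifiers, but the trade-off the abstract mentions is already visible: a fixed installation gives clean data, while smartphone-based methods trade accuracy for zero infrastructure.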
In medical practice, it can be important to analyze a patient's skin to screen for diseases and certain forms of cancer. Skin is usually covered with hair, which can make it difficult to analyze properly; an image that shows only the skin is therefore preferred. For doctors and computer programs alike, an image with as little distraction and distortion as possible is needed to deliver the best test results. Digital hair removal tries to remove the hair from an image, preventing distraction and yielding an image of only the skin underneath. To this end, several digital hair removal (DHR) algorithms have been developed, using various techniques and producing different outcomes.
Recently, Koehoorn et al. published a paper that proposes a new method for digital hair removal. They compared several techniques, but only on a few sample images, and their comparison lacks objective criteria determined before the comparison was made.
We are going to compare the method described by Koehoorn et al. with other methods and give an objective comparison of digital hair removal methods. The methods used in this comparison are DullRazor®, VirtualShave, E-shave and the methods described by Abbas et al. and Huang et al. We will compare the methods on the time needed, their accuracy when processing various kinds of hair, and their robustness.
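One common way to make such an accuracy comparison objective is to score each method's detected hair mask against a manually annotated ground-truth mask. The metrics below are standard illustrations, not the specific protocol of any of the papers compared:

```python
import numpy as np

def mask_accuracy(pred, truth):
    """Fraction of pixels on which the predicted hair mask agrees with
    the ground-truth mask (both boolean arrays of the same shape)."""
    return float(np.mean(pred == truth))

def dice(pred, truth):
    """Dice overlap of the detected hair pixels; 1.0 means the masks
    coincide, 0.0 means they share no hair pixels."""
    inter = np.logical_and(pred, truth).sum()
    return float(2 * inter / (pred.sum() + truth.sum()))
```

Because hair pixels are sparse, overlap scores like Dice are usually more informative than raw pixel accuracy, which a method could inflate by predicting "no hair" everywhere.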
Last updated: 06-04-2016
Every year, students of the Computing Science Master's programme at the University of Groningen organize a student colloquium conference, sc@RUG, bringing together Computing Science students and staff.
This year, the 13th edition of the conference will be held on the 7th of April 2016.
sc@RUG is devoted to research in computing science. Previous editions of sc@RUG have featured a broad range of presentations, including surveys, tutorials and case studies, and we hope to extend that range even further this year.
This year's organizing committee consists of the following members: