A survey of power management techniques in mobile computing operating systems <s> I <s> Recent advances in hardware and communication technology have made mobile computing possible. It is expected, [BIV92], that in the near future, tens of millions of users will carry a portable computer with a wireless connection to a worldwide information network. This rapidly expanding technology poses new challenging problems. The mobile computing environment is an environment characterized by frequent disconnections, significant limitations of bandwidth and power, resource restrictions and fast-changing locations. The peculiarities of the new environment make old software systems inadequate and raise new challenging research questions. In this report we attempt to investigate the impact of mobility on today's software systems, report on how research is starting to deal with mobility, and state some problems that remain open. <s> BIB001 </s> A survey of power management techniques in mobile computing operating systems <s> I <s> We consider wireless broadcasting of data as a way of disseminating information to a massive number of users. Organizing and accessing information on wireless communication channels is different from the problem of organizing and accessing data on the disk. We describe two methods, (1, m) Indexing and Distributed Indexing, for organizing and accessing broadcast data. We demonstrate that the proposed algorithms lead to significant improvement of battery life, while retaining a low access time. <s> BIB002 | The motivation behind searching for and exploiting unique organization and access methods stems from the potential savings in power resulting from being able to wait for expected incoming data while in a "doze" mode BIB001 . When the mobile computer is receiving, it (its CPU) must be in the "active" mode. As was argued in section 1.1, the power consumed by the CPU and memory (in active mode) is not trivial. As pointed out by Imielinski et
al., the ratio of power consumption in active mode to that in doze mode is on the order of 5000 for the Hobbit chip from AT&T BIB002 . The question is how to organize the data to be broadcast so that it can be accessed by a mobile receiver in a manner that provides for optimal switching between active and doze modes. Due to the dynamic nature of mobile computing in terms of wireless communication cell migration, changing information content, and the multiplexing of many different files over the same communication channels, the authors propose broadcasting the directory of a broadcast data file along with the data file in the form of an index. Without an index, the client would have to filter (listen to) the entire broadcast in the worst case, or half the broadcast on average. This is undesirable because such filtering requires the mobile unit to be in its active mode, consuming power unnecessarily. Therefore every broadcast (channel) contains all of the information needed--the file and the index. Again, the question is how to organize the data to be broadcast for optimal access by a mobile receiver. For use in evaluating potential methods of organization and access, the authors introduce two parameters: access time and tuning time. The access time is the average time between identification of desired data in the index portion of the broadcast and download of the data in the data file portion of the broadcast. The tuning time is the amount of time spent by a mobile client actually listening to a broadcast channel. The goal of the authors was to find algorithms for allocating the index together with the data on a broadcast channel, and to do so in a manner that struck a balance between the optimal access time algorithm and the optimal tuning time algorithm. The first organization method, called "(1,m) Indexing", broadcasts the entire index m times (equally spaced) during the broadcast of one version of the data file.
In other words, the entire index is broadcast every 1/m fraction of the data file. In the second method, "Distributed Indexing", the (1,m) method is improved upon by eliminating much of the redundancy in the m broadcasts of the data file index. Their key observation is that only certain portions of the index tree are necessary between broadcasts of particular segments of the data file. Specifically, each periodically broadcast index segment need only index the data file segment that follows it. By using fixed-sized "buckets" of data (both index and file data) the two methods allow the mobile client to "doze" for deterministic amounts of time, awaking just prior to the next necessary listening event, what the authors call a "probe". In their evaluations, the authors found that both schemes achieve tuning times that are almost as good as that of an algorithm with optimal tuning time. In terms of access time, both algorithms exhibit a savings that is a respectable compromise between the two extremes of an algorithm with optimal access time and an algorithm with optimal tuning time. The Distributed Indexing scheme is always better than the (1,m) scheme. In examples of practical implementations, the authors again compare their algorithms to the extreme cases of an optimal tuning time algorithm and an optimal access time algorithm. In one example for the (1,m) algorithm, they show a per-query reduction of power by a factor of 120 over the optimal access time algorithm, but a 45% increase in access time. For the same (1,m) example, they found that the power consumption was very similar to that of the optimal tuning time algorithm, but that the access time had improved to 70% of that in the optimal tuning time algorithm. In looking at an example of the distributed indexing scheme, they found a per-query power reduction by a factor of 100 compared to an optimal access time algorithm, while the access time increased by only 10%.
Again, when compared to an optimal tuning time algorithm, they found similar power consumption, but an improved access time of 53% of that for the optimal tuning time algorithm. The authors conclude that by using their distributed indexing scheme in periodic broadcasts, an energy savings of a factor of 100 can be realized. The fruits of this savings can of course be used for other purposes such as extended battery life or extra queries. |
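The (1,m) scheme described above lends itself to a short simulation. The sketch below is an illustration under simplifying assumptions, not the authors' implementation: buckets take unit time, the index is flat, and every bucket is assumed to carry a pointer to the start of the next index copy. It interleaves m copies of the index with the data buckets of one broadcast cycle and replays queries, counting access time (total buckets elapsed) and tuning time (buckets actively listened to).

```python
import random

def build_broadcast(num_data, m, index_len):
    """One broadcast cycle of (1,m) indexing: m full copies of the
    index, equally spaced among the data buckets."""
    seg = num_data // m           # data buckets between index copies
    schedule = []
    for i in range(m):
        schedule.extend([("index", None)] * index_len)
        schedule.extend(("data", i * seg + j) for j in range(seg))
    return schedule

def query(schedule, key, start):
    """Tune in at bucket `start` and fetch data item `key`.
    Returns (access_time, tuning_time) in bucket units."""
    n = len(schedule)
    t, tuning = 1, 1              # initial probe: read one bucket, which
                                  # (by assumption) points to the next index copy
    while schedule[(start + t) % n][0] != "index":
        t += 1                    # doze until the next index copy begins
    while schedule[(start + t) % n][0] == "index":
        t += 1                    # actively read the index
        tuning += 1
    while schedule[(start + t) % n] != ("data", key):
        t += 1                    # doze until the wanted data bucket arrives
    return t + 1, tuning + 1      # +1 for downloading the bucket itself

schedule = build_broadcast(num_data=200, m=4, index_len=10)
samples = [query(schedule, random.randrange(200), random.randrange(len(schedule)))
           for _ in range(1000)]
avg_access = sum(a for a, _ in samples) / len(samples)
avg_tuning = sum(tu for _, tu in samples) / len(samples)
```

Averaged over random tune-in points, the tuning time stays near the cost of a single index read while the access time grows with the cycle length, which is the trade-off the authors balance by choosing m.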
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> INTRODUCTION <s> Discover BIM: A better way to build better buildings. Building Information Modeling (BIM) is a new approach to design, construction, and facility management in which a digital representation of the building process is used to facilitate the exchange and interoperability of information in digital format. BIM is beginning to change the way buildings look, the way they function, and the ways in which they are designed and built. BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers, and Contractors provides an in-depth understanding of BIM technologies, the business and organizational issues associated with its implementation, and the profound advantages that effective use of BIM can provide to all members of a project team. The Handbook: Introduces Building Information Modeling and the technologies that support it Reviews BIM and its related technologies, in particular parametric and object-oriented modeling, its potential benefits, its costs, and needed infrastructure Explains how designing, constructing, and operating buildings with BIM differs from pursuing the same activities in the traditional way using drawings, whether paper or electronic Discusses the present and future influences of BIM on regulatory agencies; legal practice associated with the building industry; and manufacturers of building products Presents a rich set of BIM case studies and describes various BIM tools and technologies Shows how specific disciplines (owners, designers, contractors, and fabricators) can adopt and implement BIM in their companies Explores BIM's current and future impact on industry and society Painting a colorful and thorough picture of the state of the art in Building Information Modeling, the BIM Handbook guides readers to successful implementations, helping them to avoid needless
frustration and costs and take full advantage of this paradigm-shifting approach to build better buildings that consume fewer materials and require less time, labor, and capital resources. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> INTRODUCTION <s> Abstract Building information modelling (BIM) has been a dominant topic in information technology in construction research since this memorable acronym replaced the boring “product modelling in construction” and the academic “conceptual modelling of buildings”. The ideal of having a complete, coherent, true digital representation of buildings has become a goal of scientific research, software development and industrial application. In this paper, the author asks and answers ten key questions about BIM, including what it is, how it will develop, how real the promises and fears of BIM are, and what its impact is. The arguments in the answers are based on an understanding of BIM that considers BIM in the frame of the structure-function-behavior paradigm. As a structure, BIM is a database with many remaining database challenges. The function of BIM is building information management. Building information was managed before the invention of digital computers and is managed today with computers. The goal is efficient support of business processes, such as with database-management systems. BIM behaves as a socio-technical system; it changes institutions, businesses, business models, education, workplaces and careers and is also changed by the environment in which it operates. Game theory and institutional theory provide a good framework to study its adoption. The most important contribution of BIM is not that it is a tool of automation or integration but a tool of further specialization. Specialization is a key to the division of labor, which results in using more knowledge, in higher productivity and in greater creativity.
<s> BIB002 | Building Information Modelling (BIM) has received much attention in academia and in the architecture, engineering and construction sector BIB001 . BIM is defined by the US National BIM Standard as "A digital representation of physical and functional characteristics of a facility and a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition" BIB001 . (Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org. CSAE '18, October 22-24, 2018.) In broader terms, "BIM refers to a combination or a set of technologies and organizational solutions that are expected to increase inter-organizational and disciplinary collaboration in the construction industry and to improve the productivity and quality of the design, construction, and maintenance of buildings" BIB002 . According to a report on construction industry informatization development in China, BIM currently involves many technologies, such as 3D scanning, the Internet of Things (IoT), Geographic Information Systems (GIS) and 3D printing, and is applicable to many aspects of building management. According to Isikdag [3] , the first evolution of BIM was from being a shared warehouse of information to an information management strategy.
BIM is now evolving from being an information management strategy to being a construction management method; sensor networks and the IoT are the technologies needed in this evolution. The information provided by sensors, when integrated with the building information, becomes valuable in transforming the building information into meaningful, full-state information that is more accurate and up-to-date. Therefore, this paper provides a brief review to evaluate and clarify the state of the art in the integration of BIM and sensor technology. A systematic approach was adopted in reviewing related publications. Methods of integrating the two technologies were reviewed. A brief summary is given to highlight research gaps and recommend future research. |
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Only very few constructed facilities today have a complete record of as-built information. Despite the growing use of Building Information Modelling and the improvement in as-built records, several more years will be required before guidelines that require as-built data modelling will be implemented for the majority of constructed facilities, and this will still not address the stock of existing buildings. A technical solution for scanning buildings and compiling Building Information Models is needed. However, this is a multidisciplinary problem, requiring expertise in scanning, computer vision and videogrammetry, machine learning, and parametric object modelling. This paper outlines the technical approach proposed by a consortium of researchers that has gathered to tackle the ambitious goal of automating as-built modelling as far as possible. The top level framework of the proposed solution is presented, and each process, input and output is explained, along with the steps needed to validate them. Preliminary experiments on the earlier stages (i.e. processes) of the framework proposed are conducted and results are shown; the work toward implementation of the remainder is ongoing. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Rehabilitation of the existing building stock is a key measure for reaching the proposed reduction in energy consumption and CO2 emissions in all countries. Building Information Models stand as an optimal solution for works management and decision-making assessment, due to their capacity to coordinate all the information needed for the diagnosis of the building and the planning of the rehabilitation works. 
If these models are generated from laser scanning point clouds automatically textured with thermographic and RGB images, their capabilities are greatly increased, since their visualization, and not only the consultation of their data, increases the information available from the building. Since laser scanning, infrared thermography and photography are techniques that acquire information of the object as-is, the resulting BIM includes information on the real condition of the building at the moment of inspection, consequently helping achieve more efficient planning of the rehabilitation works and enabling the repair of the most severe faults. This paper proposes a methodology for the automatic generation of textured as-built models, starting with data acquisition and continuing with geometric and thermographic data processing. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> This paper explores how gamification can provide the platform for integrating Building Information Modeling (BIM) together with the emergent Internet of Things (IoT). The goal of the research is to foster the creation of a testable and persistent virtual building via gaming technology that combines both BIM and IoT. The author discusses the features of each subject area in brief, and points towards the advantages and challenges of integration via gaming technology. Hospitals are the specific architectural typology discussed in the paper, as hospitals have particular properties which make them good candidates for study.
<s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Background The emerging Building Information Modelling (BIM) in the Architectural, Engineering and Construction (AEC) / Facility Management (FM) industry promotes life cycle processes and a collaborative way of working. Although many efforts have been devoted to professional integrated design/construction/maintenance processes, there are very few practical methods that enable a professional designer to effectively interact and collaborate with end-users/clients on a functional level. Method This paper tries to address the issue via the utilisation of computer game software combined with Building Information Modelling (BIM). Game-engine technology is used due to its intuitive controls, immersive 3D technology and network capabilities that allow for multiple simultaneous users. BIM has been specified due to the growing trend in industry for the adoption of the design method and the 3D nature of the models, which suit a game engine's capabilities. Results The prototype system created in this paper is based around a designer creating a structure using BIM and this being transferred into the game engine automatically through a two-way data transferring channel. This model is then used in the game engine across a number of network-connected client ends to allow end-users to change/add elements to the design, and those changes are synchronized back to the original design conducted by the professional designer. The system has been tested for its robustness and functionality against the development requirements, and the results showed promising potential to support a more collaborative and interactive design process.
Conclusion It was concluded that this process of involving the end user could be very useful in certain circumstances to communicate the end user's requirements to the design team in real time and in an efficient way. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Integration Methods <s> Building Information Modelling (BIM) has become a very important part of the construction industry, and it has not yet reached its full potential, but the use of BIM is not just limited to the construction industry. The aim of BIM is to provide a complete solution for the life cycle of the built environment from the design stage to construction and then to operation. One of the biggest challenges faced by facility managers is to manage the operation of the infrastructure sustainably; however, this can be achieved through the installation of a Building Management System (BMS). Currently, the use of BIM in facilities management is limited because it does not offer real-time building data integration, which is vital for infrastructure operation. This chapter investigates the integration of real-time data from the BMS system into a BIM model, which would aid facility managers in interacting with the real-world environment inside the BIM model. We present the use of web socket functionality to transmit and receive data in real-time over the Internet in a 3D game environment to provide a user-friendly system for the facility managers to help them operate their infrastructure more effectively. This novel and interactive approach would provide rich information about the built environment, which would not have been possible without the integration of BMS with BIM.
<s> BIB005 | This part discusses how BIM can be integrated with sensor technology, covering three subthemes: what kind of sensor should be chosen; how the sensors should be arranged and distributed in the building; and how to integrate BIM with the data collected from sensors, which includes data processing, analysis and presentation technology. The first two subthemes are introduced in different application studies that are mainly focused on information integration technology. Brilakis et al. BIB001 developed an automated algorithm for generating parametric BIM using data acquired by LiDAR (Light Detection and Ranging) or photogrammetry. The algorithm established a classification of building material prototypes, shapes and relationships to each other. The algorithm then recognizes, from the classification, the exact element that fits the spatial and visual descriptions. Modelers are only responsible for model checking and special elements. On this basis, Lagüela et al. BIB002 put forward an automated method for generating textured models. Xiong et al. proposed another integration method, which is to learn different elements' surface features and the contextual relationships between objects, then mark them as walls, ceilings or floors, and finally conduct detailed analysis and locate openings in the surfaces. Further, Isikdag explained in detail the methods for integrating information provided by the IoT and sensors with BIM, in which several technical problems were solved and a complete framework was presented. The above subthemes provide a macro-level, theoretical technical framework, while some other researchers focused more on actual implementation. Rowland BIB003 proposed, through a study of hospitals, that gamification is the future integration direction of BIM and the IoT; this research showed that gamification can enable better interaction between people and buildings. Edwards et al.
BIB004 proposed a prototype that uses a game engine to improve end clients' participation in design work, showing that using a game engine to realize information interaction is convenient. Moreover, Khalid et al. BIB005 conducted a more detailed study evaluating databases and data formats. In this research, two kinds of database were compared: MongoDB, a NoSQL database, and MySQL, a SQL database; two data formats, XML and JSON, were also evaluated. In addition, the Unity 3D game engine proved to be efficient in dealing with scenes that have large numbers of vertices. |
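The XML-versus-JSON evaluation mentioned above can be illustrated with a small sketch. The field names, GUID-style identifier, and values below are hypothetical, not taken from Khalid et al.; the point is only that JSON encodes the same sensor reading with less markup overhead than an element-per-field XML document, which matters for high-frequency web-socket streams into a BIM viewer.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical sensor reading, keyed to a BIM element by an assumed
# IFC-style GUID (illustrative values only).
reading = {
    "element_id": "3cUkl32yn9qRSPvA7VyWdr",
    "sensor": "temp-07",
    "timestamp": "2018-10-22T09:30:00Z",
    "value": "21.4",
    "unit": "degC",
}

# Compact JSON payload, as it might travel over a web socket.
json_payload = json.dumps(reading, separators=(",", ":"))

# Equivalent XML payload with one child element per field.
root = ET.Element("reading")
for key, value in reading.items():
    ET.SubElement(root, key).text = value
xml_payload = ET.tostring(root, encoding="unicode")

# Each XML field repeats its tag name twice, so the XML payload is longer.
print(len(json_payload), len(xml_payload))
```

Either payload round-trips losslessly; the size difference is one practical reason JSON is often preferred for streaming sensor data.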
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Evaluating a building's performance usually requires a high number of sensors, especially if individual rooms are analyzed. This paper introduces a simple and scalable model-based virtual sensor that allows analysis of a building's heat consumption down to room level using mainly simple temperature sensors. The approach is demonstrated with different sensor models for a case study of a building that contains a hybrid HVAC system and uses fossil and renewable energy sources. The results show that, even with simple sensor models, reasonable estimations of rooms' heat consumption are possible and that rooms with high heat consumption are identified. Further, the paper illustrates how virtual sensors for thermal comfort can support the decision making to identify the best ways to optimize building system efficiency while reducing the building monitoring cost. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Opportunities for improving energy efficiency can be recognized in many different ways. Energy benchmarking is a critical step of building retrofit projects because it provides baseline energy information that could help building stakeholders identify energy performance, understand energy requirements, and prioritize potential retrofit opportunities. Sub-metering is one of the important energy benchmarking options for owners of aging commercial buildings in order to obtain critical energy information and develop an energy baseline model; however, it oftentimes lacks baseline energy models that collect granular energy information. This paper discusses the implementation of cost-effective energy baseline models supported by wireless sensor networks and Building Information Modeling (BIM).
The research team focused on integrating the theories and technologies of BIM, wireless sensor networks, and energy simulations that can be employed and adopted in building retrofit practices. The research activities conducted in this project provide an understanding of the current status and investigate the potential of the system that would impact the future implementation. The result from a proof-of-concept project is summarized in order to demonstrate the effectiveness of the proposed system. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> Abstract The increase in data center operating costs is driving innovation to improve their energy efficiency. Previous research has investigated computational and physical control intervention strategies to alleviate the competition between energy consumption and thermal performance in data center operation. This study contributes to the body of knowledge by proposing a cyber-physical systems (CPS) approach to innovatively integrate building information modeling (BIM) and wireless sensor networks (WSN). In the proposed framework, wireless sensors are deployed strategically to monitor thermal performance parameters in response to runtime server load distribution. Sensor data are collected and contextualized in reference to the building information model that captures the geometric and functional characteristics of the data center, which will be used as inputs of continuous simulations aiming to predict the real-time thermal performance of the server working environment. Comparing the simulation results against historical performance data via machine learning and data mining, facility managers can quickly pinpoint thermal hot zones and actuate intervention procedures to improve energy efficiency.
This BIM-WSN integration also facilitates smarter power management by capping runtime power demand within the peak power capacity of data centers and alerting on power outage emergencies. This paper lays out the BIM-WSN integration framework, explains the working mechanism, and discusses the feasibility of implementation in future work. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> The research presents a methodology and tool development which delineates a performance-based building active control system. We demonstrate the integration of environment sensors, the parametric engine, and interactive facade components by using the BIM-based system called Sync-BIM. It is developed by the BIM-based parametric engine called Dynamo. The Dynamo engine works as the building brain to determine the interactive control scenarios between buildings and surrounding micro-climate conditions. There are three sequential procedures, 1. data input, 2. scenario processing, and 3. command output, to loop the interactive control scenarios. The kinetic facade prototype embedded with the Sync-BIM system adopts daylight values as the parameter to control the transformation of facade units. The kinetic facade units dynamically harvest daylight via opening ratios for the sake of higher building energy performance. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Sustainable Building <s> The building sector releases 36% of global CO2 emissions, with 66% of emissions occurring during the operation stage of the life cycle of a building.
While current research focuses on using Building Information Modelling (BIM) for energy management of a building, there is little research on visualizing building carbon emission data in BIM to support decision making during the operation phase. This paper proposes an approach for gathering, analyzing and visualizing building carbon emissions data by integrating BIM and carbon estimation models, to assist building operation management teams in discovering carbon emissions problems and reducing total carbon emissions. Data requirements, carbon emission estimation algorithms, and the integration mechanism with BIM are investigated in this paper. A case is used to demonstrate the proposed approach. The approach described in this paper provides inhabitants with an important graphical representation of data to determine a building's sustainability performance, and can allow policy makers and building managers to make informed decisions and respond quickly to emergency situations. <s> BIB005 | This theme is concerned with two parts: energy consumption and environmental protection. Building energy consumption research focuses on energy monitoring and the establishment of methods to improve energy performance or save energy, while environmental protection research concerns saving resources and reducing carbon emissions. Using sensors to monitor and record the use of energy and resources in buildings is the basis of these methods. There are two methods to monitor building energy consumption. One is to embed various sensors into buildings to collect related data such as temperature, humidity, CO2, and power consumption; the other is to scan the building externally to acquire its thermal condition. Woo and Gleason BIB002 established a wireless sensor network (WSN) to collect various data related to energy usage in a building, then used these data, with the participation of BIM, to assist building retrofit design.
Rather than a building retrofit design assistance system, Dong et al. focused on an energy Fault Detection and Diagnostics (FDD) system: a building energy management system (BEMS) integrating FDD and BIM was established to save energy. Ploennigs et al. BIB001 BIB003 used WSNs to monitor the operational energy consumption of a data center, introducing BIM to predict the real-time thermal performance of the server working environment; by comparing predicted outcomes with historical data, operators can quickly discover thermal hot zones and intervene to improve energy efficiency. In contrast to the above research directions, Shen and Wu BIB004 were concerned with adjusting a building's kinetic façade to gain higher energy performance using sunshine data acquired via sensors. In the environmental protection subtheme, Howell et al. addressed the rational use and conservation of natural resources. They used a sensor network and BIM to monitor the usage of water resources and established an intelligent management system to manage water resources smartly. Similarly, Mousa et al. BIB005 addressed carbon emissions from buildings. They established a quantitative relationship between carbon emissions and the energy and natural gas consumption data collected by sensors, and built a carbon emission model via BIM, which can assist carbon emission management and related decision making. |
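The kind of quantitative relationship described above, from metered electricity and natural gas consumption to operational carbon, can be sketched as follows. The emission factors and readings are illustrative assumptions, not values from Mousa et al.; a real deployment would use factors for the local grid and fuel, and attach the totals to BIM zones.

```python
# Illustrative emission factors -- assumptions for the sketch, not values
# from the paper; real factors depend on the local grid and fuel.
GRID_FACTOR_KG_PER_KWH = 0.5   # kg CO2 per kWh of electricity
GAS_FACTOR_KG_PER_M3 = 1.9     # kg CO2 per cubic metre of natural gas

def carbon_emissions(electricity_kwh, gas_m3):
    """Estimate operational CO2 (kg) from sensor-metered consumption."""
    return (electricity_kwh * GRID_FACTOR_KG_PER_KWH
            + gas_m3 * GAS_FACTOR_KG_PER_M3)

# Hypothetical hourly meter samples for one zone: (kWh, m3 of gas).
hourly = [(12.0, 0.8), (9.5, 0.6), (14.2, 1.1)]
total_kg = sum(carbon_emissions(kwh, m3) for kwh, m3 in hourly)
```

Summing such estimates per zone over time yields the emission data that a BIM-based visualization could then present to operators.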
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Abstract Tower crane operators often operate a tower crane with blind spots. To solve this problem, video camera systems and anti-collision systems are often deployed. However, the current video camera systems do not provide accurate distance and understanding of the crane's surroundings. A collision-detection system provides location information only as numerical data. This study introduces a newly developed tower crane navigation system that provides three-dimensional information about the building and surroundings and the position of the lifted object in real time using various sensors and a building information modeling (BIM) model. The system quality was evaluated in terms of two aspects, "ease of use" and "usefulness," based on the Technology Acceptance Model (TAM) theory. The perceived ease of use of the system was improved from the initial 3.2 to 4.4 through an iterative design process. The tower crane navigation system was deployed on an actual construction site for 71 days, and the use patterns were video recorded. The results clearly indicated that the tower crane operators relied heavily on the tower crane navigation system during blind lifts (93.33%) compared to the text-based anti-collision system (6.67%). <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> We introduce a formalized ontology modeling construction sequencing rationale. The presented ontology with BIM can infer the state of progress in cases of limited visibility, and also in cases of higher LoDs and more detailed WBS compared to the underlying 4D BIM. The ontology and classification mechanisms are validated using a Charrette test. Their application is shown together with BIM and as-built data on real-world projects.
Over the last few years, new methods that detect construction progress deviations by comparing laser scanning or image-based point clouds with 4D BIM are developed. To create complete as-built models, these methods require the visual sensors to have proper line-of-sight and field-of-view to building elements. For reporting progress deviations, they also require Building Information Modeling (BIM) and schedule Work-Breakdown-Structure (WBS) with high Level of Development (LoD). While certain logics behind sequences of construction activities can augment 4D BIM with lower LoDs to support making inferences about states of progress under limited visibility, their application in visual monitoring systems has not been explored. To address these limitations, this paper formalizes an ontology that models construction sequencing rationale such as physical relationships among components. It also presents a classification mechanism that integrates this ontology with BIM to infer states of progress for partially and fully occluded components. The ontology and classification mechanism are validated using a Charrette test and by presenting their application together with BIM and as-built data on real-world projects. The results demonstrate the effectiveness and generality of the proposed ontology. It also illustrates how the classification mechanism augments 4D BIM at lower LoDs and WBS to enable visual progress assessment for partially and fully occluded BIM elements and provide detailed operational-level progress information. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Building information models (BIMs) provide opportunities to serve as an information repository to store and deliver as-built information. 
Since a building is not always constructed exactly as the design information specifies, there will be discrepancies between a BIM created in the design phase (called as-designed BIM) and the as-built conditions. Point clouds captured by laser scans can be used as a reference to update an as-designed BIM into an as-built BIM (i.e., the BIM that captures the as-built information). Occlusions and construction progress prevent a laser scan performed at a single point in time to capture a complete view of building components. Progressively scanning a building during the construction phase and combining the progressively captured point cloud data together can provide the geometric information missing in the point cloud data captured previously. However, combining all point cloud data will result in large file sizes and might not always guarantee additional building component information. This paper provides the details of an approach developed to help engineers decide on which progressively captured point cloud data to combine in order to get more geometric information and eliminate large file sizes due to redundant point clouds. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> According to US Bureau of Labor Statistics (BLS), in 2013 around three hundred deaths occurred in the US construction industry due to exposure to hazardous environment. A study of international safety regulations suggests that lack of oxygen and temperature extremes contribute to hazardous work environments particularly in confined spaces. Real-time monitoring of these confined work environments through wireless sensor technology is useful for assessing their thermal conditions. Moreover, Building Information Modeling (BIM) platform provides an opportunity to incorporate sensor data for improved visualization through new add-ins in BIM software. 
In an attempt to reduce Health and Safety (H&S) risks, the prototype system presented here notifies H&S personnel and ultimately attempts to analyze sensor data to reduce emergency situations encountered by workers operating in confined environments. However, fusing the BIM data with sensor data streams will challenge the traditional approaches to data management due to huge volume of data. This work reports upon these challenges encountered in the prototype system. <s> BIB004 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Background: Deep excavations in urban areas have the potential to cause unfavorable effects on ground stability and nearby structures. Thus, it is necessary to evaluate and monitor the environmental impact during deep excavation construction processes. Generally, construction project teams will set up monitoring instruments to control and monitor the overall environmental status, especially during the construction of retaining walls, main excavations, and when groundwater is involved. Large volumes of monitoring data and project information are typically created as the construction project progresses, making it increasingly difficult to manage them comprehensively. <s> BIB005 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Building Information Modelling (BIM) has become a very important part of the construction industry, and it has not yet reached its full potential, but the use of BIM is not just limited to the construction industry. The aim of BIM is to provide a complete solution for the life cycle of the built environment from the design stage to construction and then to operation.
One of the biggest challenges faced by the facility managers is to manage the operation of the infrastructure sustainably; however, this can be achieved through the installation of a Building Management System (BMS). Currently, the use of BIM in facilities management is limited because it does not offer real-time building data integration, which is vital for infrastructure operation. This chapter investigates the integration of real-time data from the BMS system into a BIM model, which would aid facility managers to interact with the real-world environment inside the BIM model. We present the use of web socket functionality to transmit and receive data in real-time over the Internet in a 3D game environment to provide a user-friendly system for the facility managers to help them operate their infrastructure more effectively. This novel and interactive approach would provide rich information about the built environment, which would not have been possible without the integration of BMS with BIM. <s> BIB006 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Site Management <s> Abstract Construction sites need to be monitored continuously to detect unsafe conditions and protect workers from potential injuries and fatal accidents. In current practices, construction-safety monitoring relies heavily on manual observation, which is labor-intensive and error-prone. Due to the complex environment of construction sites, it is extremely challenging for safety inspectors to continuously monitor and manually identify all incidents that may expose workers to safety risks. There exist many research efforts applying sensing technologies to construction sites to reduce the manual efforts associated with construction-safety monitoring.
However, several bottlenecks are identified in applying these technologies to the onsite safety monitoring process, including (1) recognition and registration of potential hazards, (2) real-time detection of unsafe incidents, and (3) reporting and sharing of the detected incidents with relevant participants in a timely manner. The objective of this study was to c... <s> BIB007 | This theme covers many aspects, such as the operation of site equipment, site environment monitoring, site security management and construction quality management. Because it involves so many aspects, various types of sensors are used in site management. Alizadehsalehia and Yitmen conducted a survey of construction companies and found that, in terms of popularity, the Global Positioning System (GPS) and WSNs are the key technologies for automated construction project progress monitoring (ACCPM). Moreover, Siddiqui introduced a site distribution scheme and management strategy for sensors. The use of 3D laser scanning to generate point clouds is helpful for project progress monitoring, but Han et al. BIB002 raised several issues: the lack of detail in as-planned BIM, the high-level work breakdown structure (WBS) of construction schedules, and static and dynamic occlusions with incomplete data collection. Another study, by Gao et al. BIB003 , concerned updating BIM according to the as-built model generated by scanning devices: a progressively captured point cloud method was developed to evaluate redundant information across point clouds and decide which of them should be merged. This update method can help project managers acquire an accurate as-built BIM so that they can make reasonable decisions. Rather than focusing on as-built BIM generation and progress management, other researchers were concerned with site safety. Riaz et al.
BIB004 discussed CoSMoS (Confined Space Monitoring System), which targets real-time safety management to mitigate environmental hazards in the construction industry. They proposed using sensors to collect real-time site environment data and storing the data in a SQL Server database; CoSMoS is then invoked as a software add-in from the Revit Architecture GUI to visualize the data. This differs from the conclusion of Khalid et al. BIB006 that a NoSQL database is preferable. In addition, Park et al. BIB007 approached site safety management differently, demonstrating an automated safety monitoring approach that integrates Bluetooth Low Energy (BLE)-based location tracking, BIM, and cloud-based communication. Wu et al. BIB005 were concerned not only with safety management but also with environmental protection, suggesting the use of various sensors to monitor the project and the integration of project data, the 3D model, stratum data, analysis data and monitoring data into BIM to establish a BIM-based monitoring system for urban deep excavation projects. In site management, sensors can support not only process management and safety control but also equipment operation: Lee et al. BIB001 proposed a BIM- and sensor-based tower crane navigation system to help crane operators with blind spots. |
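A CoSMoS-style check of confined-space sensor readings can be sketched as a simple threshold pass over each incoming sample; the safe bands, metric names, and the idea of keying alerts to a BIM space identifier are assumptions made for illustration, not details of the cited system.

```python
# Minimal sketch of confined-space threshold monitoring (CoSMoS-style).
# The safe bands below are illustrative assumptions, not values from BIB004.
THRESHOLDS = {
    "temperature_c": (5.0, 40.0),   # assumed acceptable temperature band
    "oxygen_pct": (19.5, 23.5),     # assumed acceptable oxygen band
}

def check_reading(bim_element_id, reading):
    """Return alert strings for every metric outside its safe band."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{bim_element_id}: {metric}={value} outside [{low}, {high}]")
    return alerts

# A reading tagged with the (hypothetical) BIM space it came from.
print(check_reading("Space-101", {"temperature_c": 44.0, "oxygen_pct": 20.9}))
```

Tying each alert to a BIM element id is what would let a Revit add-in highlight the affected space rather than only logging a number.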
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> This paper presents results of the first phase of the research project ''Serious Human Rescue Game'' at Technische Universitat Darmstadt. It presents a new serious gaming approach based on Building Information Modeling (BIM) for the exploration of the effect of building condition on human behavior during the evacuation process. In reality it is impossible to conduct rescue tests in burning buildings to study the human behavior. Therefore, the current methods of data-collecting for existing evacuation simulation models have limitations regarding the individual human factors. To overcome these limitations the research hypothesis is that the human behavior can be explored with a serious computer game: The decisions of a person during the game should be comparable to decisions during an extreme situation in the real world. To verify this hypothesis, this paper introduces a serious gaming approach for analyzing the human behavior in extreme situations. To implement a serious game, developers generally make use of 3D-modeling software to generate the game content. After this, the game logic needs to be added to the content with special software development kits for computer games. Every new game scenario has to be built manually from scratch. This is time-consuming and a great share of modeling work needs to be executed twice (e.g., 3D-modeling), at first by the architect for the parametric building model and the second time by the game designer for the 3D-game content. The key idea of the presented approach is to use the capabilities of BIM together with engineering simulations (fire, smoke) to build realistic serious game scenarios in a new and efficient way. This paper presents the first phase results of the research project mainly focusing on the conceptual design of the serious game prototype. 
The validation concept is also presented. The inter-operability between building information modeling applications and serious gaming platforms should allow different stakeholders to simulate building-related scenarios in a new, interactive and efficient way. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract Rapid transit systems are considered a sustainable mode of transportation compared to other modes of transportation taking into consideration number of passengers, energy consumed and amount of pollution emitted. Building Information Modeling (BIM) is utilized in this research along with a global ranking system to monitor Indoor Environmental Quality (IEQ) in subway stations. The research is concerned with developing global rating system for subway stations' networks. The developed framework is capable of monitoring indoor temperature and Particulate Matter (PM) concentration levels in subway stations. A rating system is developed using Simos' ranking method in order to determine the weights of different components contributing to the whole level of service of a subway station as well as maintenance priority indices. A case study is presented to illustrate the use of the proposed system. The developed ranking system showed its effectiveness in ranking maintenance actions globally. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract The ability to locate people quickly and accurately in buildings is critical to the success of building fire emergency response operations, and can potentially contribute to the reduction of various building fire-caused casualties and injuries. 
This paper introduces an environment aware beacon deployment algorithm designed by the authors to support a sequence based localization schema for locating first responders and trapped occupants at building fire emergency scenes. The algorithm is designed to achieve dual objectives of improving room-level localization accuracy and reducing the effort required to deploy an ad-hoc sensor network, as the required sensing infrastructure is presumably unavailable at most emergency scenes. The deployment effort is measured by the number of beacons to deploy, and the location accessibility to deploy the beacons. The proposed algorithm is building information modeling (BIM) centered, where BIM is integrated to provide the geometric information of the sensing area as input to the algorithm for computing space division quality, a metric that measures the likelihood of correct room-level estimations and associated deployment effort. BIM also provides a graphical interface for user interaction. Metaheuristics are integrated to efficiently search for a satisfactory solution in order to reduce the computational time, which is crucial for the success of emergency response operations. The algorithm was evaluated by simulation, where two building fire emergency scenarios were simulated. The tabu search, which employs dynamically generated constraints to guide the search for optimum solutions, was found to be the most efficient among three tuned tested metaheuristics. The algorithm yielded an average room-level accuracy of 87.1% and 32.1% less deployment effort on average compared with random beacon placements. The robustness of the algorithm was also examined as the deployed ad-hoc sensor network is subject to various hazards at emergency scenes. Results showed that the room-level accuracy could remain above 80% when up to 54% of all deployed nodes were damaged. 
The tradeoff between the space division quality and deployment effort was also examined, which revealed the relationship between the total deployment effort and localization accuracy. <s> BIB003 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Operation and Maintenance <s> Abstract Increasing size and complexity of indoor structures and increased urbanization has led to much more complication in urban disaster management. Contrary to outdoor environments, first responders and planners have limited information regarding the indoor areas in terms of architectural and semantic information as well as how they interact with their surroundings in case of indoor disasters. Availability of such information could help decision makers interact with building information and thus make more efficient planning prior to entering the disaster site. In addition as the indoor travel times required to reach specific areas in the building could be much longer compared to outdoor, visualizing the exact location of building rooms and utilities in 3D helps in visually communicating the indoor spatial information which could eventually result in decreased routing uncertainty inside the structures and help in more informed navigation strategies. This work aims at overcoming the insufficiencies of existing indoor modelling approaches by proposing a new Indoor Emergency Spatial Model (IESM) based on IFC. The model integrates 3D indoor architectural and semantic information required by first responders during indoor disasters with outdoor geographical information to improve situational awareness about both interiors of buildings as well as their interactions with outdoor components. The model is implemented and tested using the Esri GIS platform. 
The paper discusses the effectiveness of the model in both decision making and navigation by demonstrating the model's indoor spatial analysis capabilities and how it improves destination travel times. <s> BIB004 | This direction includes many aspects, such as indoor environment monitoring and conditioning, user experience optimisation, emergency management and facility management. Marzouk and Abdelaty BIB002 used a WSN to collect PM10, PM2.5, temperature and humidity data and proposed a global ranking system integrated with BIM to monitor the environmental quality of subway stations, from which maintenance priority indices (MPIs) were developed to help managers allocate funds. Costa et al. also addressed indoor environment monitoring, jointly considering energy saving, indoor environment improvement and user experience, with CO2, VOCs, humidity, temperature and occupancy rate detected via sensors. Other researchers also considered emergency management, but in a more detailed way. Li et al. BIB003 developed a BIM-centred positioning algorithm based on Sequence Based Localization (SBL), which aims to rationalise sensor cost while obtaining higher positioning accuracy. Rather than positioning, Tashakkori et al. BIB004 established an outdoor/indoor 3D emergency spatial model to help rescuers understand the building and its surroundings, optimise the rescue route and realise indoor navigation, with dynamic and semantic building information collected by indoor environment sensors. Similarly, Rüppel and Schatz BIB001 built a serious human-rescue game and chose the agent-based simulation software FDS+Evac for further analysis, using cameras and Radio-Frequency Identification (RFID) to test and verify the game. |
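The sequence-based localization idea behind the algorithm of Li et al. BIB003 can be sketched briefly: beacons are ranked by received signal strength, and the resulting rank sequence is matched against per-room reference sequences. The reference table, RSSI values and the use of a pairwise-disagreement (Kendall-style) distance below are illustrative assumptions, not the cited algorithm's actual data or metric.

```python
# Sketch of sequence-based localization (SBL): match the observed
# strongest-to-weakest beacon ordering against per-room references.
# Reference sequences and RSSI values are made-up examples.
REFERENCE = {
    "Room A": ("b1", "b2", "b3"),
    "Room B": ("b2", "b1", "b3"),
    "Room C": ("b3", "b2", "b1"),
}

def kendall_distance(seq1, seq2):
    """Count pairwise order disagreements between two beacon sequences."""
    pos1 = {b: i for i, b in enumerate(seq1)}
    pos2 = {b: i for i, b in enumerate(seq2)}
    beacons = list(seq1)
    return sum(
        1
        for i in range(len(beacons))
        for j in range(i + 1, len(beacons))
        if (pos1[beacons[i]] - pos1[beacons[j]]) * (pos2[beacons[i]] - pos2[beacons[j]]) < 0
    )

def locate(rssi):
    # Order beacons from strongest to weakest signal, then pick the
    # reference room whose sequence disagrees least with the observation.
    observed = tuple(sorted(rssi, key=rssi.get, reverse=True))
    return min(REFERENCE, key=lambda room: kendall_distance(observed, REFERENCE[room]))

print(locate({"b1": -48, "b2": -61, "b3": -75}))  # strongest first: b1, b2, b3
```

Because only the ordering matters, this kind of scheme tolerates the absolute RSSI noise that makes trilateration unreliable indoors, which is consistent with SBL's appeal for ad-hoc emergency deployments.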
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> Abstract Building information modelling (BIM) represents the process of development and use of a computer generated model to simulate the planning, design, construction and operation of a building. The utilisation of building information models has increased in recent years due to their economic benefits in design and construction phases and in building management. BIM has been widely applied in the design and construction of new buildings but rarely in the management of existing ones. The point of creating a BIM model for an existing building is to produce accurate information related to the building, including its physical and functional characteristics, geometry and inner spatial relationships. The case study provides a critical appraisal of the process of both collecting accurate survey data using a terrestrial laser scanner combined with a total station and creating a BIM model as the basis of a digital management model. The case study shows that it is possible to detect and define facade damage by in... <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> In the construction process, real-time quality control and early defects detection are still the most significant approach to reducing project schedule and cost overrun. Current approaches for quality control on construction sites are time-consuming and ineffective since they only provide data at specific locations and times to represent the work in place, which limit a quality manager’s abilities to easily identify and manage defects.
The purpose of this paper is to develop an integrated system of Building Information Modelling (BIM) and Light Detection and Ranging (LiDAR) to come up with real-time onsite quality information collecting and processing for construction quality control. Three major research activities were carried out systematically, namely, literature review and investigation, system development and system evaluation. The proposed BIM and LiDAR-based construction quality control system were discussed in five sections: LiDAR-based real-time tracking system, BIM-based real-time checking system, quality control system, point cloud coordinate transformation system, and data processing system. Then, the system prototype was developed for demonstrating the functions of flight path control and real-time construction quality deviation analysis. Finally, three case studies or pilot projects were selected to evaluate the developed system. The results show that the system is able to efficiently identify potential construction defects and support real-time quality control. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Structural Health Monitoring <s> Abstract Approximately 20% of accidents in construction industry occur while workers are moving through a construction site. Current construction hazard identification mostly relies on safety managers' capabilities to detect hazards. Consequently, numerous hazards remain unidentified, and unidentified hazards mean that hazards are not included in the safety management process. To enhance the capability of hazard identification, this paper proposes an automated hazardous area identification model based on the deviation between the optimal route (shortest path)—which is determined by extracting nodes from objects in a building information model (BIM)—and the actual route of a laborer collected from the real-time location system (RTLS). 
The hazardous area identification framework consists of six DBs and three modules. The unidentified hazardous area identification module identifies potentially hazardous areas (PHAs) in laborers' paths. The filtering hazardous area module reduces the range of possible hazardous areas to improve the efficiency of safety management. The monitoring and output generation module provides reports including hazardous area information. The suggested model can identify a hazard automatically and decrease the time laborers are exposed to a hazard. This can help improve both the effectiveness of the hazard identification process and enhance the safety for laborers. <s> BIB003 | This theme concerns monitoring the mechanical condition of structures and discovering structural defects. Structural defects fall into two types: partial (local) defects, such as cracks and excessive deflection of concrete elements, and integral (global) defects, such as poor verticality and flatness of structural elements. Kim et al. BIB003 studied manual hazard inspections, using RFID to trace workers and record their routes, then planned an optimal route into the construction site to reduce the potentially hazardous areas workers pass by, and generated a report on hazardous area information. Rather than manual inspections, other researchers focused on automated structural health examination. Mill et al. BIB001 used laser scanning for outdoor building surveys and total stations for indoor surveys, establishing a geodetic network system and founding the BIM by importing and merging the data; this model can examine and grade the damage degree of a façade. Different from the above methods, a detection method was put forward by Wang et al. BIB002 , in which a system integrating BIM and LiDAR was developed for real-time construction quality control.
Additionally, Zhang and Bai proposed an approach using breakage-triggered strain sensors based on RFID tags to check whether structural deformation has exceeded a threshold: once the triggering event modifies the RFID tag, its responding power to the RFID reader/antenna changes. This method can help engineers recognise the strain status of a structure and make decisions accordingly. |
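The binary nature of breakage-triggered sensing can be sketched as a plain read cycle over a set of tags; the tag identifiers, thresholds and the simplified read model (a tripped tag stops answering the reader, rather than answering at a changed power) are assumptions made for illustration, not details from Zhang and Bai's work.

```python
# Sketch of breakage-triggered RFID strain sensing: each tag is wired so
# its response changes once its design strain threshold is crossed, so an
# ordinary read cycle yields a binary over-threshold map per tag.
# Tag ids, thresholds and the read model are illustrative assumptions.
TAGS = {"T1": 1500e-6, "T2": 2000e-6}  # tag id -> design strain threshold

def interpret_reads(responding_tags):
    """Tags that stopped responding are assumed to have tripped (True)."""
    return {tag: (tag not in responding_tags) for tag in TAGS}

status = interpret_reads({"T2"})  # only T2 still answers the reader
print(status)
```

Mapping each tag id back to the BIM element it is bonded to would then let engineers see which members have exceeded their strain limits directly in the model.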
Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> The purposes of this research are to develop and evaluate a framework that utilizes the integration of commercially-available Radio Frequency Identification (RFID) and a BIM model for real-time resource location tracking within an indoor environment. A focus of this paper is to introduce the framework and explain why building models currently lack the integration of sensor data. The need will be explained with potential applications in construction and facility management. Algorithms to process RFID signals and integrate the generated information in BIM will be presented. Furthermore, to demonstrate the benefits of location tracking technology and its integration in BIM, the paper provides a preliminary demonstration on tracking valuable assets inside buildings in real-time. The preliminary results provided the feasibility of integrating passive RFID with BIM for indoor settings. <s> BIB001 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> Abstract The ability to locate people quickly and accurately in buildings is critical to the success of building fire emergency response operations, and can potentially contribute to the reduction of various building fire-caused casualties and injuries. This paper introduces an environment aware beacon deployment algorithm designed by the authors to support a sequence based localization schema for locating first responders and trapped occupants at building fire emergency scenes. The algorithm is designed to achieve dual objectives of improving room-level localization accuracy and reducing the effort required to deploy an ad-hoc sensor network, as the required sensing infrastructure is presumably unavailable at most emergency scenes. 
The deployment effort is measured by the number of beacons to deploy, and the location accessibility to deploy the beacons. The proposed algorithm is building information modeling (BIM) centered, where BIM is integrated to provide the geometric information of the sensing area as input to the algorithm for computing space division quality, a metric that measures the likelihood of correct room-level estimations and associated deployment effort. BIM also provides a graphical interface for user interaction. Metaheuristics are integrated to efficiently search for a satisfactory solution in order to reduce the computational time, which is crucial for the success of emergency response operations. The algorithm was evaluated by simulation, where two building fire emergency scenarios were simulated. The tabu search, which employs dynamically generated constraints to guide the search for optimum solutions, was found to be the most efficient among three tuned tested metaheuristics. The algorithm yielded an average room-level accuracy of 87.1% and 32.1% less deployment effort on average compared with random beacon placements. The robustness of the algorithm was also examined as the deployed ad-hoc sensor network is subject to various hazards at emergency scenes. Results showed that the room-level accuracy could remain above 80% when up to 54% of all deployed nodes were damaged. The tradeoff between the space division quality and deployment effort was also examined, which revealed the relationship between the total deployment effort and localization accuracy. <s> BIB002 </s> Integration of Building Information Modelling (BIM) and Sensor Technology: A Review of Current Developments and Future Outlooks <s> Positioning and Tracing <s> Abstract Construction sites need to be monitored continuously to detect unsafe conditions and protect workers from potential injuries and fatal accidents.
In current practices, construction-safety monitoring relies heavily on manual observation, which is labor-intensive and error-prone. Due to the complex environment of construction sites, it is extremely challenging for safety inspectors to continuously monitor and manually identify all incidents that may expose workers to safety risks. There exist many research efforts applying sensing technologies to construction sites to reduce the manual efforts associated with construction-safety monitoring. However, several bottlenecks are identified in applying these technologies to the onsite safety monitoring process, including (1) recognition and registration of potential hazards, (2) real-time detection of unsafe incidents, and (3) reporting and sharing of the detected incidents with relevant participants in a timely manner. The objective of this study was to c... <s> BIB003 | This direction develops methods to locate or trace facilities and people inside a building using sensors. Positioning and tracing apply to many scenarios, such as emergency management, site security management, user experience optimisation and facility management. Costin et al. BIB001 showed that resource location tracking is feasible using BIM and passive RFID, and their 2015 research discussed the method further: Tekla Structures was chosen as the BIM platform and Trimble ThingMagic was selected for the RFID technology. Using the Application Programming Interfaces (APIs) of the software and hardware to integrate BIM and RFID, an algorithm was developed for indoor positioning and tracing. This method reduced wrong readings by 64% and achieved a best accuracy of 1.66 m. The difference between Costin et al.'s research and the research by Li et al. BIB002 mentioned above lies in the details of the algorithm. Rather than RFID, BLE can also be used for positioning and tracing. The aforementioned research by Park et al.
BIB003 used BLE to locate objects, developing a self-corrective knowledge-based hybrid tracking system. The system uses BLE beacons to acquire absolute positions and motion sensors to acquire relative positions, and it integrates BIM to obtain the geometric information of the building to improve the robustness of tracing. The results show that this hybrid system can reduce the positioning error rate by 42%. |
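To make the fusion idea concrete, the following is a minimal, hypothetical sketch of blending absolute beacon fixes with dead-reckoned motion updates as a one-dimensional complementary filter. The function names, the weighting factor `alpha` and its value are illustrative assumptions, not Park et al.'s implementation.

```python
def fuse_position(beacon_fix, dead_reckoned, alpha=0.7):
    """Blend an absolute BLE beacon fix with a dead-reckoned estimate.

    alpha close to 1 trusts the (noisy but drift-free) beacon fix more;
    alpha close to 0 trusts the (smooth but drifting) motion sensors more.
    """
    return alpha * beacon_fix + (1.0 - alpha) * dead_reckoned

def track(beacon_fixes, motion_deltas, alpha=0.7):
    """Sequentially fuse beacon fixes with accumulated motion increments."""
    path = [beacon_fixes[0]]
    for fix, delta in zip(beacon_fixes[1:], motion_deltas):
        predicted = path[-1] + delta          # relative update (motion-sensor step)
        path.append(fuse_position(fix, predicted, alpha))
    return path
```

In practice the weighting would be adaptive and the positions multi-dimensional; the point is only that absolute fixes bound the drift that pure dead reckoning accumulates.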
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Methodological Background <s> This book primarily discusses issues related to the mining aspects of data streams and it is unique in its primary focus on the subject. This volume covers mining aspects of data streams comprehensively: each contributed chapter contains a survey on the topic, the key ideas in the field for that particular topic, and future research directions. The book is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for advanced-level students in computer science. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Methodological Background <s> Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential to analyse scene image information for eye movement detection. <s> BIB002 | In this section, we introduce the basics of stream clustering. 
Most importantly, we describe how data streams are typically aggregated and how algorithms adapt to changes over time. For a consistent notation, we denote vectors by boldface symbols and formally define a data stream as a potentially unbounded sequence X = (x_1, x_2, . . . , x_N), where x_t is a single observation with d dimensions at time t. To calculate the distance between clusters, an appropriate distance measure needs to be used. For numerical data, the Euclidean distance between the centroids of the clusters is common. However, for binary, ordinal, nominal or text data, appropriate distance measures such as the Jaccard index, the simple matching coefficient or the Cosine similarity can be used. In general, finding a good clustering solution is defined as an optimization task. The underlying goal is to maximize intra-cluster homogeneity while simultaneously maximizing inter-cluster heterogeneity. This ensures that objects within the same cluster are similar but different clusters are well separated. There are various strategies that aim to achieve this task. Popular strategies include minimizing intra-cluster distances, minimizing the radii of clusters or finding maximum likelihood estimates. A popular example is the k-means algorithm, which minimizes the within-cluster sum of squares, i.e., the squared distances from data points to their cluster centroids. In a streaming scenario, these optimization objectives are subject to several restrictions regarding the availability and order of the data as well as resource and time limitations. For example, the large volume of data makes it undesirable or infeasible to store all observations of the stream. Typically, observations can only be evaluated once and are discarded afterwards. This requires extracting sufficient information from observations before discarding them. Similarly, the order of observations cannot be influenced. As an illustrative example, let us consider the case of eye tracking, which is typically used in order to analyze how people perceive content such as websites or advertisements. It records the movement of the eye and detects where a person is looking. An example of a stream of eye tracking data is visualized in Figure 1, showing the pupil positions at three different points in time (grey points) BIB002. In this context, stream clustering can be applied in order to find the areas of interest or the subjects that the person is looking at. Throughout this paper, we discuss common strategies that can be used to identify clusters under the streaming restrictions. For example, we could use similarity thresholds in order to decide whether an observation fits into an existing cluster (Figure 2(a)). Alternatively, we could split the data space into a grid and only store the location of densely populated cells (Figure 2(b)). Other approaches include fitting a model to represent the observed data (Figure 2(c)) or projecting high-dimensional data to a lower-dimensional space (Figure 2(d)). Generally, these strategies allow capturing the location of dense areas in the data space. These regions can be considered clusters, and they can even be merged when they become too similar over time. However, it is not possible to ever split a cluster again since the underlying data was discarded and only the centre of the dense region was stored BIB001.

Figure 1 Stream of eye tracking data BIB002 at three different points in time. Grey points denote the normalized pupil centers; their opacity and size are relative to their recency. Circles mark the centers of micro-clusters and crosses the centers of macro-clusters. Both are scaled relative to the number of observations assigned to them.

Figure 2 Categories of stream clustering algorithms

Figure 3 Exemplary two-phase stream clustering using a grid-based approach
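To ground the optimization objective above, here is a small sketch of the Euclidean distance and the within-cluster sum of squares that k-means minimizes. The function names are our own, not from the surveyed algorithms.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two d-dimensional points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def wcss(clusters):
    """Within-cluster sum of squares: the squared distance of every point
    to its cluster centroid, summed over all clusters."""
    total = 0.0
    for points in clusters:
        d = len(points[0])
        centroid = [sum(p[i] for p in points) / len(points) for i in range(d)]
        total += sum(euclidean(p, centroid) ** 2 for p in points)
    return total
```

A lower WCSS means more homogeneous clusters; k-means searches for centroids that minimize this quantity.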
To avoid this problem, many stream clustering algorithms divide the process into two phases: an online and an offline component. |
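A minimal sketch of this two-phase design in a grid-based flavour: the online phase maps each point to a cell (the 'micro-clusters'), and the offline phase connects adjacent dense cells into 'macro-clusters'. The cell size, threshold and class names are illustrative assumptions, not a specific published algorithm.

```python
class GridStreamClusterer:
    """Two-phase stream clustering sketch: online cell counting,
    offline grouping of adjacent dense cells."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.counts = {}              # cell index -> number of observations

    def insert(self, point):
        """Online phase: O(1) per observation, the point itself is discarded."""
        key = tuple(int(v // self.cell_size) for v in point)
        self.counts[key] = self.counts.get(key, 0) + 1

    def macro_clusters(self, min_count):
        """Offline phase: flood-fill over dense cells that touch each other
        (8-neighbourhood for two-dimensional cells)."""
        dense = {k for k, c in self.counts.items() if c >= min_count}
        clusters, seen = [], set()
        for cell in dense:
            if cell in seen:
                continue
            group, stack = [], [cell]
            while stack:
                c = stack.pop()
                if c in seen:
                    continue
                seen.add(c)
                group.append(c)
                stack.extend(n for n in dense
                             if max(abs(n[0] - c[0]), abs(n[1] - c[1])) == 1)
            clusters.append(sorted(group))
        return clusters
```

The offline phase can be run on demand at any time, since the online summary is cheap to maintain.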
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Consider the problem of monitoring tens of thousands of time series data streams in an online fashion and making decisions based on them. In addition to single stream statistics such as average and standard deviation, we also want to find high correlations among all pairs of streams. A stock market trader might use such a tool to spot arbitrage opportunities. This paper proposes efficient methods for solving this problem based on Discrete Fourier Transforms and a three level time interval hierarchy. Extensive experiments on synthetic data and real world financial trading data show that our algorithm beats the direct computation approach by several orders of magnitude. It also improves on previous Fourier Transform approaches by allowing the efficient computation of time-delayed correlation over any size sliding window and any time delay. Correlation also lends itself to an efficient grid-based data structure. The result is the first algorithm that we know of to compute correlations over thousands of data streams in real time. The algorithm is incremental, has fixed response time, and can monitor the pairwise correlations of 10,000 streams on a single PC. The algorithm is embarrassingly parallelizable. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Existing data-stream clustering algorithms such as CluStream are based on k-means. These clustering algorithms are incompetent to find clusters of arbitrary shapes and cannot handle outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this paper proposes D-Stream, a framework for clustering stream data using adensity-based approach.
The algorithm uses an online component which maps each input data record into a grid and an offline component which computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream. Exploiting the intricate relationships between the decay factor, data density and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped to by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Clustering real-time stream data is an important and challenging problem. Existing algorithms such as CluStream are based on the k-means algorithm. These clustering algorithms have difficulties finding clusters of arbitrary shapes and handling outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this article proposes D-Stream, a framework for clustering stream data using a density-based approach. Our algorithm uses an online component that maps each input data record into a grid and an offline component that computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and an attraction-based mechanism to accurately generate cluster boundaries.
Exploiting the intricate relationships among the decay factor, attraction, data density, and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped by outliers in order to dramatically improve the space and time efficiency of the system. The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> Data stream is a popular research topic in the big data era. There are many research results in the data stream clustering domain. This paper first gives a brief introduction to data stream methodologies, such as sampling, sliding windows, etc. Finally, it presents a survey of data stream clustering techniques. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time Window Models <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights.
However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB005 | As shown in our eye tracking example, the underlying distribution of the stream will often change over time. This is also known as drift or concept shift. To handle this, algorithms can employ time window models. This approach aims to 'forget' older data to avoid historic data biasing the analysis towards outdated patterns. There exist four main types of time window models (Figure 4) [Silva et al., 2013].

Figure 4 Overview of time window models BIB001 [Silva et al., 2013]

The damped time window assigns a weight to each micro-cluster based on the number of observations assigned to it. In each iteration, the weight is faded by a factor such as 2^(-λ), where the decay factor λ influences the rate of decay. Since fading the weight in every iteration is computationally costly, the weight can either be updated at fixed time intervals or whenever a cluster is updated BIB002. In this case, the fading can be performed with respect to the elapsed time as ω(Δt) = 2^(-λΔt), where Δt denotes the time since the cluster was last updated.
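The lazy fading just described, where a cluster's weight is decayed only when the cluster is touched, based on the time elapsed since its last update, can be sketched as follows. The class name and the value of λ are illustrative.

```python
class FadingCluster:
    """Micro-cluster whose weight decays as omega(dt) = 2 ** (-lam * dt).

    The decayed weight is computed lazily: only when the cluster is
    updated, using the time elapsed since it was last touched.
    """

    def __init__(self, t, lam=0.1):
        self.weight = 1.0
        self.last_t = t
        self.lam = lam

    def fade(self, t):
        """Apply the decay accumulated since the last touch."""
        self.weight *= 2.0 ** (-self.lam * (t - self.last_t))
        self.last_t = t

    def absorb(self, t):
        """Fade to the current time, then count the new observation."""
        self.fade(t)
        self.weight += 1.0
```

With this scheme, clusters that stop receiving observations lose weight over time and can eventually be pruned.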
In Figure 1, we applied the same fading function to reduce the size and opacity of older data. In some cases, clusters are implicitly decayed over time by considering their weight relative to the total number of observations. An alternative is the sliding time window which only considers the most recent observations or micro-clusters in the stream. This is usually based on a First-In-First-Out (FIFO) principle, where the oldest data point in the window is removed once a new data point becomes available. The size of this window can be of fixed or variable length. While a small window size can adapt quickly to concept drift, a larger window size considers more observations and can be more accurate for stable streams. In addition, a landmark time window is a very simple approach which separates the data stream into disjoint chunks based on events. Landmarks can either be defined based on elapsed time or other occurrences. The landmark time window summarizes all data points that arrive after the landmark. Whenever a new landmark occurs, all the data in the window is removed and new data is captured. This category also includes algorithms that do not specifically consider changes over time and therefore require the user to regularly restart the clustering. Finally, the pyramidal time model or tilted time window uses different granularity levels based on the recency of data. This approach summarizes recent data more accurately whereas older data is gradually aggregated. Due to the increasing relevance of stream clustering, a number of survey papers began to summarize and structure the field. Most notably, one survey provides an overview of the two largest research threads, namely distance-based and grid-based algorithms. In total, the authors discuss ten distance-based approaches, mostly extensions of DenStream, and nine grid-based approaches, mostly extensions of D-Stream BIB002 BIB003.
The authors describe the algorithms, name input parameters and also empirically evaluate some of the algorithms. In addition, the authors highlight interrelations between the algorithms in a timeline. We utilize this timeline and extend it with more algorithms and additional categories. However, their paper focusses only on distance and grid-based algorithms while we have taken more categories and more algorithms into account. Additionally, [Silva et al., 2013] introduced a taxonomy that allows categorizing stream clustering algorithms, e.g., regarding the reclustering algorithm or the time window model used. The authors describe a total of 13 stream clustering algorithms and categorize them according to their taxonomy. In addition, application scenarios, data sources and available toolsets are presented. However, a drawback is that many of the discussed algorithms are one-pass clustering algorithms that need extensions to suit the streaming case. Another survey discusses 19 algorithms and is among the first to highlight the research area of Neural Gas (NG) for stream clustering. However, only a single grid-based algorithm is discussed and other popular algorithms are missing. A further survey focusses on stream clustering and stream classification, presenting a total of 17 algorithms. Considerably shorter overviews are also provided in several other works, e.g., BIB004. In this survey, we cover a total of 51 different stream clustering algorithms. This makes our survey much more exhaustive than all comparable studies. In addition, our paper identifies four common work streams and how they developed over time. We also focus on common problems when applying stream clustering. As an example, we point to a total of 26 available algorithm implementations, as well as three different frameworks for data stream clustering. Furthermore, we address the problem of configuring stream clustering algorithms and present automatic algorithm configuration as an approach to address this problem.
Table 1 briefly summarizes the relevant dimensions of our survey. In previous work, we have also performed a rigorous empirical comparison of the most popular stream clustering algorithms. In total, we evaluated ten algorithms on four synthetic and three real-world data sets. In order to obtain the best results, we performed extensive parameter configuration. Our results have shown that DBSTREAM BIB005 produces the highest cluster quality and is able to detect arbitrarily shaped clusters. However, it is sensitive to the insertion order and has many parameters, which makes it difficult to apply in practice. As an alternative, D-Stream BIB002 BIB003 can produce competitive results, but often requires more micro-clusters due to its grid-based approach. |
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs. This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle "noise" (data points that are not part of the underlying pattern) effectively. We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present performance comparisons of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Data clustering is an important technique for exploratory data analysis, and has been studied for several years. It has been shown to be useful in many practical domains such as data classification and image processing. Recently, there has been a growing emphasis on exploratory analysis of very large datasets to discover useful patterns and/or correlations among attributes.
This is called data mining, and data clustering is regarded as a particular branch. However existing data clustering methods do not adequately address the problem of processing large datasets with a limited amount of resources (e.g., memory and cpu cycles). So as the dataset size increases, they do not scale up well in terms of memory requirement, running time, and result quality. In this paper, an efficient and scalable data clustering method is proposed, based on a new in-memory data structure called CF-tree, which serves as an in-memory summary of the data distribution. We have implemented it in a system called BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and studied its performance extensively in terms of memory requirements, running time, clustering quality, stability and scalability; we also compare it with other available methods. Finally, BIRCH is applied to solve two real-life problems: one is building an iterative and interactive pixel classification tool, and the other is generating the initial codebook for image compression. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Data stream clustering is an important task in data stream mining. In this paper, we propose SDStream, a new method for performing density-based data streams clustering over sliding windows. SDStream adopts CluStream clustering framework. In the online component, the potential core-micro-cluster and outlier micro-cluster structures are introduced to maintain the potential clusters and outliers. They are stored in the form of Exponential Histogram of Cluster Feature (EHCF) in main memory and are maintained by the maintenance of EHCFs. Outdated micro-clusters which need to be deleted are found by the value of t in Temporal Cluster Feature (TCF).
In the offline component, the final clusters of arbitrary shape are generated from all the potential core-micro-clusters maintained online by the DBSCAN algorithm. Experimental results show that SDStream, which can generate clusters of arbitrary shape, has a much higher clustering quality than CluStream, which generates spherical clusters. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering of streaming sensor data aims at providing online summaries of the observed stream.
This task is mostly done under limited processing and storage resources. This makes the sensed stream speed (data per time) a sensitive restriction when designing stream clustering algorithms. Additionally, the varying speed of the stream is a natural characteristic of sensor data, e.g. changing the sampling rate upon detecting an event or for a certain time. In such cases, most clustering algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. However, the method was not designed to precisely cluster sensor data which are usually noisy and extremely evolving. In this paper we detail the LiarTree algorithm that provides precise stream summaries and effectively handles noise, drift and novelty. We prove that the runtime of the LiarTree is logarithmic in the size of the maintained model opposed to a linear time complexity often observed in previous approaches. We demonstrate in an extensive experimental evaluation using synthetic and real sensor datasets that the LiarTree outperforms competing approaches in terms of the quality of the resulting summaries and exposes only a logarithmic time complexity. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Evolution-based stream clustering method supports the monitoring and the change detection of clustering structures. E-Stream is an evolution-based stream clustering method that supports different types of clustering structure evolution which are appearance, disappearance, self-evolution, merge and split. This paper presents HUE-Stream which extends E-Stream in order to support uncertainty in heterogeneous data. 
A distance function, cluster representation and histogram management are introduced for the different types of clustering structure evolution. We evaluate the effectiveness of HUE-Stream on the real-world KDDCup 1999 Network Intrusion Detection dataset. Experimental results show that HUE-Stream gives better cluster quality compared with UMicro. <s> BIB006 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> In this paper we propose a data stream clustering algorithm, called Self Organizing density based clustering over data Stream (SOStream). This algorithm has several novel features. Instead of using a fixed, user defined similarity threshold or a static grid, SOStream detects structure within fast evolving data streams by automatically adapting the threshold for density-based clustering. It also employs a novel cluster updating strategy which is inspired by competitive learning techniques developed for Self Organizing Maps (SOMs). In addition, SOStream has built-in online functionality to support advanced stream clustering operations including merging and fading. This makes SOStream completely online with no separate offline components. Experiments performed on KDD Cup'99 and artificial datasets indicate that SOStream is an effective and superior algorithm in creating clusters of higher purity while having lower space and time requirements compared to previous stream clustering algorithms. <s> BIB007 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C.
In a data stream, the points arrive one by one in arbitrary order and there is limited storage space. <s> BIB008 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering evolving data streams is important to be performed in a limited time with a reasonable quality. The existing micro clustering based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on a two-phase clustering. The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease time complexity of the clustering while maintaining the cluster quality. A pruning strategy is also used to filter out real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method. <s> BIB009 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Streaming data clustering is becoming the most efficient way to cluster a very large data set. In this paper we present a new approach, called G-Stream, for topological clustering of evolving data streams. G-Stream allows one to discover clusters of arbitrary shape without any assumption on the number of clusters and by making one pass over the data.
The topological structure is represented by a graph wherein each node represents a set of “close” data points and neighboring nodes are connected by edges. The use of the reservoir, to hold, temporarily, the very distant data points from the current prototypes, avoids needless movements of the nearest nodes to data points and therefore, improving the quality of clustering. The performance of the proposed algorithm is evaluated on both synthetic and real-world data sets. <s> BIB010 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> BIRCH algorithm is a clustering algorithm suitable for very large data sets. In the algorithm, a CF-tree is built whose all entries in each leaf node must satisfy a uniform threshold T, and the CF-tree is rebuilt at each stage by different threshold. But using a single threshold cause many shortcomings in the birch algorithm, in this paper to propose a solution to this shortcoming by using multiple thresholds instead of a single threshold. <s> BIB011 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> As more and more applications produce streaming data, clustering data streams has become an important technique for data and knowledge engineering. A typical approach is to summarize the data stream in real-time with an online process into a large number of so called micro-clusters. Micro-clusters represent local density estimates by aggregating the information of many data points in a defined area. On demand, a (modified) conventional clustering algorithm is used in a second offline step to recluster the micro-clusters into larger final clusters. For reclustering, the centers of the micro-clusters are used as pseudo points with the density estimates used as their weights. 
However, information about density in the area between micro-clusters is not preserved in the online process and reclustering is based on possibly inaccurate assumptions about the distribution of data within and between micro-clusters (e.g., uniform or Gaussian). This paper describes DBSTREAM, the first micro-cluster-based online clustering component that explicitly captures the density between micro-clusters via a shared density graph. The density information in this graph is then exploited for reclustering based on actual density between adjacent micro-clusters. We discuss the space and time complexity of maintaining the shared density graph. Experiments on a wide range of synthetic and real data sets highlight that using shared density improves clustering quality over other popular data stream clustering methods which require the creation of a larger number of smaller micro-clusters to achieve comparable results. <s> BIB012 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Clustering Feature <s> Clustering algorithms are recently regaining attention with the availability of large datasets and the rise of parallelized computing architectures. However, most clustering algorithms do not scale well with increasing dataset sizes and require proper parametrization for correct results. In this paper we present A-BIRCH, an approach for automatic threshold estimation for the BIRCH clustering algorithm using Gap Statistic. This approach renders the global clustering step of BIRCH unnecessary and does not require knowledge on the expected number of clusters beforehand. This is achieved by analyzing a small representative subset of the data to extract attributes such as the cluster radius and the minimal cluster distance. These attributes are then used to compute a threshold that results, with high probability, in the correct clustering of elements. 
For the analysis of the representative subset we parallelized Gap Statistic to improve performance and ensure scalability. <s> BIB013 | BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) BIB001 BIB002 is one of the earliest algorithms applicable to stream clustering. It reduces the information maintained about a cluster to only a few summary statistics stored in a so-called Clustering Feature (CF). The CF consists of three components: (n, LS, SS), where n is the number of data points in the cluster, LS is a d-dimensional vector that contains the linear sum of all data points for each dimension and SS is a scalar that contains the sum of squares for all data points over all dimensions. Some variations of this concept also store the sum of squares per dimension, i.e., as a vector SS. A CF provides sufficient information to calculate the centroid LS/n and also a radius, i.e., a measure of deviation from the centroid. In addition, a CF can easily be updated and merged with another CF by summing the individual components. To maintain the CFs, BIRCH incrementally builds a balanced tree as illustrated in Figure 6, where each node can contain a fixed number of CFs.
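As a minimal illustration of the additivity of these summary statistics, the following sketch implements a CF with insert, merge, centroid and radius operations (class and method names are my own, not taken from any BIRCH implementation):

```python
import math

class ClusteringFeature:
    """Summary statistics (n, LS, SS) of the points absorbed by a cluster."""
    def __init__(self, dim):
        self.n = 0                  # number of data points
        self.ls = [0.0] * dim       # linear sum per dimension
        self.ss = 0.0               # sum of squares over all dimensions

    def insert(self, x):
        self.n += 1
        self.ls = [a + b for a, b in zip(self.ls, x)]
        self.ss += sum(v * v for v in x)

    def merge(self, other):
        self.n += other.n
        self.ls = [a + b for a, b in zip(self.ls, other.ls)]
        self.ss += other.ss

    def centroid(self):
        return [v / self.n for v in self.ls]

    def radius(self):
        # root-mean-square deviation of the points from the centroid:
        # sqrt(SS/n - ||centroid||^2)
        c = self.centroid()
        return math.sqrt(max(self.ss / self.n - sum(v * v for v in c), 0.0))
```

Because both insert and merge only sum components, a CF-tree node can absorb a single point or an entire subtree summary with the same constant-time update.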
Table 2 Overview of distance-based stream clustering algorithms (algorithm | year | time window | offline clustering):

BIRCH BIB001 | 1996 | landmark | hierarchical clustering
ScaleKM ] | 1998 | landmark | -
Single-pass k-means [Farnstrom et al., 2000] | 2000 | landmark | -
BIB003 | 2009 | pyramidal | DBSCAN
ClusTree BIB004 | 2009 | damped | not specified
LiarTree BIB005 | 2011 | damped | not specified
HUE-Stream BIB006 | 2011 | damped | -
SOStream BIB007 | 2012 | damped | -
StreamKM++ | 2012 | pyramidal | k-means
FlockStream | 2013 | damped | -
BICO BIB008 | 2013 | landmark | k-means
LeaDen-Stream BIB009 | 2013 | damped | DBSCAN
G-Stream BIB010 | 2014 | damped | -
Improved BIRCH BIB011 | 2014 | landmark | hierarchical clustering
DBSTREAM BIB012 | 2016 | damped | shared density
A-BIRCH BIB013 | 2017 | landmark | hierarchical clustering
evoStream | 2018 | damped | Evolutionary Algorithm

Each new observation descends the tree by following the child of its closest CF until a leaf node is reached. The observation is either merged with its closest leaf-CF or used to create a new leaf-CF. For reclustering, all leaf-CFs can be used as input to a traditional algorithm such as k-means or hierarchical clustering. Improved BIRCH BIB011 ] is an extension which uses different distance thresholds per CF which are increased based on entries close to the radius boundary. Similarly, A-BIRCH BIB013 estimates the threshold parameters by using the Gap Statistic ] on a sample of the stream. ScaleKM ] is an incremental algorithm to cluster large databases which uses the concept of CFs. The algorithm fills a buffer with initial points and initializes k clusters as with standard k-means. The algorithm then decides for every point whether to discard, summarize or retain it. First, based on a distance threshold to the cluster centers and by creating a worst-case perturbation of the cluster centers, the algorithm identifies points that are unlikely to ever change their cluster assignments. These points are summarized in a CF per cluster and then discarded.
Second, the remaining points are used to identify a larger number of micro-clusters by applying k-means and merging the resulting clusters using agglomerative hierarchical clustering. Each cluster is again summarized using a CF. All remaining points are kept as individual points. The freed space in the buffer is then filled with new points to repeat the process. Single-pass k-means [Farnstrom et al., 2000] is a simplified version of ScaleKM where the algorithm discards all data points with every iteration and only the k CFs are maintained. |
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Recently, the continuously arriving and evolving data stream has become a common phenomenon in many fields, such as sensor networks, web click stream and internet traffic flow. One of the most important mining tasks is clustering. Clustering has attracted extensive research by both the community of machine learning and data mining. Many stream clustering methods have been proposed. These methods have proven to be efficient on specific problems. However, most of these methods are on continuous clustering and few of them are about to solve the heterogeneous clustering problems. In this paper, we propose a novel approach based on the CluStream framework for clustering data stream with heterogeneous features. The centroid of continuous attributes and the histogram of the discrete attributes are used to represent the Micro clusters, and k-prototype clustering algorithm is used to create the Micro clusters and Macro clusters. Experimental results on both synthetic and real data sets show its efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Mining data streams poses great challenges due to the limited memory availability and real-time query response requirement. Clustering an evolving data stream is especially interesting because it captures not only the changing distribution of clusters but also the evolving behaviors of individual clusters. In this paper, we present a novel method for tracking the evolution of clusters over sliding windows. In our SWClustering algorithm, we combine the exponential histogram with the temporal cluster features, propose a novel data structure, the Exponential Histogram of Cluster Features (EHCF). 
The exponential histogram is used to handle the in-cluster evolution, and the temporal cluster features represent the change of the cluster distribution. Our approach has several advantages over existing methods: (1) the quality of the clusters is improved because the EHCF captures the distribution of recent records precisely; (2) compared with previous methods, the mechanism employed to adaptively maintain the in-cluster synopsis can track the cluster evolution better, while consuming much less memory; (3) the EHCF provides a flexible framework for analyzing the cluster evolution and tracking a specific cluster efficiently without interfering with other clusters, thus reducing the consumption of computing resources for data stream clustering. Both the theoretical analysis and extensive experiments show the effectiveness and efficiency of the proposed method. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Extended Clustering Feature <s> Data stream clustering is an important task in data stream mining. In this paper, we propose SDStream, a new method for performing density-based data streams clustering over sliding windows. SDStream adopts CluStream clustering framework. In the online component, the potential core-micro-cluster and outlier micro-cluster structures are introduced to maintain the potential clusters and outliers. They are stored in the form of Exponential Histogram of Cluster Feature (EHCF) in main memory and are maintained by the maintenance of EHCFs. Outdated micro-clusters which need to be deleted are found by the value of t in Temporal Cluster Feature (TCF). In the offline component, the final clusters of arbitrary shape are generated according to all the potential core-micro-clusters maintained online by DBSCAN algorithm. 
Experimental results show that SDStream, which can generate clusters of arbitrary shape, has a much higher clustering quality than CluStream, which generates spherical clusters. <s> BIB003 | CluStream ] extends the CF from BIRCH to allow clustering over different time horizons rather than the entire data stream. The extended CF is defined as (LS, SS, LS^(t), SS^(t), n), where LS^(t) and SS^(t) are the linear and squared sums of all timestamps of a cluster. The online algorithm is initialized by collecting a chunk of data and using the k-means algorithm to create q clusters. When a new data point arrives, it is absorbed by its closest micro-cluster if it lies within an adaptive radius threshold. Otherwise, it is used to create a new cluster. In order to keep the number of micro-clusters constant, outdated clusters are removed based on a threshold on their average timestamp. If this is not possible, the two closest micro-clusters are merged. To support different time horizons, the algorithm regularly stores snapshots of the current CFs following a pyramidal scheme. While some snapshots are regularly updated, others are less frequently updated to maintain information about historic data. A desired portion of the stream can be approximated by subtracting a stored snapshot of previous CFs from the current CFs. The extracted micro-clusters are then used to run a variant of k-means to generate the macro-clusters. HCluStream BIB001 extends CluStream for categorical data by storing the frequency of attribute-levels for all categorical features. Based on this, it defines a separate categorical distance measure which is combined with the traditional distance measure for continuous attributes. SWClustering BIB002 uses the extended CF and pyramidal time window from CluStream. The algorithm maintains CFs in an Exponential Histogram of Cluster Features (EHCF) which stores data in different levels of granularity, depending on their recency.
While the most recent observation is always stored individually, older observations are grouped and summarized. In particular, this step is organized in granularity levels. Once more than 1/ε + 1 CFs of a granularity level exist, the next CF contains twice as many entries (cf. Figure 7). A new observation is either inserted into its closest CF or used to initialize a new one based on a radius threshold, similar to BIRCH. If the initialization creates too many individual CFs, the oldest two individual CFs are merged and this process cascades down the different granularity levels. An old CF is removed if its timestamp is older than the last N observed timestamps. To generate the final clustering, all CFs are used for reclustering, similar to BIRCH. SDStream BIB003 ] combines the EHCF from SWClustering to represent the potential core and outlier micro-clusters from DenStream. The algorithm also enforces an upper limit on the number of micro-clusters by either merging the two most similar micro-clusters or deleting outlier micro-clusters. The offline component applies DBSCAN to the centers of the potential core micro-clusters.

Figure 7 Granularity levels in an EHCF with ε = 1. Recent observations are stored individually, whereas older data points are iteratively summarized
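Since every component of the extended CF is a plain sum, stored snapshots can be subtracted component-wise; this is what lets CluStream approximate an arbitrary time horizon. A hedged sketch of that idea (the dictionary layout and function name are illustrative, not CluStream's actual data structures):

```python
# Extended CF of CluStream: (LS, SS, LSt, SSt, n), where LSt and SSt are the
# linear and squared sums of the timestamps. Subtracting a snapshot taken at
# time t from the current CF approximates the data seen after t.

def subtract_cf(current, snapshot):
    """Approximate the micro-cluster for the horizon between snapshot and now."""
    return {
        "n":   current["n"] - snapshot["n"],
        "LS":  [a - b for a, b in zip(current["LS"], snapshot["LS"])],
        "SS":  current["SS"] - snapshot["SS"],
        "LSt": current["LSt"] - snapshot["LSt"],
        "SSt": current["SSt"] - snapshot["SSt"],
    }

# Illustrative numbers: a micro-cluster now vs. a snapshot stored earlier.
now = {"n": 10, "LS": [50.0, 20.0], "SS": 340.0, "LSt": 450.0, "SSt": 28500.0}
old = {"n": 6,  "LS": [18.0, 6.0],  "SS": 90.0,  "LSt": 120.0, "SSt": 4000.0}
recent = subtract_cf(now, old)  # summary of the 4 points that arrived in between
```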
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases raises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLARANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Data streams have recently attracted attention for their applicability to numerous domains including credit fraud detection, network intrusion detection, and click streams. Stream clustering is a technique that performs cluster analysis of data streams that is able to monitor the results in real time. A data stream is a continuously generated sequence of data for which the characteristics of the data evolve over time. A good stream clustering algorithm should recognize such evolution and yield a cluster model that conforms to the current data.
In this paper, we propose a new technique for stream clustering which supports five evolutions that are appearance, disappearance, self-evolution, merge and split. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> For mining new pattern from evolving data streams, most algorithms are inherited from DenStream framework which is realized via a sliding window. So at the early stage of a pattern emerges, its knowledge points can be easily mistaken as outliers and dropped. In most cases, these points can be ignored, but in some special applications which need to quickly and precisely master the emergence rule of some patterns, these points must play their rules. Based on DenStream, this paper proposes a three-step clustering algorithm, rDenStream, which presents the concept of outlier retrospect. In rDenStream clustering, dropped micro-clusters are stored on outside memory temporarily, and will be given new chance to attend clustering to improve the clustering accuracy. Experiments modeled the arrival of data stream in Poisson process, and the results over standard data set showed its advantage over other methods in the early phase of new pattern discovery. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Time-Faded Clustering Feature <s> Data stream clustering is an importance issue in data stream mining. In most of the existing algorithms, only the continuous features are used for clustering. In this paper, we introduce an algorithm HDenStream for clustering data stream with heterogeneous features. The HDenstream is also a density-based algorithm, so it is capable enough to cluster arbitrary shapes and handle outliers. Theoretic analysis and experimental results show that HDenStream is effective and efficient. <s> BIB004 | DenStream ] presents a temporal extension of the CFs from BIRCH. 
It maintains two types of clusters: Potential core micro-clusters are stable structures that are denoted using a time-faded CF (LS^(ω), SS^(ω), n^(ω)). The superscript (ω) denotes that each entry of the CF is decayed over time using a decay function ω(∆t) = β^(−λ∆t). In addition, their weight n^(ω) is required to be greater than a threshold value. Outlier micro-clusters are unstable structures whose weight is less than the threshold and they additionally maintain their creation time. At first, DBSCAN BIB001 ] is used to initialize a set of potential core micro-clusters. Similar to BIRCH, a new observation is assigned to its closest potential core micro-cluster if the addition does not increase the radius beyond a threshold. If it does, the same attempt is made for the closest outlier-cluster and the outlier-cluster is promoted to a potential core if it satisfies the weight threshold. If both cannot absorb the point, a new outlier-cluster is initialized. At regular intervals, the weight of all micro-clusters is evaluated. Potential core micro-clusters that no longer have enough weight are degraded to outlier micro-clusters and outlier micro-clusters that decayed below a threshold based on their creation time are removed. Macro-clusters are generated by applying a variant of DBSCAN BIB001 ] to the potential core micro-clusters. C-DenStream ] is an extension of DenStream which allows domain knowledge in the form of instance-level constraints to be included in the clustering process. Instance-level constraints describe observations that must or cannot belong to the same cluster. Another extension is rDenStream BIB003 . Instead of discarding outlier micro-clusters which cannot be converted into potential core micro-clusters, the algorithm temporarily stores them away in an outlier buffer. After the offline component, the algorithm attempts to relearn the data points that have been cached in the buffer in order to refine the clustering.
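The time-faded CF can be maintained lazily: because ω is multiplicative over elapsed time, components only need to be decayed when the cluster is next touched. A rough sketch under that assumption (names and default parameters are illustrative, not from the DenStream reference implementation):

```python
class FadedCF:
    """Time-faded CF: all components decay by omega(dt) = beta ** (-lam * dt)."""
    def __init__(self, dim, beta=2.0, lam=0.25):
        self.beta, self.lam = beta, lam
        self.n = 0.0                 # faded weight
        self.ls = [0.0] * dim        # faded linear sum
        self.ss = 0.0                # faded sum of squares
        self.last_update = None

    def _fade(self, t):
        # decay all components by the time elapsed since the last update
        if self.last_update is not None:
            w = self.beta ** (-self.lam * (t - self.last_update))
            self.n *= w
            self.ls = [v * w for v in self.ls]
            self.ss *= w
        self.last_update = t

    def insert(self, x, t):
        self._fade(t)
        self.n += 1.0
        self.ls = [a + b for a, b in zip(self.ls, x)]
        self.ss += sum(v * v for v in x)
```

Whether a structure still qualifies as a potential core micro-cluster can then be checked by comparing the faded weight `cf.n` against the weight threshold.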
HDenStream BIB004 ] combines DenStream with the categorical distance measure of HCluStream to make it applicable to categorical features. E-Stream BIB002 uses the time-faded CF from DenStream in combination with a histogram which bins the data points. New observations are either added to their closest cluster or used to initialize a new one. Existing clusters are split if one of the dimensions shows a significant valley in their histogram. When a cluster is split along a dimension, the other dimensions are weighted by the size of the split. Additionally, clusters can be merged if they move into close proximity. |
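The valley-based split criterion can be illustrated as a scan for a bin that is significantly lower than the peaks on both of its sides; the fixed ratio below is a stand-in for E-Stream's actual statistical test:

```python
def find_valley(hist, ratio=0.5):
    """Return the index of a bin that is a significant valley between two
    higher bins, i.e. a candidate split point; None if no such bin exists."""
    for i in range(1, len(hist) - 1):
        left_peak = max(hist[:i])
        right_peak = max(hist[i + 1:])
        # a valley must be clearly lower than the smaller of the two peaks
        if hist[i] < ratio * min(left_peak, right_peak):
            return i
    return None
```

A cluster whose histogram along some dimension yields a valley index would then be split at that bin boundary.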
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering streaming data requires algorithms that are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. 
For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter-free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Additionally we present solutions to handle very fast streams through aggregation mechanisms and propose novel descent strategies that improve the clustering result on slower streams as long as time permits. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Evolution-based stream clustering method supports the monitoring and the change detection of clustering structures. E-Stream is an evolution-based stream clustering method that supports different types of clustering structure evolution which are appearance, disappearance, self-evolution, merge and split. This paper presents HUE-Stream which extends E-Stream in order to support uncertainty in heterogeneous data. A distance function, cluster representation and histogram management are introduced for the different types of clustering structure evolution. We evaluate effectiveness of HUE-Stream on real-world dataset KDDCup 1999 Network Intruision Detection. Experimental results show that HUE-Stream gives better cluster quality compared with UMicro. 
<s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> In stream data mining, stream clustering algorithms provide summaries of the relevant data objects that arrived in the stream. The model size of the clustering, i.e. the granularity, is usually determined by the speed (data per time) of the data stream. For varying streams, e.g. daytime or seasonal changes in the amount of data, most algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. However, the method exhibits several drawbacks, as no noise detection is performed, since every point is treated equally, and new concepts can only emerge within existing ones. In this paper we propose the LiarTree algorithm, which is capable of anytime clustering and at the same time robust against noise and novelty to deal with arbitrary data streams. <s> BIB004 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering of streaming sensor data aims at providing online summaries of the observed stream. This task is mostly done under limited processing and storage resources. This makes the sensed stream speed (data per time) a sensitive restriction when designing stream clustering algorithms. Additionally, the varying speed of the stream is a natural characteristic of sensor data, e.g. changing the sampling rate upon detecting an event or for a certain time. In such cases, most clustering algorithms have to heavily restrict their model size such that they can handle the minimal time allowance. Recently the first anytime stream clustering algorithm has been proposed that flexibly uses all available time and dynamically adapts its model size. 
However, the method was not designed to precisely cluster sensor data which are usually noisy and extremely evolving. In this paper we detail the LiarTree algorithm that provides precise stream summaries and effectively handles noise, drift and novelty. We prove that the runtime of the LiarTree is logarithmic in the size of the maintained model as opposed to a linear time complexity often observed in previous approaches. We demonstrate in an extensive experimental evaluation using synthetic and real sensor datasets that the LiarTree outperforms competing approaches in terms of the quality of the resulting summaries and exposes only a logarithmic time complexity. <s> BIB005 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> E-Stream <s> Clustering evolving data streams is important to be performed in a limited time with a reasonable quality. The existing micro clustering based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on a two-phase clustering. The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease time complexity of the clustering while maintaining the cluster quality. A pruning strategy is also used to filter out real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method.
<s> BIB006 | HUE-Stream BIB003 is an extension of E-Stream which also supports categorical data and can handle uncertain data streams. To model uncertainty, each observation is assumed to follow a probability distribution. In this case, the vectors of linear and squared sums become sums of expectations, faded over time. ClusTree BIB001 BIB002 uses the time-faded CF and applies it to the tree structure of BIRCH. Additionally, it can handle data streams where entries arrive faster than they can be processed. A new entry descends into its closest leaf where it is inserted as a new CF. Whenever a node is full, it is split and its entries are combined into two groups such that the intra-cluster distance is minimized. However, if a new observation arrives before a node could be split, the new entry is merged with its closest CF instead. If a new observation arrives while an entry descends the tree, that entry is temporarily stored in a buffer at its current location. It remains there until another entry descends into the same branch and is then carried further down the tree as a 'hitchhiker'. Again, the leaves can be used as input to a traditional algorithm to generate the macro-clusters. LiarTree BIB004 BIB005 is an extension of ClusTree with better noise and novelty handling. It does so by adding a time-weighted CF to each node of the tree which serves as a buffer for noise. Data points are considered noise with respect to a node based on a threshold on their distance to the node's mean, relative to the node's standard deviation. The noise buffer is promoted to a regular cluster when its density is comparable to other CFs in the node. FlockStream ] employs a flocking behavior inspired by nature to identify emerging flocks and swarms of objects. Similar to DenStream, the algorithm distinguishes between potential core and outlier micro-clusters and uses a time-faded CF.
It projects a batch of data onto a two-dimensional grid where each data point is represented by a basic agent. Each agent then makes movement decisions solely based on other agents in close proximity. The movement of agents is similar to the behavior of a flock of birds in flight: (1) Agents steer in the same direction as their neighbors; (2) Agents steer towards the location of their neighbors; (3) Agents avoid collisions with neighbors. When agents meet, they can be merged depending on a distance or radius threshold. After a number of flocking steps, the next batch of data is used to fill the grid with new agents in order to repeat the process. LeaDen-Stream BIB006 (Leader Density-based clustering algorithm over evolving data Stream) can choose multiple representatives per cluster to increase accuracy when clusters are not uniformly distributed. To do so, the algorithm maintains two different granularity levels. First, Micro Leader Clusters (MLC) correspond to the concept of traditional micro-clusters. However, they maintain a list of finer-grained information in the form of Mini Micro Leader Clusters (MMLC). These mini micro-clusters contain more detailed information and are represented by a time-faded CF. For new observations, the algorithm finds the closest MLC using the Mahalanobis distance. If the distance is within a threshold, the closest MMLC within the MLC is identified. If it is also within a distance threshold, the point is added to the MMLC. If one of the thresholds is violated, a new MLC or MMLC is created, respectively. For reclustering, all selected leaders are used to run DBSCAN.
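FlockStream's three movement rules correspond to the classic alignment, cohesion and separation terms of boids-style simulations. A simplified two-dimensional sketch of one agent's velocity update (weights and names are illustrative; the algorithm's actual update differs in detail):

```python
def flock_step(pos, vel, neighbors, w_align=0.3, w_cohere=0.2, w_avoid=0.4):
    """One movement step of an agent, given (position, velocity) pairs of its
    neighbors on the 2D grid. Returns the agent's new velocity."""
    if not neighbors:
        return vel
    k = len(neighbors)
    # (1) alignment: steer towards the neighbors' average velocity
    align = [sum(v[i] for _, v in neighbors) / k - vel[i] for i in range(2)]
    # (2) cohesion: steer towards the neighbors' average position
    cohere = [sum(p[i] for p, _ in neighbors) / k - pos[i] for i in range(2)]
    # (3) separation: steer away from the neighbors' positions
    avoid = [sum(pos[i] - p[i] for p, _ in neighbors) / k for i in range(2)]
    return [vel[i] + w_align * align[i] + w_cohere * cohere[i] + w_avoid * avoid[i]
            for i in range(2)]
```

After repeated steps, agents representing similar data points end up moving together, which is what FlockStream exploits to merge them into micro-clusters.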
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms

Medoids

An alternative to storing Clustering Features is to maintain medoids of clusters, i.e., representatives. RepStream BIB002 BIB003, for example, incrementally updates a graph of nearest neighbors to identify suitable cluster representatives. New observations are inserted as a new node in the graph and edges are inserted between the node and its nearest neighbors. The point is assigned to an existing cluster if it is mutually connected to a representative of that cluster. Otherwise it is used as a representative to initialize a new cluster. Representatives are also inserted in a separate representative graph which maintains the nearest neighbors only between representatives. To split and merge existing clusters, the distance between them is compared to the average distance to their nearest neighbors in the representative graph. In order to reduce space requirements, non-representative points are discarded using a sliding time window.
In addition, if a new representative is found but space limitations prevent it from being added to the representative graph, it can replace an existing representative depending on its age and number of nearest neighbors.

streamKM++ is a variant of k-means++ BIB001 which computes a small weighted sample that represents the data, called a coreset. The coreset is constructed in a binary tree by using a divisive clustering approach. The tree is initialized by selecting a random representative point from the data. To split an existing cluster, the algorithm starts at the root node and iteratively chooses a child node relative to their weights until a leaf is reached. From the selected leaf, a data point is chosen as a second center based on its distance to the initial center of the cluster. Finally, the cluster is split by assigning each data point to the closest of the two centers. To handle data streams, the algorithm uses a similar approach as SWClustering (see Section 4.2). First, new observations are inserted into a coreset tree. Once the tree is full, all its points are moved to the next tree. If the next tree already contains points, the coreset of the points in both trees is computed. This cascades further until an empty tree is found. For reclustering, the union of all points is used to compute a coreset and the representatives are used to apply the k-means++ algorithm BIB001.

BICO BIB004 combines the data structure of BIRCH (see Section 4.1) with the coreset of streamKM++. BICO maintains the coreset in a tree structure where each node represents one CF. The algorithm is initialized by using the first data point in the stream to open a CF on the first level of the empty tree, and the data point is kept as a representative for the CF. For every consecutive point, the algorithm attempts to insert it into an existing CF, starting on the first level.
The insertion fails if the distance of the new point to the representative of its closest CF is larger than a threshold. In this case, a new CF is opened on the same level, using the new point as the reference point. Additionally, the insertion fails if the cluster's deviation from the mean would grow beyond a threshold. In this case, the algorithm attempts to insert the point into the children of the closest CF. The final clustering is generated by applying k-means++ to the representatives of the leaves.
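The cascade of coreset trees used by streamKM++ can be sketched with simple buckets; `coreset` below is a plain-subsampling stand-in for the actual coreset construction, and the class and method names are our own.

```python
# Sketch of a streamKM++-style bucket cascade: when the first bucket fills,
# its contents cascade upwards, merging occupied buckets via a coreset step.
# `coreset` is a simplified stand-in (subsampling) for the real construction.
import random

def coreset(points, m):
    """Placeholder for the real coreset computation: reduce to m points."""
    return random.sample(points, m) if len(points) > m else list(points)

class CascadingBuckets:
    def __init__(self, m):
        self.m = m          # bucket capacity / coreset size
        self.buckets = []   # buckets[0] receives the raw stream

    def insert(self, point):
        if not self.buckets:
            self.buckets.append([])
        self.buckets[0].append(point)
        if len(self.buckets[0]) < self.m:
            return
        # bucket 0 is full: cascade its contents upwards
        carry, self.buckets[0] = self.buckets[0], []
        i = 1
        while True:
            if i == len(self.buckets):
                self.buckets.append([])
            if not self.buckets[i]:
                self.buckets[i] = carry   # empty bucket found, stop here
                break
            # occupied: merge both point sets into one coreset, keep cascading
            carry = coreset(self.buckets[i] + carry, self.m)
            self.buckets[i] = []
            i += 1

    def all_points(self):
        return [p for b in self.buckets for p in b]
```

For reclustering, the union of all bucket contents would be reduced to one final coreset and handed to k-means++.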
Centroids

A simpler approach to maintaining clusters is to store their centroids directly. However, this makes it generally more difficult to update clusters over time. As an example, STREAM BIB001 BIB002 only stores the centroids of k clusters. Its core idea is to treat the k-Median clustering problem as a facility planning problem. To do so, distances from data points to their closest cluster have associated costs. This reduces the clustering task to a cost minimization problem in order to find the number and position of facilities that yield the lowest costs. In order to generate a certain number of clusters, the algorithm adjusts the facility costs in each iteration by using a binary search for the costs that yield the desired number of centers k. To deal with streaming data, the algorithm processes the stream in chunks and solves the k-Median problem for each chunk individually. Assuming n different chunks, a total of nk clusters are created. To generate the final clustering, or if available storage is exceeded, these intermediate clusters are again clustered using the same approach.

OLINDDA BIB003 (Online Novelty and Drift Detection Algorithm) relies on cluster centroids to identify new and drifting clusters in a data stream. Initially, k-means is used to generate a set of clusters. For each cluster, the distance from its center to its furthest observation is considered a boundary. Points that do not fall into the boundary of any cluster are considered an unknown concept and kept in a buffer. This buffer is regularly scanned for emerging structures using k-means. If an emerging cluster is of similar variance to the existing clusters, it is considered valid. To distinguish a new cluster from a drifting cluster, the algorithm assumes that drifts occur close to the existing clusters whereas new clusters form further away from the existing model.
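The boundary test described for OLINDDA can be sketched as follows; the class and function names are our own, and the novelty-detection scan over the buffer is omitted.

```python
# Sketch of an OLINDDA-style boundary check: a cluster is kept as a centre
# plus the distance to its furthest member; points outside every boundary
# are buffered as a potentially novel concept. Names are illustrative.
import math

class BoundedCluster:
    def __init__(self, points):
        d = len(points[0])
        self.center = tuple(sum(p[i] for p in points) / len(points)
                            for i in range(d))
        # boundary = distance from the centre to the furthest observation
        self.radius = max(math.dist(self.center, p) for p in points)

    def contains(self, p):
        return math.dist(self.center, p) <= self.radius

def route(point, clusters, buffer):
    """Assign a point to a known cluster, or buffer it as unknown."""
    for c in clusters:
        if c.contains(point):
            return "known"
    buffer.append(point)
    return "unknown"
```

In the full algorithm, the buffer would periodically be clustered with k-means and emerging clusters validated by comparing their variance against the existing model.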
Competitive Learning

More recently, algorithms also use competitive learning strategies in order to adapt the centroids of clusters over time. This is inspired by Self-Organizing Maps (SOMs) BIB002 where clusters compete to represent an observation, typically by moving cluster centers towards new observations based on their proximity.

SOStream BIB003 (Self Organizing density based clustering over data Stream) combines DBSCAN BIB001 with Self-Organizing Maps (SOMs) BIB002 for stream clustering. It stores a time-faded weight, radius and center for each cluster directly. A new observation is merged into its closest cluster if it lies within its radius. Following the idea of competitive learning, the algorithm also moves the k-nearest neighbors of the absorbing cluster in its direction. If clusters move into close proximity during this step, they are also merged.

DBSTREAM BIB005 (Density-based Stream Clustering) is based on SOStream (see Section 4.6) but uses the shared density between two micro-clusters in order to decide whether micro-clusters belong to the same macro-cluster. A new observation x is merged into micro-clusters if it falls within the radius from their center. Subsequently, the centers of all clusters that absorb the observation are updated by moving the center towards x. If no cluster absorbs the point, it is used to initialize a new micro-cluster. Additionally, the algorithm maintains the shared density between two micro-clusters as the density of points in the intersection of their radii, relative to the size of the intersection area. In regular intervals, it removes micro-clusters and shared densities whose weight decayed below a respective threshold.
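This insertion rule can be sketched as follows; the update is simplified (the actual algorithm also applies fading and a neighbourhood function for the centre update), and all names and parameter values are ours.

```python
# Sketch of a DBSTREAM-style insertion: every micro-cluster whose radius
# covers the point absorbs it, and its centre is nudged towards the point
# (competitive-learning update). Fading and shared density are omitted.
import math

def insert(point, clusters, radius=1.0, step=0.2):
    """clusters: list of dicts with 'center' and 'weight'."""
    absorbed = False
    for c in clusters:
        if math.dist(c["center"], point) <= radius:
            absorbed = True
            c["weight"] += 1
            # move the centre a fraction `step` towards the new observation
            c["center"] = tuple(ci + step * (pi - ci)
                                for ci, pi in zip(c["center"], point))
    if not absorbed:
        # no cluster covers the point: open a new micro-cluster
        clusters.append({"center": tuple(point), "weight": 1})
    return clusters
```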
In the offline component, micro-clusters with high shared density are merged into the same cluster.

evoStream (Evolutionary Stream Clustering) makes use of an evolutionary algorithm in order to bridge the gap between the online and offline component. Evolutionary algorithms are inspired by natural evolution, where promising solutions are combined and slightly modified to create offspring that can yield an improved solution. By iteratively selecting the best solutions, an evolutionary pressure is created which improves the result over time. evoStream uses this concept in order to iteratively improve the macro-clusters through recombinations and small variations. Since macro-clusters are created incrementally, the evolutionary steps can be performed while the online component waits for new observations, i.e., when the algorithm would usually idle. As a result, the computational overhead of the offline component is removed and clusters are available at any time. The online component is similar to DBSTREAM but does not maintain a shared density since it is not necessary for reclustering.

G-Stream BIB004 (Growing Neural Gas over Data Streams) utilizes the concept of Neural Gas for data streams. The algorithm maintains a graph where each node represents a cluster. Nodes that share similar data points are connected by edges. Each edge has an associated age and each node maintains an error term denoting the cluster's deviation. For a new observation x, the two nearest clusters C1 and C2 are identified. If x does not fit into the radius of its closest cluster C1, it is temporarily stored away and later re-inserted. Otherwise, it is inserted into C1. Additionally, the center of C1 and all its connected neighbors are moved in the direction of x. Next, the ages of all outgoing edges of C1 are incremented and an edge from C1 to C2 is either inserted or its age is reset to zero. The age of edges serves a similar purpose as a fading function.
Edges that have grown too old are removed, as they contain outdated information. In regular intervals, the algorithm inserts a new node between the node with the largest deviation and its neighbor with the largest deviation.
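The edge-aging bookkeeping just described can be sketched as follows; the graph representation and function name are ours, and the node-level error terms are omitted.

```python
# Sketch of G-Stream-style edge aging: increment the ages of the winner's
# edges, insert or reset the winner/runner-up edge, and drop edges that
# have grown past a maximum age. Representation and names are illustrative.
def update_edges(edges, winner, runner_up, max_age=3):
    """edges: dict mapping frozenset({node_a, node_b}) -> age."""
    # age all edges incident to the winning node
    for key in list(edges):
        if winner in key:
            edges[key] += 1
    # insert the winner/runner-up edge, or reset its age to zero
    edges[frozenset({winner, runner_up})] = 0
    # edges that are too old carry outdated information: remove them
    for key, age in list(edges.items()):
        if age > max_age:
            del edges[key]
    return edges
```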
Summary

Distance-based algorithms are by far the most common and popular approaches in stream clustering. They make it possible to create accurate summaries of the entire stream with rather simple insertion rules.
Since it is infeasible to store all observations within the clusters, distance-based algorithms usually summarize the observations associated with a cluster. A popular example is the Clustering Feature, which stores only the information required to calculate the location and radius of a cluster. Alternatively, some algorithms maintain medoids, i.e., representatives of clusters, or store the cluster centroids directly. In order to update cluster centroids over time, some algorithms also make use of competitive learning strategies, similar to Self-Organizing Maps (SOMs) BIB001. Generally, distance-based algorithms are computationally inexpensive and will suit the majority of stream clustering scenarios well. However, they often rely on many parameters such as distance and weight thresholds, radii or cleanup intervals. This makes it more difficult to apply them in practice and requires either expert knowledge or extensive parameter configuration. Another common issue is that distance-based algorithms can often only find spherical clusters. However, this is usually due to the choice of offline component, which can easily be replaced by approaches that can detect arbitrarily shaped clusters, such as DBSCAN or hierarchical clustering with single linkage. While the popular algorithms BIRCH, CluStream and DenStream face many problems, either due to a lack of fading or complicated maintenance steps, we find newer algorithms such as DBSTREAM particularly interesting due to their simpler design.

Grid-based approaches are a popular alternative to distance-based algorithms due to their simple design and support for arbitrarily shaped clusters. While many distance-based algorithms are only able to detect spherical clusters, almost all grid-based algorithms can identify clusters of arbitrary shape. This is mostly because the grid structure allows an easy design of an offline component where dense cells with common faces form clusters.
The majority of grid-based algorithms partition the data space once into cells of fixed size. However, some algorithms do this recursively to create a more adaptive grid. Less common are algorithms where the size of cells is determined dynamically, mostly because of the increased computational costs. Lastly, some algorithms employ a hybrid strategy where a grid is used to establish distance-based approaches. Generally, the grid structure is less efficient than distance-based approaches due to its inflexible structure. For this reason, grid-based approaches often have higher memory requirements and need more micro-clusters to achieve the same quality as distance-based approaches. Empirical evidence has also shown this to be true for the most popular grid-based algorithm, D-Stream.

Model-based stream clustering algorithms summarize the stream as a statistical model, typically a mixture of distributions fitted with the Expectation Maximization (EM) algorithm. Table 4 gives an overview of 6 model-based algorithms.

Table 4: Overview of model-based stream clustering algorithms

    Algorithm           Year   Window model
    COBWEB BIB002       1987   landmark
    ICFR BIB003         2004   damped
    WStream BIB004      2006   damped
    CluDistream BIB005  2007   landmark
    SWEM                2009   sliding
    SVStream BIB006     2013   damped

This class of algorithms is highly diverse and few interdependencies exist between the presented algorithms. CluDistream BIB005 uses the EM algorithm to process distributed data streams. At each location, it maintains a number of Gaussian mixture distributions and a coordinator node is used to combine the distributions. For each location, the stream is processed in chunks and the first chunk is used to initialize a new clustering using EM. For subsequent chunks, the algorithm checks whether the current models can represent the chunk sufficiently well. This is done by calculating the difference between the average log-likelihood of the existing model and the average log-likelihood of the chunk under the existing model. If the difference is less than a threshold, the weight of the model is incremented.
Else, the current model is stored and a new model is initialized by applying EM to the current chunk. Whenever the weight of a model is updated or a new model is initialized, the coordinator receives the update and incorporates the new information into a global model by merging or splitting the Gaussian distributions.

SWEM (Sliding Window with Expectation Maximization) applies EM to chunks of data. Starting with random initial parameters, a set of m distributions is calculated for the first chunk and points are assigned to their most likely cluster. Each cluster is then summarized using a CF and k macro-clusters are generated by applying EM again. For a new chunk, the algorithm sets the initial values to the converged values of the previous chunk and incrementally applies EM to generate m new distributions. If a cluster grows too large or too small during this phase, the corresponding distributions can be split or merged. Finally, the m new clusters are summarized in CFs and used together with the existing k clusters to apply EM again.

COBWEB BIB002 maintains a classification tree where each node describes a cluster. The tree is built incrementally by descending a new entry x from the root to a leaf. On each level, the algorithm makes the one of four clustering decisions that yields the highest clustering quality: (1) insert x into the most fitting child; (2) create a new cluster for x; (3) combine the two nodes that can best absorb x and add the existing nodes as children of the new node; (4) split the node that can best absorb x and move its children up one level. The quality of each decision is evaluated using a measure called Category Utility (CU), which defines a trade-off between intra-class similarity and inter-class distance.

ICFR BIB003 (Incremental Clustering using F-value by Regression analysis) uses concepts from linear regression in order to cluster data streams. The algorithm assigns points to existing clusters based on their cosine similarity.
To merge clusters, the algorithm finds the two closest clusters based on the Mahalanobis distance. If the merged cluster yields a greater F-value than the sum of the individual F-values, the clusters are merged. The F-value is a measure of model validity in linear regression. If the clusters cannot be merged, the next closest pair is evaluated until the closest pair exceeds a distance threshold.

WStream BIB004 uses multivariate kernel density estimates to maintain a number of rectangular windows in the data space. The idea is to use local maxima of a density estimate as cluster centers and local minima as cluster boundaries. WStream transfers this approach to data streams. New data points are either assigned to an existing window, whose center is then moved towards the new point, or used to initialize a new window of default size. Windows can enlarge or contract depending on the ratio of points close to their center and close to their border.

SVStream BIB006 (Support Vector based Stream Clustering) is based on Support Vector Clustering (SVC). SVC transforms the data into a higher-dimensional space and identifies the smallest sphere that encloses most points. When mapping the sphere back to the input space, it forms a number of contour lines that represent clusters. SVStream iteratively maintains a number of spheres. The stream is processed in chunks and the first chunk is used to run SVC. For each subsequent chunk, the algorithm evaluates what portion of the chunk does not fall into the radius of the existing spheres. If too many points do not fit the current spheres, these values are used to initialize a new sphere. The remaining values are used to update the existing spheres.

Model-based stream clustering algorithms are far less common than distance-based and grid-based approaches. Typical strategies try to find a mixture of distributions that fits the data stream, e.g., CluDistream or SWEM.
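The chunk-fit test described above for CluDistream can be sketched for a single one-dimensional Gaussian (the actual algorithm uses multivariate mixtures); the function names and threshold are ours.

```python
# Sketch of a CluDistream-style model-reuse test: compare the average
# log-likelihood of a new chunk under the current model with a reference
# value; a small gap means the existing model still fits the chunk.
# Single 1-D Gaussian for brevity; the real algorithm uses mixtures.
import math

def avg_loglik(data, mu, sigma):
    """Average Gaussian log-likelihood of the data under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data) / len(data)

def model_still_fits(chunk, mu, sigma, ref_loglik, threshold=1.0):
    """Keep the model if the likelihood gap stays below the threshold."""
    return abs(avg_loglik(chunk, mu, sigma) - ref_loglik) < threshold
```

If the test fails, the current model would be stored and a new one initialized by running EM on the chunk.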
Unfortunately, no implementation of the model-based algorithms is readily available, which limits their usefulness in practice. In addition, they are often more computationally complex than comparable algorithms from the other categories.

Projected stream clustering algorithms serve a niche for high-dimensional data streams where it is not possible to perform prior feature selection in order to reduce the dimensionality. In general, these algorithms have added complexity associated with the selection of subspaces for each cluster. In return, they can identify clusters in very high-dimensional space and can gracefully handle the curse of dimensionality. The most influential and popular algorithm of this category has been HPStream.
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> D-Stream <s> In the field of data stream analysis,conventional methods seem not quite useful.Because neither they can adapt to the dynamic environment of data stream,nor the mining models and results can meet users' needs.A grid and density based clustering method is proposed to effectively address the problem.With this method,the mining procedure is divided into online and offline two parts and grid and density based clustering method is used to get final clusters for data stream. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> D-Stream <s> Clustering real-time stream data is an important and challenging problem. Existing algorithms such as CluStream are based on the k-means algorithm. These clustering algorithms have difficulties finding clusters of arbitrary shapes and handling outliers. Further, they require the knowledge of k and user-specified time window. To address these issues, this article proposes D-Stream, a framework for clustering stream data using a density-based approach. Our algorithm uses an online component that maps each input data record into a grid and an offline component that computes the grid density and clusters the grids based on the density. The algorithm adopts a density decaying technique to capture the dynamic changes of a data stream and a attraction-based mechanism to accurately generate cluster boundaries. Exploiting the intricate relationships among the decay factor, attraction, data density, and cluster structure, our algorithm can efficiently and effectively generate and adjust the clusters in real time. Further, a theoretically sound technique is developed to detect and remove sporadic grids mapped by outliers in order to dramatically improve the space and time efficiency of the system. 
The technique makes high-speed data stream clustering feasible without degrading the clustering quality. The experimental results show that our algorithm has superior quality and efficiency, can find clusters of arbitrary shapes, and can accurately recognize the evolving behaviors of real-time data streams. <s> BIB002 | In BIB002, the authors extended their concept with a measure of attraction that incorporates positional information of data within a grid-cell. This variant only merges neighboring cells if they share many points at the cell border. BIB001 is a small extension on how to handle points that lie exactly on the grid boundaries. For such a point, the distance to the adjacent cell centers is computed and the point is assigned to its closest cell. If the observation has the same distance to multiple cells, it is assigned to the one with the higher density. If this also does not break the tie, it is inserted into the cell that has been updated more recently. |
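The boundary tie-breaking rule above could be sketched as a single ranked sort. The tuple representation of a cell as (center, density, last update time) is a hypothetical simplification for illustration.

```python
def assign_boundary_point(point, candidate_cells):
    """Assign a point on a grid boundary following the rule above:
    closest cell center first, then higher density, then most recent
    update. `candidate_cells` is a list of (center, density, last_update)
    tuples -- an assumed representation, not the original structure."""
    def sq_dist(center):
        return sum((p - c) ** 2 for p, c in zip(point, center))
    # rank by (distance ascending, density descending, recency descending)
    ranked = sorted(
        candidate_cells,
        key=lambda cell: (sq_dist(cell[0]), -cell[1], -cell[2]),
    )
    return ranked[0]

cells = [((0.5, 0.5), 10, 3), ((1.5, 0.5), 12, 7)]
best = assign_boundary_point((1.0, 0.5), cells)
# the point is equidistant to both cells, so density breaks the tie
```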
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> Clustering is a widely used knowledge discovery technique. It helps uncovering structures in data that were not previously known. The clustering of large data sets has received a lot of attention in recent years, however, clustering is a still a challenging task since many published algorithms fail to do well in scaling with the size of the data set and the number of dimensions that describe the points, or in finding arbitrary shapes of clusters, or dealing effectively with the presence of noise. In this paper, we present a new clustering algorithm, based in self-similarity properties of the data sets. Self-similarity is the property of being invariant with respect to the scale used to look at the data set. While fractals are self-similar at every scale used to look at them, many data sets exhibit self-similarity over a range of scales. Self-similarity can be measured using the fractal dimension. The new algorithm which we call Fractal Clustering (FC) places points incrementally in the cluster for which the change in the fractal dimension after adding the point is the least. This is a very natural way of clustering points, since points in the same cluster have a great degree of self-similarity among them (and much less self-similarity with respect to points in other clusters). FC requires one scan of the data, is suspendable at will, providing the best answer possible at that point, and is incremental. We show via experiments that FC effectively deals with large data sets, high-dimensionality and noise and is capable of recognizing clusters of arbitrary shape. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> Clustering for evolving data stream demands that the algorithm should be capable of adapting the discovered clustering model to the changes in data characteristics. 
In this paper we propose an algorithm for exclusive and complete clustering of data streams. We explain the concept of completeness of a stream clustering algorithm and show that the proposed algorithm guarantees detection of cluster if one exists. The algorithm has an on-line component with constant order time complexity and hence delivers predictable performance for stream processing. The algorithm is capable of detecting outliers and change in data distribution. Clustering is done by growing dense regions in the data space, honouring recency constraint. The algorithm delivers complete description of clusters facilitating semantic interpretation. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> DD-Stream <s> In recent years, the uncertain data stream which is related in many real applications attracts more and more attention of researchers. As one aspect of uncertain character, existence-uncertainty can affect the clustering process and results significantly. The lately reported clustering algorithms are all based on K-Means algorithm with the inhere shortage. DCUStream algorithm which is density-based clustering algorithm over uncertain data stream is proposed in this paper. It can find arbitrary shaped clusters with less time cost in high dimension data stream. In the meantime, a dynamic density threshold is designed to accommodate the changing density of grids with time in data stream. The experiment results show that DCUStream algorithm can acquire more accurate clustering result and execute the clustering process more efficiently on progressing uncertain data stream. <s> BIB003 | ExCC BIB002 (Exclusive and Complete Clustering) constructs a grid where the number of cells and the grid boundaries are chosen by the user. This makes it possible to handle categorical data, where the number of cells is chosen to be equal to the number of attribute levels. Clusters are identified as neighboring dense cells.
Cells of numeric variables are considered neighbors if they share a common vertex. Cells of categorical variables employ a threshold on a similarity function between the attribute levels. To form macro-clusters, the algorithm iteratively chooses an unvisited dense cell and initializes a new cluster. Each neighboring grid-cell is then placed in the same cluster. This is repeated until all cells have been visited. DCUStream BIB003 (Density-based Clustering algorithm of Uncertain data Stream) aims to handle uncertain data streams, similar to HUE-Stream (see Section 4.3), where each observation is assumed to have an existence probability. The algorithm is initialized by collecting a batch of data and mapping it to a grid of fixed size. The density of a cell is defined as the sum of all existence probabilities faded over time. A grid-cell is considered dense when its density is above a dynamic threshold. To generate a clustering, the algorithm selects the dense-cell with the highest density and assigns all its neighboring cells to the same cluster. Neighboring sparse-cells are considered the boundary of a cluster. This is repeated for all dense cells. DENGRIS-Stream (Density Grid-based algorithm for clustering data streams over Sliding window) is a grid-based algorithm that uses a sliding window model. New observations are mapped into a fixed-size grid and the cell's densities are maintained. Densities are implicitly decayed by considering them relative to the total number of observations in the stream. In regular intervals, cells whose density decayed below a threshold or cells that are no longer inside the sliding window are removed. Macro-clusters are formed by grouping neighboring dense cells into the same cluster. Fractal Clustering BIB001 follows an unusual grid-based approach. It uses the concept of fractal dimensions as a measure of size for a set of points.
A common way to calculate the fractal dimension is by dividing the space into grid-cells of size r and counting the number N(r) of cells that are occupied by points of the data. Then, the fractal (box-counting) dimension can be calculated as:

D = \lim_{r \to 0} \frac{\log N(r)}{\log(1/r)}

Fractal Clustering is first initialized with a sample by recursively placing close points into the same cluster (similar to DBSCAN). For a new observation, the algorithm then evaluates what influence the addition of the point would have on the fractal dimension of each cluster. It then inserts the point into the cluster whose fractal dimension changes the least. However, if the change in fractal dimension is too large, the observation is considered noise instead. |
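The box-counting estimate above can be sketched as follows. The particular choice of cell sizes and the least-squares fit over several scales are illustrative assumptions, not prescribed by the algorithm.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a point set:
    count occupied grid-cells N(r) for several cell sizes r and fit
    the slope of log N(r) against log(1/r)."""
    log_n, log_inv_r = [], []
    for r in sizes:
        # map each point to its grid-cell and count distinct occupied cells
        occupied = {tuple(np.floor(p / r).astype(int)) for p in points}
        log_n.append(np.log(len(occupied)))
        log_inv_r.append(np.log(1.0 / r))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)
    return slope

# points spread along a line should yield a dimension close to 1
line = np.array([[t, t] for t in np.linspace(0, 1, 1000)])
dim = box_counting_dimension(line, sizes=[0.2, 0.1, 0.05, 0.025])
```

Fractal Clustering's insertion rule then amounts to computing this quantity for each cluster with and without the candidate point and keeping the assignment that changes it the least.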
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. Due to this reason, most algorithms for data streams sacrifice the correctness of their results for fast processing time. The processing time is greatly influenced by the amount of information that should be maintained. This paper proposes a statistical grid-based approach to clustering data elements of a data stream. Initially, the multidimensional data space of a data stream is partitioned into a set of mutually exclusive equal-size initial cells. When the support of a cell becomes high enough, the cell is dynamically divided into two mutually exclusive intermediate cells based on its distribution statistics. Three different ways of partitioning a dense cell are introduced. Eventually, a dense region of each initial cell is recursively partitioned until it becomes the smallest cell called a unit cell. A cluster of a data stream is a group of adjacent dense unit cells. In order to minimize the number of cells, a sparse intermediate or unit cell is pruned if its support becomes much less than a minimum support. Furthermore, in order to confine the usage of memory space, the size of a unit cell is dynamically minimized such that the result of clustering becomes as accurate as possible. The proposed algorithm is analyzed by a series of experiments to identify its various characteristics. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> To effectively trace the clusters of recently generated data elements in an on-line data stream, a sibling list and a cell tree are proposed in this paper. Initially, the multi-dimensional data space of a data stream is partitioned into mutually exclusive equal-sized grid-cells. 
Each grid-cell monitors the recent distribution statistics of data elements within its range. The old distribution statistics of each grid-cell are diminished by a predefined decay rate as time goes by, so that the effect of the obsolete information on the current result of clustering can be eliminated without maintaining any data element physically. Given a partitioning factor h, a dense grid-cell is partitioned into h equal-size smaller grid-cells. Such partitioning is continued until a grid-cell becomes the smallest one called a unit grid-cell. Conversely, a set of consecutive sparse grid-cells can be merged into a single grid-cell. A sibling list is a structure to manage the set of all grid-cells in a one-dimensional data space and it acts as an index for locating a specific grid-cell. Upon creating a dense unit grid-cell on a one-dimensional data space, a new sibling list for another dimension is created as a child of the grid-cell. In such a way, a cell tree is created. By repeating this process, a multi-dimensional dense unit grid-cell is identified by a path of a cell tree. Furthermore, in order to confine the usage of memory space, the size of a unit grid-cell is adaptively minimized such that the result of clustering becomes as accurate as possible at all times. The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Recursive Partitioning <s> In data stream clustering, it is desirable to have algorithms that are able to detect clusters of arbitrary shape, clusters that evolve over time, and clusters with noise. 
Existing stream data clustering algorithms are generally based on an online-offline approach: The online component captures synopsis information from the data stream (thus, overcoming real-time and memory constraints) and the offline component generates clusters using the stored synopsis. The online-offline approach affects the overall performance of stream data clustering in various ways: the ease of deriving synopsis from streaming data; the complexity of data structure for storing and managing synopsis; and the frequency at which the offline component is used to generate clusters. In this article, we propose an algorithm that (1) computes and updates synopsis information in constant time; (2) allows users to discover clusters at multiple resolutions; (3) determines the right time for users to generate clusters from the synopsis information; (4) generates clusters of higher purity than existing algorithms; and (5) determines the right threshold function for density-based clustering based on the fading model of stream data. To the best of our knowledge, no existing data stream algorithms has all of these features. Experimental results show that our algorithm is able to detect arbitrarily shaped, evolving clusters with high quality. <s> BIB003 | Stats-Grid BIB001 is an early algorithm which recursively partitions grid-cells. The algorithm begins by splitting the data into grid-cells of fixed size. Each cell maintains its density, mean and standard deviation. The algorithm then recursively partitions grid-cells until cells become sufficiently small unit cells. The aim is to find adjacent unit cells with large density which can be used to form macro-clusters. The algorithm splits a cell in two subcells whenever it has reached sufficient density. The size of the subcells is dynamically adapted based on the distribution of data within the cell. 
The authors propose three separate splitting strategies, for example choosing the dimension where the cell's standard deviation is the largest and splitting at the mean. Since the weight of cells is calculated relative to the total number of observations, outdated cells can be removed and their statistics returned to the parent cell. Cell-Tree BIB002 is an extension of Stats-Grid which also tries to find adjacent unit cells of sufficient density. In contrast to Stats-Grid, subcells are not dynamically sized based on the distribution of the cell. Instead, they are split into a pre-defined number of evenly sized subcells. The summary statistics of the subcells are initialized by distributing the statistics of the parent cell following the normal distribution. To efficiently maintain the cells, the authors propose a siblings list. The siblings list is a linear list where each node contains a number of grid-cells along one dimension as well as a link to the next node. Whenever a cell is split, the created subcells replace their parent cell in its node. To maintain a siblings list over multiple dimensions, a first-child / next-sibling tree can be used where subsequent dimensions are added as children of the list-nodes.

Figure 9: Tree structure in MR-Stream

The splitting strategy of MR-Stream BIB003 is similar but splits each dimension in half, effectively creating a tree of cells as shown in Figure 9. New observations start at the root cell and are recursively assigned to the appropriate child-cell. If a child does not exist yet, it is created until a maximum depth is reached. If the insertion causes a parent to only contain children of high density, the children are discarded since the parent node is able to represent this information already. Additionally, the tree is regularly pruned by removing leaves with insufficient weight and removing children of nodes that only contain dense or only sparse children.
To generate the macro-clusters, the user can choose a desired height of the tree. For every unclustered cell, the algorithm initializes a new macro-cluster and adds all neighboring dense cells. If the size and weight of the cluster are too low, it is considered noise. PKSStream is similar to MR-Stream but does not require a subcell on all heights of the tree. It only maintains intermediate nodes when there are more than K − 1 non-empty children. Each observation descends the tree until either a leaf is reached or the child does not exist. In the latter case, a new cell is initialized. In regular intervals, the algorithm evaluates all leaf nodes and removes those with insufficient weight. The offline component is the same as in MR-Stream for the leaves of the tree. |
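The recursive halving used by MR-Stream could be sketched as a tree over the unit hypercube. This minimal version only tracks cell weights and omits fading, pruning, and the offline component; the class layout is an assumption for illustration.

```python
class Cell:
    """A node in an MR-Stream-like tree over the unit square.
    Each level halves every dimension; children are indexed by
    which half of each dimension the point falls into."""
    def __init__(self, depth=0):
        self.depth = depth
        self.weight = 0.0
        self.children = {}

    def insert(self, point, max_depth):
        self.weight += 1
        if self.depth == max_depth:
            return
        # child key: one bit per dimension (0 = lower half, 1 = upper half)
        key = tuple(int(x >= 0.5) for x in point)
        child = self.children.setdefault(key, Cell(self.depth + 1))
        # rescale the point into the child's own [0, 1) range
        child.insert(tuple(2 * x - b for x, b in zip(point, key)), max_depth)

root = Cell()
for p in [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]:
    root.insert(p, max_depth=3)
# root.weight == 3; two points share the lower-left child
```

Pruning in MR-Stream would then walk this tree and drop leaves whose (faded) weight is too low, or collapse children that are uniformly dense or uniformly sparse.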
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Hybrid Grid-Approaches <s> Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLAR-ANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Hybrid Grid-Approaches <s> Density-based method has emerged as a worthwhile class for clustering data streams. Recently, a number of density-based algorithms have been developed for clustering data streams. However, existing density-based data stream clustering algorithms are not without problem. There is a dramatic decrease in the quality of clustering when there is a range in density of data. In this paper, a new method, called the MuDi-Stream, is developed. It is an online-offline algorithm with four main components. 
In the online phase, it keeps summary information about evolving multi-density data stream in the form of core mini-clusters. The offline phase generates the final clusters using an adapted density-based clustering algorithm. The grid-based method is used as an outlier buffer to handle both noises and multi-density data and yet is used to reduce the merging time of clustering. The algorithm is evaluated on various synthetic and real-world datasets using different quality metrics and further, scalability results are compared. The experimental results show that the proposed method in this study improves clustering quality in multi-density environments. <s> BIB002 | HDCStream (hybrid density-based clustering for data stream) first combined grid-based algorithms with the concept of distance-based algorithms. In particular, it maintains a grid where dense cells can be promoted to become micro-clusters as known from distance-based algorithms (see Section 4). Each observation in the stream is assigned to its closest micro-cluster if it lies within a radius threshold. Otherwise, it is inserted into the grid instead. Once a grid-cell has accumulated sufficient density, its points are used to initialize a new micro-cluster. Finally, the cell is no longer maintained, as its information has been transferred to the micro-cluster. In regular intervals, all micro-clusters and cells are evaluated and removed if their density decayed below a respective threshold. Whenever a clustering request arrives, the micro-clusters are considered virtual points in order to apply DBSCAN. Mudi-Stream BIB002 (Multi Density Data Stream) is an extension of HDCStream that can handle varying degrees of density within the same data stream. It uses the same insertion strategy as HDCStream with both grid-cells and micro-clusters. However, the offline component applies a variant of DBSCAN BIB001 called M-DBSCAN to all micro-clusters.
M-DBSCAN only requires a MinPts parameter and then estimates the ε parameter from the mean and standard deviation around the centre. |
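The hybrid insertion strategy of HDCStream could be sketched as below. The dictionary-based cluster representation, the grid size tied to the radius threshold, and the count-based promotion rule are simplifying assumptions; the original uses faded densities.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hybrid_insert(point, micro_clusters, grid, radius, promote_density):
    """Try the closest micro-cluster first; otherwise buffer the point
    in a grid-cell and promote the cell once it becomes dense."""
    if micro_clusters:
        mc = min(micro_clusters, key=lambda m: dist(point, m["center"]))
        if dist(point, mc["center"]) <= radius:
            mc["points"].append(point)          # absorb into micro-cluster
            return
    cell = tuple(int(x // radius) for x in point)  # fixed-size grid
    grid.setdefault(cell, []).append(point)
    if len(grid[cell]) >= promote_density:         # cell became dense:
        pts = grid.pop(cell)                       # promote to micro-cluster
        center = tuple(sum(c) / len(pts) for c in zip(*pts))
        micro_clusters.append({"center": center, "points": pts})

mcs, grid = [], {}
for p in [(0.1, 0.1), (0.12, 0.11), (0.11, 0.09)]:
    hybrid_insert(p, mcs, grid, radius=0.5, promote_density=3)
# the third point makes the cell dense, promoting it to a micro-cluster
```

On a clustering request, the resulting micro-cluster centers would be treated as virtual points and passed to DBSCAN (or M-DBSCAN in Mudi-Stream).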
Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> Many clustering algorithms tend to break down in high-dimensional feature spaces, because the clusters often exist only in specific subspaces (attribute subsets) of the original feature space. Therefore, the task of projected clustering (or subspace clustering) has been defined recently. As a solution to tackle this problem, we propose the concept of local subspace preferences, which captures the main directions of high point density. Using this concept, we adopt density-based clustering to cope with high-dimensional data. In particular, we achieve the following advantages over existing approaches: Our proposed method has a determinate result, does not depend on the order of processing, is robust against noise, performs only one single scan over the database, and is linear in the number of dimensions. A broad experimental evaluation shows that our approach yields results of significantly better quality than recent work on clustering high-dimensional data. <s> BIB001 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> A real-life data stream usually contains many dimensions and some dimensional values of its data elements may be missing. In order to effectively extract the on-going change of a data stream with respect to all the subsets of the dimensions of the data stream, a grid-based subspace clustering algorithm is proposed in this paper. Given an n-dimensional data stream, the on-going distribution statistics of data elements in each one-dimension data space is firstly monitored by a list of grid-cells called a sibling list. Once a dense grid-cell of a first-level sibling list becomes a dense unit grid-cell, new second-level sibling lists are created as its child nodes in order to trace any cluster in all possible two-dimensional rectangular subspaces. 
In such a way, a sibling tree grows up to the nth level at most and a k-dimensional subcluster can be found in the kth level of the sibling tree. The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. <s> BIB002 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> To effectively trace the clusters of recently generated data elements in an on-line data stream, a sibling list and a cell tree are proposed in this paper. Initially, the multi-dimensional data space of a data stream is partitioned into mutually exclusive equal-sized grid-cells. Each grid-cell monitors the recent distribution statistics of data elements within its range. The old distribution statistics of each grid-cell are diminished by a predefined decay rate as time goes by, so that the effect of the obsolete information on the current result of clustering can be eliminated without maintaining any data element physically. Given a partitioning factor h, a dense grid-cell is partitioned into h equal-size smaller grid-cells. Such partitioning is continued until a grid-cell becomes the smallest one called a unit grid-cell. Conversely, a set of consecutive sparse grid-cells can be merged into a single grid-cell. A sibling list is a structure to manage the set of all grid-cells in a one-dimensional data space and it acts as an index for locating a specific grid-cell. Upon creating a dense unit grid-cell on a one-dimensional data space, a new sibling list for another dimension is created as a child of the grid-cell. In such a way, a cell tree is created. By repeating this process, a multi-dimensional dense unit grid-cell is identified by a path of a cell tree. Furthermore, in order to confine the usage of memory space, the size of a unit grid-cell is adaptively minimized such that the result of clustering becomes as accurate as possible at all times. 
The proposed method is comparatively analyzed by a series of experiments to identify its various characteristics. <s> BIB003 </s> Optimizing Data Stream Representation: An Extensive Survey on Stream Clustering Algorithms <s> Projected Approaches <s> In this paper, we have proposed, developed and experimentally validated our novel subspace data stream clustering, termed PreDeConStream. The technique is based on the two phase mode of mining streaming data, in which the first phase represents the process of the online maintenance of a data structure, that is then passed to an offline phase of generating the final clustering model. The technique works on incrementally updating the output of the online phase stored in a micro-cluster structure, taking into consideration those micro-clusters that are fading out over time, speeding up the process of assigning new data points to existing clusters. A density based projected clustering model in developing PreDeConStream was used. With many important applications that can benefit from such technique, we have proved experimentally the superiority of the proposed methods over state-of-the-art techniques. <s> BIB004 | A special category of stream clustering algorithms deals with high-dimensional data streams. These types of algorithms address the curse of dimensionality, i.e., the problem that almost all points have an equal distance in very high-dimensional space. In such scenarios, clusters are defined according to a subset of dimensions where each cluster has an associated set of dimensions in which it exists. Even though these algorithms often use concepts from distance-based and grid-based algorithms, their application scenarios and strategies are unique and deserve their own category.

Table 5: Projected stream clustering algorithms
Algorithm | Year | Window | Offline Clustering
HPStream | 2004 | damped | k-means
SiblingTree BIB002 | 2007 | damped | -
HDDStream | 2012 | damped | PreDeCon BIB001
PreDeConStream BIB004 | 2012 | damped | PreDeCon BIB001
Table 5 summarizes four projected clustering algorithms and Figure 10 shows the relationship between the algorithms. Despite their similarity, HDDStream and PreDeConStream have been developed independently. HPStream (High-dimensional Projected Stream clustering) is an extension of CluStream (see Section 4.2) for high-dimensional data. The algorithm uses a time-faded CF with an additional bit vector that denotes the associated dimensions of a cluster. The algorithm normalizes each dimension by regularly sampling the current standard deviation and adjusting the existing clusters accordingly. The algorithm initializes with k-means and associates each cluster with the l dimensions in which it has the smallest radius. The cluster assignment is then updated by only considering the associated dimensions for each cluster. Finally, the process is repeated until the clusters and dimensions converge. A new data point is tentatively added to each cluster to update the dimension association and added to its closest cluster if it does not increase the cluster radius above a threshold. SiblingTree BIB002 is an extension of CellTree BIB003 (see Section 5.2). It uses the same tree-structure but allows for subspace clusters. To do so, the algorithm creates a siblings list for each dimension as children of the root. New data points are recursively assigned to the grid-cells using a depth-first approach. If a cell's density increases beyond a threshold, it is split as in CellTree. If a unit cell's density increases beyond a threshold, new sibling lists for each remaining dimension are created as children of the cell. Additionally, if a cell's density decays below a density threshold, its children are removed and it is merged with consecutive sparse cells. Clusters in the tree are defined as adjacent unit-grid-cells with enough density.
HDDStream (Density-based Projected Clustering over High Dimensional Data Streams) is initialized by collecting a batch of observations and applying PreDeCon BIB001. PreDeCon can be considered a subspace version of DBSCAN. The update procedure is similar to DenStream (see Section 4.3): A new observation is assigned to its closest potential core micro-cluster if its projected radius does not increase beyond a threshold. Otherwise, the same attempt is made for the closest outlier-cluster. If both cannot absorb the observation, a new cluster is initialized. Periodically, the algorithm downgrades potential core micro-clusters if their weight is too low or if the number of associated dimensions is too large. Outlier-clusters are removed as in DenStream. To generate the macro-clusters, a variant of PreDeCon BIB001 is used. PreDeConStream BIB004 (Subspace Preference weighted Density Connected clustering of Streaming data) was developed simultaneously with HDDStream (see Section 7) and both share many concepts. The algorithm is also initialized using the PreDeCon BIB001 algorithm and the insertion strategy is the same as in DenStream (see Section 4.3). Additionally, the algorithm adjusts the clustering in regular intervals using a modified part of the PreDeCon algorithm on the micro-clusters that were changed during the online phase. |
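HPStream's association of each cluster with the l dimensions of smallest radius, and the resulting projected distance, could be sketched as follows. The bit-vector representation mirrors the description above; the example radii are hypothetical.

```python
import numpy as np

def projected_dimensions(radii, l):
    """Pick for each cluster the l dimensions with the smallest radius,
    returned as a bit vector of associated dimensions (sketch)."""
    dims = np.argsort(radii)[:l]
    bits = np.zeros(len(radii), dtype=bool)
    bits[dims] = True
    return bits

def projected_distance(point, center, bits):
    """Distance restricted to a cluster's associated dimensions."""
    diff = (np.asarray(point) - np.asarray(center))[bits]
    return float(np.sqrt((diff ** 2).sum()))

radii = np.array([0.1, 5.0, 0.2, 3.0])   # per-dimension cluster radii
bits = projected_dimensions(radii, l=2)  # keeps dimensions 0 and 2
d = projected_distance([1, 9, 1, 9], [0, 0, 0, 0], bits)
```

Cluster assignment then uses this projected distance rather than the full Euclidean distance, so that large deviations in non-associated dimensions do not prevent absorption.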