05378af0c67c59505067e2cbeb9ca29ed5f085e4
We propose an algorithm for the hierarchical aggregation of observations in dissemination-based, distributed traffic information systems. Instead of carrying specific values (e.g., the number of free parking places in a given area), our aggregates contain a modified Flajolet-Martin sketch as a probabilistic approximation. The main advantage of this approach is that the aggregates are duplicate insensitive. This overcomes two central problems of existing aggregation schemes for VANET applications. First, when multiple aggregates of observations for the same area are available, it is possible to combine them into an aggregate containing all information from the original aggregates. This is fundamentally different from existing approaches, where typically one of the aggregates is selected for further use while the rest are discarded. Second, any observation or aggregate can be included in higher-level aggregates, regardless of whether it has previously been added, directly or indirectly. As a result of these characteristics, the quality of the aggregates is high, while their construction is very flexible. We demonstrate these traits of our approach by a simulation study.
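For illustration, here is a minimal Python sketch of a duplicate-insensitive Flajolet-Martin-style sketch with a bitwise-OR merge; the hashing scheme, bitmap sizes, and the `FMSketch` name are our own choices, not the paper's implementation:

```python
import hashlib

class FMSketch:
    """Duplicate-insensitive Flajolet-Martin-style sketch (illustrative only)."""

    def __init__(self, num_bitmaps=32, bitmap_bits=32):
        self.bitmaps = [0] * num_bitmaps
        self.bitmap_bits = bitmap_bits

    def _hash(self, item, seed):
        h = hashlib.sha1(f"{seed}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big")

    def add(self, item):
        # Adding the same observation twice sets the same bit: duplicate insensitive.
        for i in range(len(self.bitmaps)):
            x = self._hash(item, i)
            r = (x & -x).bit_length() - 1 if x else self.bitmap_bits - 1  # trailing zeros
            self.bitmaps[i] |= 1 << min(r, self.bitmap_bits - 1)

    def merge(self, other):
        # The union of two sketches is a bitwise OR, so aggregates can be
        # combined freely, in any order, with arbitrary overlap.
        for i in range(len(self.bitmaps)):
            self.bitmaps[i] |= other.bitmaps[i]

    def estimate(self):
        # Average index of the lowest unset bit, scaled by the FM constant.
        total = 0
        for bm in self.bitmaps:
            r = 0
            while bm & (1 << r):
                r += 1
            total += r
        return (2 ** (total / len(self.bitmaps))) / 0.77351

a, b = FMSketch(), FMSketch()
for spot in ["p1", "p2", "p3"]: a.add(spot)
for spot in ["p2", "p3", "p4"]: b.add(spot)  # overlaps with a
a.merge(b)                                   # p2 and p3 are not double-counted
```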
2f83d2294d44b44ad07d327635a34276abe1ec55
This paper introduces an antenna design based on MCM-D manufacturing technology to realize an antenna-integrated package for IEEE 802.11b/g applications. Co-design guidelines are employed to include the parasitic effects caused by the integration of the antenna and the RF module. The loop antenna is located on the second layer of the MCM-D substrate. The antenna incorporates a capacitively coupled feed strip, which is fed by a coplanar waveguide (CPW). Thanks to this coupling-feed technique, the size of the proposed antenna is only 3.8 mm × 4.7 mm over the WLAN band (2.4–2.484 GHz). Furthermore, the resonant frequency can be adjusted by tuning the length of the coupling strip. The results show that the coupling-fed loop antenna achieves a gain of 1.6 dBi and a radiation efficiency of 85% at 2.45 GHz in a very compact size (0.03 λ0 × 0.04 λ0). In addition, the area occupied by the antenna is very small (4.4%) compared to the overall area of the package; therefore, the proposed method is very useful for package antenna design. Detailed parameter studies are presented, which demonstrate the feasibility of the proposed method.
cf08bf7bcf3d3ec926d0cedf453e257e21cc398a
Device testing represents the single largest manufacturing expense in the semiconductor industry, costing over $40 million a year. The most comprehensive and wide-ranging book of its kind, Testing of Digital Systems covers everything you need to know about this vitally important subject. Starting right from the basics, the authors take the reader through automatic test pattern generation, design for testability, and built-in self-test of digital circuits before moving on to more advanced topics such as IDDQ testing, functional testing, delay fault testing, memory testing, and fault diagnosis. The book includes detailed treatment of the latest techniques, including test generation for various fault modes, discussion of testing techniques at different levels of the integrated circuit hierarchy, and a chapter on system-on-a-chip test synthesis. Written for students and engineers, it is both an excellent senior/graduate-level textbook and a valuable reference.
44c3dac2957f379e7646986f593b9a7db59bd714
5a4bb08d4750d27bd5a2ad0a993d144c4fb9586c
Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.
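As a rough illustration of the distributional claims, the following sketch fits a log-normal to breach sizes and a negative binomial (by the method of moments) to daily breach counts; the data arrays are hypothetical placeholders, not the paper's dataset:

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: records exposed per breach, and breaches per day.
breach_sizes = np.array([1.2e3, 5.0e4, 7.5e5, 2.1e6, 9.0e6])
daily_counts = np.array([0, 1, 0, 2, 1, 0, 3, 0, 1, 0])

# Log-normal fit for breach size: fit a normal distribution to log-sizes.
mu, sigma = stats.norm.fit(np.log(breach_sizes))

# Negative binomial fit for daily frequency via method of moments:
# for mean m and variance v, p = m / v and n = m * p / (1 - p), valid when v > m.
m, v = daily_counts.mean(), daily_counts.var(ddof=1)
p = m / v
n = m * p / (1 - p)

# Tail probability of a breach of 10 million records or more under the fit.
p_10m = 1 - stats.norm.cdf(np.log(1e7), mu, sigma)
```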
e8e2c3d884bba807bcf7fbfa2c27f864b20ceb80
This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract This document describes HMAC, a mechanism for message authentication using cryptographic hash functions. HMAC can be used with any iterative cryptographic hash function, e.g., MD5, SHA-1, in combination with a secret shared key. The cryptographic strength of HMAC depends on the properties of the underlying hash function.
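A minimal usage sketch with Python's standard `hmac` module, which implements the construction this memo describes; the key and message here are illustrative:

```python
import hashlib
import hmac

key = b"shared-secret-key"        # hypothetical shared key
message = b"message to authenticate"

# HMAC can be used with any iterative cryptographic hash function.
tag_sha1 = hmac.new(key, message, hashlib.sha1).hexdigest()
tag_md5 = hmac.new(key, message, hashlib.md5).hexdigest()

# Verification should use a constant-time comparison.
expected = hmac.new(key, message, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag_sha1, expected)
```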
f01d369becb42ff69d156d5e19d8af18dadacc6e
As wireless sensor networks (WSNs) continue to grow, so does the need for effective security mechanisms. Since sensor networks can interact with sensitive data and/or operate in hostile unattended environments, it is imperative that these security concerns be addressed from the beginning of the system design. This paper aims at describing security solutions for collecting and processing data in WSNs. Adequate security capabilities for medium- and large-scale WSNs are a hard but necessary goal to achieve to prepare these networks for the market. The paper includes an overview of security solutions and reliability challenges in the WSN space.
18f5593d6082b1ba3c02cf64d64eb9d969db3e6b
Vector space word representations are useful for many natural language processing applications. The diversity of techniques for computing vector representations and the large number of evaluation benchmarks make reliable comparison a tedious task, both for researchers developing new vector space models and for those wishing to use them. We present a website and suite of offline tools that facilitate evaluation of word vectors on standard lexical semantics benchmarks and permit exchange and archival by users who wish to find good vectors for their applications. The system is accessible at: www.wordvectors.org.
cc383d9308c38e36d268b77bd6acee7bcd79fc10
Current progress in wearable and implanted health monitoring technologies has strong potential to alter the future of healthcare services by enabling ubiquitous monitoring of patients. A typical health monitoring system consists of a network of wearable or implanted sensors that constantly monitor physiological parameters. Collected data are relayed using existing wireless communication protocols to a base station for additional processing. This article provides researchers with information to compare the existing low-power communication technologies that can potentially support the rapid development and deployment of WBAN systems, and mainly focuses on remote monitoring of elderly or chronically ill patients in residential environments.
b3e326f56fd2e32f33fd5a8f3138c6633da25786
This paper presents the system integration and an overview of the autonomous cleaning robot Roboking. Roboking is a self-propelled, autonomously navigating vacuum-cleaning robot. It uses several sensors in order to protect indoor environments and itself while cleaning. In this paper, we describe the principle of operation along with the system structure, sensors, functions, and integrated subsystems.
85b3cd74945cc6517aa3a7017f89d8857c3600da
Accurate estimation of a student's future performance is essential in order to provide the student with adequate assistance in the learning process. To this end, this research investigates the use of Bayesian networks for predicting the performance of a student, based on the values of some identified attributes. We present empirical experiments on the prediction of performance with a data set of high school students containing 8 attributes. The paper demonstrates an application of the Bayesian approach in the field of education and shows that the Bayesian network classifier has the potential to be used as a tool for prediction of student performance.
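As a toy illustration, assuming discretized attributes, a naive Bayes classifier (the simplest Bayesian network, with the class node as sole parent of every attribute node) can be sketched with scikit-learn; the attribute encoding and data below are hypothetical:

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Hypothetical encoding: each row holds 8 discretized attributes of a student
# (e.g., attendance band, prior grades, study time); labels are pass/fail.
X = np.array([
    [2, 1, 0, 1, 2, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 0],
    [2, 2, 2, 1, 2, 1, 1, 2],
    [1, 0, 0, 0, 0, 0, 0, 1],
])
y = np.array([1, 0, 1, 0])  # 1 = pass, 0 = fail

# With the class as the sole parent of each attribute, P(class | attributes)
# follows directly from Bayes' rule over the learned conditional tables.
model = CategoricalNB().fit(X, y)
print(model.predict_proba([[1, 1, 1, 1, 1, 0, 1, 1]]))
```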
86aa83ebab0f72ef84f8e6d62379c71c04cb6b68
05e5e58edead6167befb089444d35fbd17b13414
8c63d23cc29dc6221ed6bd0704fccc03baf20ebc
There has been a recent explosion in applications for dialogue interaction ranging from direction-giving and tourist information to interactive story systems. Yet the natural language generation (NLG) component for many of these systems remains largely handcrafted. This limitation greatly restricts the range of applications; it also means that it is impossible to take advantage of recent work in expressive and statistical language generation that can dynamically and automatically produce a large number of variations of given content. We propose that a solution to this problem lies in new methods for developing language generation resources. We describe the ES-TRANSLATOR, a computational language generator that has previously been applied only to fables, and quantitatively evaluate the domain independence of the EST by applying it to personal narratives from weblogs. We then take advantage of recent work on language generation to create a parameterized sentence planner for story generation that provides aggregation operations, variations in discourse and in point of view. Finally, we present a user evaluation of different personal narrative retellings.
43dfdf71c82d7a61367e94ea927ef1c33d4ac17a
The rapid increase of sensitive data and the growing number of government regulations that require long-term data retention and protection have forced enterprises to pay serious attention to storage security. In this paper, we discuss important security issues related to storage and present a comprehensive survey of the security services provided by existing storage systems. We cover a broad range of the storage security literature, present a critical review of the existing solutions, compare them, and highlight potential research issues.
b594a248218121789e5073a90c31b261610478e0
This paper presents a strategy for large-scale SLAM through solving a sequence of linear least squares problems. The algorithm is based on submap joining, where submaps are built using any existing SLAM technique. It is demonstrated that if submap coordinate frames are judiciously selected, the least squares objective function for joining two submaps becomes a quadratic function of the state vector. Therefore, a linear solution to large-scale SLAM that requires joining a number of local submaps, either sequentially or in a more efficient divide-and-conquer manner, can be obtained. The proposed Linear SLAM technique is applicable to both feature-based and pose graph SLAM, in two and three dimensions, and does not require any assumption on the character of the covariance matrices or an initial guess of the state vector. Although this algorithm is an approximation to the optimal full nonlinear least squares SLAM, simulations and experiments using publicly available datasets in 2D and 3D show that Linear SLAM produces results that are very close to the best solutions that can be obtained using full nonlinear optimization started from an accurate initial value. The C/C++ and MATLAB source codes for the proposed algorithm are available on OpenSLAM.
69ab8fe2bdc2b1ea63d86c7fd64142e5d3ed88ec
We explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches. It has long been recognized that the primary obstacle to effective performance of classical models is the need to estimate a relevance model: probabilities of words in the relevant class. We propose a novel technique for estimating these probabilities using the query alone. We demonstrate that our technique can produce highly accurate relevance models, addressing important notions of synonymy and polysemy. Our experiments show relevance models outperforming baseline language modeling systems on TREC retrieval and TDT tracking tasks. The main contribution of this work is an effective formal method for estimating a relevance model with no training data.
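A minimal sketch of the idea, assuming Dirichlet-smoothed document language models weighted by their query likelihood (in the spirit of the RM1-style estimator); the function name and smoothing parameter are our own choices:

```python
from collections import Counter

def relevance_model(query, docs, mu=2000):
    """Estimate P(w|R) from the query alone by weighting document language
    models by query likelihood (an RM1-style sketch, not the paper's exact code)."""
    coll = Counter()
    for d in docs:
        coll.update(d)
    coll_len = sum(coll.values())

    def p_w_d(w, tf, dlen):
        # Dirichlet-smoothed document language model.
        return (tf[w] + mu * coll[w] / coll_len) / (dlen + mu)

    # Query likelihood P(Q|D) for each document.
    weights = []
    for d in docs:
        tf, dlen = Counter(d), len(d)
        lik = 1.0
        for q in query:
            lik *= p_w_d(q, tf, dlen)
        weights.append(lik)

    # P(w|R) is proportional to sum_D P(w|D) P(Q|D); expansion terms
    # beyond the query words are what capture synonymy.
    pr = Counter()
    for d, w_d in zip(docs, weights):
        tf, dlen = Counter(d), len(d)
        for w in tf:
            pr[w] += p_w_d(w, tf, dlen) * w_d
    z = sum(pr.values())
    return {w: v / z for w, v in pr.items()}
```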
2a894be44d07a963c28893cc6f45d29fbfa872f7
Machine learning (ML) algorithms are commonly applied to big data, using distributed systems that partition the data across machines and allow each machine to read and update all ML model parameters --- a strategy known as data parallelism. An alternative and complementary strategy, model parallelism, partitions the model parameters for non-shared parallel access and updates, and may periodically repartition the parameters to facilitate communication. Model parallelism is motivated by two challenges that data parallelism does not usually address: (1) parameters may be dependent, thus naive concurrent updates can introduce errors that slow convergence or even cause algorithm failure; (2) model parameters converge at different rates, thus a small subset of parameters can bottleneck ML algorithm completion. We propose scheduled model parallelism (SchMP), a programming approach that improves ML algorithm convergence speed by efficiently scheduling parameter updates, taking into account parameter dependencies and uneven convergence. To support SchMP at scale, we develop a distributed framework STRADS which optimizes the throughput of SchMP programs, and benchmark four common ML applications written as SchMP programs: LDA topic modeling, matrix factorization, sparse least-squares (Lasso) regression and sparse logistic regression. By improving ML progress per iteration through SchMP programming whilst improving iteration throughput through STRADS we show that SchMP programs running on STRADS outperform non-model-parallel ML implementations: for example, SchMP LDA and SchMP Lasso respectively achieve 10x and 5x faster convergence than recent, well-established baselines.
bdc6acc8d11b9ef1e8f0fe2f0f41ce7b6f6a100a
Traditional text similarity measures consider each term similar only to itself and do not model semantic relatedness of terms. We propose a novel discriminative training method that projects the raw term vectors into a common, low-dimensional vector space. Our approach operates by finding the optimal matrix to minimize the loss of the pre-selected similarity function (e.g., cosine) of the projected vectors, and is able to efficiently handle a large number of training examples in the highdimensional space. Evaluated on two very different tasks, cross-lingual document retrieval and ad relevance measure, our method not only outperforms existing state-of-the-art approaches, but also achieves high accuracy at low dimensions and is thus more efficient.
50988101501366324c11e9e7a199e88a9a899bec
b2e68ca577636aaa6f6241c3af7478a3ae1389a7
AIM To analyse the concept of transformational leadership in the nursing context. BACKGROUND Tasked with improving patient outcomes while decreasing the cost of care provision, nurses need strategies for implementing reform in health care, and one promising strategy is transformational leadership. Exploration and greater understanding of transformational leadership and the potential it holds are integral to performance improvement and patient safety. DESIGN Concept analysis using Walker and Avant's (2005) concept analysis method. DATA SOURCES PubMed, CINAHL and PsycINFO. METHODS This report draws on the extant literature on transformational leadership, management, and nursing to analyse the concept of transformational leadership in the nursing context. IMPLICATIONS FOR NURSING This report proposes a new operational definition for transformational leadership and identifies model cases and defining attributes that are specific to the nursing context. The influence of transformational leadership on organizational culture and patient outcomes is evident. Of particular interest is the finding that transformational leadership can be defined as a set of teachable competencies. However, the mechanism by which transformational leadership influences patient outcomes remains unclear. CONCLUSION Transformational leadership in nursing has been associated with high-performing teams and improved patient care, but rarely has it been considered as a set of competencies that can be taught. Further research is warranted to strengthen empirical referents; this can be done by improving the operational definition, reducing ambiguity in key constructs and exploring the specific mechanisms by which transformational leadership influences healthcare outcomes to validate subscale measures.
bdcdc95ef36b003fce90e8686bfd292c342b0b57
Reinforcement learning has shown great potential in generalizing over raw sensory data using only a single neural network for value optimization. There are several challenges in the current state-of-the-art reinforcement learning algorithms that prevent them from converging towards the global optima. It is likely that the solution to these problems lies in short- and long-term planning, exploration, and memory management for reinforcement learning algorithms. Games are often used to benchmark reinforcement learning algorithms as they provide a flexible, reproducible, and easy-to-control environment. However, few games feature a state-space in which results in exploration, memory, and planning are easily perceived. This paper presents the Dreaming Variational Autoencoder (DVAE), a neural-network-based generative modeling architecture for exploration in environments with sparse feedback. We further present Deep Maze, a novel and flexible maze engine that challenges DVAE in partial and fully-observable state-spaces, long-horizon tasks, and deterministic and stochastic problems. We show initial findings and encourage further work in reinforcement learning driven by generative exploration.
7e5af1cf715305fc394b5d24fc1caf17643a9205
The nature of the relationship between information technology (IT) and organisations has been a long-standing debate in the Information Systems literature. Does IT shape organisations, or do people in organisations control how IT is used? To formulate the question a little differently: does agency (the capacity to make a difference) lie predominantly with machines (computer systems) or humans (organisational actors)? Many proposals for a middle way between the extremes of technological and social determinism have been advanced; in recent years researchers oriented towards social theories have focused on structuration theory and (lately) actor network theory. These two theories, however, adopt different and incompatible views of agency. Thus, structuration theory sees agency as exclusively a property of humans, whereas the principle of general symmetry in actor network theory implies that machines may also be agents. Drawing on critiques of both structuration theory and actor network theory, this paper develops a theoretical account of the interaction between human and machine agency: the double dance of agency. The account seeks to contribute to theorisation of the relationship between technology and organisation by recognizing both the different character of human and machine agency, and the emergent properties of their interplay.
d7cbedbee06293e78661335c7dd9059c70143a28
We present a class of extremely efficient CNN models, MobileFaceNets, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices. We first make a simple analysis of the weaknesses of common mobile networks for face verification. These weaknesses are well overcome by our specifically designed MobileFaceNets. Under the same experimental conditions, our MobileFaceNets achieve significantly superior accuracy as well as more than 2 times actual speedup over MobileNetV2. After training with the ArcFace loss on the refined MS-Celeb-1M, our single MobileFaceNet of 4.0 MB size achieves 99.55% accuracy on LFW and 92.59% TAR@FAR1e-6 on MegaFace, which is even comparable to state-of-the-art big CNN models of hundreds of MB in size. The fastest of the MobileFaceNets has an actual inference time of 18 milliseconds on a mobile phone. For face verification, MobileFaceNets achieve significantly improved efficiency over previous state-of-the-art mobile CNNs.
44f18ef0800e276617e458bc21502947f35a7f94
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often create discomfort with marker suits, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion with an inside-in setup, i.e. without external sensors. This makes capture independent of a confined volume, but requires substantial, often constraining, and hard to set up body instrumentation. Therefore, we propose a new method for real-time, marker-less, and egocentric motion capture: estimating the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual reality headset - an optical inside-in method, so to speak. This allows full-body motion capture in general indoor and outdoor scenes, including crowded scenes with many people nearby, which enables reconstruction in larger-scale activities. Our approach combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. It is particularly useful in virtual reality to freely roam and interact, while seeing the fully motion-captured virtual body.
094ca99cc94e38984823776158da738e5bc3963d
This article introduces a class of incremental learning procedures specialized for prediction-that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
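A tabular TD(0) prediction sketch capturing the core update, where each prediction moves toward the temporally successive prediction instead of waiting for the final outcome; hyperparameters and the episode format are illustrative:

```python
import numpy as np

def td0_prediction(episodes, n_states, alpha=0.1, gamma=1.0):
    """Tabular TD(0): assign credit by the difference between temporally
    successive predictions rather than predicted vs. actual final outcome."""
    V = np.zeros(n_states)
    for episode in episodes:  # episode: list of (state, reward, next_state)
        for s, r, s_next in episode:
            target = r if s_next is None else r + gamma * V[s_next]
            V[s] += alpha * (target - V[s])  # temporal-difference error
    return V

# Toy usage: one episode through states 0 -> 1 -> terminal with rewards 0, 1.
print(td0_prediction([[(0, 0.0, 1), (1, 1.0, None)]], n_states=2))
```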
86955608218ab293d41b6d1c0bf9e1be97f571d8
03aca587f27fda3cbdad708aa69c07fc71b691d7
Automated tissue characterization is one of the most crucial components of a computer-aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2 × 2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps, and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing, and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for this specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods on a challenging dataset. The classification performance (~85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
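A PyTorch sketch of the described architecture; the input patch size (32 × 32) and channel widths are assumptions, since the abstract specifies only the kernel size, activation, pooling, and output layer:

```python
import torch
import torch.nn as nn

class ILDNet(nn.Module):
    """Sketch of the described CNN: 5 conv layers with 2x2 kernels and
    LeakyReLU, average pooling over the final feature maps, then three
    dense layers ending in 7 class outputs. Widths are hypothetical."""

    def __init__(self, n_classes=7):
        super().__init__()
        chans = [1, 16, 36, 64, 100, 144]  # assumed channel progression
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=2), nn.LeakyReLU(0.01)]
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)  # pool size = final feature map size
        self.classifier = nn.Sequential(
            nn.Linear(chans[-1], 128), nn.LeakyReLU(0.01),
            nn.Linear(128, 64), nn.LeakyReLU(0.01),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

logits = ILDNet()(torch.randn(8, 1, 32, 32))  # 8 patches -> (8, 7) logits
```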
e7ad8adbb447300ecafb4d00fb84efc3cf4996cf
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation (ABC) methods, in terms of the dimensionality of the parameter and summary statistic.
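The plug-in Gaussian synthetic log-likelihood at the heart of this approach can be sketched in a few lines; this shows the Monte Carlo plug-in estimate itself, not the paper's variational machinery:

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, observed_summary, simulate, n_sims=200):
    """Plug-in Gaussian synthetic log-likelihood (illustrative sketch).
    `simulate(theta)` is a user-supplied function returning one vector of
    summary statistics simulated from the model at parameter theta."""
    sims = np.array([simulate(theta) for _ in range(n_sims)])
    mu_hat = sims.mean(axis=0)            # plug-in mean
    cov_hat = np.cov(sims, rowvar=False)  # plug-in covariance
    return multivariate_normal.logpdf(observed_summary, mu_hat, cov_hat)
```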
27a8f746f43876dbd1019235ad8e302ea838a499
Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance in industrial plants. Key prerequisites for the fully autonomous operation of micro aerial vehicles in restricted environments are 3D mapping, real-time pose tracking, obstacle detection, and planning of collision-free trajectories. In this article, we propose a complete navigation system with a multimodal sensor setup for omnidirectional environment perception. Measurements of a 3D laser scanner are aggregated in egocentric local multiresolution grid maps. Local maps are registered and merged into allocentric maps in which the MAV localizes. For autonomous navigation, we generate trajectories in a multi-layered approach: from mission planning over global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach in a GNSS-denied indoor environment where multiple collision hazards require reliable omnidirectional perception and quick navigation reactions.
e80d9d10956310d4ea926c2105c74de766c22345
This paper presents the architecture of a digital array radar and analyses its key technologies. Digital T/R modules, DDS-based waveform generation and amplitude-phase control modules, frequency up/down-conversion, high-efficiency power amplifiers, hybrid digital/microwave multilayer circuits, and high-performance computing are described as the main technologies. The correlation between microsystems technologies and trends in digital array architectures is also discussed.
6f6e10b229a5a9eca2a2f694143632191d4c5e0c
Technological approaches for detecting and monitoring driver fatigue continue to emerge, and many are now in the development, validation testing, or early implementation stages. Previous studies have reviewed available fatigue detection and prediction technologies and methodologies. As the name indicates, this project is about advanced in-car technologies that make the vehicle more intelligent and interactive in order to avoid accidents on roads. Using an ARM7 microcontroller makes the system more efficient, reliable, and effective. Very few systems have been implemented for detecting human behaviour in or with cars. In this paper, we describe a real-time online safety prototype that controls the vehicle speed under driver fatigue. The purpose of such a model is to develop a system that detects fatigue symptoms in drivers and controls the speed of the vehicle to avoid accidents. The main components of the system are a number of real-time sensors (gas, eye-blink, alcohol, fuel, and impact sensors) and a software interface with GPS and Google Maps APIs for location.
593bdaa21941dda0b8c888ee88bbe730c4219ad6
Outlier detection is an integral part of data mining and has attracted much attention recently [BKNS00, JTH01, KNT00]. In this paper, we propose a new method for evaluating outlier-ness, which we call the Local Correlation Integral (LOCI). As with the best previous methods, LOCI is highly effective for detecting outliers and groups of outliers (a.k.a. micro-clusters). In addition, it offers the following advantages and novelties: (a) It provides an automatic, data-dictated cut-off to determine whether a point is an outlier—in contrast, previous methods force users to pick cut-offs, without any hints as to what cut-off value is best for a given dataset. (b) It can provide a LOCI plot for each point; this plot summarizes a wealth of information about the data in the vicinity of the point, determining clusters, micro-clusters, their diameters and their inter-cluster distances. None of the existing outlier-detection methods can match this feature, because they output only a single number for each point: its outlier-ness score. (c) Our LOCI method can be computed as quickly as the best previous methods. (d) Moreover, LOCI leads to a practically linear approximate method, aLOCI (for approximate LOCI), which provides fast, highly accurate outlier detection. To the best of our knowledge, this is the first work to use approximate computations to speed up outlier detection. Experiments on synthetic and real world data sets show that LOCI and aLOCI can automatically detect outliers and micro-clusters, without user-required cut-offs, and that they quickly spot both expected and unexpected outliers.
c1cfb9b530daae4dbb89f96a9bff415536aa7e4b
Style transfer aims to transfer arbitrary visual styles to content images. We explore algorithms adapted from two papers that try to solve the problem of style transfer while generalizing to unseen styles or compromising visual quality. The majority of the improvements made focus on optimizing the algorithm for real-time style transfer while adapting to new styles with considerably fewer resources and constraints. We compare these strategies and how they measure up in producing visually appealing images. We explore two approaches to style transfer: neural style transfer with improvements, and universal style transfer. We also compare the different images produced and how they can be qualitatively measured.
1d2a3436fc7ff4b964fa61c0789df19e32ddf0ed
As this paper puts forward the notion of “Oblivious Transfers” and is a well-known and frequently cited paper, I felt I should typeset the manuscript, and here is the result. While typesetting, I tried to stick to the original manuscript as much as possible. However, there were some cases, such as a few typos or punctuation marks, which were changed. As in many papers on cryptography, Alice and Bob play the role of participants of the given cryptographic protocols. For the sake of readability, Alice’s and Bob’s messages were printed in red and blue ink, respectively. This work was carefully proofread by my colleague Y. Sobhdel (sobhdel@ce.sharif.edu). Thanks also to H. M. Moghaddam for mentioning a minor mistake in an earlier version. That said, I will be thankful if you inform me of any possible mistakes.
d20a17b42f95ee07e9a43cc852b35bda407c4be6
caf912b716905ccbf46d6d00d6a0b622834a7cd9
As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks at which humans excel but which machines find difficult. However, an ideal task should also be easy to evaluate and not be easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a task for measuring machine intelligence. An alternative and more promising task is Visual Question Answering, which tests a machine's ability to reason about language and vision. We describe a dataset unprecedented in size created for the task that contains over 760,000 human-generated questions about images. Using around 10 million human-generated answers, machines may be easily evaluated.
8d3b8a59144352d0f60015f32c836001e4344a34
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable basic shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation. We perform a thorough study of different generative models, including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMMs). For our quantitative evaluation we propose measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our careful evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs produce the best results.
2c51c8da2f82a956e633049616b1bb7730faa2da
As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increment of the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011) based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
3406b672402828f2522b57e9ab11e0ae9c76ab2e
Ubiquitous computing has fueled the idea of constructing sentient, information-rich "smart spaces" that extend the boundaries of traditional computing to encompass physical spaces, embedded devices, sensors, and other machinery. To achieve this, smart spaces need to capture situational information so that they can detect changes in context and adapt themselves accordingly. However, without considering basic security issues ubiquitous computing environments could be rife with vulnerabilities. Ubiquitous computing environments impose new requirements on security. Security services, like authentication and access control, have to be non-intrusive, intelligent, and able to adapt to the rapidly changing contexts of the spaces. We present a ubiquitous security mechanism that integrates context-awareness with automated reasoning to perform authentication and access control in ubiquitous computing environments.
e658f77af84415bfa794202c433a22d08c91bed2
Pervasive computing is becoming a reality due to the rise of the so-called Internet of Things (IoT). In this paradigm, everyday physical objects are being equipped with capabilities to detect and communicate information they receive from their environment, turning them into smart objects. However, such entities are usually deployed in environments with changing and dynamic conditions, which they can use to modify their operation or behaviour. Under the foundations of the EU FP7 SocIoTal project, this work provides an overview of how contextual information can be taken into account by smart objects when making security decisions, by considering such information as a first-class component, in order to realize so-called context-aware security in IoT scenarios.
1c5a40cff6297bd14ecc3e0c5efbae76a6afce5b
We describe an approach to building security services for context-aware environments. Specifically, we focus on the design of security services that incorporate the use of security-relevant “context” to provide flexible access control and policy enforcement. We previously presented a generalized access control model that makes significant use of contextual information in policy definition. This document provides a concrete realization of such a model by presenting a system-level service architecture, as well as early implementation experience with the framework. Through our context-aware security services, our system architecture offers enhanced authentication services, more flexible access control and a security subsystem that can adapt itself based on current conditions in the environment. We discuss our architecture and implementation and show how it can be used to secure several sample applications.
4b814e9d09ff72279030960df5718041b8c1b50c
586407b38cc3bb0560ff9941a89f3402e34ee08b
This paper discusses the concept of the business ecosystem. Business ecosystem is a relatively new concept in the field of business research, and there is still a lot of work to be done to establish it. First, the subject is approached by examining a biological ecosystem, especially how biological ecosystems are defined, how they evolve, and how they are classified and structured. Second, different analogies of the biological ecosystem are reviewed, including the industrial ecosystem, the economy as an ecosystem, the digital business ecosystem, and the social ecosystem. Third, the business ecosystem concept is outlined by discussing the views of its main contributors and then presenting the authors' own definition. Fourth, the emerging research field of complexity in the social sciences is brought in, because the authors consider ecosystems and business ecosystems to be complex, adaptive systems. The focal complexity aspects appearing in business ecosystems are presented: self-organization, emergence, co-evolution, and adaptation. By connecting the business ecosystem concept to complexity research, it is possible to bring new insights to changing business environments.
08c2649dee7ba1ab46106425a854ca3af869c2f0
Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.
05ba00812bbbe15be83418df6657f74edf76f727
Action recognition has received increasing attention during the last decade. Various approaches have been proposed to encode videos that contain actions, among which self-similarity matrices (SSMs) have shown very good performance by encoding the dynamics of the video. However, SSMs become sensitive when there is a very large view change. In this paper, we tackle the multi-view action recognition problem by proposing a sparse code filtering (SCF) framework which can mine the action patterns. First, a class-wise sparse coding method is proposed to make the sparse codes of the between-class data lie close together. Then we integrate the classifiers and the class-wise sparse coding process into a collaborative filtering (CF) framework to mine the discriminative sparse codes and classifiers jointly. The experimental results on several public multi-view action recognition datasets demonstrate that the presented SCF framework outperforms other state-of-the-art methods.
c956b29a133673c32586c7736d12c606f2d59a21
f36ef0d3e8d3abc1f30abc06603471c9aa1cc0d7
9c573daa179718f6c362f296f123e8ea2a775082
We have developed a simple and efficient procedure to design H-plane rectangular waveguide T-junctions for equal and unequal two-way power splitters. This synthesis procedure is scalable, renders manufacturable structures, is applicable to any arbitrary power-split ratio, and can offer wide-band operation. In our implementation, we utilized wedges and inductive windows (an integral part of the T-junctions) to provide more degrees of freedom and thus an excellent match at the input port and a flat power-split ratio over the band with equal phase, where phase balance is essential for various antenna feeds.
640eccc55eeb23f561efcb32ca97d445624cf326
Wireless sensor networks are increasingly being deployed in real-world applications ranging from energy monitoring to water-level measurement. To better integrate with existing network infrastructure, they are being designed to communicate using IPv6. The current de-facto standard for routing in IPv6-based sensor networks is the shortest-path-based RPL, developed by the IETF 6LoWPAN working group. This paper describes BackIP, an alternative routing protocol for data collection in IPv6-based wireless sensor networks that is based on the backpressure paradigm. In a backpressure-based protocol, routing decisions can be made on-the-fly on a per-packet basis by nodes based on the current locally observed state, and prior work has shown that such protocols can offer superior throughput performance and responsiveness to dynamic conditions compared to shortest-path routing protocols. We discuss a number of design decisions that are needed to enable backpressure routing to work in a scalable and efficient manner with IPv6. We implement and evaluate the performance of this protocol on a TinyOS-based real wireless sensor network testbed.
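A simplified sketch of a backpressure forwarding decision, choosing the neighbor with the largest positive queue differential per unit link cost; this illustrates the paradigm in general, not BackIP's actual rules, and the data below are hypothetical:

```python
def backpressure_next_hop(my_queue, neighbors):
    """Pick the neighbor with the largest positive queue differential,
    weighted by link cost (ETX). A per-packet, locally observed decision;
    a simplified sketch, not the BackIP specification."""
    best, best_weight = None, 0.0
    for addr, (backlog, etx) in neighbors.items():
        weight = (my_queue - backlog) / etx  # differential per expected transmission
        if weight > best_weight:
            best, best_weight = addr, weight
    return best  # None means hold the packet until a positive gradient appears

nbrs = {"fe80::1": (3, 1.2), "fe80::2": (6, 1.0)}  # hypothetical IPv6 neighbors
print(backpressure_next_hop(5, nbrs))  # forwards toward fe80::1
```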
053912e76e50c9f923a1fc1c173f1365776060cc
The predominant methodology in training deep learning advocates the use of stochastic gradient descent methods (SGDs). Despite its ease of implementation, SGDs are difficult to tune and parallelize. These problems make it challenging to develop, debug and scale up deep learning algorithms with SGDs. In this paper, we show that more sophisticated off-the-shelf optimization methods such as Limited-memory BFGS (L-BFGS) and Conjugate Gradient (CG) with line search can significantly simplify and speed up the process of pretraining deep algorithms. In our experiments, the differences between L-BFGS/CG and SGDs are more pronounced if we consider algorithmic extensions (e.g., sparsity regularization) and hardware extensions (e.g., GPUs or computer clusters). Our experiments with distributed optimization support the use of L-BFGS with locally connected networks and convolutional neural networks. Using L-BFGS, our convolutional network model achieves 0.69% on the standard MNIST dataset. This is a state-of-the-art result on MNIST among algorithms that do not use distortions or pretraining.
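A minimal example of swapping SGD for an off-the-shelf quasi-Newton method via SciPy's L-BFGS-B interface, here on a toy logistic regression; the data and regularization strength are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X @ rng.normal(size=20) + 0.1 * rng.normal(size=500) > 0).astype(float)

def loss_and_grad(w):
    """L2-regularized logistic loss; L-BFGS needs the value and gradient."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    loss += 1e-3 * w @ w
    grad = X.T @ (p - y) / len(y) + 2e-3 * w
    return loss, grad

# Line search and curvature estimation are handled internally; no
# learning-rate tuning is required, unlike plain SGD.
res = minimize(loss_and_grad, np.zeros(20), jac=True, method="L-BFGS-B",
               options={"maxiter": 100})
print(res.fun)
```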
cabcfc0c8704fa15fa8212a6f8944a249d5dcfa9
In this paper, a new miniaturized double-sided printed dipole antenna loaded with balanced Capacitively Loaded Loops (CLLs) as a metamaterial structure is presented. CLLs placed close to the edge of the printed antenna cause the antenna to radiate at two different frequencies, one of which is lower than the self-resonant frequency of the dipole antenna. In other words, the loaded dipole antenna can operate at a lower frequency than the natural resonance frequency of an unloaded half-wavelength dipole. Finally, the CLL element is integrated with a chip capacitor to provide a larger capacitance, which in turn allows the resulting CLL element to resonate at a lower frequency. It is demonstrated that the proposed loaded dipole antenna is a dual-band radiator with sufficient gain, suitable for applications such as mobile communication and Industrial, Scientific and Medical (ISM) systems. A prototype of the miniaturized double-resonant dipole antenna was fabricated and tested. The measured results are in good agreement with those obtained from simulation.
2671bf82168234a25fce7950e0527eb03b201e0c
Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features to be trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard “Charniak parser” checks in at a labeled precision-recall f-measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in (McClosky et al., 2006) raises this to 87.8% (an error reduction of 28%), again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%.
d4e974d68c36de92609fcffaa3ee11bbcbc9eb57
13d09bcec49d2f0c76194f88b59520e6d20e7a34
Matching unknown latent fingerprints lifted from crime scenes to full (rolled or plain) fingerprints in law enforcement databases is of critical importance for combating crime and fighting terrorism. Compared to good quality full fingerprints acquired using live-scan or inking methods during enrollment, latent fingerprints are often smudgy and blurred, capture only a small finger area, and have large nonlinear distortion. For this reason, features (minutiae and singular points) in latents are typically manually marked by trained latent examiners. However, this introduces an undesired interoperability problem between latent examiners and automatic fingerprint identification systems (AFIS); the features marked by examiners are not always compatible with those automatically extracted by AFIS, resulting in reduced matching accuracy. While the use of automatically extracted minutiae from latents can avoid interoperability problem, such minutiae tend to be very unreliable, because of the poor quality of latents. In this paper, we improve latent to full fingerprint matching accuracy by combining manually marked (ground truth) minutiae with automatically extracted minutiae. Experimental results on a public domain database, NIST SD27, demonstrate the effectiveness of the proposed algorithm.
a5a268d65ad1e069770c11005021d830754b0b5c
The Internet of Things is currently getting significant interest from the scientific community. Academia and industry are both focused on moving ahead in attempts to enhance usability, maintainability, and security through standardization and development of best practices. We focus on security because of its impact as one of the most limiting factors to wider Internet of Things adoption. Numerous research areas exist in the security domain, ranging from cryptography to network security to identity management. This paper provides a survey of existing research applicable to the Internet of Things environment at the application layer in the areas of identity management, authentication, and authorization. We survey and analyze more than 200 articles, categorize them, and present current trends in the Internet of Things security domain.
81f76e74807e9d04e14065715e46a2d770a6d9cd
df26f9822785b07e787d43429ee5fdd2794ac7f8
This paper describes a general method for estimating the nominal relationship and expected error (covariance) between coordinate frames representing the relative locations of objects. The frames may be known only indirectly through a series of spatial relationships, each with its associated error, arising from diverse causes, including positioning errors, measurement errors, or tolerances in part dimensions. This estimation method can be used to answer such questions as whether a camera attached to a robot is likely to have a particular reference object in its field of view. The calculated estimates agree well with those from an independent Monte Carlo simulation. The method makes it possible to decide in advance whether an uncertain relationship is known accurately enough for some task and, if not, how much of an improvement in locational knowledge a proposed sensor will provide. The method presented can be generalized to six degrees of freedom and provides a practical means of estimating the relationships (position and orientation) among objects, as well as estimating the uncertainty associated with the relationships.
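The planar (three-degree-of-freedom) case of this estimation can be sketched directly: compound two uncertain relationships and propagate covariance to first order through the Jacobians. This is a 2D sketch of the compounding operation the paper describes; the paper itself notes the generalization to six degrees of freedom:

```python
import numpy as np

def compound(pose1, cov1, pose2, cov2):
    """Compound two uncertain planar relationships (x, y, theta) and
    propagate covariance to first order through the Jacobians."""
    x1, y1, t1 = pose1
    x2, y2, t2 = pose2
    c, s = np.cos(t1), np.sin(t1)
    # Nominal compounded relationship.
    pose3 = np.array([x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, t1 + t2])
    # Jacobians of the compounding operation w.r.t. each input relationship.
    J1 = np.array([[1, 0, -s * x2 - c * y2],
                   [0, 1,  c * x2 - s * y2],
                   [0, 0, 1]])
    J2 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    # First-order (linearized) covariance of the compounded relationship.
    cov3 = J1 @ cov1 @ J1.T + J2 @ cov2 @ J2.T
    return pose3, cov3
```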
414b0d139d83024d47649ba37c3d11b1165057d6
India is an agriculture-based nation. It is necessary to improve the productivity and quality of agro-based products. The proposed design is an automatic system that aids farmers in the irrigation process. It keeps the farmer notified through an on-board LCD display and messages sent to the farmer's cellular number. This proposed design is also helpful for farmers who struggle to maintain a uniform water supply due to power failures or an inadequate and non-uniform water supply. The automatic irrigation system also keeps the farmer updated on all background activities through a SIM900 module that sends messages to the registered number. This device can be a turning point for our society, and it is easily affordable by the farmers of the country. This proposed design helps reduce human labour. It is a low-budget system with an essential social application.
6ed591fec03437ed2bf7479d92f49833e3851f71
An intelligent drip irrigation system optimizes water and fertilizer use for agricultural crops using wireless sensors and fuzzy logic. The wireless sensor network consists of many sensor nodes, a hub, and a control unit. The sensors collect real-time data such as temperature and soil humidity. These data are sent to the hub using wireless technology. The hub processes the data using fuzzy logic and decides the time duration for keeping the valves open. Accordingly, drip irrigation is applied for that particular amount of time. The whole system is powered by photovoltaic cells and has a communication link which allows the system to be monitored, controlled, and scheduled through cellular text messages. The system can quickly and accurately calculate the water demand of crops, which can provide a scientific basis for water-saving irrigation, as well as a method to optimize the amount of fertilizer used.
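A toy sketch of the fuzzy decision step, mapping temperature and soil humidity to a valve-open duration; the membership breakpoints and rule weights are hypothetical, not the paper's calibration:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_minutes(temp_c, soil_pct):
    """Toy fuzzy controller for valve-open duration (weighted-average
    defuzzification); breakpoints and durations are hypothetical."""
    hot = tri(temp_c, 25, 40, 55)
    dry = tri(soil_pct, -20, 0, 40)    # peaks at 0% soil humidity
    moist = tri(soil_pct, 30, 60, 90)
    # Rules: dry soil -> long watering; hot and not moist -> medium; moist -> short.
    rules = [(dry, 30.0), (min(hot, 1.0 - moist), 15.0), (moist, 2.0)]
    num = sum(w * d for w, d in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(irrigation_minutes(temp_c=38, soil_pct=20))  # about 20 minutes
```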
8075c73fd8b13fa9663230a383f5712bf210ebcf
Efficient water management is a major concern in many cropping systems in semiarid and arid areas. Distributed in-field sensor-based irrigation systems offer a potential solution to support site-specific irrigation management that allows producers to maximize their productivity while saving water. This paper describes details of the design and instrumentation of variable rate irrigation, a wireless sensor network, and software for real-time in-field sensing and control of a site-specific precision linear-move irrigation system. Field conditions were site-specifically monitored by six in-field sensor stations distributed across the field based on a soil property map, and periodically sampled and wirelessly transmitted to a base station. An irrigation machine was converted to be electronically controlled by a programming logic controller that updates georeferenced location of sprinklers from a differential Global Positioning System (GPS) and wirelessly communicates with a computer at the base station. Communication signals from the sensor network and irrigation controller to the base station were successfully interfaced using low-cost Bluetooth wireless radio communication. Graphic user interface-based software developed in this paper offered stable remote access to field conditions and real-time control and monitoring of the variable-rate irrigation controller.
ebf9bfbb122237ffdde5ecbbb292181c92738fd4
This paper shows the design and fabrication of a thermoelectric generator (TEG) and the implementation of an automated irrigation system using this TEG as a soil moisture detector. The TEG, inserted in two heat exchangers, is capable of measuring the thermal difference between the air and the soil, which correlates with the soil's moisture condition. Since the soil moisture level can be obtained from the TEG's output, a microcontroller is used to automate the irrigation system. The irrigation system adapts to the condition of the soil it irrigates based on the moisture it detects via the TEG. The water consumption is controlled by the automated irrigation system based on the soil's condition, promoting water conservation compared with a manually operated irrigation system. It also optimizes plant growth by watering the soil to the correct moisture level at the right time.
59f153ddd37e22af153aa0d7caf3ec44053aa8e8
At present, labor-saving and water-saving technology is a key issue in irrigation. A wireless solution for an intelligent field irrigation system dedicated to Jew's-ear planting in Lishui, Zhejiang, China, based on ZigBee technology, is proposed in this paper. Instead of a conventional wired connection, the wireless design makes the system easy to install and maintain. The hardware architecture and software algorithms of the wireless sensor/actuator nodes and the portable controller, acting as the end devices and coordinator in the ZigBee wireless sensor network respectively, are elaborated in detail. The performance of the whole system is evaluated at the end. The long-term smooth and proper running of the system in the field proved its high reliability and practicability. As an explorative application of wireless sensor networks in irrigation management, this paper offers a methodology to establish large-scale remote intelligent irrigation systems.
96e92ff6c7642cc75dc856ae4b22a5409c69e6cb
Cooperative navigation (CN) enables a group of cooperative robots to reduce their individual navigation errors. For a general multi-robot (MR) measurement model that involves both inertial navigation data and other onboard sensor readings, taken at different time instances, the various sources of information become correlated. Thus, this correlation should be solved for in the process of information fusion to obtain consistent state estimation. The common approach for obtaining the correlation terms is to maintain an augmented covariance matrix. This method would work for relative pose measurements, but is impractical for a general MR measurement model, because the identities of the robots involved in generating the measurements, as well as the measurement time instances, are unknown a priori. In the current work, a new consistent information fusion method for a general MR measurement model is developed. The proposed approach relies on graph theory. It enables explicit on-demand calculation of the required correlation terms. The graph is locally maintained by every robot in the group, representing all the MR measurement updates. The developed method calculates the correlation terms in the most general scenarios of MR measurements while properly handling the involved process and measurement noise. A theoretical example and a statistical study are provided, demonstrating the performance of the method for vision-aided navigation based on a three-view measurement model. The method is compared, in a simulated environment, to a fixed-lag centralized smoothing approach. The method is also validated in an experiment that involved real imagery and navigation data. Computational complexity estimates show that the newly-developed method is computationally efficient.
fc20f0ce11946c7d17a676fd880fec6dfc1c0397
bef9d9edd340eb09e2cda37cb7f4d4886a36fe66
96230bbd9804f4e7ac0017f9065ebe488f30b642
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has attracted much attention recently. Along this line, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima by measuring the alignment of the noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies these two conditions, and thus helps SGD escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding by comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e., Langevin dynamics) and with other types of position-dependent noise.
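The paper derives its own indicator; the sketch below is a simplified, assumed rendering of the underlying idea. It compares the expected loss increase Tr(HΣ) produced by anisotropic noise with covariance Σ around a minimum with Hessian H against isotropic noise of the same total magnitude — values above 1 suggest the noise is aligned with sharp curvature directions and escapes more efficiently.

```python
import numpy as np

def alignment_indicator(H: np.ndarray, Sigma: np.ndarray) -> float:
    """Ratio of the expected loss increase produced by anisotropic noise
    with covariance Sigma to that produced by isotropic noise of the same
    total magnitude, around a minimum with Hessian H."""
    d = H.shape[0]
    aniso = np.trace(H @ Sigma)                  # E[eps^T H eps], eps ~ N(0, Sigma)
    iso = np.trace(H) * np.trace(Sigma) / d      # same trace budget, isotropic
    return aniso / iso

# Toy check: noise concentrated along the sharpest curvature direction.
H = np.diag([10.0, 0.1])
Sigma_aligned = np.diag([1.0, 0.0])
Sigma_iso = np.eye(2) * 0.5
print(alignment_indicator(H, Sigma_aligned))  # > 1: efficient escape
print(alignment_indicator(H, Sigma_iso))      # = 1 by construction
```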
d908f630582f1a11b6d481e635fb1d06e7671f32
27db63ab642d9c27601a9311d65b63e2d2d26744
While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
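The recommended tests are available in SciPy; below is a minimal sketch with made-up accuracy numbers. The `q_alpha` value is the alpha = 0.05 Nemenyi constant for three classifiers, taken from standard tables.

```python
import numpy as np
from scipy import stats

# Accuracy of 3 classifiers on 8 data sets (rows = data sets), illustrative.
acc = np.array([
    [0.81, 0.79, 0.75],
    [0.90, 0.88, 0.85],
    [0.77, 0.76, 0.70],
    [0.85, 0.86, 0.80],
    [0.92, 0.90, 0.88],
    [0.70, 0.68, 0.66],
    [0.88, 0.85, 0.83],
    [0.79, 0.80, 0.74],
])

# Two classifiers over multiple data sets: Wilcoxon signed-ranks test.
stat, p = stats.wilcoxon(acc[:, 0], acc[:, 1])
print(f"Wilcoxon: p = {p:.3f}")

# More than two classifiers: Friedman test on per-data-set measurements.
stat, p = stats.friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman: p = {p:.3f}")

# Nemenyi post-hoc critical difference (CD): classifiers whose average
# ranks differ by more than CD perform significantly differently.
k, N, q_alpha = 3, acc.shape[0], 2.343   # q_alpha: alpha=0.05, k=3
cd = q_alpha * np.sqrt(k * (k + 1) / (6 * N))
print(f"Critical difference: {cd:.3f}")
```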
4dbd924046193a51e4a5780d0e6eb3a4705784cd
BayesOpt is a library with state-of-the-art Bayesian optimization methods to solve nonlinear optimization, stochastic bandits or sequential experimental design problems. Bayesian optimization is sample efficient by building a posterior distribution to capture the evidence and prior knowledge for the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave.
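The snippet below is not BayesOpt's own API; it is a generic, assumed sketch of the loop the library implements — fit a posterior over the target (here a scikit-learn Gaussian process), maximize an acquisition function (expected improvement), and evaluate the suggested point.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def target(x):                # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))          # initial design
y = target(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement for minimization.
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, target(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best y:", y.min())
```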
801556eae6de26616d2ce90cdd4aecc4e2de7fe4
An electrically non-contact ECG measurement system built into a chair can be applied to many fields for continuous health monitoring in daily life. However, the body is electrically floating in such a system because of the capacitive electrodes, and the floating body is very sensitive to external noise and motion artifacts, which enter the measurement system as common-mode noise. In this paper, a Driven-Seat-Ground circuit, similar to the Driven-Right-Leg circuit, is proposed to reduce the common-mode noise. The equivalent circuit is analyzed, and the output signal waveforms obtained with the Driven-Seat-Ground circuit are compared with those obtained with a capacitive ground. The results show that, by acting as negative feedback, the Driven-Seat-Ground circuit significantly improves the performance of the fully capacitive ECG measurement system.
95f388c8cd9db1e800e515e53aaaf4e9b433866f
Cloud computing technology has matured as it has been integrated with every kind of digitalization process. It offers numerous advantages for data and software sharing, thus making the management of complex IT systems much simpler. For engineering education, cloud computing even provides students with versatile and ubiquitous access to software commonly used in the field without their having to step into an actual computer lab. Our study analyzed learning attitudes and academic performance induced by the utilization of resources driven by cloud computing technologies. Comparisons were made between college students with regular high school and vocational high school backgrounds. One hundred and thirty-two students who took the computer-aided design (CAD) course participated in the study. The Technology Acceptance Model (TAM) was used as the fundamental framework. Open-ended questionnaires were designed to measure academic performance and causal attributions. The results indicated no significant differences between the two groups of students in the cognitive domain, but differences did appear in both the psychomotor and the affective domains. College students with a vocational high school background appeared to possess higher learning motivation in CAD applications.
e2413f14a014603253815398e56c7fee0ba01a3d
This chapter provides an overview of the state of the art in intrusion detection research. Intrusion detection systems are software and/or hardware components that monitor computer systems and analyze the events occurring in them for signs of intrusions. Due to the widespread diversity and complexity of computer infrastructures, it is difficult to provide a completely secure computer system. Therefore, numerous security systems and intrusion detection systems address different aspects of computer security. This chapter first provides a taxonomy of computer intrusions, along with brief descriptions of major computer attack categories. Second, a common architecture of intrusion detection systems and their basic characteristics are presented. Third, a taxonomy of intrusion detection systems based on five criteria (information source, analysis strategy, time aspects, architecture, response) is given. Finally, intrusion detection systems are classified according to each of these categories, and the most representative research prototypes are briefly described.
42cfb5614cbef64a5efb0209ca31efe760cec0fc
The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work reported here, a low-level value system is modeled and implemented. It simulates the non-associative animal learning mechanism known as the habituation effect. Reinforcement learning is also integrated with novelty. Experimental results show that the proposed value system works as designed in a study of robot viewing-angle selection.
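One common single-state habituation model (assumed here for illustration; the paper's exact update rule may differ) decreases a value on each stimulus presentation and lets it recover toward its resting level during pauses, so rare stimuli retain higher value than familiar ones:

```python
def habituation_trace(stimuli, y0=1.0, alpha=0.05, beta=0.3):
    """Single-state habituation model: the response value y decays each
    time the stimulus is present (weight beta) and recovers toward its
    resting level y0 otherwise (rate alpha)."""
    y = y0
    trace = []
    for s in stimuli:
        y += alpha * (y0 - y)        # spontaneous recovery
        if s:
            y -= beta * y            # habituate to the repeated stimulus
        trace.append(y)
    return trace

# Repeated stimulation drives the value down; a pause lets it recover,
# so a novel (rare) stimulus keeps a higher value than a familiar one.
print(habituation_trace([1] * 10 + [0] * 10 + [1]))
```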
73447b6a02d1caff0a96472a2e0b571e1be497c8
Internet technology provides a new means of recalling and sharing personal memories in the digital age. What is the mnemonic consequence of posting personal memories online? Theories of transactive memory and autobiographical memory would make contrasting predictions. In the present study, college students completed a daily diary for a week, listing at the end of each day all the events that happened to them on that day. They also reported whether they posted any of the events online. Participants received a surprise memory test after the completion of the diary recording and then another test a week later. At both tests, events posted online were significantly more likely than those not posted online to be recalled. It appears that sharing memories online may provide unique opportunities for rehearsal and meaning-making that facilitate memory retention.
b3ede733fcd97271f745d8c0f71e44562abbb6d5
Identifying the function of problem behavior can lead to the development of more effective interventions. One way to identify the function is through functional behavior assessment (FBA), which teachers conduct in schools. However, the task load of recording the data manually is high, and accurately identifying antecedents and consequences while interacting with students is a significant challenge. These issues often result in imperfect information capture. CareLog allows teachers to conduct FBAs more easily and enhances the capture of relevant information. In this paper, we describe the design process that led to five design principles governing the development of CareLog. We present results from a five-month, quasi-controlled study aimed at validating those design principles, and we reflect on how the constraints imposed by special education settings affect the design and evaluation process for HCI practitioners and researchers.
77e7b0663f6774b3d6e1d51106020a9a0f96bcd2
This article explores the relationship between Internet use and the individual-level production of social capital. To do so, the authors adopt a motivational perspective to distinguish among types of Internet use when examining the factors predicting civic engagement, interpersonal trust, and life contentment. The predictive power of new media use is then analyzed relative to key demographic, contextual, and traditional media use variables using the 1999 DDB Life Style Study. Although the size of associations is generally small, the data suggest that informational uses of the Internet are positively related to individual differences in the production of social capital, whereas social-recreational uses are negatively related to these civic indicators. Analyses within subsamples defined by generational age breaks further suggest that social capital production is related to Internet use among Generation X, while it is tied to television use among Baby Boomers and newspaper use among members of the Civic Generation. The possibility of life cycle and cohort effects is discussed.
076be17f97325fda82d1537aaa48798eb66ba91f
Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). The senders using an IBE do not need to look up the public keys and the corresponding certificates of the receivers, the identities (e.g. emails or IP addresses) of the latter are sufficient to encrypt. Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However in the setting of IBE, there has been little work on studying the revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well -- as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure.
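The binary-tree idea can be sketched independently of the cryptography: publish key updates only for the minimal set of subtree roots that covers exactly the non-revoked leaves, which is O(r log(N/r)) nodes for r revocations among N users instead of N. A minimal sketch using heap-indexed nodes, assuming a power-of-two number of users (the function name and indexing scheme are ours):

```python
def key_update_nodes(num_leaves: int, revoked: set) -> set:
    """Nodes for which key updates are published: every non-revoked leaf
    has exactly one ancestor (or itself) in the result, and no revoked
    leaf does.  Heap indexing: root = 1, children of n are 2n and 2n+1;
    leaf i (0-based) is node num_leaves + i.  num_leaves is a power of 2."""
    if not revoked:
        return {1}                    # one update at the root covers everyone
    marked = set()
    for i in revoked:
        node = num_leaves + i
        while node >= 1:              # mark the path up to the root
            marked.add(node)
            node //= 2
    updates = set()
    for node in marked:
        for child in (2 * node, 2 * node + 1):
            if child < 2 * num_leaves and child not in marked:
                updates.add(child)
    return updates

# 8 users, users 1 and 6 revoked: 4 update nodes cover the other 6 users.
print(sorted(key_update_nodes(8, {1, 6})))   # [5, 6, 8, 15]
```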
7a58abc92dbe41c9e5b3c7b0a358ab9096880f25
A frequently proposed method of reducing unsolicited bulk email ("spam") is for senders to pay for each email they send. Proof-of-work schemes avoid charging real money by requiring senders to demonstrate that they have expended processing time in solving a cryptographic puzzle. We attempt to determine how difficult that puzzle should be so as to be effective in preventing spam. We analyse this both from an economic perspective, "how can we stop it being cost-effective to send spam", and from a security perspective, "spammers can access insecure end-user machines and will steal processing cycles to solve puzzles". Both analyses lead to similar values of puzzle difficulty. Unfortunately, real-world data from a large ISP shows that these difficulty levels would mean that significant numbers of senders of legitimate email would be unable to continue their current levels of activity. We conclude that proof-of-work will not be a solution to the problem of spam.
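A hashcash-style puzzle of the kind the paper analyzes can be sketched in a few lines; `difficulty_bits` is the knob whose economically and security-motivated values the paper derives (the message format below is our own choice):

```python
import hashlib
from itertools import count

def solve_puzzle(message: str, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(message || nonce) has the required
    number of leading zero bits.  Expected work: 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# At 20 bits, solving takes about a million hashes (a few seconds in pure
# Python).  The paper's point is that no single setting is both cheap
# enough for heavy legitimate senders and costly enough for spammers
# running on stolen cycles.
msg = "from:alice@example.com;to:bob@example.com"
nonce = solve_puzzle(msg, 20)
print(nonce, verify(msg, nonce, 20))
```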
5284e8897f3a73ff08da1f2ce744ba652583405a
Automatic grading of programming assignments has been a feature of computer science courses for almost as long as there have been computer science courses [1]. However, contemporary autograding systems in computer science courses have extended their scope far beyond automated assessment to include gamification [2], test coverage analysis [3], managing human-authored feedback, contest adjudication [4], secure remote code execution [5], and more. Many of these individual features have been described and evaluated in the computer science education literature, but little attention has been given to the practical benefits and challenges of using the systems that implement these features in computer science courses.
8a58a1107f790bc07774d18e0184e4bf9d1901ba
This thesis presents WiTrack, a system that tracks the 3D motion of a user from the radio signals reflected off her body. It works even if the person is occluded from the WiTrack device or in a different room. WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. Empirical measurements with a WiTrack prototype show that, on average, it localizes the center of a human body to within a median of 10 to 13 cm in the x and y dimensions, and 21 cm in the z dimension. It also provides coarse tracking of body parts, identifying the direction of a pointing hand with a median error of 11.2°. WiTrack bridges a gap between RF-based localization systems, which locate a user through walls and occlusions, and human-computer interaction systems like Kinect, which can track a user without instrumenting her body but require the user to stay within the direct line of sight of the device.
42004b6bdf5ea375dfaeb96c1fd6f8f77d908d65
Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.
30a7fcdaa836837d87a8e4702ed015cd66e6ad03
This paper describes the construction of a system that recognizes hand-printed digits, using a combination of classical techniques and neural-net methods. The system has been trained and tested on real-world data, derived from zip codes seen on actual U.S. Mail. The system rejects a small percentage of the examples as unclassifiable, and achieves a very low error rate on the remaining examples. The system compares favorably with other state-of-the art recognizers. While some of the methods are specific to this task, it is hoped that many of the techniques will be applicable to a wide range of recognition tasks.
605a12a1d02451157cc5fd4dc475d5cbddd5cb01
To many people, home is a sanctuary. People who need special medical care may need to be pulled out of their home to meet their medical needs. As the population ages, the percentage of people in this group is increasing, and the effects are expensive as well as unsatisfying. We hypothesize that many people with disabilities can lead independent lives in their own homes with the aid of at-home automated assistance and health monitoring. To accomplish this, robust methods must be developed to collect relevant data and process it dynamically and adaptively to detect and/or predict threatening long-term trends or immediate crises. The main objective of this paper is to investigate techniques for using agent-based smart home technologies to provide this at-home health monitoring and assistance. To this end, we have developed novel inhabitant modeling and automation algorithms that provide remote health monitoring for caregivers. Specifically, we address the following technological challenges: 1) identifying lifestyle trends, 2) detecting anomalies in current data, and 3) designing a reminder assistance system. Our solution approaches are being tested in simulation and with volunteers at UTA's MavHome site, an agent-based smart home environment.
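As a toy stand-in for the anomaly-detection component (the actual MavHome algorithms are more sophisticated), a z-score check against an inhabitant's historical baseline illustrates the kind of trend break a caregiver should be alerted to:

```python
import numpy as np

def detect_anomaly(history: np.ndarray, today: float, z_cutoff: float = 3.0):
    """Flag today's reading (e.g., daily count of kitchen events) as
    anomalous when it falls more than z_cutoff standard deviations
    from the inhabitant's historical baseline."""
    mu, sigma = history.mean(), history.std(ddof=1)
    z = (today - mu) / sigma
    return abs(z) > z_cutoff, z

# Fourteen days of daily activity counts, then a day with almost none.
history = np.array([42, 38, 45, 40, 39, 44, 41, 43, 37, 46, 40, 42, 39, 41])
flag, z = detect_anomaly(history, today=5.0)
print(flag, round(z, 1))   # True, large negative z: alert the caregiver
```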
494fc1e30be172fbe393e0d68695ae318e23da8c
Green supply chain management (GSCM) has gained increasing attention within both academia and industry. As the literature grows, finding new directions by critically evaluating the research and identifying future directions becomes important in advancing knowledge for the field. Using organizational theories to help categorize the literature provides opportunities to address both the objectives of understanding where the field currently stands and identifying research opportunities and directions. After providing a background discussion on GSCM we categorize and review recent GSCM literature under nine broad organizational theories. Within this review framework, we also identify GSCM research questions that are worthy of investigation. Additional organizational theories which are considered valuable for future GSCM research are also identified with a conclusion for this review.
c3a41f97b29c6abce6f75ee9c668584d77a84170
Sustainability rests on the principle that we must meet the needs of the present without compromising the ability of future generations to meet their own needs. Starving people in poor nations, obesity in rich nations, increasing food prices, on-going climate changes, increasing fuel and transportation costs, flaws of the global market, worldwide pesticide pollution, pest adaptation and resistance, loss of soil fertility and organic carbon, soil erosion, decreasing biodiversity, desertification, and so on. Despite unprecedented advances in sciences allowing us to visit planets and disclose subatomic particles, serious terrestrial issues about food show clearly that conventional agriculture is no longer suited to feeding humans and preserving ecosystems. Sustainable agriculture is an alternative for solving fundamental and applied issues related to food production in an ecological way (Lal (2008) Agron. Sustain. Dev. 28, 57–64.). While conventional agriculture is driven almost solely by productivity and profit, sustainable agriculture integrates biological, chemical, physical, ecological, economic and social sciences in a comprehensive way to develop new farming practices that are safe and do not degrade our environment. To address current agronomical issues and to promote worldwide discussions and cooperation we implemented sharp changes at the journal Agronomy for Sustainable Development from 2003 to 2006. Here we report (1) the results of the renovation of the journal and (2) a short overview of current concepts of agronomical research for sustainable agriculture. Considered for a long time as a soft, side science, agronomy is rising fast as a central science because current issues are about food, and humans eat food. This report is the introductory article of the book Sustainable Agriculture, volume 1, published by EDP Sciences and Springer (Lichtfouse et al. (2009) Sustainable Agriculture, Vol. 1, Springer, EDP Sciences, in press).
8216ca257a33d0d64cce02f5bb37de31c5b824f8
1518c8dc6a07c2391e58ece6e2ad8edca87be56e
With advances in data collection and generation technologies, organizations and researchers are faced with the ever-growing problem of how to manage and analyze large dynamic datasets. Environments that produce streaming sources of data are becoming commonplace. Examples include stock market, sensor, web clickstream, and network data. In many instances, these environments are also equipped with multiple distributed computing nodes that are often located near the data sources. Analyzing and monitoring data in such environments requires data mining technology that is cognizant of the mining task, the distributed nature of the data, and the data influx rate. In this chapter, we survey the current state of the field and identify potential directions of future research.
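As one concrete example of the memory-bounded techniques such systems rely on (a generic illustration, not taken from the chapter), reservoir sampling keeps a uniform random sample of a stream of unknown length in O(k) space:

```python
import random

def reservoir_sample(stream, k: int, seed: int = 0):
    """Maintain a uniform random sample of k items from a stream of
    unknown length using O(k) memory -- a basic building block for
    mining high-rate data sources on resource-limited nodes."""
    rng = random.Random(seed)
    sample = []
    for n, item in enumerate(stream):
        if n < k:
            sample.append(item)
        else:
            j = rng.randint(0, n)     # item kept with probability k/(n+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1_000_000), k=5))
```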
dd86669b91927f4c4504786269f93870854e117f
The study of technology acceptance in general, and software acceptance in particular, is a fruitful field in Anglo-American research in (Management) Information Systems and in German Wirtschaftsinformatik. Despite numerous studies originating from the Technology Acceptance Model and related theories, a growing number of contributions point out the deficits of previous studies and research approaches. One major cause is the focus on quantitative research methods, as we show on the basis of meta-studies and our own literature review. While quantitative methods are generally well suited to testing given theories, their contribution to building new theories is limited. This paper shows how a qualitative method can be used for better theory building. Using the acceptance of project management software (PMS) as an example, we show that this approach leads to new constructs, while some constructs of existing acceptance theories could not be confirmed.
9249389a2fbc2151a80b4731f007c780616b067a
Using the notion of fading memory we prove very strong versions of two folk theorems. The first is that any time-invariant (TI) continuous nonlinear operator can be approximated by a Volterra series operator, and the second is that the approximating operator can be realized as a finite-dimensional linear dynamical system with a nonlinear readout map. While previous approximation results are valid over finite time intervals and for signals in compact sets, the approximations presented here hold for all time and for signals in useful (noncompact) sets. The discrete-time analog of the second theorem asserts that any TI operator with fading memory can be approximated (in our strong sense) by a nonlinear moving-average operator. Some further discussion of the notion of fading memory is given.
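A discrete-time numerical illustration of the second theorem (our own construction, not from the paper): an exponentially weighted operator has fading memory, so truncating it to a window of M + 1 samples — i.e., a nonlinear moving-average operator — approximates it with error that shrinks geometrically in M.

```python
import numpy as np

def fading_memory_operator(u, lam=0.6):
    """y[n] = (sum_{k>=0} lam^k u[n-k])^2, over the whole past."""
    y, acc = [], 0.0
    for x in u:
        acc = lam * acc + x
        y.append(acc ** 2)
    return np.array(y)

def moving_average_approx(u, M, lam=0.6):
    """Same map, but applied to a finite window of the last M+1 samples."""
    up = np.concatenate([np.zeros(M), u])     # zero-pad the pre-history
    w = lam ** np.arange(M + 1)               # weights, newest sample first
    return np.array([
        np.dot(w, up[n : n + M + 1][::-1]) for n in range(len(u))
    ]) ** 2

u = np.random.default_rng(1).standard_normal(200)
for M in (2, 5, 10, 20):
    err = np.max(np.abs(fading_memory_operator(u) - moving_average_approx(u, M)))
    print(M, round(err, 6))   # error shrinks geometrically with M
```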
ef8af16b408a7c78ab0780fe419d37130f2efe4c
Three new classes of miniaturized Marchand balun are defined based on the synthesis of filter prototypes. They are suitable for mixed lumped-distributed planar realizations with small size resulting from transmission-line resonators being a quarter-wavelength long at frequencies higher than the passband center frequency. Each class corresponds to an S-plane bandpass prototype derived from the specification of transmission zero locations. A tunable 50:100-Ω balun is realized at 1 GHz to demonstrate the advantages of the approach presented here.
87eeb5622d8fbe4dca5f1c9b4190f719818c4d6e
Web 2.0 technologies have enabled more and more people to freely comment on different kinds of entities (e.g. sellers, products, services). The large scale of information poses the need and challenge of automatic summarization. In many cases, each of the user-generated short comments comes with an overall rating. In this paper, we study the problem of generating a ``rated aspect summary'' of short comments, which is a decomposed view of the overall ratings for the major aspects so that a user could gain different perspectives towards the target entity. We formally define the problem and decompose the solution into three steps. We demonstrate the effectiveness of our methods by using eBay sellers' feedback comments. We also quantitatively evaluate each step of our methods and study how well human agree on such a summarization task. The proposed methods are quite general and can be used to generate rated aspect summary automatically given any collection of short comments each associated with an overall rating.
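As a naive keyword baseline that only conveys the input/output shape of the task (the paper's actual three-step method learns the aspects and the rating decomposition rather than using a fixed keyword list, and the aspect vocabulary below is invented for illustration):

```python
from collections import defaultdict

ASPECT_KEYWORDS = {
    "shipping": {"shipping", "delivery", "arrived", "fast"},
    "price": {"price", "cheap", "expensive", "value"},
    "communication": {"communication", "response", "replied", "emails"},
}

def rated_aspect_summary(comments):
    """comments: list of (text, overall_rating) pairs.
    Returns average overall rating per detected aspect."""
    totals = defaultdict(lambda: [0.0, 0])
    for text, rating in comments:
        words = set(text.lower().replace(",", " ").split())
        for aspect, keys in ASPECT_KEYWORDS.items():
            if words & keys:
                totals[aspect][0] += rating
                totals[aspect][1] += 1
    return {a: s / n for a, (s, n) in totals.items()}

feedback = [
    ("fast shipping, item as described", 5),
    ("slow delivery but great price", 3),
    ("no response to my emails", 1),
]
print(rated_aspect_summary(feedback))
```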
626d68fbbb10182a72d1ac305fbb52ae7e47f0dc
This work demonstrates the design of an adaptive reconfigurable rectifier that addresses the early-breakdown-voltage problem of conventional rectifiers and extends the rectifier's operation over a wide dynamic input power range. A depletion-mode field-effect transistor is introduced to operate as a switch and to compensate the rectifier at low and high input power levels. The design achieves 40% RF-DC power conversion efficiency over a wide dynamic input power range from -10 dBm to 27 dBm, with a peak power efficiency of 78% at 22 dBm. The power harvester is designed to operate in the 900 MHz ISM band and is suitable for wireless power transfer applications.
767755e5c7389eefb8b60e784dc8395c8d0f417a
Cryptocurrencies like Bitcoin have proven to be a phenomenal success. Bitcoin-like systems use a proof-of-work mechanism and are therefore considered 1-hop blockchains; their security holds if the majority of the computing power is under the control of honest players. However, this assumption has been seriously challenged recently, and Bitcoin-like systems will fail when it is broken. We propose the first provably secure 2-hop blockchain by combining proof-of-work (first hop) and proof-of-stake (second hop) mechanisms. On top of Bitcoin's brilliant idea of utilizing the power of the honest miners, via their computing resources, to secure the blockchain, we further leverage the power of the honest users/stakeholders, via their coins/stake, to achieve this goal. The security of our blockchain holds if the honest players control a majority of the collective resources (which consist of both computing power and stake). That said, even if the adversary controls more than 50% of the computing power, the honest players still have a chance to defend the blockchain via honest stake.
a293b3804d1972c9f72ed3490eaafa66349d1597
Many games have a collection of boards, with the difficulty of an instance of the game determined by the starting configuration of the board. Correctly rating the difficulty of boards is somewhat haphazard and requires either a remarkable level of understanding of the game or a good deal of play-testing. In this study we explore evolutionary algorithms as a tool to automatically grade the difficulty of boards for a version of the game sokoban. Mean time-to-solution by an evolutionary algorithm and number of failures to solve a board are used as surrogates for the difficulty of a board. Initial testing with a simple string-based representation, giving a sequence of moves for the sokoban agent, provided very little signal; it usually failed. Two other representations, based on a reactive linear genetic programming structure called an ISAc list, generated useful hardness-classification information for both hardness surrogates. These two representations differ in that one uses a randomly initialized population of ISAc lists while the other initializes populations with competent agents pre-trained on random collections of sokoban boards. The study encompasses four hardness surrogates: probability-of-failure and mean time-to-solution for each of these two representations. All four are found to generate similar information about board hardness, but probability-of-failure with pre-evolved agents is found to be faster to compute and to have a clearer meaning than the other three board-hardness surrogates.
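The probability-of-failure surrogate is straightforward to compute given any stochastic solver with a per-run budget; below is a minimal sketch with a stand-in solver (the `difficulty` field and the solver itself are purely illustrative, not the paper's ISAc-list agents):

```python
import random

def probability_of_failure(board, solver, runs: int = 30, seed: int = 0):
    """Hardness surrogate from the study: run a stochastic solver several
    times and report the fraction of runs that fail to solve the board.
    `solver(board, rng)` should return True on success."""
    rng = random.Random(seed)
    failures = sum(
        0 if solver(board, random.Random(rng.random())) else 1
        for _ in range(runs)
    )
    return failures / runs

# Stand-in solver: succeeds with probability inversely related to a
# board's (hypothetical) intrinsic difficulty field.
def toy_solver(board, rng):
    return rng.random() > board["difficulty"]

easy, hard = {"difficulty": 0.1}, {"difficulty": 0.8}
print(probability_of_failure(easy, toy_solver),
      probability_of_failure(hard, toy_solver))
```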
844b795767b7c382808cc866ffe0c74742f706d4
The cortical, cerebellar and brainstem BOLD-signal changes have been identified with fMRI in humans during mental imagery of walking. In this study the whole brain activation and deactivation pattern during real locomotion was investigated by [(18)F]-FDG-PET and compared to BOLD-signal changes during imagined locomotion in the same subjects using fMRI. Sixteen healthy subjects were scanned at locomotion and rest with [(18)F]-FDG-PET. In the locomotion paradigm subjects walked at constant velocity for 10 min. Then [(18)F]-FDG was injected intravenously while subjects continued walking for another 10 min. For comparison fMRI was performed in the same subjects during imagined walking. During real and imagined locomotion a basic locomotion network including activations in the frontal cortex, cerebellum, pontomesencephalic tegmentum, parahippocampal, fusiform and occipital gyri, and deactivations in the multisensory vestibular cortices (esp. superior temporal gyrus, inferior parietal lobule) was shown. As a difference, the primary motor and somatosensory cortices were activated during real locomotion as distinct to the supplementary motor cortex and basal ganglia during imagined locomotion. Activations of the brainstem locomotor centers were more prominent in imagined locomotion. In conclusion, basic activation and deactivation patterns of real locomotion correspond to that of imagined locomotion. The differences may be due to distinct patterns of locomotion tested. Contrary to constant velocity real locomotion (10 min) in [(18)F]-FDG-PET, mental imagery of locomotion over repeated 20-s periods includes gait initiation and velocity changes. Real steady-state locomotion seems to use a direct pathway via the primary motor cortex, whereas imagined modulatory locomotion an indirect pathway via a supplementary motor cortex and basal ganglia loop.
d372629db7d6516c4729c847eb3f6484ee86de94
One of the most intriguing features of the Visual Question Answering (VQA) challenge is the unpredictability of the questions. Extracting the information required to answer them demands a variety of image operations from detection and counting, to segmentation and reconstruction. To train a method to perform even one of these operations accurately from {image, question, answer} tuples would be challenging, but to aim to achieve them all with a limited set of such training data seems ambitious at best. Our method thus learns how to exploit a set of external off-the-shelf algorithms to achieve its goal, an approach that has something in common with the Neural Turing Machine [10]. The core of our proposed method is a new co-attention model. In addition, the proposed approach generates human-readable reasons for its decision, and can still be trained end-to-end without ground truth reasons being given. We demonstrate the effectiveness on two publicly available datasets, Visual Genome and VQA, and show that it produces the state-of-the-art results in both cases.
8e9119613bceb83cc8a5db810cf5fd015cf75739
Rogue devices are an increasingly dangerous reality in the insider threat problem domain. Industry, government, and academia need to be aware of this problem and promote state-of-the-art detection methods.
95a213c530b605b28e1db4fcad6c3e8e1944f48b
018fd30f1a51c6523b382b6f7db87ddd865e393d
We have designed two end-fire antennas on LTCC with horizontal and vertical polarizations, respectively. The antennas operate at 38 GHz, a potential frequency for 5G applications. The horizontally-polarized antenna provides about 27% bandwidth and 6 dB end-fire gain, and the vertically-polarized one provides 12.5% bandwidth and 5 dB gain. Both antennas are integrated on a compact substrate. Excellent isolation is achieved between the nearby elements, making these antennas suitable for corner elements in 5G mobile systems.