45c56dc268a04c5fc7ca04d7edb985caf2a25093
Presents parameter estimation methods commonly used with discrete probability distributions, which are of particular interest in text modeling. Starting with maximum likelihood, maximum a posteriori, and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail, with a full derivation of an approximate inference algorithm based on Gibbs sampling, including a discussion of Dirichlet hyperparameter estimation. History: version 1: May 2005; version 2.4: August 2008.
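As a concrete illustration of the derivation's centerpiece, here is a minimal collapsed Gibbs sampler for LDA in Python; variable names and hyperparameter defaults are illustrative, not taken from the report.

```python
import numpy as np

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampler for LDA: resample each token's topic from
    p(z=k | rest) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))   # topic counts per document
    n_kw = np.zeros((K, V))           # word counts per topic
    n_k = np.zeros(K)                 # total tokens per topic
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]   # remove the current assignment...
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))   # ...and resample
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    # posterior-mean point estimates of topic-word and doc-topic mixtures
    phi = (n_kw + beta) / (n_kw.sum(1, keepdims=True) + V * beta)
    theta = (n_dk + alpha) / (n_dk.sum(1, keepdims=True) + K * alpha)
    return phi, theta

docs = [[0, 1, 2, 1], [2, 3, 3, 0], [4, 4, 5, 5]]
phi, theta = lda_gibbs(docs, V=6, K=2, iters=50)
```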
9e463eefadbcd336c69270a299666e4104d50159
2e268b70c7dcae58de2c8ff7bed1e58a5e58109a
This is an updated version of Chapter 4 of the author’s Dynamic Programming and Optimal Control, Vol. II, 4th Edition, Athena Scientific, 2012. It includes new material, and it is substantially revised and expanded (it has more than doubled in size). The new material aims to provide a unified treatment of several models, all of which lack the contractive structure that is characteristic of the discounted problems of Chapters 1 and 2: positive and negative cost models, deterministic optimal control (including adaptive DP), stochastic shortest path models, and risk-sensitive models. Here is a summary of the new material:
6d596cb55d99eae216840090b46bc5e49d7aeea5
We propose two novel techniques for overcoming the load imbalance encountered when implementing so-called look-ahead mechanisms in relevant dense matrix factorizations for the solution of linear systems. Both techniques target the scenario where two thread teams are created/activated during the factorization, with each team in charge of performing an independent task/branch of execution. The first technique promotes worker sharing (WS) between the two tasks, allowing the threads of the task that completes first to be reallocated for use by the costlier task. The second technique allows a fast task to alert the slower task of completion, enforcing the early termination (ET) of the second task, and a smooth transition of the factorization procedure into the next iteration. The two mechanisms are instantiated via a new malleable thread-level implementation of the basic linear algebra subprograms, and their benefits are illustrated via an implementation of the LU factorization with partial pivoting enhanced with look-ahead. Concretely, our experimental results on an Intel Xeon system with 12 cores show the benefits of combining WS+ET, reporting competitive performance in comparison with a task-parallel runtime-based solution.
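A schematic illustration of the ET control flow in Python threads (a toy sketch, not the paper's malleable BLAS): the fast branch signals completion, and the slow branch polls the flag between blocks so the factorization can move on to the next iteration. Under WS, the freed threads would instead be reassigned to the slower branch.

```python
import threading
import time

panel_done = threading.Event()

def panel_task():                    # look-ahead branch (the fast one here)
    time.sleep(0.05)                 # stand-in for factorizing the next panel
    panel_done.set()                 # alert the other team of completion

def trailing_update(n_blocks=50):    # remainder branch (the slow one here)
    for b in range(n_blocks):
        time.sleep(0.01)             # stand-in for updating one block column
        if panel_done.is_set():      # ET: defer the remaining blocks to the
            return b + 1             # next iteration and terminate early
    return n_blocks

t = threading.Thread(target=panel_task)
t.start()
done_blocks = trailing_update()
t.join()
print(f"trailing update stopped after {done_blocks} of 50 blocks")
```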
7157dda72073ff66cc2de6ec5db056a3e8b326d7
66903e95f84767a31beef430b2367492ac9cc750
OBJECTIVE This is the second in a series of articles that describe the prevalence, correlates, and consequences of childhood sexual abuse (CSA) in a birth cohort of more than 1,000 New Zealand children studied to the age of 18 years. This article examines the associations between reports of CSA at age 18 and DSM-IV diagnostic classifications at age 18. METHOD A birth cohort of New Zealand children was studied at annual intervals from birth to age 16 years. At age 18 years retrospective reports of CSA prior to age 16 and concurrently measured psychiatric symptoms were obtained. RESULTS Those reporting CSA had higher rates of major depression, anxiety disorder, conduct disorder, substance use disorder, and suicidal behaviors than those not reporting CSA (p < .002). There were consistent relationships between the extent of CSA and risk of disorder, with those reporting CSA involving intercourse having the highest risk of disorder. These results persisted when findings were adjusted for prospectively measured childhood family and related factors. Similar but less marked relationships between CSA and nonconcurrently measured disorders were found. CONCLUSIONS The findings suggest that CSA, and particularly severe CSA, was associated with increased risk of psychiatric disorder in young adults even when due allowance was made for prospectively measured confounding factors.
8df383aae16ce1003d57184d8e4bf729f265ab40
The design of a new microstrip-line-fed wideband circularly polarized (CP) annular-ring slot antenna (ARSA) is proposed. Compared with existing ring slot antennas, the ARSAs designed here possess much larger CP bandwidths. The main features of the proposed design include a wider ring slot, a pair of grounded hat-shaped patches, and a deformed bent feeding microstrip line. The ARSAs designed using FR4 substrates in the L and S bands have 3-dB axial-ratio bandwidths (ARBWs) of as large as 46% and 56%, respectively, whereas the one using an RT5880 substrate in the L band achieves 65%. In these 3-dB axial-ratio bands, impedance matching with VSWR ≤ 2 is also achieved.
95e873c3f64a9bd8346f5b5da2e4f14774536834
A substrate integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consisting of a simple arrangement of vias on the flared side wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated with two well-known full-wave packages, Ansoft HFSS and CST Microwave Studio, which are based on different numerical methods. Close agreement between the simulation results is achieved. The designed antenna shows good radiation characteristics and low VSWR, lower than 2.5, over the whole frequency range of 18-40 GHz.
12a376e621d690f3e94bce14cd03c2798a626a38
This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers [6]. The third contribution is a method for combining increasingly complex classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.
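A minimal sketch of the integral-image trick described above: after a single pass over the image, any rectangular sum (and hence any Haar-like feature, which is a difference of such sums) costs only four array references.

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of all pixels above and to the left of (r, c), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] using at most four lookups."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
# A two-rectangle Haar-like feature is then just a difference of two box sums.
feature = box_sum(ii, 0, 0, 1, 3) - box_sum(ii, 2, 0, 3, 3)
```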
51f0e3fe5335e2c3a55e673a6adae646f0ad6e11
Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom-up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from “factors” to “actors” in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction. Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach.
b73cdb60b2fe9fb317fca4fb9f5e1106e13c2345
aa0c01e553d0a1ab40c204725d13fe528c514bba
Fluent and safe interactions of humans and robots require both partners to anticipate the other's actions. A common approach to human intention inference is to model specific trajectories towards known goals with supervised classifiers. However, these approaches do not take possible future movements into account, nor do they make use of kinematic cues, such as legible and predictable motion. The bottleneck of these methods is the lack of an accurate model of general human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB-D images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motions. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.
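A minimal sketch of such a conditional VAE; the layer sizes, window lengths, and MSE likelihood below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MotionCVAE(nn.Module):
    """Predict a window of future pose frames conditioned on past frames."""
    def __init__(self, frame_dim=45, past=10, future=10, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear((past + future) * frame_dim, h), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(past * frame_dim + z_dim, h), nn.ReLU(),
            nn.Linear(h, future * frame_dim))

    def forward(self, past, future):
        h = self.enc(torch.cat([past, future], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        recon = self.dec(torch.cat([past, z], dim=1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        return recon, kl

    def sample(self, past, n=10):
        # Draw several plausible futures for one observed past window.
        z = torch.randn(n, self.mu.out_features)
        return self.dec(torch.cat([past.expand(n, -1), z], dim=1))

model = MotionCVAE()
past, future = torch.randn(8, 10 * 45), torch.randn(8, 10 * 45)
recon, kl = model(past, future)
loss = ((recon - future) ** 2).mean() + 1e-3 * kl   # ELBO with MSE likelihood
```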
cbbf72d487f5b645d50d7d3d94b264f6a881c96f
This paper presents the first fully on-chip integrated energy harvester and rectenna at the W-band in 65 nm CMOS technology. The designs are based on a 1-stage Dickson voltage multiplier. The rectenna consists of an on-chip integrated dipole antenna with a reflector underneath the substrate to enhance the directivity and realized gain. The energy harvester and rectenna achieve a power conversion efficiency of 10% and 2%, respectively, at 94 GHz. The stand-alone harvester occupies only 0.0945 mm2 including pads, while the fully integrated rectenna occupies a minimal chip area of 0.48 mm2.
30667550901b9420e02c7d61cdf8fa7d5db207af
6cdb6ba83bfaca7b2865a53341106a71e1b3d2dd
Social media are becoming ubiquitous and need to be managed like all other forms of media that organizations employ to meet their goals. However, social media are fundamentally different from any traditional or other online media because of their social network structure and egalitarian nature. These differences require a distinct measurement approach as a prerequisite for proper analysis and subsequent management. To develop the right social media metrics and subsequently construct appropriate dashboards, we provide a tool kit consisting of three novel components. First, we theoretically derive and propose a holistic framework that covers the major elements of social media, drawing on theories from marketing, psychology, and sociology. We continue to support and detail these elements — namely ‘motives,’ ‘content,’ ‘network structure,’ and ‘social roles & interactions’ — with recent research studies. Second, based on our theoretical framework, the literature review, and practical experience, we suggest nine guidelines that may prove valuable for designing appropriate social media metrics and constructing a sensible social media dashboard. Third, based on the framework and the guidelines we derive managerial implications and suggest an agenda for future research.
3c9598a2be80a88fccecde80e6f266af7907d7e7
ab3d0ea202b2641eeb66f1d6a391a43598ba22b9
Reinforcement Learning (RL) is considered here as an adaptation technique for neural controllers of machines. The goal is to make Actor-Critic algorithms require less agent-environment interaction to obtain policies of the same quality, at the cost of additional background computations. We propose to achieve this goal in the spirit of experience replay. A method for estimating the improvement direction of a changing policy from preceding experience is essential here. We propose one that uses truncated importance sampling. We derive bounds on the bias of this type of estimator and prove that the bias vanishes asymptotically. In the experimental study we apply our approach to the classic Actor-Critic algorithm and obtain a 20-fold increase in learning speed.
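A minimal sketch of the truncated importance-sampling idea at the heart of this approach: replayed actions are reweighted by the ratio of current to behavior policy probabilities, with the ratio capped to bound the variance (at the price of a bias that shrinks as the policies stay close). Names and the truncation level are illustrative.

```python
import numpy as np

def truncated_is_estimate(returns, pi_new, pi_old, c=5.0):
    """Estimate E_{pi_new}[G] from samples generated under pi_old.

    returns: observed returns G_i; pi_new / pi_old: probabilities of the
    replayed actions under the current and the behavior policy; c: cap.
    """
    w = np.minimum(pi_new / pi_old, c)   # truncated importance weights
    return np.mean(w * returns)

rng = np.random.default_rng(0)
pi_old = rng.uniform(0.05, 1.0, 1000)
pi_new = np.clip(pi_old * rng.uniform(0.5, 2.0, 1000), 0.01, 1.0)
G = rng.normal(1.0, 0.5, 1000)
print(truncated_is_estimate(G, pi_new, pi_old))
```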
71e258b1aeea7a0e2b2076a4fddb0679ad2ecf9f
The "Internet of Things"(IoT) opens opportunities for devices and softwares to share information on an unprecedented scale. However, such a large interconnected network poses new challenges to system developers and users. In this article, we propose a layered architecture of IoT system. Using this model, we try to identify and assess each layer's challenges. We also discuss several existing technologies that can be used make this architecture secure.
61f4f67fc0e73fa3aef8628aae53a4d9b502d381
Interpretable variables are useful in generative models. Generative Adversarial Networks (GANs) are generative models that are flexible in their input. The Information Maximizing GAN (InfoGAN) ties the output of the generator to a component of its input called the latent codes. By forcing the output to be tied to this input component, we can control some properties of the output representation. It is notoriously difficult to find the Nash equilibrium when jointly training the discriminator and generator in a GAN. We uncover some successful and unsuccessful configurations for generating images using InfoGAN.
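A minimal sketch of the InfoGAN mechanism on toy vector data (network sizes and loss weight are illustrative assumptions): the generator input concatenates noise z with a categorical latent code c, and an auxiliary head Q is trained to recover c from the generated sample, maximizing a lower bound on the mutual information I(c; G(z, c)).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, n_codes, x_dim = 32, 10, 64
G = nn.Sequential(nn.Linear(z_dim + n_codes, 128), nn.ReLU(), nn.Linear(128, x_dim))
D_body = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())  # shared trunk
D_head = nn.Linear(128, 1)          # real/fake logit
Q_head = nn.Linear(128, n_codes)    # approximate posterior over the code

z = torch.randn(16, z_dim)
c = torch.randint(n_codes, (16,))
x_fake = G(torch.cat([z, F.one_hot(c, n_codes).float()], dim=1))
h = D_body(x_fake)
adv_loss = F.binary_cross_entropy_with_logits(D_head(h), torch.ones(16, 1))
info_loss = F.cross_entropy(Q_head(h), c)   # MI lower bound (up to a constant)
g_loss = adv_loss + 1.0 * info_loss         # generator objective: fool D, keep c recoverable
```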
41289566ac0176dced2312f813328ad4c0552618
The prevalence of mobile platforms, the large market share of Android, plus the openness of the Android Market makes it a hot target for malware attacks. Once a malware sample has been identified, it is critical to quickly reveal its malicious intent and inner workings. In this paper we present DroidScope, an Android analysis platform that continues the tradition of virtualization-based malware analysis. Unlike current desktop malware analysis platforms, DroidScope reconstructs both the OS-level and Java-level semantics simultaneously and seamlessly. To facilitate custom analysis, DroidScope exports three tiered APIs that mirror the three levels of an Android device: hardware, OS and Dalvik Virtual Machine. On top of DroidScope, we further developed several analysis tools to collect detailed native and Dalvik instruction traces, profile API-level activity, and track information leakage through both the Java and native components using taint analysis. These tools have proven to be effective in analyzing real world malware samples and incur reasonably low performance overheads.
05ca17ffa777f64991a8da04f2fd03880ac51236
In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show that the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest.
6fece3ef2da2c2f13a66407615f2c9a5b3737c88
This paper proposes a dynamic controller structure and a systematic design procedure for stabilizing discrete-time hybrid systems. The proposed approach is based on the concept of control Lyapunov functions (CLFs), which, when available, can be used to design a stabilizing state-feedback control law. In general, the construction of a CLF for hybrid dynamical systems involving both continuous and discrete states is extremely complicated, especially in the presence of non-trivial discrete dynamics. Therefore, we introduce the novel concept of a hybrid control Lyapunov function, which allows the compositional design of a discrete and a continuous part of the CLF, and we formally prove that the existence of a hybrid CLF guarantees the existence of a classical CLF. A constructive procedure is provided to synthesize a hybrid CLF, by expanding the dynamics of the hybrid system with a specific controller dynamics. We show that this synthesis procedure leads to a dynamic controller that can be implemented by a receding horizon control strategy, and that the associated optimization problem is numerically tractable for a fairly general class of hybrid systems, useful in real world applications. Compared to classical hybrid receding horizon control algorithms, the proposed approach typically requires a shorter prediction horizon to guarantee asymptotic stability of the closed-loop system, which yields a reduction of the computational burden, as illustrated through two examples.
3b3c153b09495e2f79dd973253f9d2ee763940a5
The applicability of machine learning methods is often limited by the amount of available labeled data, and by the ability (or inability) of the designer to produce good internal representations and good similarity measures for the input data vectors. The aim of this thesis is to alleviate these two limitations by proposing algorithms to learn good internal representations, and invariant feature hierarchies from unlabeled data. These methods go beyond traditional supervised learning algorithms, and rely on unsupervised, and semi-supervised learning. In particular, this work focuses on “deep learning” methods, a set of techniques and principles to train hierarchical models. Hierarchical models produce feature hierarchies that can capture complex non-linear dependencies among the observed data variables in a concise and efficient manner. After training, these models can be employed in real-time systems because they compute the representation by a very fast forward propagation of the input through a sequence of non-linear transformations. When the paucity of labeled data does not allow the use of traditional supervised algorithms, each layer of the hierarchy can be trained in sequence starting at the bottom by using unsupervised or semi-supervised algorithms. Once each layer has been trained, the whole system can be fine-tuned in an end-to-end fashion. We propose several unsupervised algorithms that can be used as building blocks to train such feature hierarchies. We investigate algorithms that produce sparse overcomplete representations and features that are invariant to known and learned transformations. These algorithms are designed using the Energy-
447ce2aecdf742cf96137f8bf7355a7404489178
In this letter, a new type of wideband substrate integrated waveguide (SIW) cavity-backed patch antenna and array for millimeter wave (mmW) are investigated and implemented. The proposed antenna is composed of a rectangular patch with a backed SIW cavity. In order to enhance the bandwidth and radiation efficiency, the cavity is designed to resonate at its TE210 mode. Based on the proposed antenna, a 4 × 4 array is also designed. Both the proposed antenna and array are fabricated with standard printed circuit board (PCB) process, which possess the advantage of easy integration with planar circuits. The measured bandwidth (|S11| ≤ -10 dB) of the antenna element is larger than 15%, and that of the antenna array is about 8.7%. The measured peak gains are 6.5 dBi for the element and 17.8 dBi for the array, and the corresponding simulated radiation efficiencies are 83.9% and 74.9%, respectively. The proposed antenna and array are promising for millimeter-wave applications due to its merits of wide band, high efficiency, low cost, low profile, etc.
429d0dd7192450e2a52a8ae7f658a5d99222946e
A compact, low-cost, high-radiation-efficiency antenna structure combining a planar substrate integrated waveguide (SIW) with a dielectric resonator antenna (DRA) is presented in this paper. Since the SIW is a high-Q waveguide and the DRA is a low-loss radiator, the SIW-DRA combination forms an excellent antenna system with high radiation efficiency at the millimeter-wave band, where conductor loss dominates. The impact of different antenna parameters on the antenna performance is studied. Experimental data for SIW-DRA, based on two different slot orientations, at the millimeter-wave band are introduced and compared to the simulated HFSS results to validate our proposed antenna model. A good agreement is obtained. The measured gain for a SIW-DRA single element showed a broadside gain of 5.51 dB, a maximum cross-polarized radiation level of -19 dB, and an overall calculated (simulated using HFSS) radiation efficiency greater than 95%.
0cb2e8605a7b5ddb5f3006f71d19cb9da960db98
Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top-1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.
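A minimal sketch of the three phases for a single weight tensor (PyTorch; shapes illustrative, training loops omitted). The threshold is derived from the sparsity ratio, the one extra hyper-parameter named above.

```python
import torch

def dsd_mask(weight, sparsity=0.5):
    # S step: prune the smallest-magnitude fraction of connections.
    k = int(weight.numel() * sparsity)
    thresh = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > thresh).float()

w = torch.randn(256, 256, requires_grad=True)
# 1) Dense: train w normally to learn weights and their importance (omitted).
# 2) Sparse: zero the pruned weights and keep them at zero while retraining,
#    e.g. after each optimizer step: w.grad.mul_(mask); w.data.mul_(mask)
mask = dsd_mask(w.detach(), sparsity=0.5)
with torch.no_grad():
    w.mul_(mask)
# 3) Re-Dense: drop the mask; pruned entries restart from zero and the whole
#    dense network is retrained (typically with a lower learning rate).
```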
6fd329a1e7f513745e5fc462f146aa80c6090a1d
The identification of invalid data in recordings obtained using wearable sensors is of particular importance since data obtained from mobile patients is, in general, noisier than data obtained from nonmobile patients. In this paper, we present a signal quality index (SQI), which is intended to assess whether reliable heart rates (HRs) can be obtained from electrocardiogram (ECG) and photoplethysmogram (PPG) signals collected using wearable sensors. The algorithms were validated on manually labeled data. Sensitivities and specificities of 94% and 97% were achieved for the ECG and 91% and 95% for the PPG. Additionally, we propose two applications of the SQI. First, we demonstrate that, by using the SQI as a trigger for a power-saving strategy, it is possible to reduce the recording time by up to 94% for the ECG and 93% for the PPG with only minimal loss of valid vital-sign data. Second, we demonstrate how an SQI can be used to reduce the error in the estimation of respiratory rate (RR) from the PPG. The performance of the two applications was assessed on data collected from a clinical study on hospital patients who were able to walk unassisted.
ef8070a37fb6f0959acfcee9d40f0b3cb912ba9f
Over the last three decades, a methodological pluralism has developed within information systems (IS) research. Various disciplines and many research communities as well, contribute to this discussion. However, working on the same research topic or studying the same phenomenon does not necessarily ensure mutual understanding. Especially within this multidisciplinary and international context, the epistemological assumptions made by different researchers may vary fundamentally. These assumptions exert a substantial impact on how concepts like validity, reliability, quality and rigour of research are understood. Thus, the extensive publication of epistemological assumptions is, in effect, almost mandatory. Hence, the aim of this paper is to develop an epistemological framework which can be used for systematically analysing the epistemological assumptions in IS research. Rather than attempting to identify and classify IS research paradigms, this research aims at a comprehensive discussion of epistemology within the context of IS. It seeks to contribute to building the basis for identifying similarities as well as differences between distinct IS approaches and methods. In order to demonstrate the epistemological framework, the consensus-oriented interpretivist approach to conceptual modelling is used as an example.
25e989b45de04c6086364b376d29ec11008360a3
Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space.
6f0144dc7ba19123ddce8cdd4ad0f6dc36dd4ef2
International guidelines recommend the use of Gonadotropin-Releasing Hormone (GnRH) agonists in adolescents with gender dysphoria (GD) to suppress puberty. Little is known about the way gender dysphoric adolescents themselves think about this early medical intervention. The purpose of the present study was (1) to explicate the considerations of gender dysphoric adolescents in the Netherlands concerning the use of puberty suppression; (2) to explore whether the considerations of gender dysphoric adolescents differ from those of professionals working in treatment teams, and if so in what sense. This was a qualitative study designed to identify considerations of gender dysphoric adolescents regarding early treatment. All 13 adolescents, except for one, were treated with puberty suppression; five adolescents were trans girls and eight were trans boys. Their ages ranged between 13 and 18 years, with an average age of 16 years and 11 months, and a median age of 17 years and 4 months. Subsequently, the considerations of the adolescents were compared with views of clinicians treating youth with GD. From the interviews with the gender dysphoric adolescents, three themes emerged: (1) the difficulty of determining what is an appropriate lower age limit for starting puberty suppression. Most adolescents found it difficult to define an appropriate age limit and saw it as a dilemma; (2) the lack of data on the long-term effects of puberty suppression. Most adolescents stated that the lack of long-term data did not and would not stop them from wanting puberty suppression; (3) the role of the social context, for which there were two subthemes: (a) increased media-attention, on television, and on the Internet; (b) an imposed stereotype. Some adolescents were positive about the role of the social context, but others raised doubts about it. Compared to clinicians, adolescents were often more cautious in their treatment views. It is important to give voice to gender dysphoric adolescents when discussing the use of puberty suppression in GD. Otherwise, professionals might act based on assumptions about adolescents' opinions instead of their actual considerations. We encourage gathering more qualitative research data from gender dysphoric adolescents in other countries.
446573a346acdbd2eb8f0527c5d73fc707f04527
6e6f47c4b2109e7824cd475336c3676faf9b113e
We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.
9a0fff9611832cd78a82a32f47b8ca917fbd4077
9e5a13f3bc2580fd16bab15e31dc632148021f5d
A bandwidth-enhanced method for a low-profile substrate integrated waveguide (SIW) cavity-backed slot antenna is presented in this paper. Bandwidth enhancement is achieved by simultaneously exciting two hybrid modes in the SIW-backed cavity and merging them within the required frequency range. These two hybrid modes, whose dominant fields are located in different halves of the SIW cavity, are two different combinations of the TE110 and TE120 resonances. This design method has been validated by experiments. Compared with those of a previously presented SIW cavity-backed slot antenna, the fractional impedance bandwidth of the proposed antenna is enhanced from 1.4% to 6.3%, its gain and radiation efficiency are also slightly improved, to 6.0 dBi and 90%, and its SIW cavity size is reduced by about 30%. The proposed antenna exhibits a low cross-polarization level and a high front-to-back ratio. It still retains the advantages of low profile, low fabrication cost, and easy integration with planar circuits.
4c68e7eff1da14003cc7efbfbd9a0a0a3d5d4968
BACKGROUND Implementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers. DISCUSSION Theoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited "how-to" support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation.
00a7370518a6174e078df1c22ad366a2188313b5
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
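A minimal sketch of the iterative scheme described above: the classical update alternates between averaging the flow over each pixel's neighborhood and correcting it along the brightness gradient, with alpha weighting the smoothness constraint (scipy is assumed for the neighborhood averaging).

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Solve for flow (u, v) under brightness constancy plus smoothness."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    k = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])  # neighbor average
    for _ in range(iters):
        ub, vb = convolve(u, k), convolve(v, k)
        t = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u, v = ub - Ix * t, vb - Iy * t          # Gauss-Seidel-style update
    return u, v

I1 = np.random.rand(64, 64)
I2 = np.roll(I1, 1, axis=1)   # synthetic one-pixel horizontal shift
u, v = horn_schunck(I1, I2)
```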
2315fc6c2c0c4abd2443e26a26e7bb86df8e24cc
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
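A sketch of the described architecture in modern PyTorch shorthand; the original's two-GPU grouping and local response normalization are omitted, so sizes here are an approximation of the five-conv/three-FC layout with ReLU units and dropout.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # class scores; the softmax lives in the loss
)
logits = net(torch.randn(1, 3, 227, 227))
print(logits.shape)  # torch.Size([1, 1000])
```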
1bc49abe5145055f1fa259bd4e700b1eb6b7f08d
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.
3e4bd583795875c6550026fc02fb111daee763b4
In this paper, we use deep neural networks for inverting face sketches to synthesize photorealistic face images. We first construct a semi-simulated dataset containing a very large number of computer-generated face sketches with different styles and corresponding face images by expanding existing unconstrained face data sets. We then train models achieving state-of-the-art results on both computer-generated sketches and hand-drawn sketches by leveraging recent advances in deep learning such as batch normalization, deep residual learning, perceptual losses and stochastic optimization in combination with our new dataset. We finally demonstrate potential applications of our models in fine arts and forensic arts. In contrast to existing patch-based approaches, our deep-neural-network-based approach can be used for synthesizing photorealistic face images by inverting face sketches in the wild.
2fd9f4d331d144f71baf2c66628b12c8c65d3ffb
Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.
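A minimal sketch of the reweight-and-vote scheme the paper reinterprets as stagewise additive logistic regression: discrete AdaBoost with decision stumps on synthetic data.

```python
import numpy as np

def stump_fit(X, y, w):
    """Find the weighted-error-minimizing stump sign * sign(x_j - t)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.sign(X[:, j] - t + 1e-12)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1 / n)
    F = []
    for _ in range(rounds):
        err, j, t, s = stump_fit(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = s * np.sign(X[:, j] - t + 1e-12)
        w *= np.exp(-alpha * y * pred)   # reweight: focus on mistakes
        w /= w.sum()
        F.append((alpha, j, t, s))
    return F

def predict(F, X):
    # Weighted majority vote = sign of the additive model F(x)
    return np.sign(sum(a * s * np.sign(X[:, j] - t + 1e-12) for a, j, t, s in F))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])
F = adaboost(X, y)
print("train accuracy:", (predict(F, X) == y).mean())
```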
573ae3286d050281ffe4f6c973b64df171c9d5a5
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (runtime) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. We present a multitask learning procedure, based on boosted decision stumps, that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required and, therefore, the runtime cost of the classifier, is observed to scale approximately logarithmically with the number of classes. The features selected by joint training are generic edge-like features, whereas the features chosen by training each class separately tend to be more object-specific. The generic features generalize better and considerably reduce the computational cost of multiclass object detection
1e75a3bc8bdd942b683cf0b27d1e1ed97fa3b4c3
Gaussian processes allow for flexible specification of prior assumptions of unknown dynamics in state space models. We present a procedure for efficient Bayesian learning in Gaussian process state space models, where the representation is formed by projecting the problem onto a set of approximate eigenfunctions derived from the prior covariance structure. Learning under this family of models can be conducted using a carefully crafted particle MCMC algorithm. This scheme is computationally efficient and yet allows for a fully Bayesian treatment of the problem. Compared to conventional system identification tools or existing learning methods, we show competitive performance and reliable quantification of uncertainties in the model.
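One standard construction of such an eigenfunction basis is the reduced-rank approximation of Solin and Särkkä, sketched below for a squared-exponential prior on an interval; sizes and hyperparameters are illustrative.

```python
import numpy as np

def eigenfunctions(x, m, L):
    # Laplacian eigenfunctions on [-L, L] with Dirichlet boundaries
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def se_spectral_density(w, sigma2=1.0, ell=0.5):
    # Spectral density of the squared-exponential covariance
    return sigma2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (w * ell) ** 2)

m, L = 32, 3.0
x = np.linspace(-2, 2, 200)
Phi = eigenfunctions(x, m, L)                 # n x m feature matrix
lam = np.pi * np.arange(1, m + 1) / (2 * L)   # eigenfrequencies
K_approx = Phi @ np.diag(se_spectral_density(lam)) @ Phi.T
# K_approx approximates the SE kernel matrix, so GP learning reduces to a
# finite linear-in-the-features model, amenable to the particle MCMC scheme.
```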
142a799aac35f3b47df9fbfdc7547ddbebba0a91
We present a novel approach for model-based 6D pose refinement in color data. Building on the established idea of contour-based pose tracking, we teach a deep neural network to predict a translational and rotational update. At the core, we propose a new visual loss that drives the pose update by aligning object contours, thus avoiding the definition of any explicit appearance model. In contrast to previous work our method is correspondence-free, segmentation-free, can handle occlusion and is agnostic to geometrical symmetry as well as visual ambiguities. Additionally, we observe a strong robustness towards rough initialization. The approach can run in real-time and produces pose accuracies that come close to 3D ICP without the need for depth data. Furthermore, our networks are trained from purely synthetic data and will be published together with the refinement code at http://campar.in.tum.de/Main/FabianManhardt to ensure reproducibility.
046bf6fb90438335eaee07594855efbf541a8aba
Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community.
970b4d2ed1249af97cdf2fffdc7b4beae458db89
With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.
3087289229146fc344560478aac366e4977749c0
Information theory has recently been employed to specify more precisely than has hitherto been possible man's capacity in certain sensory, perceptual, and perceptual-motor functions (5, 10, 13, 15, 17, 18). The experiments reported in the present paper extend the theory to the human motor system. The applicability of only the basic concepts, amount of information, noise, channel capacity, and rate of information transmission, will be examined at this time. General familiarity with these concepts as formulated by recent writers (4, 11, 20, 22) is assumed. Strictly speaking, we cannot study man's motor system at the behavioral level in isolation from its associated sensory mechanisms. We can only analyze the behavior of the entire receptor-neural-effector system. How-
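A worked example of the quantities this framing leads to, using the index-of-difficulty formulation introduced in this line of work: a target of width W at movement amplitude A carries ID = log2(2A/W) bits, and movement time is modeled as linear in ID. The regression coefficients below are hypothetical placeholders.

```python
import math

def index_of_difficulty(A, W):
    # Bits of information required to select a target of width W at distance A
    return math.log2(2 * A / W)

a, b = 0.05, 0.12   # hypothetical intercept (s) and slope (s/bit)
for A, W in [(8, 2), (16, 2), (16, 1)]:
    ID = index_of_difficulty(A, W)
    print(f"A={A} W={W}: ID={ID:.2f} bits, predicted MT = {a + b * ID:.3f} s")
```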
64305508a53cc99e62e6ff73592016d0b994afd4
RDF is increasingly being used to encode data for the semantic web and data exchange. There have been a large number of works that address RDF data management following different approaches. In this paper we provide an overview of these works. This review considers centralized solutions (what are referred to as warehousing approaches), distributed solutions, and the techniques that have been developed for querying linked data. In each category, further classifications are provided that would assist readers in understanding the identifying characteristics of different approaches.
6162ab446003a91fc5d53c3b82739631c2e66d0f
29f5ecc324e934d21fe8ddde814fca36cfe8eaea
Introduction: Breast cancer (BC) is the most common cancer in women, affecting about 10% of all women at some stage of their lives. In recent years the incidence rate has kept increasing, and data show that the survival rate is 88% five years after diagnosis and 80% ten years after diagnosis [1]. Early prediction of breast cancer is one of the most crucial tasks in the follow-up process. Data mining methods can help to reduce the number of false positive and false negative decisions [2,3]. Consequently, new methods such as knowledge discovery in databases (KDD) have become a popular research tool for medical researchers who try to identify and exploit patterns and relationships among large numbers of variables, and to predict the outcome of a disease using historical cases stored in datasets [4].
261e841c8e0175586fb193b1a199cefaa8ecf169
How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-state transducer to score all possible output strings, but to augment the scoring function with the help of recurrent networks. A stack of bidirectional LSTMs reads the input string from left-to-right and right-to-left, in order to summarize the input context in which a transducer arc is applied. We combine these learned features with the transducer to define a probability distribution over aligned output strings, in the form of a weighted finite-state automaton. This reduces hand-engineering of features, allows learned features to examine unbounded context in the input string, and still permits exact inference through dynamic programming. We illustrate our method on the tasks of morphological reinflection and lemmatization.
8f69384b197a424dfbd0f60d7c48c110faf2b982
014b191f412f8496813d7c358ddd11d8512f2005
High-resolution image radars open new opportunities for estimating velocity and direction of movement of extended objects from a single observation. Since radar sensors only measure the radial velocity, a tracking system is normally used to determine the velocity vector of the object. A stable velocity is estimated after several frames at the earliest, resulting in a significant loss of time for reacting to certain situations such as cross-traffic. The following paper presents a robust and model-free approach to determine the velocity vector of an extended target. In contrast to the Kalman filter, it does not require data association in time and space. An instant (~50 ms) and bias-free estimation of its velocity vector is possible. Our approach can handle noise and systematic variations (e.g., micro-Doppler of wheels) in the signal. It is optimized to deal with measurement errors of the radar sensor not only in the radial velocity, but in the azimuth position as well. The accuracy of this method is increased by the fusion of multiple radar sensors.
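A minimal sketch of the estimation step that makes this possible: each detection at azimuth theta_i on a rigid object measures only the radial component of one common velocity vector, v_r,i = vx*cos(theta_i) + vy*sin(theta_i), so (vx, vy) follows from a least-squares fit over a single scan. A robust variant (e.g., RANSAC) would additionally reject micro-Doppler outliers such as rotating wheels.

```python
import numpy as np

def velocity_from_profile(theta, v_r):
    """Least-squares fit of the radial-velocity profile over one scan."""
    A = np.column_stack([np.cos(theta), np.sin(theta)])
    v, *_ = np.linalg.lstsq(A, v_r, rcond=None)
    return v   # (vx, vy)

theta = np.deg2rad(np.array([-10.0, -4.0, 3.0, 9.0]))  # detection azimuths
true_v = np.array([12.0, 1.5])                          # ground-truth velocity
v_r = np.cos(theta) * true_v[0] + np.sin(theta) * true_v[1]
noisy = v_r + np.random.default_rng(0).normal(0, 0.05, 4)
print(velocity_from_profile(theta, noisy))
```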
ad6d5e4545c60ec559d27a09fbef13fa538172e1
In advanced driver assistance systems and autonomous driving, reliable environment perception and object tracking based on radar is fundamental. High-resolution radar sensors often provide multiple measurements per object. Since in this case traditional point tracking algorithms are not applicable any more, novel approaches for extended object tracking emerged in the last few years. However, they are primarily designed for lidar applications or omit the additional Doppler information of radars. Classical radar based tracking methods using the Doppler information are mostly designed for point tracking of parallel traffic. The measurement model presented in this paper is developed to track vehicles of approximately rectangular shape in arbitrary traffic scenarios including parallel and cross traffic. In addition to the kinematic state, it allows to determine and track the geometric state of the object. Using the Doppler information is an important component in the model. Furthermore, it neither requires measurement preprocessing, data clustering, nor explicit data association. For object tracking, a Rao-Blackwellized particle filter (RBPF) adapted to the measurement model is presented.
965f8bb9a467ce9538dec6bef57438964976d6d9
The accuracy of automated human face recognition algorithms can significantly degrade when recognizing the same subjects under make-up and disguised appearances. Increasing demands for enhanced security and surveillance require greater accuracy from face recognition algorithms for faces under disguise and/or make-up. This paper presents a new database of face images under disguised and made-up appearances to support the development of face recognition algorithms under such covariates. The database has 2460 images from 410 different subjects, is acquired in real environments, focuses on the make-up and disguise covariates, and also provides ground truth (eyeglasses, goggles, mustache, beard) for every image. This can enable developed algorithms to automatically quantify their capability for identifying such important disguise attributes during face recognition. We also present comparative experimental results from two popular commercial matchers and from recent publications. Our experimental results suggest significant degradation in the capability of these matchers to automatically recognize these faces. We also analyze face detection accuracy for these matchers. The experimental results underline the challenges in recognizing faces under these covariates. The availability of this new database in the public domain will help to advance much-needed research and development in recognizing made-up and disguised faces.
f8e32c5707df46bfcd683f723ad27d410e7ff37d
32c8c7949a6efa2c114e482c830321428ee58d70
This article discusses the capabilities of state-of-the art GPU-based high-throughput computing systems and considers the challenges to scaling single-chip parallel-computing systems, highlighting high-impact areas that the computing research community can address. Nvidia Research is investigating an architecture for a heterogeneous high-performance computing system that seeks to address these challenges.
8890bb44abb89601c950eb5e56172bb58d5beea8
Learning a goal-oriented dialog policy is generally performed offline with supervised learning algorithms or online with reinforcement learning (RL). Additionally, as companies accumulate massive quantities of dialog transcripts between customers and trained human agents, encoder-decoder methods have gained popularity as agent utterances can be directly treated as supervision without the need for utterance-level annotations. However, one potential drawback of such approaches is that they myopically generate the next agent utterance without regard for dialog-level considerations. To resolve this concern, this paper describes an offline RL method for learning from unannotated corpora that can optimize a goal-oriented policy at both the utterance and dialog level. We introduce a novel reward function and use both on-policy and off-policy policy gradient to learn a policy offline without requiring online user interaction or an explicit state space definition.
589d84d528d353a382a42e5b58dc48a57d332be8
The Rutgers Ankle is a Stewart platform-type haptic interface designed for use in rehabilitation. The system supplies six-degree-of-freedom (DOF) resistive forces on the patient's foot, in response to virtual reality-based exercises. The Rutgers Ankle controller contains an embedded Pentium board, pneumatic solenoid valves, valve controllers, and associated signal conditioning electronics. The rehabilitation exercise used in our case study consists of piloting a virtual airplane through loops. The exercise difficulty can be selected based on the number and placement of loops, the airplane speed in the virtual environment, and the degree of resistance provided by the haptic interface. Exercise data is stored transparently, in real time, in an Oracle database. These data consist of ankle position, forces, and mechanical work during an exercise, and over subsequent rehabilitation sessions. The number of loops completed and the time it took to do that are also stored online. A case study is presented of a patient nine months post-stroke using this system. Results showed that, over six rehabilitation sessions, the patient improved on clinical measures of strength and endurance, which corresponded well with torque and power output increases measured by the Rutgers Ankle. There were also substantial improvements in task accuracy and coordination during the simulation and the patient's walking and stair-climbing ability.
67161d331d496ad5255ad8982759a1c853856932
This paper proposes an architecture for an early-warning flood system to alert the public against flood disasters. An effective early warning system must be developed with linkages between four elements: accurate data collection to undertake risk assessments, development of hazard monitoring services, communication of risk-related information, and existence of community response capabilities. This project focuses on monitoring water level remotely using a wireless sensor network. The project also utilizes the Global System for Mobile communication (GSM) and short message service (SMS) to relay data from sensors to computers, or to directly alert the respective victims through their mobile phones. It is hoped that the proposed architecture can be further developed into a functioning system, which would be beneficial to the community and act as a precautionary measure to save lives in the case of a flood disaster.
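A minimal sketch of the staged alerting rule such a system might implement; send_sms is a hypothetical stand-in for the GSM modem interface (e.g., AT commands over a serial link), and the thresholds are illustrative.

```python
THRESHOLDS = [(2.5, "ALERT: danger level, evacuate"),
              (1.8, "WARNING: water level rising"),
              (1.2, "NOTICE: water level above normal")]

def send_sms(number: str, text: str) -> None:
    # Hypothetical GSM gateway call; a real node would talk to the modem here.
    print(f"SMS to {number}: {text}")

def on_reading(level_m: float, subscribers: list[str]) -> None:
    # Thresholds are ordered highest-severity first; send only the top match.
    for threshold, message in THRESHOLDS:
        if level_m >= threshold:
            for number in subscribers:
                send_sms(number, f"{message} ({level_m:.2f} m)")
            break

on_reading(1.95, ["+60123456789"])
```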
a5de09243b4b12fc4bcf4db56c8e38fc3beddf4f
Recent studies demonstrate that the implementation of enterprise social systems (ESSs) transforms organizations into a new paradigm of social business, which yields enormous economic returns and competitive advantage. Social business creates a completely new way of working and organizing, characterised by social collaboration, intrinsic knowledge sharing, and voluntary mass participation, to name a few. Implementation of ESSs should therefore address the uniqueness of this new way of working and organizing. However, there is a shortage of knowledge about the implementation of these large enterprise systems. The purpose of this paper is to study the governance model of ESS implementation. A case study is conducted to investigate the implementation of the social intranet called the ‘Stream’ at Statkraft, a world-leading energy company based in Norway. The governance model of ‘Stream’ emphasizes close cooperation and accountability between corporate communication, human resources, and IT, which implies a paradigm shift in the governance of ESS implementation. Benefits and challenges in the implementation are also identified. Based on the knowledge and insights gained in the study, recommendations are proposed to assist the company in improving its governance of ESS implementation. The study contributes knowledge and know-how on the governance of ESS implementation.
5ca6217b3e8353778d05fe58bcc5a9ea79707287
E-government has become part and parcel of every government’s agenda. Many governments have embraced its significant impact and influence on governmental operations. As the technology mantra has become more ubiquitous, governments have decided to inaugurate e-government policies in their agencies and departments in order to enhance the quality of services, improve transparency, and provide greater accountability. As for Malaysia, the government is inspired by the wave of e-government, as its establishment can improve the quality of public service delivery as well as its internal operations. This qualitative study explores the implementation status of e-government initiatives in Malaysia as a case study, and also provides a comparative evaluation of these findings, using the South Korean government as a benchmark, given its outstanding performance in e-government. The findings highlight potential areas for improvement from a public administration perspective, and through this comparative approach Malaysia can learn lessons from South Korea’s practices to ensure the success of e-government projects.
2b2c30dfd3968c5d9418bb2c14b2382d3ccc64b2
DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
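For example, DBpedia's public SPARQL endpoint can be queried programmatically. A minimal sketch using the SPARQLWrapper library; the particular query (large German cities) is purely illustrative:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?pop WHERE {
        ?city a dbo:City ;
              dbo:country dbr:Germany ;
              dbo:populationTotal ?pop .
    } ORDER BY DESC(?pop) LIMIT 5
""")
sparql.setReturnFormat(JSON)

# Each binding row maps variable names to values
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["pop"]["value"])
```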
92930f4279b48f7e4e8ec2edc24e8aa65c5954fd
We present a data mining approach for profiling bank clients in order to support the detection of money laundering operations. We first present the overall system architecture, and then focus on the component relevant to this paper. We detail experiments performed on real-world data from a financial institution, which allowed us to group clients into clusters and then generate a set of classification rules. We discuss the relevance of the discovered client profiles and of the generated classification rules. According to the defined overall agent-based architecture, these rules will be incorporated into the knowledge base of the intelligent agents responsible for signaling suspicious transactions.
8985000860dbb88a80736cac8efe30516e69ee3f
Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large number of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short-Term Memory (LSTM) recurrent neural network was applied to three real-world smart home datasets. The results of these experiments show that the proposed approach outperforms existing ones in terms of accuracy and performance.
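A minimal sketch of this kind of classifier (tensor shapes and hyperparameters below are hypothetical; dataset loading and the paper's exact configuration are not specified here): a window of sensor events is fed through an LSTM and the final hidden state is mapped to activity logits.

```python
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    def __init__(self, n_sensors=20, hidden=64, n_activities=10):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, x):              # x: (batch, time, n_sensors)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])      # logits: (batch, n_activities)

model = ActivityLSTM()
x = torch.randn(8, 50, 20)             # 8 windows of 50 sensor readings each
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (8,)))
loss.backward()                        # trained end-to-end, no hand-crafted features
```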
b31f0085b7dd24bdde1e5cec003589ce4bf4238c
Domain adaptation (DA) is a form of transfer learning that aims to learn an effective predictor for target data from source data despite the data distribution mismatch between source and target. We present in this paper a novel unsupervised DA method for cross-domain visual recognition which simultaneously optimizes the three terms of a theoretically established error bound. Specifically, the proposed DA method iteratively searches a latent shared feature subspace where not only the divergence of data distributions between the source domain and the target domain is decreased, as most state-of-the-art DA methods do, but also the inter-class distances are increased to facilitate discriminative learning. Moreover, the proposed DA method sparsely regresses class labels from the features achieved in the shared subspace while minimizing the prediction errors on the source data and ensuring label consistency between source and target. Data outliers are also accounted for to further avoid negative knowledge transfer. Comprehensive experiments and in-depth analysis verify the effectiveness of the proposed DA method, which consistently outperforms the state-of-the-art DA methods on standard DA benchmarks, i.e., 12 cross-domain image classification tasks.
b9bc9a32791dba1fc85bb9d4bfb9c52e6f052d2e
A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two Rapidly-exploring Random Trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented.
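A condensed sketch of the bidirectional scheme in a 2D point world (the collision checker is a stand-in; the real algorithm's CONNECT heuristic extends repeatedly toward the other tree, which the inner loop mimics):

```python
import random, math

STEP = 0.05

def nearest(tree, q):
    return min(tree, key=lambda p: math.dist(p, q))

def steer(q_from, q_to):
    d = math.dist(q_from, q_to)
    if d <= STEP:
        return q_to
    t = STEP / d
    return (q_from[0] + t * (q_to[0] - q_from[0]),
            q_from[1] + t * (q_to[1] - q_from[1]))

def collision_free(q):                   # stand-in for a real collision checker
    return not (0.4 < q[0] < 0.6 and q[1] < 0.8)

def rrt_connect(start, goal, iters=5000):
    trees = [{start: None}, {goal: None}]        # node -> parent
    for _ in range(iters):
        q_rand = (random.random(), random.random())
        q_near = nearest(trees[0], q_rand)
        q_new = steer(q_near, q_rand)
        if collision_free(q_new):
            trees[0][q_new] = q_near
            # CONNECT: greedily extend the other tree toward q_new
            q = nearest(trees[1], q_new)
            while True:
                q_next = steer(q, q_new)
                if not collision_free(q_next):
                    break
                trees[1][q_next] = q
                q = q_next
                if math.dist(q, q_new) < 1e-9:   # trees meet: path found
                    return trees
        trees.reverse()                           # swap tree roles each iteration
    return None

print(rrt_connect((0.1, 0.1), (0.9, 0.1)) is not None)
```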
d967d9550f831a8b3f5cb00f8835a4c866da60ad
6a686b525a84a87ca3e4d90a6704da8588e84344
This communication presents a wideband circularly polarized (CP) 2 × 2 patch array using a sequential-phase feeding network. By combining three operating modes, both axial ratio (AR) and impedance bandwidths are enhanced and wider than those of previous published sequential-fed single-layer patch arrays. These three CP operating modes are tuned and matched by optimizing the truncated corners of patch elements and the sequential-phase feeding network. A prototype of the proposed patch array is built to validate the design experimentally. The measured -10-dB impedance bandwidth is 1.03 GHz (5.20-6.23 GHz), and the measured 3-dB AR bandwidth is 0.7 GHz (5.25-5.95 GHz), or 12.7% corresponding to the center frequency of 5.5 GHz. The measured peak gain is about 12 dBic and the gain variation is less than 3 dB within the AR bandwidth.
d97e3655f50ee9b679ac395b2637f6fa66af98c7
Feedback has been studied as a strategy for promoting energy conservation for more than 30 years, with studies reporting widely varying results. Literature reviews have suggested that the effectiveness of feedback depends on both how and to whom it is provided; yet variations in both the type of feedback provided and the study methodology have made it difficult for conclusions to be drawn. The current article analyzes past theoretical and empirical research on both feedback and proenvironmental behavior to identify unresolved issues, and utilizes a meta-analysis of 42 feedback studies published between 1976 and 2010 to test a set of hypotheses about when and how feedback about energy usage is most effective. Results indicate that feedback is effective overall, r = .071, p < .001, but with significant variation in effects (r varied from -.080 to .480). Several treatment variables were found to moderate this relationship, including frequency, medium, comparison message, duration, and combination with other interventions (e.g., goal, incentive). Overall, results provide further evidence of feedback as a promising strategy to promote energy conservation and suggest areas in which future research should focus to explore how and for whom feedback is most effective.
697754f7e62236f6a2a069134cbc62e3138ac89f
ee654db227dcb7b39d26bec7cc06e2b43b525826
54e7e6348fc8eb27dd6c34e0afbe8881eeb0debd
Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100+ recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future.
2902e0a4b12cf8269bb32ef6a4ebb3f054cd087e
Optimizing task-related mathematical models is one of the most fundamental methodologies in statistics and learning. However, generically designed schematic iterations may struggle to capture complex data distributions in real-world applications. Recently, training deep propagations (i.e., networks) has achieved promising performance in some particular tasks. Unfortunately, existing networks are often built in heuristic manners and thus lack principled interpretations and solid theoretical support. In this work, we provide a new paradigm, named Propagation and Optimization based Deep Model (PODM), to bridge the gap between these different mechanisms (i.e., model optimization and deep propagation). On the one hand, we utilize PODM as a deeply trained solver for model optimization. Unlike existing network-based iterations, which often lack theoretical investigation, we provide strict convergence analysis for PODM in the challenging nonconvex and nonsmooth scenarios. On the other hand, by relaxing the model constraints and performing end-to-end training, we also develop a PODM-based strategy to integrate domain knowledge (formulated as models) and real data distributions (learned by networks), resulting in a generic ensemble framework for challenging real-world applications. Extensive experiments verify our theoretical results and demonstrate the superiority of PODM over state-of-the-art approaches.
5dca5aa024f513801a53d9738161b8a01730d395
The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman filter to incorporate vehicle localization and environmental mapping. We describe an implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. We introduce a metric for adaptive sensing which is defined in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map. Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action which yields the lowest cost (i.e., the maximum information) is selected. This technique is demonstrated via simulations, in-air sonar experiments, and underwater sonar experiments. Results are shown for 1) adaptive control of motion and 2) adaptive control of motion and scanning. The vehicle tends to explore selectively different objects in the environment. The performance of this adaptive algorithm is shown to be superior to straight-line motion and random motion.
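A hedged formalization of this sensing metric (our reading, assuming 1-sigma ellipses; the paper's exact scaling constant may differ): for a 2x2 position covariance $P$, the area of the error ellipse is proportional to $\sqrt{\det P}$, so summing over the vehicle and the $n$ mapped features gives

$$A(u) \;=\; \pi\sqrt{\det P_v(u)} \;+\; \pi\sum_{i=1}^{n}\sqrt{\det P_i(u)}, \qquad u^{*} = \arg\min_{u} A(u),$$

where $P_v(u)$ and $P_i(u)$ are the predicted covariances of the vehicle and the $i$-th feature after candidate action $u$, and the selected action $u^{*}$ minimizes total uncertainty (i.e., maximizes information).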
5eb1e4bb87b0d99d62f171f1eede90c98bf266ab
Wireless power transfer is a promising technology to fundamentally address energy problems in a wireless sensor network. To make such a technology work effectively, a vehicle is needed to carry a charger to travel inside the network. On the other hand, it has been well recognized that a mobile base station offers significant advantages over a fixed one. In this paper, we investigate an interesting problem of co-locating the mobile base station on the wireless charging vehicle. We study an optimization problem that jointly optimizes traveling path, stopping points, charging schedule, and flow routing. Our study is carried out in two steps. First, we study an idealized problem that assumes zero traveling time, and develop a provably near-optimal solution to this idealized problem. In the second step, we show how to develop a practical solution with non-zero traveling time and quantify the performance gap between this solution and the unknown optimal solution to the original problem.
229547ed3312ee6195104cdec7ce47578f92c2c6
This paper explores how the dynamic capabilities of firms may account for the emergence of differential firm performance within an industry. Synthesizing insights from both strategic and organizational theory, four performance-relevant attributes of dynamic capabilities are proposed: timing of dynamic capability deployment, imitation as part of the search for alternative resource configurations, cost of dynamic capability deployment, and learning to deploy dynamic capabilities. Theoretical propositions are developed suggesting how these attributes contribute to the emergence of differential firm performance. A formal model is presented in which dynamic capability is modeled as a set of routines guiding a firm’s evolutionary processes of change. Simulation of the model yields insights into the process of change through dynamic capability deployment, and permits refinement of the theoretical propositions. One of the interesting findings of this study is that even if dynamic capabilities are equifinal across firms, robust performance differences may arise across firms if the costs and timing of dynamic capability deployment differ across firms.
b533b13910cc0de21054116715988783fbea87cc
Nowadays an increasing number of public and commercial services are used through the Internet, so information security has become an increasingly important issue; Intrusion Detection Systems (IDS) are used to protect computer networks against attacks. In addition, several data mining techniques also contribute to intrusion detection. Data mining techniques used for intrusion detection can be classified into two classes: misuse intrusion detection and anomaly intrusion detection. Misuse refers to known attacks and harmful activities that exploit known vulnerabilities of the system. An anomaly generally means an activity that may indicate an intrusion. In this paper, a comparison is made between 23 related papers that use data mining techniques for intrusion detection. Our work provides an overview of data mining and soft computing techniques such as the Artificial Neural Network (ANN), Support Vector Machine (SVM), and Multivariate Adaptive Regression Spline (MARS), etc. The paper also compares IDS data mining techniques and the datasets used for intrusion detection. Of those 23 related papers, 7 use ANN and 4 use SVM, because ANN and SVM are more reliable than other models and structures. In addition, 8 studies use the DARPA1998 dataset and 13 use KDDCup1999, because these standard datasets are much more credible than others. There is no single best intrusion detection model at present; future research directions for intrusion detection are therefore also explored in this paper. Keywords— intrusion detection, data mining, ANN
a69fd2ad66791ad9fa8722a3b2916092d0f37967
We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels.
9b8be6c3ebd7a79975067214e5eaea05d4ac2384
We show that gradient descent converges to a local minimizer, almost surely, with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
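For context, the setting can be stated compactly (our phrasing, not quoted from the paper): gradient descent iterates

$$x_{k+1} = x_k - \alpha \nabla f(x_k),$$

and for a suitably small fixed step size $\alpha$ (below the inverse of the gradient's Lipschitz constant) and a randomly drawn $x_0$, the iterates avoid every strict saddle point (a stationary point where the Hessian has a strictly negative eigenvalue) with probability one, because the set of initializations attracted to such a saddle lies on a stable manifold of measure zero.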
75235e03ac0ec643e8a784f432e6d1567eea81b7
Mining data streams has been a focal point of research interest over the past decade. Hardware and software advances have contributed to the significance of this area of research by introducing faster than ever data generation. This rapidly generated data has been termed as data streams. Credit card transactions, Google searches, phone calls in a city, and many others are typical data streams. In many important applications, it is inevitable to analyze this streaming data in real time. Traditional data mining techniques have fallen short in addressing the needs of data stream mining. Randomization, approximation, and adaptation have been used extensively in developing new techniques, or adapting existing ones, to enable them to operate in a streaming environment. This paper reviews key milestones and the state of the art in the data stream mining area. Future insights are also presented.
2327ad6f237b37150e84f0d745a05565ebf0b24d
Bitcoin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bitcoin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zerocoin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality. In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs). First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security. Second, we build Zerocash, a practical instantiation of our DAP scheme construction. In Zerocash, transactions are less than 1 kB and take under 6 ms to verify - orders of magnitude more efficient than the less-anonymous Zerocoin and competitive with plain Bitcoin.
3d08280ae82c2044c8dcc66d2be5a72c738e9cf9
I present a hybrid matrix factorisation model representing users and items as linear combinations of their content features’ latent factors. The model outperforms both collaborative and content-based models in cold-start or sparse interaction data scenarios (using both user and item metadata), and performs at least as well as a pure collaborative matrix factorisation model where interaction data is abundant. Additionally, feature embeddings produced by the model encode semantic information in a way reminiscent of word embedding approaches, making them useful for a range of related tasks such as tag recommendations.
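A minimal sketch of the representation idea (hypothetical shapes and names; this shows the general latent-factors-over-content-features construction, not the paper's exact model or training procedure): a user or item is represented as the sum of its active features' embeddings, and relevance is their inner product plus feature biases.

```python
import numpy as np

def score(user_feats, item_feats, E_user, E_item, b_user, b_item):
    """Predict user-item affinity from content features.

    user_feats / item_feats: indices of the metadata features active for
    this user/item. E_user, E_item: embedding matrices (one row per
    feature). b_user, b_item: per-feature scalar biases.
    """
    u = E_user[user_feats].sum(axis=0)      # user latent vector
    v = E_item[item_feats].sum(axis=0)      # item latent vector
    bias = b_user[user_feats].sum() + b_item[item_feats].sum()
    return u @ v + bias

# Toy usage with random parameters (training, e.g. SGD on a ranking loss
# over observed interactions, is omitted).
rng = np.random.default_rng(0)
E_user, E_item = rng.normal(size=(50, 8)), rng.normal(size=(80, 8))
b_user, b_item = rng.normal(size=50), rng.normal(size=80)
print(score([0, 3, 7], [12, 40], E_user, E_item, b_user, b_item))
```

In a cold-start setting a new item with known metadata still receives a meaningful latent vector from its feature embeddings, which is what lets the model fall back gracefully when interaction data is sparse.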
25d1a2c364b05e0db056846ec397fbf0eacdca5c
Matrix factorization-based methods have become popular in dyadic data analysis, where a fundamental problem, for example, is to perform document clustering or to co-cluster words and documents given a term-document matrix. Nonnegative matrix tri-factorization (NMTF) emerges as a promising tool for co-clustering, seeking a 3-factor decomposition $X \approx USV^\top$ with all factor matrices restricted to be nonnegative, i.e., $U \geq 0$, $S \geq 0$, $V \geq 0$. In this paper we develop multiplicative updates for orthogonal NMTF, where $X \approx USV^\top$ is pursued under the orthogonality constraints $U^\top U = I$ and $V^\top V = I$, exploiting true gradients on Stiefel manifolds. Experiments on various document data sets demonstrate that our method works well for document clustering and is useful in revealing polysemous words via co-clustering of words and documents.
461ac81b6ce10d48a6c342e64c59f86d7566fa68
c03fb606432af6637d9d7d31f447e62a855b77a0
Although there is evidence that academically successful students are engaged with their studies, it has proved difficult to define student engagement clearly. Student engagement is commonly construed as having two dimensions, social and academic. The rapid adoption of social media and digital technologies has ensured increasing interest in using them for improving student engagement. This paper examines Facebook usage among a first year psychology student cohort and reports that although the majority of students (94%) had Facebook accounts and spent an average of one hour per day on Facebook, usage was found to be predominantly social. Personality factors influenced usage patterns, with more conscientious students tending to use Facebook less than less conscientious students. This paper argues that, rather than promoting social engagement in a way that might increase academic engagement, it appears that Facebook is more likely to operate as a distracting influence.
171071069cb3b58cfe8e38232c25bfa99f1fbdf5
Online social networking sites have revealed an entirely new method of self-presentation. This cyber social tool provides a new site of analysis to examine personality and identity. The current study examines how narcissism and self-esteem are manifested on the social networking Web site Facebook.com. Self-esteem and narcissistic personality self-reports were collected from 100 Facebook users at York University. Participant Web pages were also coded based on self-promotional content features. Correlation analyses revealed that higher narcissism and lower self-esteem were related to greater online activity as well as some self-promotional content. Gender differences were found to influence the type of self-promotional content presented by individual Facebook users. Implications and future research directions for work on narcissism and self-esteem on social networking Web sites are discussed.
5e30227914559ce088a750885761adbb7d2edbbf
Teenagers will freely give up personal information to join social networks on the Internet. Afterwards, they are surprised when their parents read their journals. Communities are outraged by the personal information posted by young people online and colleges keep track of student activities on and off campus. The posting of personal information by teens and students has consequences. This article will discuss the uproar over privacy issues in social networks by describing a privacy paradox; private versus public space; and, social networking privacy issues. It will finally discuss proposed privacy solutions and steps that can be taken to help resolve the privacy paradox.
6c394f5eecc0371b43331b54ed118c8637b8b60d
A novel design formula for multi-section power dividers is derived to obtain wide isolation performance. The derived design formula is based on singly terminated filter design theory. This paper presents several simulation and experimental results for multi-section power dividers to show the validity of the proposed design formula. Experiments show excellent performance of a multi-section power divider with multi-octave isolation characteristics.
d12e3606d94050d382306761eb43b58c042ac390
Understanding the factors that lead to success (or failure) of students at placement tests is an interesting and challenging problem. Since centralized placement tests and future academic achievement are considered to be related concepts, analysis of the success factors behind placement tests may help understand and potentially improve academic achievement. In this study, using a large and feature-rich dataset from the Secondary Education Transition System in Turkey, we developed models to predict secondary education placement test results, and using sensitivity analysis on those prediction models we identified the most important predictors. The results showed that the C5 decision tree algorithm is the best predictor with 95% accuracy on the hold-out sample, followed by support vector machines (with an accuracy of 91%) and artificial neural networks (with an accuracy of 89%). Logistic regression models came out to be the least accurate of the four, with an overall accuracy of 82%. The sensitivity analysis revealed that previous test experience, whether a student has a scholarship, the student's number of siblings, and previous years' grade point average are among the most important predictors of placement test scores.
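As a sketch of the modeling setup (the feature names and synthetic labels below are hypothetical stand-ins for the kinds of predictors the study names; the actual dataset and preprocessing are not reproduced here), a decision tree of the C4.5/C5 family can be approximated with scikit-learn's entropy-based tree:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical features: [previous test experience, scholarship (0/1),
# number of siblings, previous years' GPA] -- stand-ins for the study's
# most important predictors.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.integers(0, 5, 1000),
    rng.integers(0, 2, 1000),
    rng.integers(0, 6, 1000),
    rng.uniform(1.0, 5.0, 1000),
])
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(0, 1, 1000) > 3.5).astype(int)  # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```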
75859ac30f5444f0d9acfeff618444ae280d661d
Multibiometric systems are being increasingly deployed in many large-scale biometric applications (e.g., FBI-IAFIS, the UIDAI system in India) because they have several advantages, such as lower error rates and larger population coverage, compared to unibiometric systems. However, multibiometric systems require storage of multiple biometric templates (e.g., fingerprint, iris, and face) for each user, which results in increased risk to user privacy and system security. One method to protect individual templates is to store only the secure sketch generated from the corresponding template using a biometric cryptosystem. This requires storage of multiple sketches. In this paper, we propose a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch. Our main contributions include: (1) practical implementation of the proposed feature-level fusion framework using two well-known biometric cryptosystems, namely, fuzzy vault and fuzzy commitment, and (2) detailed analysis of the trade-off between matching accuracy and security in the proposed multibiometric cryptosystems based on two different databases (one real and one virtual multimodal database), each containing the three most popular biometric modalities, namely, fingerprint, iris, and face. Experimental results show that both the multibiometric cryptosystems proposed here have higher security and matching performance compared to their unibiometric counterparts.
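To illustrate one of the two building blocks, here is a minimal fuzzy commitment sketch (Juels-Wattenberg style) over bit strings, using a simple repetition code for error correction. This is a didactic toy, not the paper's fusion construction and not a production-grade error-correcting code:

```python
import hashlib
import secrets

R = 5  # repetition factor: tolerates up to 2 bit flips per 5-bit group

def commit(template_bits):
    """Bind a random key to a biometric template: store (hash(c), c XOR w)."""
    k = len(template_bits) // R
    key = [secrets.randbelow(2) for _ in range(k)]
    codeword = [b for b in key for _ in range(R)]               # repetition encode
    helper = [c ^ w for c, w in zip(codeword, template_bits)]   # the secure sketch
    return hashlib.sha256(bytes(key)).hexdigest(), helper

def verify(query_bits, key_hash, helper):
    """Recover the codeword from a noisy query and check the key hash."""
    noisy = [h ^ w for h, w in zip(helper, query_bits)]
    key = [int(sum(noisy[i*R:(i+1)*R]) > R // 2)                # majority decode
           for i in range(len(noisy) // R)]
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash

w = [secrets.randbelow(2) for _ in range(40)]     # enrolled template (toy)
key_hash, helper = commit(w)
w_noisy = w.copy(); w_noisy[3] ^= 1; w_noisy[17] ^= 1   # two bit errors
print(verify(w_noisy, key_hash, helper))          # True: within correction radius
```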
98e03d35857f66c34fa79f3ea0dd2b4e3b670044
65227ddbbd12015ba8a45a81122b1fa540e79890
The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.
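A compact sketch of the standard power-iteration computation, with the commonly used damping factor d = 0.85, uniform teleportation, and uniform handling of dangling pages (these are the textbook conventions, not necessarily the exact variant in the paper):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=100):
    """Power iteration on the random-surfer Markov chain.

    adj[i] is the list of pages that page i links to.
    """
    n = len(adj)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.full(n, (1.0 - d) / n)          # teleportation mass
        for i, outlinks in enumerate(adj):
            if outlinks:                          # spread rank over outlinks
                share = d * rank[i] / len(outlinks)
                for j in outlinks:
                    new[j] += share
            else:                                 # dangling page: spread uniformly
                new += d * rank[i] / n
        delta = np.abs(new - rank).sum()
        rank = new
        if delta < tol:
            break
    return rank

# Tiny 4-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
print(pagerank([[1, 2], [2], [0], [2]]))
```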
0a202f1dfc6991a6a204eaa5e6b46d6223a4d98a
No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.
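The selection criterion is commonly summarized as thresholding the smaller eigenvalue of the 2x2 gradient structure matrix over a window. A minimal NumPy sketch of that test (window size and quality threshold are illustrative choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def min_eig_map(image, win=7):
    """Smaller eigenvalue of the structure matrix [[Ixx, Ixy], [Ixy, Iyy]],
    accumulated over a win x win window around each pixel."""
    iy, ix = np.gradient(image.astype(float))
    ixx = uniform_filter(ix * ix, win)
    iyy = uniform_filter(iy * iy, win)
    ixy = uniform_filter(ix * iy, win)
    # closed-form smaller eigenvalue of a symmetric 2x2 matrix
    return 0.5 * (ixx + iyy) - np.sqrt(0.25 * (ixx - iyy) ** 2 + ixy ** 2)

def good_features(image, quality=0.05):
    lam = min_eig_map(image)
    ys, xs = np.where(lam > quality * lam.max())   # keep only strong corners
    return list(zip(xs, ys))

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0  # toy image with four corners
print(len(good_features(img)))
```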
4f640c1338840f3740187352531dfeca9381b5c3
The problem of mining sequential patterns was recently introduced in [AS95]. We are given a database of sequences, where each sequence is a list of transactions ordered by transaction-time, and each transaction is a set of items. The problem is to discover all sequential patterns with a user-specified minimum support, where the support of a pattern is the number of data-sequences that contain the pattern. An example of a sequential pattern is "5% of customers bought 'Foundation' and 'Ringworld' in one transaction, followed by 'Second Foundation' in a later transaction". We generalize the problem as follows. First, we add time constraints that specify a minimum and/or maximum time period between adjacent elements in a pattern. Second, we relax the restriction that the items in an element of a sequential pattern must come from the same transaction, instead allowing the items to be present in a set of transactions whose transaction-times are within a user-specified time window. Third, given a user-defined taxonomy (is-a hierarchy) on items, we allow sequential patterns to include items across all levels of the taxonomy. We present GSP, a new algorithm that discovers these generalized sequential patterns. Empirical evaluation using synthetic and real-life data indicates that GSP is much faster than the AprioriAll algorithm presented in [AS95]. GSP scales linearly with the number of data-sequences, and has very good scale-up properties with respect to the average data-sequence size.
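The core containment test behind support counting can be sketched as follows: does a data-sequence of (time, itemset) pairs contain a pattern, with consecutive matched elements separated by a gap in [min_gap, max_gap]? This is a simplified illustration of the gap constraints only; the sliding-window and taxonomy generalizations, and GSP's candidate generation, are omitted. Backtracking is used because a greedy scan is incomplete under a max-gap bound:

```python
def contains(sequence, pattern, min_gap=0, max_gap=float("inf")):
    """sequence: time-sorted list of (time, itemset); pattern: list of itemsets."""
    def match(i, prev_time):
        if i == len(pattern):
            return True                           # all pattern elements matched
        for t, items in sequence:
            if prev_time is not None and not (
                    t > prev_time and min_gap <= t - prev_time <= max_gap):
                continue                          # violates ordering or gap bounds
            if pattern[i] <= items and match(i + 1, t):   # subset test, then recurse
                return True
        return False
    return match(0, None)

seq = [(1, {"Foundation", "Ringworld"}), (5, {"Second Foundation"})]
print(contains(seq, [{"Foundation", "Ringworld"}, {"Second Foundation"}],
               max_gap=10))   # True
```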
4282abe7e08bcfb2d282c063428fb187b2802e9c
With the gradual increase in the use of fillers, cases of patients treated by non-medical professionals or inexperienced physicians resulting in complications are also increasing. We herein report two patients who experienced acute complications after receiving filler injections and were successfully treated with adipose-derived stem cell (ADSC) therapy. Case 1 was a 23-year-old female patient who received a filler (Restylane) injection in her forehead, glabella, and nose by a non-medical professional. The day after her injection, inflammation was observed with a 3×3 cm skin necrosis. Case 2 was a 30-year-old woman who received a filler injection of hyaluronic acid gel (Juvederm) on her nasal dorsum and tip at a private clinic. She developed erythema and swelling in the filler-injected area. A solution containing ADSCs harvested from each patient's abdominal subcutaneous tissue was injected into the lesion at the subcutaneous and dermis levels. The wounds healed without additional treatment. With continuous follow-up, both patients had only fine linear scars 6 months postoperatively. By using adipose-derived stem cells, we successfully treated the acute complications of skin necrosis after filler injection, resulting in much less scarring, and more satisfactory results were achieved not only in wound healing but also in esthetics.
3198e5de8eb9edfd92e5f9c2cb325846e25f22aa
bdf434f475654ee0a99fe11fd63405b038244f69
Recidivism prediction scores are used across the USA to determine sentencing and supervision for hundreds of thousands of inmates. One such generator of recidivism prediction scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score, used in states like California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially-trained neural network that predicts recidivism and is trained to remove racial bias. When comparing the results of our model to COMPAS, we gain predictive accuracy and get closer to achieving two out of three measures of fairness: parity and equality of odds. Our model can be generalized to any prediction and demographic. This piece of research contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction.
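A minimal sketch of the adversarial setup (a generic reconstruction with a gradient-reversal layer and toy data; the paper's exact architecture, features, and loss weighting are not specified here): a predictor learns recidivism from a shared representation while an adversary tries to recover race from it, and the reversed gradient pushes the representation to be uninformative about the protected attribute.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
recid_head = nn.Linear(32, 1)    # predicts recidivism
race_head = nn.Linear(32, 1)     # adversary: predicts the protected attribute

opt = torch.optim.Adam([*encoder.parameters(), *recid_head.parameters(),
                        *race_head.parameters()], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, 10)                      # toy defendant features
y = torch.randint(0, 2, (64, 1)).float()     # toy recidivism labels
a = torch.randint(0, 2, (64, 1)).float()     # toy protected attribute

for _ in range(100):
    h = encoder(x)
    # adversary is trained to predict `a`, but reversed gradients make the
    # encoder remove information about `a` from the representation
    loss = bce(recid_head(h), y) + bce(race_head(GradReverse.apply(h, 1.0)), a)
    opt.zero_grad(); loss.backward(); opt.step()
```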
33fad977a6b317cfd6ecd43d978687e0df8a7338
This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform," are fundamental properties of local image texture, and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution, and present a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity, as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture, and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis. Index Terms: Nonparametric, texture analysis, Outex, Brodatz, distribution, histogram, contrast.
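A small sketch of the rotation-invariant uniform operator for the (P, R) = (8, 1) case, using integer pixel offsets rather than the circularly interpolated sampling the paper uses for general (P, R):

```python
import numpy as np

# the 8 neighbors of a pixel in circular order (P = 8, R = 1, no interpolation)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_riu2(image):
    """LBP_{8,1}^{riu2}: the code is the number of neighbors >= center when the
    circular binary pattern is 'uniform' (<= 2 bitwise 0/1 transitions), else 9."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    bits = np.stack([(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center)
                     for dy, dx in OFFSETS]).astype(int)     # (8, h-2, w-2)
    transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    return np.where(transitions <= 2, bits.sum(axis=0), 9)

tex = np.random.default_rng(1).random((32, 32))
hist = np.bincount(lbp_riu2(tex).ravel(), minlength=10)  # 10-bin texture feature
print(hist)
```

The resulting 10-bin occurrence histogram is exactly the kind of distribution the classifier compares nonparametrically; gray-scale invariance comes from thresholding neighbors against the center pixel.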
8ade5d29ae9eac7b0980bc6bc1b873d0dd12a486
12a97799334e3a455e278f2a995a93a6e0c034bf
This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training.