Fields per record: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-101
0704.2452
Optimum Linear LLR Calculation for Iterative Decoding on Fading Channels
On a fading channel with no channel state information at the receiver, calculating true log-likelihood ratios (LLRs) is complicated. Existing work assumes that the power of the additive noise is known and uses the expected value of the fading gain in a linear function of the channel output to find approximate LLRs. In this work, we first assume that the noise power is known and find the optimum linear approximation of the LLR in the sense of the maximum achievable transmission rate on the channel. The maximum achievable rate under this linear LLR calculation is almost equal to that under true LLR calculation. We also observe that this method appears to be optimum in the sense of bit error rate performance as well. These results are then extended to the case where the noise power is unknown at the receiver, and a performance almost identical to that with perfectly known noise power is obtained.
arxiv
@article{yazdani2007optimum, title={Optimum Linear LLR Calculation for Iterative Decoding on Fading Channels}, author={Raman Yazdani and Masoud Ardakani}, journal={arXiv preprint arXiv:0704.2452}, year={2007}, doi={10.1109/ISIT.2007.4557204}, archivePrefix={arXiv}, eprint={0704.2452}, primaryClass={cs.IT math.IT} }
yazdani2007optimum
arxiv-102
0704.2475
Physical Layer Network Coding
A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature: the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to straightforward network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for an equivalent coding operation. In doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in multi-hop networks. More specifically, the information-theoretic capacity of PNC is almost double that of traditional transmission in the SNR region of practical interest (above 0 dB). We believe this is the first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity.
arxiv
@article{shengli2007physical, title={Physical Layer Network Coding}, author={Zhang Shengli and Soung-Chang Liew and Patrick P.K. Lam}, journal={arXiv preprint arXiv:0704.2475}, year={2007}, archivePrefix={arXiv}, eprint={0704.2475}, primaryClass={cs.IT math.IT} }
shengli2007physical
arxiv-103
0704.2505
Algebraic Distributed Space-Time Codes with Low ML Decoding Complexity
"Extended Clifford algebras" are introduced as a means to obtain space-time block codes with low ML decoding complexity. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full-diversity Distributed Space-Time Codes (DSTCs) are provided for any power-of-two number of relays. The left regular matrix representation is shown to naturally result in space-time codes meeting the additional constraints required for DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity. In particular, ML decoding of these codes can be performed by applying the lattice decoder algorithm on a lattice of one-fourth the dimension required in general. Moreover, these codes have a uniform distribution of power among the relays and in time, leading to a low Peak-to-Average Power Ratio at the relays.
arxiv
@article{rajan2007algebraic, title={Algebraic Distributed Space-Time Codes with Low ML Decoding Complexity}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:0704.2505}, year={2007}, doi={10.1109/ISIT.2007.4557437}, archivePrefix={arXiv}, eprint={0704.2505}, primaryClass={cs.IT cs.DM math.IT} }
rajan2007algebraic
arxiv-104
0704.2507
STBCs from Representation of Extended Clifford Algebras
A set of sufficient conditions to construct $\lambda$-real-symbol Maximum Likelihood (ML) decodable STBCs has recently been provided by Karmakar et al. STBCs satisfying these sufficient conditions were named Clifford Unitary Weight (CUW) codes. In this paper, the maximal rate (measured in complex symbols per channel use) of CUW codes for $\lambda=2^a, a\in\mathbb{N}$ is obtained using tools from representation theory. Two algebraic constructions of codes achieving this maximal rate are also provided. One of the constructions is obtained using linear representations of finite groups, whereas the other is based on the concept of a right module algebra over non-commutative rings. To the knowledge of the authors, this is the first paper in which matrices over non-commutative rings are used to construct STBCs. An algebraic explanation is provided for the 'ABBA' construction first proposed by Tirkkonen et al. and the tensor product construction proposed by Karmakar et al. Furthermore, it is established that the four-transmit-antenna STBC originally proposed by Tirkkonen et al. based on the ABBA construction is actually a single-complex-symbol ML decodable code if the design variables are permuted and signal sets of appropriate dimensions are chosen.
arxiv
@article{rajan2007stbcs, title={STBCs from Representation of Extended Clifford Algebras}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:0704.2507}, year={2007}, doi={10.1109/ISIT.2007.4557141}, archivePrefix={arXiv}, eprint={0704.2507}, primaryClass={cs.IT cs.DM math.IT} }
rajan2007stbcs
arxiv-105
0704.2509
Signal Set Design for Full-Diversity Low-Decoding-Complexity Differential Scaled-Unitary STBCs
The problem of designing high-rate, full-diversity noncoherent space-time block codes (STBCs) with low encoding and decoding complexity is addressed. First, the notion of $g$-group encodable and $g$-group decodable linear STBCs is introduced. Then, for a known class of rate-1 linear designs, an explicit construction of fully diverse signal sets that lead to four-group encodable and four-group decodable differential scaled unitary STBCs for any power-of-two number of antennas is provided. Previous works on differential STBCs either sacrifice decoding complexity for higher rate or sacrifice rate for lower decoding complexity.
arxiv
@article{rajan2007signal, title={Signal Set Design for Full-Diversity Low-Decoding-Complexity Differential Scaled-Unitary STBCs}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:0704.2509}, year={2007}, doi={10.1109/ISIT.2007.4557453}, archivePrefix={arXiv}, eprint={0704.2509}, primaryClass={cs.IT math.IT} }
rajan2007signal
arxiv-106
0704.2511
Noncoherent Low-Decoding-Complexity Space-Time Codes for Wireless Relay Networks
The differential encoding/decoding setup introduced by Kiran et al., Oggier et al., and Jing et al. for wireless relay networks that use codebooks consisting of unitary matrices is extended to allow codebooks consisting of scaled unitary matrices. For such codebooks to be used in the Jing-Hassibi protocol for cooperative diversity, the conditions that need to be satisfied by the relay matrices and the codebook are identified. A class of previously known rate-one, full-diversity, four-group encodable and four-group decodable Differential Space-Time Codes (DSTCs) is proposed for use as Distributed DSTCs (DDSTCs) in the proposed setup. To the best of our knowledge, this is the first known low-decoding-complexity DDSTC scheme for cooperative wireless networks.
arxiv
@article{rajan2007noncoherent, title={Noncoherent Low-Decoding-Complexity Space-Time Codes for Wireless Relay Networks}, author={G. Susinder Rajan and B. Sundar Rajan}, journal={arXiv preprint arXiv:0704.2511}, year={2007}, doi={10.1109/ISIT.2007.4557438}, archivePrefix={arXiv}, eprint={0704.2511}, primaryClass={cs.IT math.IT} }
rajan2007noncoherent
arxiv-107
0704.2542
Narratives within immersive technologies
The main goal of this project is to investigate technical advances that enhance the possibility of developing narratives within immersive mediated environments. An important part of the research is concerned with the question of how a script can be written, annotated, and realized for an immersive context. A first description of the main theoretical framework and the ongoing work is provided, together with a first script example. This project is part of the program for presence research, and it will exploit physiological feedback and Computational Intelligence within virtual reality.
arxiv
@article{llobera2007narratives, title={Narratives within immersive technologies}, author={Joan Llobera}, journal={arXiv preprint arXiv:0704.2542}, year={2007}, archivePrefix={arXiv}, eprint={0704.2542}, primaryClass={cs.HC} }
llobera2007narratives
arxiv-108
0704.2544
Existence Proofs of Some EXIT Like Functions
The Extended BP (EBP) Generalized EXIT (GEXIT) function introduced in \cite{MMRU05} plays a fundamental role in the asymptotic analysis of sparse graph codes. For transmission over the binary erasure channel (BEC), the analytic properties of the EBP GEXIT function are relatively simple and well understood. The general case is much harder, and even the existence of the curve is not known in general. We introduce some tools from non-linear analysis which can be useful for proving the existence of EXIT-like curves in some cases. The main tool is the Krasnoselskii-Rabinowitz (KR) bifurcation theorem.
arxiv
@article{rathi2007existence, title={Existence Proofs of Some EXIT Like Functions}, author={Vishwambhar Rathi and Ruediger Urbanke}, journal={arXiv preprint arXiv:0704.2544}, year={2007}, archivePrefix={arXiv}, eprint={0704.2544}, primaryClass={cs.IT math.IT} }
rathi2007existence
arxiv-109
0704.2596
Computing Extensions of Linear Codes
This paper deals with the problem of increasing the minimum distance of a linear code by adding one or more columns to the generator matrix. Several methods to compute extensions of linear codes are presented. Many codes improving the previously known lower bounds on the minimum distance have been found.
arxiv
@article{grassl2007computing, title={Computing Extensions of Linear Codes}, author={Markus Grassl}, journal={Proceedings 2007 IEEE International Symposium on Information Theory (ISIT 2007), Nice, France, June 2007, pp. 476-480}, year={2007}, doi={10.1109/ISIT.2007.4557095}, archivePrefix={arXiv}, eprint={0704.2596}, primaryClass={cs.IT cs.DM math.IT} }
grassl2007computing
arxiv-110
0704.2609
A-infinity structure on simplicial complexes
A discrete (finite-difference) analogue of differential forms is considered, defined on simplicial complexes, including triangulations of continuous manifolds. Various operations are explicitly defined on these forms, including the exterior derivative and the exterior product. The latter is non-associative. Instead, as anticipated, it is part of a non-trivial A-infinity structure, involving a chain of poly-linear operations constrained by the nilpotency relation $(d + \wedge + m + \ldots)^n = 0$ with $n=2$.
arxiv
@article{dolotin2007a-infinity, title={A-infinity structure on simplicial complexes}, author={V. Dolotin and A. Morozov and Sh. Shakirov}, journal={arXiv preprint arXiv:0704.2609}, year={2007}, doi={10.1007/s11232-008-0093-9}, number={ITEP/TH-13/07}, archivePrefix={arXiv}, eprint={0704.2609}, primaryClass={math.GT cs.DM hep-th} }
dolotin2007a-infinity
arxiv-111
0704.2644
Joint universal lossy coding and identification of stationary mixing sources
The problem of joint universal source coding and modeling, treated in the context of lossless codes by Rissanen, was recently generalized to fixed-rate lossy coding of finitely parametrized continuous-alphabet i.i.d. sources. We extend these results to variable-rate lossy block coding of stationary ergodic sources and show that, for bounded metric distortion measures, any finitely parametrized family of stationary sources satisfying suitable mixing, smoothness and Vapnik-Chervonenkis learnability conditions admits universal schemes for joint lossy source coding and identification. We also give several explicit examples of parametric sources satisfying the regularity conditions.
arxiv
@article{raginsky2007joint, title={Joint universal lossy coding and identification of stationary mixing sources}, author={Maxim Raginsky}, journal={arXiv preprint arXiv:0704.2644}, year={2007}, archivePrefix={arXiv}, eprint={0704.2644}, primaryClass={cs.IT cs.LG math.IT} }
raginsky2007joint
arxiv-112
0704.2651
Opportunistic Communications in an Orthogonal Multiaccess Relay Channel
The problem of resource allocation is studied for a two-user fading orthogonal multiaccess relay channel (MARC) where both users (sources) communicate with a destination in the presence of a relay. A half-duplex relay is considered that transmits on a channel orthogonal to that used by the sources. The instantaneous fading state between every transmit-receive pair in this network is assumed to be known at both the transmitter and receiver. Under an average power constraint at each source and the relay, the sum-rate for the achievable strategy of decode-and-forward (DF) is maximized over all power allocations (policies) at the sources and relay. It is shown that the sum-rate maximizing policy exploits the multiuser fading diversity to reveal the optimality of opportunistic channel use by each user. A geometric interpretation of the optimal power policy is also presented.
arxiv
@article{sankar2007opportunistic, title={Opportunistic Communications in an Orthogonal Multiaccess Relay Channel}, author={Lalitha Sankar and Yingbin Liang and H. Vincent Poor and Narayan B. Mandayam}, journal={arXiv preprint arXiv:0704.2651}, year={2007}, doi={10.1109/ISIT.2007.4557396}, archivePrefix={arXiv}, eprint={0704.2651}, primaryClass={cs.IT math.IT} }
sankar2007opportunistic
arxiv-113
0704.2659
Minimum Expected Distortion in Gaussian Layered Broadcast Coding with Successive Refinement
A transmitter without channel state information (CSI) wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. In the limit of a continuum of infinite layers, the optimal power distribution that minimizes the expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. In the optimal power distribution, as SNR increases, the allocation over the higher layers remains unchanged; rather the extra power is allocated towards the lower layers. On the other hand, as the bandwidth ratio b (channel uses per source symbol) tends to zero, the power distribution that minimizes expected distortion converges to the power distribution that maximizes expected capacity. While expected distortion can be improved by acquiring CSI at the transmitter (CSIT) or by increasing diversity from the realization of independent fading paths, at high SNR the performance benefit from diversity exceeds that from CSIT, especially when b is large.
arxiv
@article{ng2007minimum, title={Minimum Expected Distortion in Gaussian Layered Broadcast Coding with Successive Refinement}, author={Chris T. K. Ng and Deniz Gunduz and Andrea J. Goldsmith and Elza Erkip}, journal={arXiv preprint arXiv:0704.2659}, year={2007}, doi={10.1109/ISIT.2007.4557165}, archivePrefix={arXiv}, eprint={0704.2659}, primaryClass={cs.IT math.IT} }
ng2007minimum
arxiv-114
0704.2668
Supervised Feature Selection via Dependence Estimation
We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.
arxiv
@article{song2007supervised, title={Supervised Feature Selection via Dependence Estimation}, author={Le Song and Alex Smola and Arthur Gretton and Karsten Borgwardt and Justin Bedo}, journal={arXiv preprint arXiv:0704.2668}, year={2007}, archivePrefix={arXiv}, eprint={0704.2668}, primaryClass={cs.LG} }
song2007supervised
arxiv-115
0704.2680
A Channel that Heats Up
Motivated by on-chip communication, a channel model is proposed where the variance of the additive noise depends on the weighted sum of the past channel input powers. For this channel, an expression for the capacity per unit cost is derived, and it is shown that the expression holds also in the presence of feedback.
arxiv
@article{koch2007a, title={A Channel that Heats Up}, author={Tobias Koch and Amos Lapidoth and Paul P. Sotiriadis}, journal={arXiv preprint arXiv:0704.2680}, year={2007}, archivePrefix={arXiv}, eprint={0704.2680}, primaryClass={cs.IT math.IT} }
koch2007a
arxiv-116
0704.2725
Exploiting Heavy Tails in Training Times of Multilayer Perceptrons: A Case Study with the UCI Thyroid Disease Database
The random initialization of the weights of a multilayer perceptron makes it possible to model its training process as a Las Vegas algorithm, i.e., a randomized algorithm which stops when some required training error is obtained, and whose execution time is a random variable. This modeling is used to perform a case study on a well-known pattern recognition benchmark: the UCI Thyroid Disease Database. Empirical evidence is presented that the training time probability distribution exhibits heavy-tail behavior, meaning a large probability mass of long executions. This fact is exploited to reduce the training time cost by applying two simple restart strategies. The first assumes full knowledge of the distribution, yielding a 40% reduction in expected time with respect to training without restarts. The second assumes no knowledge, yielding a reduction ranging from 9% to 23%.
arxiv
@article{cebrian2007exploiting, title={Exploiting Heavy Tails in Training Times of Multilayer Perceptrons: A Case Study with the UCI Thyroid Disease Database}, author={Manuel Cebrian and Ivan Cantador}, journal={arXiv preprint arXiv:0704.2725}, year={2007}, archivePrefix={arXiv}, eprint={0704.2725}, primaryClass={cs.NE} }
cebrian2007exploiting
arxiv-117
0704.2778
Random Access Broadcast: Stability and Throughput Analysis
A wireless network in which packets are broadcast to a group of receivers through use of a random access protocol is considered in this work. The relation to previous work on networks of interacting queues is discussed and subsequently, the stability and throughput regions of the system are analyzed and presented. A simple network of two source nodes and two destination nodes is considered first. The broadcast service process is analyzed assuming a channel that allows for packet capture and multipacket reception. In this small network, the stability and throughput regions are observed to coincide. The same problem for a network with N sources and M destinations is considered next. The channel model is simplified in that multipacket reception is no longer permitted. Bounds on the stability region are developed using the concept of stability rank and the throughput region of the system is compared to the bounds. Our results show that as the number of destination nodes increases, the stability and throughput regions diminish. Additionally, a previous conjecture that the stability and throughput regions coincide for a network of arbitrarily many sources is supported for a broadcast scenario by the results presented in this work.
arxiv
@article{shrader2007random, title={Random Access Broadcast: Stability and Throughput Analysis}, author={Brooke Shrader and Anthony Ephremides}, journal={IEEE Transactions on Information Theory, vol. 53, no. 8, pp. 2915-2921, August 2007.}, year={2007}, archivePrefix={arXiv}, eprint={0704.2778}, primaryClass={cs.IT math.IT} }
shrader2007random
arxiv-118
0704.2779
The Complexity of Simple Stochastic Games
In this paper we survey the computational time complexity of assorted simple stochastic game problems, and we give an overview of the best known algorithms associated with each problem.
arxiv
@article{dieckelmann2007the, title={The Complexity of Simple Stochastic Games}, author={Jonas Dieckelmann}, journal={arXiv preprint arXiv:0704.2779}, year={2007}, archivePrefix={arXiv}, eprint={0704.2779}, primaryClass={cs.CC cs.GT} }
dieckelmann2007the
arxiv-119
0704.2786
Writing on Dirty Paper with Resizing and its Application to Quasi-Static Fading Broadcast Channels
This paper studies a variant of the classical problem of "writing on dirty paper" in which the sum of the input and the interference, or dirt, is multiplied by a random variable that models resizing, known to the decoder but not to the encoder. The achievable rate of Costa's dirty paper coding (DPC) scheme is calculated and compared to the case of the decoder's also knowing the dirt. In the ergodic case, the corresponding rate loss vanishes asymptotically in the limits of both high and low signal-to-noise ratio (SNR), and is small at all finite SNR for typical distributions like Rayleigh, Rician, and Nakagami. In the quasi-static case, the DPC scheme is lossless at all SNR in terms of outage probability. Quasi-static fading broadcast channels (BC) without transmit channel state information (CSI) are investigated as an application of the robustness properties. It is shown that the DPC scheme leads to an outage achievable rate region that strictly dominates that of time division.
arxiv
@article{zhang2007writing, title={Writing on Dirty Paper with Resizing and its Application to Quasi-Static Fading Broadcast Channels}, author={Wenyi Zhang and Shivaprasad Kotagiri and J. Nicholas Laneman}, journal={arXiv preprint arXiv:0704.2786}, year={2007}, doi={10.1109/ISIT.2007.4557255}, archivePrefix={arXiv}, eprint={0704.2786}, primaryClass={cs.IT math.IT} }
zhang2007writing
arxiv-120
0704.2808
Minimum cost distributed source coding over a network
This work considers the problem of transmitting multiple compressible sources over a network at minimum cost. The aim is to find the optimal rates at which the sources should be compressed and the network flows using which they should be transmitted so that the cost of the transmission is minimal. We consider networks with capacity constraints and linear cost functions. The problem is complicated by the fact that the description of the feasible rate region of distributed source coding problems typically has a number of constraints that is exponential in the number of sources. This renders general purpose solvers inefficient. We present a framework in which these problems can be solved efficiently by exploiting the structure of the feasible rate regions coupled with dual decomposition and optimization techniques such as the subgradient method and the proximal bundle method.
arxiv
@article{ramamoorthy2007minimum, title={Minimum cost distributed source coding over a network}, author={Aditya Ramamoorthy}, journal={arXiv preprint arXiv:0704.2808}, year={2007}, archivePrefix={arXiv}, eprint={0704.2808}, primaryClass={cs.IT cs.NI math.IT} }
ramamoorthy2007minimum
arxiv-121
0704.2811
On Algebraic Decoding of $q$-ary Reed-Muller and Product-Reed-Solomon Codes
We consider a list decoding algorithm recently proposed by Pellikaan-Wu \cite{PW2005} for $q$-ary Reed-Muller codes $\mathcal{RM}_q(\ell, m, n)$ of length $n \leq q^m$ when $\ell \leq q$. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of $\tau \leq (1 - \sqrt{{\ell q^{m-1}}/{n}})$. This is an improvement over the proof using one-point Algebraic-Geometric codes given in \cite{PW2005}. The described algorithm can be adapted to decode Product-Reed-Solomon codes. We then propose a new low complexity recursive algebraic decoding algorithm for Reed-Muller and Product-Reed-Solomon codes. Our algorithm achieves a relative error correction radius of $\tau \leq \prod_{i=1}^m (1 - \sqrt{k_i/q})$. This technique is then proved to outperform the Pellikaan-Wu method in both complexity and error correction radius over a wide range of code rates.
arxiv
@article{santhi2007on, title={On Algebraic Decoding of $q$-ary Reed-Muller and Product-Reed-Solomon Codes}, author={Nandakishore Santhi}, journal={arXiv preprint arXiv:0704.2811}, year={2007}, doi={10.1109/ISIT.2007.4557130}, number={LA-UR-07-0469}, archivePrefix={arXiv}, eprint={0704.2811}, primaryClass={cs.IT cs.DM math.IT} }
santhi2007on
arxiv-122
0704.2841
A High-Throughput Cross-Layer Scheme for Distributed Wireless Ad Hoc Networks
In wireless ad hoc networks, distributed nodes can collaboratively form an antenna array for long-distance communications to achieve high energy efficiency. In recent work, Ochiai et al. have shown that such collaborative beamforming can achieve a statistically nice beampattern with a narrow main lobe and low sidelobes. However, the process of collaboration introduces significant delay, since all collaborating nodes need access to the same information. In this paper, a technique that significantly reduces the collaboration overhead is proposed. It consists of two phases. In the first phase, nodes transmit locally in a random access fashion. Collisions, when they occur, are viewed as linear mixtures of the collided packets. In the second phase, a set of cooperating nodes acts as a distributed antenna system and beamforms the received analog waveform to one or more faraway destinations. This step requires multiplication of the received analog waveform by a complex number, which is independently computed by each cooperating node, and which enables separation of the collided packets based on their final destination. The scheme requires that each node have global knowledge of the network coordinates. The proposed scheme can achieve high throughput, which in certain cases exceeds one.
arxiv
@article{petropulu2007a, title={A High-Throughput Cross-Layer Scheme for Distributed Wireless Ad Hoc Networks}, author={Athina P. Petropulu and Lun Dong and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.2841}, year={2007}, archivePrefix={arXiv}, eprint={0704.2841}, primaryClass={cs.IT math.IT} }
petropulu2007a
arxiv-123
0704.2852
Nature-Inspired Interconnects for Self-Assembled Large-Scale Network-on-Chip Designs
Future nano-scale electronics built up from an Avogadro number of components needs efficient, highly scalable, and robust means of communication in order to be competitive with traditional silicon approaches. In recent years, the Networks-on-Chip (NoC) paradigm emerged as a promising solution to interconnect challenges in silicon-based electronics. Current NoC architectures are either highly regular or fully customized, both of which represent implausible assumptions for emerging bottom-up self-assembled molecular electronics that are generally assumed to have a high degree of irregularity and imperfection. Here, we pragmatically and experimentally investigate important design trade-offs and properties of an irregular, abstract, yet physically plausible 3D small-world interconnect fabric that is inspired by modern network-on-chip paradigms. We vary the framework's key parameters, such as the connectivity, the number of switch nodes, the distribution of long- versus short-range connections, and measure the network's relevant communication characteristics. We further explore the robustness against link failures and the ability and efficiency to solve a simple toy problem, the synchronization task. The results confirm that (1) computation in irregular assemblies is a promising and disruptive computing paradigm for self-assembled nano-scale electronics and (2) that 3D small-world interconnect fabrics with a power-law decaying distribution of shortcut lengths are physically plausible and have major advantages over local 2D and 3D regular topologies.
arxiv
@article{teuscher2007nature-inspired, title={Nature-Inspired Interconnects for Self-Assembled Large-Scale Network-on-Chip Designs}, author={Christof Teuscher}, journal={Chaos, 17(2):026106, 2007}, year={2007}, doi={10.1063/1.2740566}, number={LA-UR-07-0204}, archivePrefix={arXiv}, eprint={0704.2852}, primaryClass={cs.AR cond-mat.dis-nn nlin.AO} }
teuscher2007nature-inspired
arxiv-124
0704.2857
Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View
These are the notes for a set of lectures delivered by the two authors at the Les Houches Summer School on 'Complex Systems' in July 2006. They provide an introduction to the basic concepts in modern (probabilistic) coding theory, highlighting connections with statistical mechanics. We also stress common concepts with other disciplines dealing with similar problems that can be generically referred to as 'large graphical models'. While most of the lectures are devoted to the classical channel coding problem over simple memoryless channels, we present a discussion of more complex channel models. We conclude with an overview of the main open challenges in the field.
arxiv
@article{montanari2007modern, title={Modern Coding Theory: The Statistical Mechanics and Computer Science Point of View}, author={Andrea Montanari and Rudiger Urbanke}, journal={arXiv preprint arXiv:0704.2857}, year={2007}, archivePrefix={arXiv}, eprint={0704.2857}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
montanari2007modern
arxiv-125
0704.2900
Higher-order theories
<|reference_start|>Higher-order theories: We extend our approach to abstract syntax (with binding constructions) through modules and linearity. First we give a new general definition of arity, yielding the companion notion of signature. Then we obtain a modularity result as requested by Ghani and Uustalu (2003): in our setting, merging two extensions of syntax corresponds to building an amalgamated sum. Finally we define a natural notion of equation concerning a signature and prove the existence of an initial semantics for a so-called representable signature equipped with a set of equations.<|reference_end|>
arxiv
@article{hirschowitz2007higher-order, title={Higher-order theories}, author={Andre' Hirschowitz and Marco Maggesi}, journal={arXiv preprint arXiv:0704.2900}, year={2007}, archivePrefix={arXiv}, eprint={0704.2900}, primaryClass={cs.LO} }
hirschowitz2007higher-order
arxiv-126
0704.2902
Recommending Related Papers Based on Digital Library Access Records
<|reference_start|>Recommending Related Papers Based on Digital Library Access Records: An important goal for digital libraries is to enable researchers to more easily explore related work. While citation data is often used as an indicator of relatedness, in this paper we demonstrate that digital access records (e.g. http-server logs) can be used as indicators as well. In particular, we show that measures based on co-access provide better coverage than co-citation, that they are available much sooner, and that they are more accurate for recent papers.<|reference_end|>
arxiv
@article{pohl2007recommending, title={Recommending Related Papers Based on Digital Library Access Records}, author={Stefan Pohl and Filip Radlinski and Thorsten Joachims}, journal={arXiv preprint arXiv:0704.2902}, year={2007}, archivePrefix={arXiv}, eprint={0704.2902}, primaryClass={cs.DL cs.IR} }
pohl2007recommending
arxiv-127
0704.2919
On Verifying and Engineering the Well-gradedness of a Union-closed Family
<|reference_start|>On Verifying and Engineering the Well-gradedness of a Union-closed Family: Current techniques for generating a knowledge space, such as QUERY, guarantee that the resulting structure is closed under union, but not that it satisfies well-gradedness, which is one of the defining conditions for a learning space. We give necessary and sufficient conditions on the base of a union-closed set family that ensure that the family is well-graded. We consider two cases, depending on whether or not the family contains the empty set. We also provide algorithms for efficiently testing these conditions, and for augmenting a set family in a minimal way to one that satisfies these conditions.<|reference_end|>
arxiv
@article{eppstein2007on, title={On Verifying and Engineering the Well-gradedness of a Union-closed Family}, author={David Eppstein and Jean-Claude Falmagne and Hasan Uzun}, journal={J. Mathematical Psychology 53(1):34-39, 2009}, year={2007}, doi={10.1016/j.jmp.2008.09.002}, archivePrefix={arXiv}, eprint={0704.2919}, primaryClass={math.CO cs.DM cs.DS} }
eppstein2007on
arxiv-128
0704.2926
Optimal Routing for the Gaussian Multiple-Relay Channel with Decode-and-Forward
<|reference_start|>Optimal Routing for the Gaussian Multiple-Relay Channel with Decode-and-Forward: In this paper, we study a routing problem on the Gaussian multiple relay channel, in which nodes employ a decode-and-forward coding strategy. We are interested in routes for the information flow through the relays that achieve the highest DF rate. We first construct an algorithm that provably finds optimal DF routes. As the algorithm runs in factorial time in the worst case, we propose a polynomial time heuristic algorithm that finds an optimal route with high probability. We demonstrate that the optimal (and near optimal) DF routes are good in practice by simulating a distributed DF coding scheme using low density parity check codes with puncturing and incremental redundancy.<|reference_end|>
arxiv
@article{ong2007optimal, title={Optimal Routing for the Gaussian Multiple-Relay Channel with Decode-and-Forward}, author={Lawrence Ong and Mehul Motani}, journal={Proceedings of the 2007 IEEE International Symposium on Information Theory (ISIT 2007), Acropolis Congress and Exhibition Center, Nice, France, pp. 1061-1065, Jun. 24-29 2007.}, year={2007}, doi={10.1109/ISIT.2007.4557364}, archivePrefix={arXiv}, eprint={0704.2926}, primaryClass={cs.IT math.IT} }
ong2007optimal
arxiv-129
0704.2963
Using Access Data for Paper Recommendations on ArXiv.org
<|reference_start|>Using Access Data for Paper Recommendations on ArXiv.org: This thesis investigates the use of access log data as a source of information for identifying related scientific papers. This is done for arXiv.org, the authority for publication of e-prints in several fields of physics. Compared to citation information, access logs have the advantage of being immediately available, without manual or automatic extraction of the citation graph. Because of that, a main focus is on the question, how far user behavior can serve as a replacement for explicit meta-data, which potentially might be expensive or completely unavailable. Therefore, we compare access, content, and citation-based measures of relatedness on different recommendation tasks. As a final result, an online recommendation system has been built that can help scientists to find further relevant literature, without having to search for them actively.<|reference_end|>
arxiv
@article{pohl2007using, title={Using Access Data for Paper Recommendations on ArXiv.org}, author={Stefan Pohl}, journal={arXiv preprint arXiv:0704.2963}, year={2007}, archivePrefix={arXiv}, eprint={0704.2963}, primaryClass={cs.DL cs.IR} }
pohl2007using
arxiv-130
0704.3019
Arbitrary Rate Permutation Modulation for the Gaussian Channel
<|reference_start|>Arbitrary Rate Permutation Modulation for the Gaussian Channel: In this paper non-group permutation modulated sequences for the Gaussian channel are considered. Without the restriction to group codes rather than subsets of group codes, arbitrary rates are achievable. The code construction utilizes the known optimal group constellations to ensure at least the same performance but exploit the Gray code ordering structure of multiset permutations as a selection criterion at the decoder. The decoder achieves near maximum likelihood performance at low computational cost and low additional memory requirements at the receiver.<|reference_end|>
arxiv
@article{henkel2007arbitrary, title={Arbitrary Rate Permutation Modulation for the Gaussian Channel}, author={Oliver Henkel}, journal={arXiv preprint arXiv:0704.3019}, year={2007}, archivePrefix={arXiv}, eprint={0704.3019}, primaryClass={cs.IT math.IT} }
henkel2007arbitrary
arxiv-131
0704.3035
Achievable Rates for Two-Way Wire-Tap Channels
<|reference_start|>Achievable Rates for Two-Way Wire-Tap Channels: We consider two-way wire-tap channels, where two users are communicating with each other in the presence of an eavesdropper, who has access to the communications through a multiple-access channel. We find achievable rates for two different scenarios, the Gaussian two-way wire-tap channel, (GTW-WT), and the binary additive two-way wire-tap channel, (BATW-WT). It is shown that the two-way channels inherently provide a unique advantage for wire-tapped scenarios, as the users know their own transmitted signals and in effect help encrypt the other user's messages, similar to a one-time pad. We compare the achievable rates to that of the Gaussian multiple-access wire-tap channel (GMAC-WT) to illustrate this advantage.<|reference_end|>
arxiv
@article{tekin2007achievable, title={Achievable Rates for Two-Way Wire-Tap Channels}, author={Ender Tekin, Aylin Yener}, journal={arXiv preprint arXiv:0704.3035}, year={2007}, archivePrefix={arXiv}, eprint={0704.3035}, primaryClass={cs.IT cs.CR math.IT} }
tekin2007achievable
arxiv-132
0704.3094
Detection of two-sided alternatives in a Brownian motion model
<|reference_start|>Detection of two-sided alternatives in a Brownian motion model: This work examines the problem of sequential detection of a change in the drift of a Brownian motion in the case of two-sided alternatives. Applications to real life situations in which two-sided changes can occur are discussed. Traditionally, 2-CUSUM stopping rules have been used for this problem due to their asymptotically optimal character as the mean time between false alarms tends to $\infty$. In particular, attention has focused on 2-CUSUM harmonic mean rules due to the simplicity in calculating their first moments. In this paper, we derive closed-form expressions for the first moment of a general 2-CUSUM stopping rule. We use these expressions to obtain explicit upper and lower bounds for it. Moreover, we derive an expression for the rate of change of this first moment as one of the threshold parameters changes. Based on these expressions we obtain explicit upper and lower bounds to this rate of change. Using these expressions we are able to find the best 2-CUSUM stopping rule with respect to the extended Lorden criterion. In fact, we demonstrate not only the existence but also the uniqueness of the best 2-CUSUM stopping rule both in the case of a symmetric change and in the case of a non-symmetric change. Furthermore, we discuss the existence of a modification of the 2-CUSUM stopping rule that has a strictly better performance than its classical 2-CUSUM counterpart for small values of the mean time between false alarms. We conclude with a discussion on the open problem of strict optimality in the case of two-sided alternatives.<|reference_end|>
arxiv
@article{hadjiliadis2007detection, title={Detection of two-sided alternatives in a Brownian motion model}, author={Olympia Hadjiliadis and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3094}, year={2007}, archivePrefix={arXiv}, eprint={0704.3094}, primaryClass={cs.IT math.IT} }
hadjiliadis2007detection
arxiv-133
0704.3120
Space Time Codes from Permutation Codes
<|reference_start|>Space Time Codes from Permutation Codes: A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block lengths, a result that has been conjectured already in previous work. Further, the connection to permutation codes allows for moderately complex encoding/decoding algorithms.<|reference_end|>
arxiv
@article{henkel2007space, title={Space Time Codes from Permutation Codes}, author={Oliver Henkel}, journal={Proc. IEEE GlobeCom, San Francisco, California, Nov. 2006}, year={2007}, archivePrefix={arXiv}, eprint={0704.3120}, primaryClass={cs.IT math.IT} }
henkel2007space
arxiv-134
0704.3141
Algorithm for Evaluation of the Interval Power Function of Unconstrained Arguments
<|reference_start|>Algorithm for Evaluation of the Interval Power Function of Unconstrained Arguments: We describe an algorithm for evaluation of the interval extension of the power function of variables x and y given by the expression x^y. Our algorithm reduces the general case to the case of non-negative bases.<|reference_end|>
arxiv
@article{petrov2007algorithm, title={Algorithm for Evaluation of the Interval Power Function of Unconstrained Arguments}, author={Evgueni Petrov}, journal={arXiv preprint arXiv:0704.3141}, year={2007}, archivePrefix={arXiv}, eprint={0704.3141}, primaryClass={cs.MS} }
petrov2007algorithm
arxiv-135
0704.3157
Experimenting with recursive queries in database and logic programming systems
<|reference_start|>Experimenting with recursive queries in database and logic programming systems: This paper considers the problem of reasoning on massive amounts of (possibly distributed) data. Presently, existing proposals show some limitations: {\em (i)} the quantity of data that can be handled contemporarily is limited, due to the fact that reasoning is generally carried out in main-memory; {\em (ii)} the interaction with external (and independent) DBMSs is not trivial and, in several cases, not allowed at all; {\em (iii)} the efficiency of present implementations is still not sufficient for their utilization in complex reasoning tasks involving massive amounts of data. This paper provides a contribution in this setting; it presents a new system, called DLV$^{DB}$, which aims to solve these problems. Moreover, the paper reports the results of a thorough experimental analysis we have carried out for comparing our system with several state-of-the-art systems (both logic and databases) on some classical deductive problems; the other tested systems are: LDL++, XSB, Smodels and three top-level commercial DBMSs. DLV$^{DB}$ significantly outperforms even the commercial Database Systems on recursive queries. To appear in Theory and Practice of Logic Programming (TPLP)<|reference_end|>
arxiv
@article{terracina2007experimenting, title={Experimenting with recursive queries in database and logic programming systems}, author={Giorgio Terracina, Nicola Leone, Vincenzino Lio, Claudio Panetta}, journal={arXiv preprint arXiv:0704.3157}, year={2007}, archivePrefix={arXiv}, eprint={0704.3157}, primaryClass={cs.AI cs.DB} }
terracina2007experimenting
arxiv-136
0704.3177
Computing modular polynomials in quasi-linear time
<|reference_start|>Computing modular polynomials in quasi-linear time: We analyse and compare the complexity of several algorithms for computing modular polynomials. We show that an algorithm relying on floating point evaluation of modular functions and on interpolation, which has received little attention in the literature, has a complexity that is essentially (up to logarithmic factors) linear in the size of the computed polynomials. In particular, it obtains the classical modular polynomials $\Phi_\ell$ of prime level $\ell$ in time O (\ell^3 \log^4 \ell \log \log \ell). Besides treating modular polynomials for $\Gamma^0 (\ell)$, which are an important ingredient in many algorithms dealing with isogenies of elliptic curves, the algorithm is easily adapted to more general situations. Composite levels are handled just as easily as prime levels, as well as polynomials between a modular function and its transform of prime level, such as the Schl\"afli polynomials and their generalisations. Our distributed implementation of the algorithm confirms the theoretical analysis by computing modular equations of record level around 10000 in less than two weeks on ten processors.<|reference_end|>
arxiv
@article{enge2007computing, title={Computing modular polynomials in quasi-linear time}, author={Andreas Enge (INRIA Futurs)}, journal={Mathematics of Computation 78, 267 (2009) 1809-1824}, year={2007}, archivePrefix={arXiv}, eprint={0704.3177}, primaryClass={math.NT cs.CC} }
enge2007computing
arxiv-137
0704.3197
Euclidean Shortest Paths in Simple Cube Curves at a Glance
<|reference_start|>Euclidean Shortest Paths in Simple Cube Curves at a Glance: This paper reports about the development of two provably correct approximate algorithms which calculate the Euclidean shortest path (ESP) within a given cube-curve with arbitrary accuracy, defined by $\epsilon >0$, and in time complexity $\kappa(\epsilon) \cdot {\cal O}(n)$, where $\kappa(\epsilon)$ is the length difference between the path used for initialization and the minimum-length path, divided by $\epsilon$. A run-time diagram also illustrates this linear-time behavior of the implemented ESP algorithm.<|reference_end|>
arxiv
@article{li2007euclidean, title={Euclidean Shortest Paths in Simple Cube Curves at a Glance}, author={Fajie Li and Reinhard Klette}, journal={arXiv preprint arXiv:0704.3197}, year={2007}, number={CITR-TR-198}, archivePrefix={arXiv}, eprint={0704.3197}, primaryClass={cs.CG cs.DM} }
li2007euclidean
arxiv-138
0704.3199
Generalized Stability Condition for Generalized and Doubly-Generalized LDPC Codes
<|reference_start|>Generalized Stability Condition for Generalized and Doubly-Generalized LDPC Codes: In this paper, the stability condition for low-density parity-check (LDPC) codes on the binary erasure channel (BEC) is extended to generalized LDPC (GLDPC) codes and doubly-generalized LDPC (D-GLDPC) codes. It is proved that, in both cases, the stability condition only involves the component codes with minimum distance 2. The stability condition for GLDPC codes is always expressed as an upper bound to the decoding threshold. This is not possible for D-GLDPC codes, unless all the generalized variable nodes have minimum distance at least 3. Furthermore, a condition called derivative matching is defined in the paper. This condition is sufficient for a GLDPC or D-GLDPC code to achieve the stability condition with equality. If this condition is satisfied, the threshold of D-GLDPC codes (whose generalized variable nodes have all minimum distance at least 3) and GLDPC codes can be expressed in closed form.<|reference_end|>
arxiv
@article{paolini2007generalized, title={Generalized Stability Condition for Generalized and Doubly-Generalized LDPC Codes}, author={E. Paolini and M. Fossorier and M. Chiani}, journal={arXiv preprint arXiv:0704.3199}, year={2007}, doi={10.1109/ISIT.2007.4557440}, archivePrefix={arXiv}, eprint={0704.3199}, primaryClass={cs.IT math.IT} }
paolini2007generalized
arxiv-139
0704.3228
Characterization of P2P IPTV Traffic: Scaling Analysis
<|reference_start|>Characterization of P2P IPTV Traffic: Scaling Analysis: P2P IPTV applications arise on the Internet and will be massively used in the future. It is expected that P2P IPTV will contribute to increase the overall Internet traffic. In this context, it is important to measure the impact of P2P IPTV on the networks and to characterize this traffic. During the 2006 FIFA World Cup, we performed an extensive measurement campaign. We measured network traffic generated by broadcasting soccer games by the most popular P2P IPTV applications, namely PPLive, PPStream, SOPCast and TVAnts. From the collected data, we characterized the P2P IPTV traffic structure at different time scales by using a wavelet-based transform method. To the best of our knowledge, this is the first work that presents a complete multiscale analysis of the P2P IPTV traffic. Our results show that the scaling properties of the TCP traffic present periodic behavior whereas the UDP traffic is stationary and leads to long-range dependency characteristics. For all the applications, the download traffic has different characteristics than the upload traffic. The signaling traffic has a significant impact on the download traffic but it has negligible impact on the upload. Both sides of the traffic and its granularity have to be taken into account to design accurate P2P IPTV traffic models.<|reference_end|>
arxiv
@article{silverston2007characterization, title={Characterization of P2P IPTV Traffic: Scaling Analysis}, author={Thomas Silverston and Olivier Fourmaux and Kave Salamatian}, journal={arXiv preprint arXiv:0704.3228}, year={2007}, archivePrefix={arXiv}, eprint={0704.3228}, primaryClass={cs.NI cs.MM} }
silverston2007characterization
arxiv-140
0704.3238
Alternative axiomatics and complexity of deliberative STIT theories
<|reference_start|>Alternative axiomatics and complexity of deliberative STIT theories: We propose two alternatives to Xu's axiomatization of the Chellas STIT. The first one also provides an alternative axiomatization of the deliberative STIT. The second one starts from the idea that the historic necessity operator can be defined as an abbreviation of operators of agency, and can thus be eliminated from the logic of the Chellas STIT. The second axiomatization also allows us to establish that the problem of deciding the satisfiability of a STIT formula without temporal operators is NP-complete in the single-agent case, and is NEXPTIME-complete in the multiagent case, both for the deliberative and the Chellas' STIT.<|reference_end|>
arxiv
@article{balbiani2007alternative, title={Alternative axiomatics and complexity of deliberative STIT theories}, author={Philippe Balbiani, Andreas Herzig and Nicolas Troquard}, journal={arXiv preprint arXiv:0704.3238}, year={2007}, doi={10.1007/s10992-007-9078-7}, archivePrefix={arXiv}, eprint={0704.3238}, primaryClass={cs.LO} }
balbiani2007alternative
arxiv-141
0704.3241
Neighbor Discovery in Wireless Networks:A Multiuser-Detection Approach
<|reference_start|>Neighbor Discovery in Wireless Networks:A Multiuser-Detection Approach: We examine the problem of determining which nodes are neighbors of a given one in a wireless network. We consider an unsupervised network operating on a frequency-flat Gaussian channel, where $K+1$ nodes associate their identities to nonorthogonal signatures, transmitted at random times, synchronously, and independently. A number of neighbor-discovery algorithms, based on different optimization criteria, are introduced and analyzed. Numerical results show how reduced-complexity algorithms can achieve a satisfactory performance.<|reference_end|>
arxiv
@article{angelosante2007neighbor, title={Neighbor Discovery in Wireless Networks: A Multiuser-Detection Approach}, author={Daniele Angelosante and Ezio Biglieri and Marco Lops}, journal={arXiv preprint arXiv:0704.3241}, year={2007}, archivePrefix={arXiv}, eprint={0704.3241}, primaryClass={cs.IT math.IT} }
angelosante2007neighbor
arxiv-142
0704.3268
2D Path Solutions from a Single Layer Excitable CNN Model
<|reference_start|>2D Path Solutions from a Single Layer Excitable CNN Model: An easily implementable path solution algorithm for 2D spatial problems, based on excitable/programmable characteristics of a specific cellular nonlinear network (CNN) model is presented and numerically investigated. The network is a single layer bioinspired model which was also implemented in CMOS technology. It exhibits excitable characteristics with regionally bistable cells. The related response realizes propagations of trigger autowaves, where the excitable mode can be globally preset and reset. It is shown that obstacle distributions in 2D space can also be directly mapped onto the coupled cell array in the network. Combining these two features, the network model can serve as the main block in a 2D path computing processor. The related algorithm and configurations are numerically experimented with circuit level parameters and performance estimations are also presented. The simplicity of the model also allows alternative technology and device level implementation, which may become critical in autonomous processor design of related micro or nanoscale robotic applications.<|reference_end|>
arxiv
@article{karahaliloglu20072d, title={2D Path Solutions from a Single Layer Excitable CNN Model}, author={Koray Karahaliloglu}, journal={arXiv preprint arXiv:0704.3268}, year={2007}, archivePrefix={arXiv}, eprint={0704.3268}, primaryClass={cs.RO cs.NE} }
karahaliloglu20072d
arxiv-143
0704.3287
Sample size cognizant detection of signals in white noise
<|reference_start|>Sample size cognizant detection of signals in white noise: The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a computationally simple, sample eigenvalue based procedure for estimating the number of high-dimensional signals in white noise when there are relatively few samples. We highlight a fundamental asymptotic limit of sample eigenvalue based detection of weak high-dimensional signals from a limited sample size and discuss its implication for the detection of two closely spaced signals. This motivates our heuristic definition of the 'effective number of identifiable signals.' Numerical simulations are used to demonstrate the consistency of the algorithm with respect to the effective number of signals and the superior performance of the algorithm with respect to Wax and Kailath's "asymptotically consistent" MDL based estimator.<|reference_end|>
arxiv
@article{rao2007sample, title={Sample size cognizant detection of signals in white noise}, author={N. Raj Rao and Alan Edelman}, journal={arXiv preprint arXiv:0704.3287}, year={2007}, archivePrefix={arXiv}, eprint={0704.3287}, primaryClass={cs.IT math.IT} }
rao2007sample
arxiv-144
0704.3292
Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks
<|reference_start|>Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks: In wireless packet-forwarding networks with selfish nodes, applications of a repeated game can induce the nodes to forward each others' packets, so that the network performance can be improved. However, the nodes on the boundary of such networks cannot benefit from this strategy, as the other nodes do not depend on them. This problem is sometimes known as the curse of the boundary nodes. To overcome this problem, an approach based on coalition games is proposed, in which the boundary nodes can use cooperative transmission to help the backbone nodes in the middle of the network. In return, the backbone nodes are willing to forward the boundary nodes' packets. The stability of the coalitions is studied using the concept of a core. Then two types of fairness, namely, the min-max fairness using nucleolus and the average fairness using the Shapley function are investigated. Finally, a protocol is designed using both repeated games and coalition games. Simulation results show how boundary nodes and backbone nodes form coalitions together according to different fairness criteria. The proposed protocol can improve the network connectivity by about 50%, compared with pure repeated game schemes.<|reference_end|>
arxiv
@article{han2007coalition, title={Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks}, author={Zhu Han and H. Vincent Poor}, journal={in the Proceedings of the 5th International Symposium on Modeling and Optimization in Mobile Ad Hoc and Wireless Networks, WiOpt07, Limassol, Cyprus, April 16-20, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0704.3292}, primaryClass={cs.IT math.IT} }
han2007coalition
arxiv-145
0704.3313
Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters
<|reference_start|>Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters: We introduce the straggler identification problem, in which an algorithm must determine the identities of the remaining members of a set after it has had a large number of insertion and deletion operations performed on it, and now has relatively few remaining members. The goal is to do this in o(n) space, where n is the total number of identities. The straggler identification problem has applications, for example, in determining the set of unacknowledged packets in a high-bandwidth multicast data stream. We provide a deterministic solution to the straggler identification problem that uses only O(d log n) bits and is based on a novel application of Newton's identities for symmetric polynomials. This solution can identify any subset of d stragglers from a set of n O(log n)-bit identifiers, assuming that there are no false deletions of identities not already in the set. Indeed, we give a lower bound argument that shows that any small-space deterministic solution to the straggler identification problem cannot be guaranteed to handle false deletions. Nevertheless, we show that there is a simple randomized solution using O(d log n log(1/epsilon)) bits that can maintain a multiset and solve the straggler identification problem, tolerating false deletions, where epsilon>0 is a user-defined parameter bounding the probability of an incorrect response. This randomized solution is based on a new type of Bloom filter, which we call the invertible Bloom filter.<|reference_end|>
arxiv
@article{eppstein2007straggler, title={Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters}, author={David Eppstein and Michael T. Goodrich}, journal={arXiv preprint arXiv:0704.3313}, year={2007}, archivePrefix={arXiv}, eprint={0704.3313}, primaryClass={cs.DS} }
eppstein2007straggler
arxiv-146
0704.3316
Vocabulary growth in collaborative tagging systems
<|reference_start|>Vocabulary growth in collaborative tagging systems: We analyze a large-scale snapshot of del.icio.us and investigate how the number of different tags in the system grows as a function of a suitably defined notion of time. We study the temporal evolution of the global vocabulary size, i.e. the number of distinct tags in the entire system, as well as the evolution of local vocabularies, that is the growth of the number of distinct tags used in the context of a given resource or user. In both cases, we find power-law behaviors with exponents smaller than one. Surprisingly, the observed growth behaviors are remarkably regular throughout the entire history of the system and across very different resources being bookmarked. Similar sub-linear laws of growth have been observed in written text, and this qualitative universality calls for an explanation and points in the direction of non-trivial cognitive processes in the complex interaction patterns characterizing collaborative tagging.<|reference_end|>
arxiv
@article{cattuto2007vocabulary, title={Vocabulary growth in collaborative tagging systems}, author={Ciro Cattuto and Andrea Baldassarri and Vito D. P. Servedio and Vittorio Loreto}, journal={arXiv preprint arXiv:0704.3316}, year={2007}, archivePrefix={arXiv}, eprint={0704.3316}, primaryClass={cs.IR cond-mat.stat-mech cs.CY physics.data-an} }
cattuto2007vocabulary
arxiv-147
0704.3359
Direct Optimization of Ranking Measures
<|reference_start|>Direct Optimization of Ranking Measures: Web page ranking and collaborative filtering require the optimization of sophisticated performance measures. Current Support Vector approaches are unable to optimize them directly and focus on pairwise comparisons instead. We present a new approach which allows direct optimization of the relevant loss functions. This is achieved via structured estimation in Hilbert spaces. It is most related to Max-Margin-Markov networks optimization of multivariate performance measures. Key to our approach is that during training the ranking problem can be viewed as a linear assignment problem, which can be solved by the Hungarian Marriage algorithm. At test time, a sort operation is sufficient, as our algorithm assigns a relevance score to every (document, query) pair. Experiments show that our algorithm is fast and that it works very well.<|reference_end|>
arxiv
@article{le2007direct, title={Direct Optimization of Ranking Measures}, author={Quoc Le and Alexander Smola}, journal={arXiv preprint arXiv:0704.3359}, year={2007}, archivePrefix={arXiv}, eprint={0704.3359}, primaryClass={cs.IR cs.AI} }
le2007direct
arxiv-148
0704.3391
Lifetime Improvement in Wireless Sensor Networks via Collaborative Beamforming and Cooperative Transmission
<|reference_start|>Lifetime Improvement in Wireless Sensor Networks via Collaborative Beamforming and Cooperative Transmission: Collaborative beamforming (CB) and cooperative transmission (CT) have recently emerged as communication techniques that can make effective use of collaborative/cooperative nodes to create a virtual multiple-input/multiple-output (MIMO) system. Extending the lifetime of networks composed of battery-operated nodes is a key issue in the design and operation of wireless sensor networks. This paper considers the effects on network lifetime of allowing closely located nodes to use CB/CT to reduce the load or even to avoid packet-forwarding requests to nodes that have critical battery life. First, the effectiveness of CB/CT in improving the signal strength at a faraway destination using energy in nearby nodes is studied. Then, the performance improvement obtained by this technique is analyzed for a special 2D disk case. Further, for general networks in which information-generation rates are fixed, a new routing problem is formulated as a linear programming problem, while for other general networks, the cost for routing is dynamically adjusted according to the amount of energy remaining and the effectiveness of CB/CT. From the analysis and the simulation results, it is seen that the proposed method can reduce the payloads of energy-depleting nodes by about 90% in the special case network considered and improve the lifetimes of general networks by about 10%, compared with existing techniques.<|reference_end|>
arxiv
@article{han2007lifetime, title={Lifetime Improvement in Wireless Sensor Networks via Collaborative Beamforming and Cooperative Transmission}, author={Zhu Han and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3391}, year={2007}, archivePrefix={arXiv}, eprint={0704.3391}, primaryClass={cs.IT math.IT} }
han2007lifetime
arxiv-149
0704.3395
General-Purpose Computing on a Semantic Network Substrate
<|reference_start|>General-Purpose Computing on a Semantic Network Substrate: This article presents a model of general-purpose computing on a semantic network substrate. The concepts presented are applicable to any semantic network representation. However, due to the standards and technological infrastructure devoted to the Semantic Web effort, this article is presented from this point of view. In the proposed model of computing, the application programming interface, the run-time program, and the state of the computing virtual machine are all represented in the Resource Description Framework (RDF). The implementation of the concepts presented provides a practical computing paradigm that leverages the highly-distributed and standardized representational-layer of the Semantic Web.<|reference_end|>
arxiv
@article{rodriguez2007general-purpose, title={General-Purpose Computing on a Semantic Network Substrate}, author={Marko A. Rodriguez}, journal={Emergent Web Intelligence: Advanced Semantic Technologies, Advanced Information and Knowledge Processing series, Springer-Verlag, pages 57-104, ISBN:978-1-84996-076-2, June 2010}, year={2007}, number={LA-UR-07-2885}, archivePrefix={arXiv}, eprint={0704.3395}, primaryClass={cs.AI cs.PL} }
rodriguez2007general-purpose
arxiv-150
0704.3396
Lifetime Improvement of Wireless Sensor Networks by Collaborative Beamforming and Cooperative Transmission
<|reference_start|>Lifetime Improvement of Wireless Sensor Networks by Collaborative Beamforming and Cooperative Transmission: Extending network lifetime of battery-operated devices is a key design issue that allows uninterrupted information exchange among distributive nodes in wireless sensor networks. Collaborative beamforming (CB) and cooperative transmission (CT) have recently emerged as new communication techniques that enable and leverage effective resource sharing among collaborative/cooperative nodes. In this paper, we seek to maximize the lifetime of sensor networks by using the new idea that closely located nodes can use CB/CT to reduce the load or even avoid packet forwarding requests to nodes that have critical battery life. First, we study the effectiveness of CB/CT to improve the signal strength at a faraway destination using energy in nearby nodes. Then, a 2D disk case is analyzed to assess the resulting performance improvement. For general networks, if information-generation rates are fixed, the new routing problem is formulated as a linear programming problem; otherwise, the cost for routing is dynamically adjusted according to the amount of energy remaining and the effectiveness of CB/CT. From the analysis and simulation results, it is seen that the proposed schemes can improve the lifetime by about 90% in the 2D disk network and by about 10% in the general networks, compared to existing schemes.<|reference_end|>
arxiv
@article{han2007lifetime, title={Lifetime Improvement of Wireless Sensor Networks by Collaborative Beamforming and Cooperative Transmission}, author={Zhu Han and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3396}, year={2007}, archivePrefix={arXiv}, eprint={0704.3396}, primaryClass={cs.IT math.IT} }
han2007lifetime
arxiv-151
0704.3399
Cooperative Transmission Protocols with High Spectral Efficiency and High Diversity Order Using Multiuser Detection and Network Coding
<|reference_start|>Cooperative Transmission Protocols with High Spectral Efficiency and High Diversity Order Using Multiuser Detection and Network Coding: Cooperative transmission is an emerging communication technique that takes advantage of the broadcast nature of wireless channels. However, due to low spectral efficiency and the requirement of orthogonal channels, its potential for use in future wireless networks is limited. In this paper, by making use of multiuser detection (MUD) and network coding, cooperative transmission protocols with high spectral efficiency, diversity order, and coding gain are developed. Compared with the traditional cooperative transmission protocols with single-user detection, in which the diversity gain is only for one source user, the proposed MUD cooperative transmission protocols have the merit that the improvement of one user's link can also benefit the other users. In addition, using MUD at the relay provides an environment in which network coding can be employed. The coding gain and high diversity order can be obtained by fully utilizing the link between the relay and the destination. From the analysis and simulation results, it is seen that the proposed protocols achieve higher diversity gain, better asymptotic efficiency, and lower bit error rate, compared to traditional MUD and to existing cooperative transmission protocols.<|reference_end|>
arxiv
@article{han2007cooperative, title={Cooperative Transmission Protocols with High Spectral Efficiency and High Diversity Order Using Multiuser Detection and Network Coding}, author={Zhu Han and Xin Zhang and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3399}, year={2007}, doi={10.1109/ICC.2007.698}, archivePrefix={arXiv}, eprint={0704.3399}, primaryClass={cs.IT math.IT} }
han2007cooperative
arxiv-152
0704.3402
Diversity-Multiplexing Tradeoff in Selective-Fading MIMO Channels
<|reference_start|>Diversity-Multiplexing Tradeoff in Selective-Fading MIMO Channels: We establish the optimal diversity-multiplexing (DM) tradeoff of coherent time, frequency and time-frequency selective-fading MIMO channels and provide a code design criterion for DM-tradeoff optimality. Our results are based on the analysis of the "Jensen channel" associated to a given selective-fading MIMO channel. While the original problem seems analytically intractable due to the mutual information being a sum of correlated random variables, the Jensen channel is equivalent to the original channel in the sense of the DM-tradeoff and lends itself nicely to analytical treatment. Finally, as a consequence of our results, we find that the classical rank criterion for space-time code design (in selective-fading MIMO channels) ensures optimality in the sense of the DM-tradeoff.<|reference_end|>
arxiv
@article{coronel2007diversity-multiplexing, title={Diversity-Multiplexing Tradeoff in Selective-Fading MIMO Channels}, author={Pedro Coronel and Helmut B\"olcskei}, journal={arXiv preprint arXiv:0704.3402}, year={2007}, archivePrefix={arXiv}, eprint={0704.3402}, primaryClass={cs.IT math.IT} }
coronel2007diversity-multiplexing
arxiv-153
0704.3405
Estimation Diversity and Energy Efficiency in Distributed Sensing
<|reference_start|>Estimation Diversity and Energy Efficiency in Distributed Sensing: Distributed estimation based on measurements from multiple wireless sensors is investigated. It is assumed that a group of sensors observe the same quantity in independent additive observation noises with possibly different variances. The observations are transmitted using amplify-and-forward (analog) transmissions over non-ideal fading wireless channels from the sensors to a fusion center, where they are combined to generate an estimate of the observed quantity. Assuming that the Best Linear Unbiased Estimator (BLUE) is used by the fusion center, the equal-power transmission strategy is first discussed, where the system performance is analyzed by introducing the concept of estimation outage and estimation diversity, and it is shown that there is an achievable diversity gain on the order of the number of sensors. The optimal power allocation strategies are then considered for two cases: minimum distortion under power constraints; and minimum power under distortion constraints. In the first case, it is shown that by turning off bad sensors, i.e., sensors with bad channels and bad observation quality, adaptive power gain can be achieved without sacrificing diversity gain. Here, the adaptive power gain is similar to the array gain achieved in Multiple-Input Single-Output (MISO) multi-antenna systems when channel conditions are known to the transmitter. In the second case, the sum power is minimized under zero-outage estimation distortion constraint, and some related energy efficiency issues in sensor networks are discussed.<|reference_end|>
arxiv
@article{cui2007estimation, title={Estimation Diversity and Energy Efficiency in Distributed Sensing}, author={Shuguang Cui and Jinjun Xiao and Andrea Goldsmith and Zhi-Quan Luo and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3405}, year={2007}, doi={10.1109/TSP.2007.896019}, archivePrefix={arXiv}, eprint={0704.3405}, primaryClass={cs.IT math.IT} }
cui2007estimation
arxiv-154
0704.3408
The Trade-off between Processing Gains of an Impulse Radio UWB System in the Presence of Timing Jitter
<|reference_start|>The Trade-off between Processing Gains of an Impulse Radio UWB System in the Presence of Timing Jitter: In time hopping impulse radio, $N_f$ pulses of duration $T_c$ are transmitted for each information symbol. This gives rise to two types of processing gain: (i) pulse combining gain, which is a factor $N_f$, and (ii) pulse spreading gain, which is $N_c=T_f/T_c$, where $T_f$ is the mean interval between two subsequent pulses. This paper investigates the trade-off between these two types of processing gain in the presence of timing jitter. First, an additive white Gaussian noise (AWGN) channel is considered and approximate closed form expressions for bit error probability are derived for impulse radio systems with and without pulse-based polarity randomization. Both symbol-synchronous and chip-synchronous scenarios are considered. The effects of multiple-access interference and timing jitter on the selection of optimal system parameters are explained through theoretical analysis. Finally, a multipath scenario is considered and the trade-off between processing gains of a synchronous impulse radio system with pulse-based polarity randomization is analyzed. The effects of the timing jitter, multiple-access interference and inter-frame interference are investigated. Simulation studies support the theoretical results.<|reference_end|>
arxiv
@article{gezici2007the, title={The Trade-off between Processing Gains of an Impulse Radio UWB System in the Presence of Timing Jitter}, author={Sinan Gezici and Andreas F. Molisch and H. Vincent Poor and Hisashi Kobayashi}, journal={arXiv preprint arXiv:0704.3408}, year={2007}, archivePrefix={arXiv}, eprint={0704.3408}, primaryClass={cs.IT math.IT} }
gezici2007the
arxiv-155
0704.3433
Bayesian approach to rough set
<|reference_start|>Bayesian approach to rough set: This paper proposes an approach to training rough set models within a Bayesian framework, trained using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.<|reference_end|>
arxiv
@article{marwala2007bayesian, title={Bayesian approach to rough set}, author={Tshilidzi Marwala and Bodie Crossingham}, journal={arXiv preprint arXiv:0704.3433}, year={2007}, archivePrefix={arXiv}, eprint={0704.3433}, primaryClass={cs.AI} }
marwala2007bayesian
arxiv-156
0704.3434
On sensing capacity of sensor networks for the class of linear observation, fixed SNR models
<|reference_start|>On sensing capacity of sensor networks for the class of linear observation, fixed SNR models: In this paper we address the problem of finding the sensing capacity of sensor networks for a class of linear observation models and a fixed SNR regime. Sensing capacity is defined as the maximum number of signal dimensions reliably identified per sensor observation. In this context sparsity of the phenomena is a key feature that determines sensing capacity. Setting aside the SNR of the environment, the effect of sparsity on the number of measurements required for accurate reconstruction of a sparse phenomenon has been widely dealt with under compressed sensing. Nevertheless, the development there was motivated from an algorithmic perspective. In this paper our aim is to derive these bounds in an information-theoretic set-up and thus provide algorithm-independent conditions for reliable reconstruction of sparse signals. In this direction we first generalize Fano's inequality and provide lower bounds on the probability of error in reconstruction subject to an arbitrary distortion criterion. Using these lower bounds on the probability of error, we derive upper bounds on sensing capacity and show that in the fixed SNR regime sensing capacity goes to zero as sparsity goes to zero. This means that disproportionately more sensors are required to monitor very sparse events. Our next main contribution is to show the effect of sensing diversity on sensing capacity, an effect that has not been considered before. Sensing diversity is related to the effective \emph{coverage} of a sensor with respect to the field. In this direction we show the following results: (a) sensing capacity goes down as sensing diversity per sensor goes down; (b) random sampling (coverage) of the field by sensors is better than contiguous location sampling (coverage).<|reference_end|>
arxiv
@article{aeron2007on, title={On sensing capacity of sensor networks for the class of linear observation, fixed SNR models}, author={Shuchin Aeron and Manqi Zhao and Venkatesh Saligrama}, journal={arXiv preprint arXiv:0704.3434}, year={2007}, archivePrefix={arXiv}, eprint={0704.3434}, primaryClass={cs.IT math.IT} }
aeron2007on
arxiv-157
0704.3453
An Adaptive Strategy for the Classification of G-Protein Coupled Receptors
<|reference_start|>An Adaptive Strategy for the Classification of G-Protein Coupled Receptors: One of the major problems in computational biology is the inability of existing classification models to incorporate expanding and new domain knowledge. This problem of static classification models is addressed in this paper by the introduction of incremental learning for problems in bioinformatics. Many machine learning tools have been applied to this problem using static machine learning structures, such as neural networks or support vector machines, that are unable to accommodate new information into their existing models. We utilize the fuzzy ARTMAP as an alternative machine learning system that has the ability to incrementally learn new data as they become available. The fuzzy ARTMAP is found to be comparable to many of the widespread machine learning systems. The use of an evolutionary strategy in the selection and combination of individual classifiers into an ensemble system, coupled with the incremental learning ability of the fuzzy ARTMAP, is proven to be suitable as a pattern classifier. The algorithm presented is tested using data from the G-Protein Coupled Receptors Database and shows a good accuracy of 83%. The system presented is also generally applicable, and can be used in problems in genomics and proteomics.<|reference_end|>
arxiv
@article{mohamed2007an, title={An Adaptive Strategy for the Classification of G-Protein Coupled Receptors}, author={S. Mohamed and D. Rubin and T. Marwala}, journal={arXiv preprint arXiv:0704.3453}, year={2007}, archivePrefix={arXiv}, eprint={0704.3453}, primaryClass={cs.AI q-bio.QM} }
mohamed2007an
arxiv-158
0704.3496
Polynomial algorithms for protein similarity search for restricted mRNA structures
<|reference_start|>Polynomial algorithms for protein similarity search for restricted mRNA structures: In this paper we consider the problem of computing an mRNA sequence of maximal similarity for a given mRNA with secondary structure constraints, introduced by Backofen et al. in [BNS02] and denoted the MRSO problem. The problem is known to be NP-complete for planar associated implied structure graphs of vertex degree at most 3. In [BFHV05] a first polynomial dynamic programming algorithm for MRSO on implied structure graphs with maximum vertex degree 3 and bounded cut-width is shown. We give a simple but more general polynomial dynamic programming solution for the MRSO problem for associated implied structure graphs of bounded clique-width. Our result implies that MRSO is polynomial for graphs of bounded tree-width, co-graphs, $P_4$-sparse graphs, and distance hereditary graphs. Further, we conclude that the problem of comparing two solutions for MRSO is hard for the class of problems which can be solved in polynomial time with a number of parallel queries to an oracle in NP.<|reference_end|>
arxiv
@article{gurski2007polynomial, title={Polynomial algorithms for protein similarity search for restricted mRNA structures}, author={Frank Gurski}, journal={arXiv preprint arXiv:0704.3496}, year={2007}, archivePrefix={arXiv}, eprint={0704.3496}, primaryClass={cs.DS cs.CC} }
gurski2007polynomial
arxiv-159
0704.3500
Une plate-forme dynamique pour l'\'evaluation des performances des bases de donn\'ees \`a objets
<|reference_start|>Une plate-forme dynamique pour l'\'evaluation des performances des bases de donn\'ees \`a objets: In object-oriented or object-relational databases such as multimedia databases or most XML databases, access patterns are not static, i.e., applications do not always access the same objects in the same order repeatedly. However, this has been the way these databases and associated optimisation techniques such as clustering have been evaluated up to now. This paper opens up research regarding this issue by proposing a dynamic object evaluation framework (DOEF). DOEF accomplishes access pattern change by defining configurable styles of change. It is a preliminary prototype that has been designed to be open and fully extensible. Though originally designed for the object-oriented model, it can also be used within the object-relational model with few adaptations. Furthermore, new access pattern change models can be added too. To illustrate the capabilities of DOEF, we conducted two different sets of experiments. In the first set of experiments, we used DOEF to compare the performances of four state of the art dynamic clustering algorithms. The results show that DOEF is effective at determining the adaptability of each dynamic clustering algorithm to changes in access pattern. They also led us to conclude that dynamic clustering algorithms can cope with moderate levels of access pattern change, but that performance rapidly degrades to be worse than no clustering when vigorous styles of access pattern change are applied. In the second set of experiments, we used DOEF to compare the performance of two different object stores: Platypus and SHORE. The use of DOEF exposed the poor swapping performance of Platypus.<|reference_end|>
arxiv
@article{he2007une, title={Une plate-forme dynamique pour l'\'evaluation des performances des bases de donn\'ees \`a objets}, author={Zhen He and J\'er\^ome Darmont (ERIC)}, journal={19\`emes Journ\'ees de Bases de Donn\'ees Avanc\'ees (BDA 03), Lyon (20/10/2003) 423-442}, year={2007}, archivePrefix={arXiv}, eprint={0704.3500}, primaryClass={cs.DB} }
he2007une
arxiv-160
0704.3501
Conception d'un banc d'essais d\'ecisionnel
<|reference_start|>Conception d'un banc d'essais d\'ecisionnel: We present in this paper a new benchmark for evaluating the performances of data warehouses. Benchmarking is useful either to system users for comparing the performances of different systems, or to system engineers for testing the effect of various design choices. While the TPC (Transaction Processing Performance Council) standard benchmarks address the first point, they are not tuneable enough to address the second one. Our Data Warehouse Engineering Benchmark (DWEB) allows the generation of various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized. However, two levels of parameterization keep it easy to tune. Since DWEB mainly meets engineering benchmarking needs, it is complementary to the TPC standard benchmarks, and not a competitor. Finally, DWEB is implemented as free Java software that can be interfaced with most existing relational database management systems.<|reference_end|>
arxiv
@article{darmont2007conception, title={Conception d'un banc d'essais d\'ecisionnel}, author={J\'er\^ome Darmont (ERIC) and Fadila Bentayeb (ERIC) and Omar Boussa\"id (ERIC)}, journal={20\`emes Journ\'ees Bases de Donn\'ees Avanc\'ees (BDA 04), Montpellier (19/10/2004) 493-511}, year={2007}, archivePrefix={arXiv}, eprint={0704.3501}, primaryClass={cs.DB} }
darmont2007conception
arxiv-161
0704.3504
Smooth R\'enyi Entropy of Ergodic Quantum Information Sources
<|reference_start|>Smooth R\'enyi Entropy of Ergodic Quantum Information Sources: We prove that the average smooth R\'enyi entropy rate will approach the entropy rate of a stationary, ergodic information source, which is equal to the Shannon entropy rate for a classical information source and the von Neumann entropy rate for a quantum information source.<|reference_end|>
arxiv
@article{schoenmakers2007smooth, title={Smooth R\'enyi Entropy of Ergodic Quantum Information Sources}, author={Berry Schoenmakers and Jilles Tjoelker and Pim Tuyls and Evgeny Verbitskiy}, journal={arXiv preprint arXiv:0704.3504}, year={2007}, archivePrefix={arXiv}, eprint={0704.3504}, primaryClass={quant-ph cs.IT math.IT} }
schoenmakers2007smooth
arxiv-162
0704.3515
Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition
<|reference_start|>Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition: Noise, corruptions and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neural-network classifiers capable of learning from noisy data have been suggested. However, on large face data sets such systems cannot provide a high level of robustness. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of predictive accuracy on face images corrupted by noise.<|reference_end|>
arxiv
@article{uglov2007comparing, title={Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition}, author={J. Uglov and V. Schetinin and C. Maple}, journal={arXiv preprint arXiv:0704.3515}, year={2007}, doi={10.1155/2008/468693}, archivePrefix={arXiv}, eprint={0704.3515}, primaryClass={cs.AI} }
uglov2007comparing
arxiv-163
0704.3520
Vers l'auto-administration des entrep\^ots de donn\'ees
<|reference_start|>Vers l'auto-administration des entrep\^ots de donn\'ees: With the wide development of databases in general and data warehouses in particular, it is important to reduce the tasks that a database administrator must perform manually. The idea of using data mining techniques to extract useful knowledge for administration from the data themselves has existed for some years. However, little research has been carried out. The aim of this study is to search for a way of extracting useful knowledge from stored data to automatically apply performance optimization techniques, and more particularly indexing techniques. We have designed a tool that extracts frequent itemsets from a given workload to compute an index configuration that helps optimize data access time. The experiments we performed showed that the index configurations generated by our tool allowed performance gains of 15% to 25% on a test database and a test data warehouse.<|reference_end|>
arxiv
@article{aouiche2007vers, title={Vers l'auto-administration des entrep\^ots de donn\'ees}, author={Kamel Aouiche (ERIC) and J\'er\^ome Darmont (ERIC)}, journal={XXXV\`emes Journ\'ees de Statistique, Session sp\'eciale Entreposage et Fouille de Donn\'ees, Lyon (02/06/2003) 105-108}, year={2007}, archivePrefix={arXiv}, eprint={0704.3520}, primaryClass={cs.DB} }
aouiche2007vers
arxiv-164
0704.3536
$\delta$-sequences and Evaluation Codes defined by Plane Valuations at Infinity
<|reference_start|>$\delta$-sequences and Evaluation Codes defined by Plane Valuations at Infinity: We introduce the concept of $\delta$-sequence. A $\delta$-sequence $\Delta$ generates a well-ordered semigroup $S$ in $\mathbb{Z}^2$ or $\mathbb{R}$. We show how to construct (and compute parameters) for the dual code of any evaluation code associated with a weight function defined by $\Delta$ from the polynomial ring in two indeterminates to a semigroup $S$ as above. We prove that this is a simple procedure which can be understood by considering a particular class of valuations of function fields of surfaces, called plane valuations at infinity. We also give algorithms to construct an unlimited number of $\delta$-sequences of the different existing types, and so this paper provides the tools to know and use a new large set of codes.<|reference_end|>
arxiv
@article{galindo2007delta-sequences, title={$\delta$-sequences and Evaluation Codes defined by Plane Valuations at Infinity}, author={C. Galindo and F. Monserrat}, journal={arXiv preprint arXiv:0704.3536}, year={2007}, archivePrefix={arXiv}, eprint={0704.3536}, primaryClass={cs.IT math.IT} }
galindo2007delta-sequences
arxiv-165
0704.3573
Simulating spin systems on IANUS, an FPGA-based computer
<|reference_start|>Simulating spin systems on IANUS, an FPGA-based computer: We describe the hardwired implementation of algorithms for Monte Carlo simulations of a large class of spin models. We have implemented these algorithms as VHDL codes and we have mapped them onto a dedicated processor based on a large FPGA device. The measured performance on one such processor is comparable to O(100) carefully programmed high-end PCs: it turns out to be even better for some selected spin models. We describe here codes that we are currently executing on the IANUS massively parallel FPGA-based system.<|reference_end|>
arxiv
@article{belletti2007simulating, title={Simulating spin systems on IANUS, an FPGA-based computer}, author={F. Belletti and M. Cotallo and A. Cruz and L. A. Fern\'andez and A. Gordillo and A. Maiorano and F. Mantovani and E. Marinari and V. Mart\'in-Mayor and A. Mu\~noz-Sudupe and D. Navarro and S. P\'erez-Gaviro and J. J. Ruiz-Lorenzo and S. F. Schifano and D. Sciretti and A. Taranc\'on and R. Tripiccione and J. L. Velasco}, journal={Computer Physics Communications, 178 (3), p.208-216, (2008)}, year={2007}, doi={10.1016/j.cpc.2007.09.006}, archivePrefix={arXiv}, eprint={0704.3573}, primaryClass={cond-mat.dis-nn cs.AR} }
belletti2007simulating
arxiv-166
0704.3588
On Energy Efficient Hierarchical Cross-Layer Design: Joint Power Control and Routing for Ad Hoc Networks
<|reference_start|>On Energy Efficient Hierarchical Cross-Layer Design: Joint Power Control and Routing for Ad Hoc Networks: In this paper, a hierarchical cross-layer design approach is proposed to increase energy efficiency in ad hoc networks through joint adaptation of nodes' transmitting powers and route selection. The design maintains the advantages of the classic OSI model, while accounting for the cross-coupling between layers, through information sharing. The proposed joint power control and routing algorithm is shown to increase significantly the overall energy efficiency of the network, at the expense of a moderate increase in complexity. Performance enhancement of the joint design using multiuser detection is also investigated, and it is shown that the use of multiuser detection can increase the capacity of the ad hoc network significantly for a given level of energy consumption.<|reference_end|>
arxiv
@article{comaniciu2007on, title={On Energy Efficient Hierarchical Cross-Layer Design: Joint Power Control and Routing for Ad Hoc Networks}, author={Cristina Comaniciu and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.3588}, year={2007}, archivePrefix={arXiv}, eprint={0704.3588}, primaryClass={cs.IT math.IT} }
comaniciu2007on
arxiv-167
0704.3591
Capacity of a Class of Modulo-Sum Relay Channels
<|reference_start|>Capacity of a Class of Modulo-Sum Relay Channels: This paper characterizes the capacity of a class of modulo additive noise relay channels, in which the relay observes a corrupted version of the noise and has a separate channel to the destination. The capacity is shown to be strictly below the cut-set bound in general and achievable using a quantize-and-forward strategy at the relay. This result confirms a conjecture by Ahlswede and Han about the capacity of channels with rate limited state information at the destination for this particular class of channels.<|reference_end|>
arxiv
@article{aleksic2007capacity, title={Capacity of a Class of Modulo-Sum Relay Channels}, author={Marko Aleksic and Peyman Razaghi and Wei Yu}, journal={arXiv preprint arXiv:0704.3591}, year={2007}, archivePrefix={arXiv}, eprint={0704.3591}, primaryClass={cs.IT math.IT} }
aleksic2007capacity
arxiv-168
0704.3635
Rough Sets Computations to Impute Missing Data
<|reference_start|>Rough Sets Computations to Impute Missing Data: Many techniques for handling missing data have been proposed in the literature. Most of these techniques are overly complex. This paper explores an imputation technique based on rough set computations. In this paper, characteristic relations are introduced to describe incompletely specified decision tables. It is shown that the basic rough set idea of lower and upper approximations for incompletely specified decision tables may be defined in a variety of different ways. Empirical results obtained using real data are given and they provide a valuable and promising insight into the problem of missing data. Missing data were predicted with an accuracy of up to 99%.<|reference_end|>
arxiv
@article{nelwamondo2007rough, title={Rough Sets Computations to Impute Missing Data}, author={Fulufhelo Vincent Nelwamondo and Tshilidzi Marwala}, journal={arXiv preprint arXiv:0704.3635}, year={2007}, archivePrefix={arXiv}, eprint={0704.3635}, primaryClass={cs.CV cs.IR} }
nelwamondo2007rough
arxiv-169
0704.3643
Sabbath Day Home Automation: "It's Like Mixing Technology and Religion"
<|reference_start|>Sabbath Day Home Automation: "It's Like Mixing Technology and Religion": We present a qualitative study of 20 American Orthodox Jewish families' use of home automation for religious purposes. These lead users offer insight into real-life, long-term experience with home automation technologies. We discuss how automation was seen by participants to contribute to spiritual experience and how participants oriented to the use of automation as a religious custom. We also discuss the relationship of home automation to family life. We draw design implications for the broader population, including surrender of control as a design resource, home technologies that support long-term goals and lifestyle choices, and respite from technology.<|reference_end|>
arxiv
@article{woodruff2007sabbath, title={Sabbath Day Home Automation: "It's Like Mixing Technology and Religion"}, author={Allison Woodruff and Sally Augustin and Brooke Foucault}, journal={arXiv preprint arXiv:0704.3643}, year={2007}, archivePrefix={arXiv}, eprint={0704.3643}, primaryClass={cs.HC} }
woodruff2007sabbath
arxiv-170
0704.3644
Capacity Gain from Two-Transmitter and Two-Receiver Cooperation
<|reference_start|>Capacity Gain from Two-Transmitter and Two-Receiver Cooperation: Capacity improvement from transmitter and receiver cooperation is investigated in a two-transmitter, two-receiver network with phase fading and full channel state information available at all terminals. The transmitters cooperate by first exchanging messages over an orthogonal transmitter cooperation channel, then encoding jointly with dirty paper coding. The receivers cooperate by using Wyner-Ziv compress-and-forward over an analogous orthogonal receiver cooperation channel. To account for the cost of cooperation, the allocation of network power and bandwidth among the data and cooperation channels is studied. It is shown that transmitter cooperation outperforms receiver cooperation and improves capacity over non-cooperative transmission under most operating conditions when the cooperation channel is strong. However, a weak cooperation channel limits the transmitter cooperation rate; in this case receiver cooperation is more advantageous. Transmitter-and-receiver cooperation offers sizable additional capacity gain over transmitter-only cooperation at low SNR, whereas at high SNR transmitter cooperation alone captures most of the cooperative capacity improvement.<|reference_end|>
arxiv
@article{ng2007capacity, title={Capacity Gain from Two-Transmitter and Two-Receiver Cooperation}, author={Chris T. K. Ng, Nihar Jindal, Andrea J. Goldsmith, Urbashi Mitra}, journal={arXiv preprint arXiv:0704.3644}, year={2007}, doi={10.1109/TIT.2007.904987}, archivePrefix={arXiv}, eprint={0704.3644}, primaryClass={cs.IT math.IT} }
ng2007capacity
arxiv-171
0704.3646
Lower Bounds on Implementing Robust and Resilient Mediators
<|reference_start|>Lower Bounds on Implementing Robust and Resilient Mediators: We consider games that have (k,t)-robust equilibria when played with a mediator, where an equilibrium is (k,t)-robust if it tolerates deviations by coalitions of size up to k and deviations by up to $t$ players with unknown utilities. We prove lower bounds that match upper bounds on the ability to implement such mediators using cheap talk (that is, just allowing communication among the players). The bounds depend on (a) the relationship between k, t, and n, the total number of players in the system; (b) whether players know the exact utilities of other players; (c) whether there are broadcast channels or just point-to-point channels; (d) whether cryptography is available; and (e) whether the game has a $(k+t)$-punishment strategy; that is, a strategy that, if used by all but at most $k+t$ players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy.<|reference_end|>
arxiv
@article{abraham2007lower, title={Lower Bounds on Implementing Robust and Resilient Mediators}, author={Ittai Abraham, Danny Dolev, and Joseph Y. Halpern}, journal={arXiv preprint arXiv:0704.3646}, year={2007}, archivePrefix={arXiv}, eprint={0704.3646}, primaryClass={cs.GT cs.CR cs.DC} }
abraham2007lower
arxiv-172
0704.3647
Evaluating Personal Archiving Strategies for Internet-based Information
<|reference_start|>Evaluating Personal Archiving Strategies for Internet-based Information: Internet-based personal digital belongings present different vulnerabilities than locally stored materials. We use responses to a survey of people who have recovered lost websites, in combination with supplementary interviews, to paint a fuller picture of current curatorial strategies and practices. We examine the types of personal, topical, and commercial websites that respondents have lost and the reasons they have lost this potentially valuable material. We further explore what they have tried to recover and how the loss influences their subsequent practices. We found that curation of personal digital materials in online stores bears some striking similarities to the curation of similar materials stored locally in that study participants continue to archive personal assets by relying on a combination of benign neglect, sporadic backups, and unsystematic file replication. However, we have also identified issues specific to Internet-based material: how risk is spread by distributing the files among multiple servers and services; the circular reasoning participants use when they discuss the safety of their digital assets; and the types of online material that are particularly vulnerable to loss. The study reveals ways in which expectations of permanence and notification are violated and situations in which benign neglect has far greater consequences for the long-term fate of important digital assets.<|reference_end|>
arxiv
@article{marshall2007evaluating, title={Evaluating Personal Archiving Strategies for Internet-based Information}, author={Catherine C. Marshall, Frank McCown, and Michael L. Nelson}, journal={arXiv preprint arXiv:0704.3647}, year={2007}, archivePrefix={arXiv}, eprint={0704.3647}, primaryClass={cs.DL cs.CY cs.HC} }
marshall2007evaluating
arxiv-173
0704.3653
The Long Term Fate of Our Digital Belongings: Toward a Service Model for Personal Archives
<|reference_start|>The Long Term Fate of Our Digital Belongings: Toward a Service Model for Personal Archives: We conducted a preliminary field study to understand the current state of personal digital archiving in practice. Our aim is to design a service for the long-term storage, preservation, and access of digital belongings by examining how personal archiving needs intersect with existing and emerging archiving technologies, best practices, and policies. Our findings not only confirmed that experienced home computer users are creating, receiving, and finding an increasing number of digital belongings, but also that they have already lost irreplaceable digital artifacts such as photos, creative efforts, and records. Although participants reported strategies such as backup and file replication for digital safekeeping, they were seldom able to implement them consistently. Four central archiving themes emerged from the data: (1) people find it difficult to evaluate the worth of accumulated materials; (2) personal storage is highly distributed both on- and offline; (3) people are experiencing magnified curatorial problems associated with managing files in the aggregate, creating appropriate metadata, and migrating materials to maintainable formats; and (4) facilities for long-term access are not supported by the current desktop metaphor. Four environmental factors further complicate archiving in consumer settings: the pervasive influence of malware; consumer reliance on ad hoc IT providers; an accretion of minor system and registry inconsistencies; and strong consumer beliefs about the incorruptibility of digital forms, the reliability of digital technologies, and the social vulnerability of networked storage.<|reference_end|>
arxiv
@article{marshall2007the, title={The Long Term Fate of Our Digital Belongings: Toward a Service Model for Personal Archives}, author={Catherine C. Marshall, Sara Bly, and Francoise Brun-Cottan}, journal={arXiv preprint arXiv:0704.3653}, year={2007}, archivePrefix={arXiv}, eprint={0704.3653}, primaryClass={cs.DL cs.CY cs.HC} }
marshall2007the
arxiv-174
0704.3662
An Automated Evaluation Metric for Chinese Text Entry
<|reference_start|>An Automated Evaluation Metric for Chinese Text Entry: In this paper, we propose an automated evaluation metric for text entry. We also consider possible improvements to existing text entry evaluation metrics, such as the minimum string distance error rate, keystrokes per character, cost per correction, and a unified approach proposed by MacKenzie, so they can accommodate the special characteristics of Chinese text. Current methods lack an integrated concern about both typing speed and accuracy for Chinese text entry evaluation. Our goal is to remove the bias that arises due to human factors. First, we propose a new metric, called the correction penalty (P), based on Fitts' law and Hick's law. Next, we transform it into the approximate amortized cost (AAC) of information theory. An analysis of the AAC of Chinese text input methods with different context lengths is also presented.<|reference_end|>
arxiv
@article{jiang2007an, title={An Automated Evaluation Metric for Chinese Text Entry}, author={Mike Tian-Jian Jiang, James Zhan, Jaimie Lin, Jerry Lin, Wen-Lien Hsu}, journal={Jiang, Mike Tian-Jian, et al. "Robustness analysis of adaptive chinese input methods." Advances in Text Input Methods (WTIM 2011) (2011): 53}, year={2007}, archivePrefix={arXiv}, eprint={0704.3662}, primaryClass={cs.HC cs.CL} }
jiang2007an
arxiv-175
0704.3665
On the Development of Text Input Method - Lessons Learned
<|reference_start|>On the Development of Text Input Method - Lessons Learned: Intelligent Input Methods (IM) are essential for making text entries in many East Asian scripts, but their application to other languages has not been fully explored. This paper discusses how such tools can contribute to the development of computer processing of other oriental languages. We propose a design philosophy that regards IM as a text service platform, and treats the study of IM as a cross disciplinary subject from the perspectives of software engineering, human-computer interaction (HCI), and natural language processing (NLP). We discuss these three perspectives and indicate a number of possible future research directions.<|reference_end|>
arxiv
@article{jiang2007on, title={On the Development of Text Input Method - Lessons Learned}, author={Mike Tian-Jian Jiang, Deng Liu, Meng-Juei Hsieh, Wen-Lien Hsu}, journal={arXiv preprint arXiv:0704.3665}, year={2007}, archivePrefix={arXiv}, eprint={0704.3665}, primaryClass={cs.CL cs.HC} }
jiang2007on
arxiv-176
0704.3674
Periodicity of certain piecewise affine planar maps
<|reference_start|>Periodicity of certain piecewise affine planar maps: We determine periodic and aperiodic points of certain piecewise affine maps in the Euclidean plane. Using these maps, we prove for $\lambda\in\{\frac{\pm1\pm\sqrt5}2,\pm\sqrt2,\pm\sqrt3\}$ that all integer sequences $(a_k)_{k\in\mathbb Z}$ satisfying $0\le a_{k-1}+\lambda a_k+a_{k+1}<1$ are periodic.<|reference_end|>
arxiv
@article{akiyama2007periodicity, title={Periodicity of certain piecewise affine planar maps}, author={Shigeki Akiyama, Horst Brunotte, Attila Petho, Wolfgang Steiner (LIAFA)}, journal={Tsukuba Journal of Mathematics 32, 1 (2008) 197-251}, year={2007}, archivePrefix={arXiv}, eprint={0704.3674}, primaryClass={math.DS cs.DM math.NT} }
akiyama2007periodicity
arxiv-177
0704.3683
The Complexity of Weighted Boolean #CSP
<|reference_start|>The Complexity of Weighted Boolean #CSP: This paper gives a dichotomy theorem for the complexity of computing the partition function of an instance of a weighted Boolean constraint satisfaction problem. The problem is parameterised by a finite set F of non-negative functions that may be used to assign weights to the configurations (feasible solutions) of a problem instance. Classical constraint satisfaction problems correspond to the special case of 0,1-valued functions. We show that the partition function, i.e. the sum of the weights of all configurations, can be computed in polynomial time if either (1) every function in F is of ``product type'', or (2) every function in F is ``pure affine''. For every other fixed set F, computing the partition function is FP^{#P}-complete.<|reference_end|>
arxiv
@article{dyer2007the, title={The Complexity of Weighted Boolean #CSP}, author={Martin Dyer, Leslie Ann Goldberg and Mark Jerrum}, journal={SIAM J. Comput. 38(5), 1970-1986}, year={2007}, doi={10.1137/070690201}, archivePrefix={arXiv}, eprint={0704.3683}, primaryClass={cs.CC math.CO} }
dyer2007the
arxiv-178
0704.3708
Network statistics on early English Syntax: Structural criteria
<|reference_start|>Network statistics on early English Syntax: Structural criteria: This paper includes a reflection on the role of networks in the study of English language acquisition, as well as a collection of practical criteria to annotate free-speech corpora from children utterances. At the theoretical level, the main claim of this paper is that syntactic networks should be interpreted as the outcome of the use of the syntactic machinery. Thus, the intrinsic features of such machinery are not accessible directly from (known) network properties. Rather, what one can see are the global patterns of its use and, thus, a global view of the power and organization of the underlying grammar. Turning to more practical issues, the paper examines how to build a net from the projection of syntactic relations. Recall that, as opposed to adult grammars, early child language does not have a well-defined concept of structure. To overcome such difficulty, we develop a set of systematic criteria assuming constituency hierarchy and a grammar based on lexico-thematic relations. At the end, what we obtain is a well defined corpora annotation that enables us i) to perform statistics on the size of structures and ii) to build a network from syntactic relations over which we can perform the standard measures of complexity. We also provide a detailed example.<|reference_end|>
arxiv
@article{corominas-murtra2007network, title={Network statistics on early English Syntax: Structural criteria}, author={Bernat Corominas-Murtra}, journal={arXiv preprint arXiv:0704.3708}, year={2007}, archivePrefix={arXiv}, eprint={0704.3708}, primaryClass={cs.CL} }
corominas-murtra2007network
arxiv-179
0704.3746
Distributed Algorithms for Spectrum Allocation, Power Control, Routing, and Congestion Control in Wireless Networks
<|reference_start|>Distributed Algorithms for Spectrum Allocation, Power Control, Routing, and Congestion Control in Wireless Networks: We develop distributed algorithms to allocate resources in multi-hop wireless networks with the aim of minimizing total cost. In order to observe the fundamental duplexing constraint that co-located transmitters and receivers cannot operate simultaneously on the same frequency band, we first devise a spectrum allocation scheme that divides the whole spectrum into multiple sub-bands and activates conflict-free links on each sub-band. We show that the minimum number of required sub-bands grows asymptotically at a logarithmic rate with the chromatic number of network connectivity graph. A simple distributed and asynchronous algorithm is developed to feasibly activate links on the available sub-bands. Given a feasible spectrum allocation, we then design node-based distributed algorithms for optimally controlling the transmission powers on active links for each sub-band, jointly with traffic routes and user input rates in response to channel states and traffic demands. We show that under specified conditions, the algorithms asymptotically converge to the optimal operating point.<|reference_end|>
arxiv
@article{xi2007distributed, title={Distributed Algorithms for Spectrum Allocation, Power Control, Routing, and Congestion Control in Wireless Networks}, author={Yufang Xi, Edmund M. Yeh}, journal={arXiv preprint arXiv:0704.3746}, year={2007}, doi={10.1145/1288107.1288132}, archivePrefix={arXiv}, eprint={0704.3746}, primaryClass={cs.NI} }
xi2007distributed
arxiv-180
0704.3773
Avoiding Rotated Bitboards with Direct Lookup
<|reference_start|>Avoiding Rotated Bitboards with Direct Lookup: This paper describes an approach for obtaining direct access to the attacked squares of sliding pieces without resorting to rotated bitboards. The technique involves creating four hash tables using the built-in hash arrays from an interpreted, high-level language. The rank, file, and diagonal occupancy are first isolated by masking the desired portion of the board. The attacked squares are then directly retrieved from the hash tables. Maintaining incrementally updated rotated bitboards becomes unnecessary as does all the updating, mapping and shifting required to access the attacked squares. Finally, rotated bitboard move generation speed is compared with that of the direct hash table lookup method.<|reference_end|>
arxiv
@article{tannous2007avoiding, title={Avoiding Rotated Bitboards with Direct Lookup}, author={Sam Tannous}, journal={ICGA Journal, Vol. 30, No. 2, pp. 85-91. (June 2007).}, year={2007}, archivePrefix={arXiv}, eprint={0704.3773}, primaryClass={cs.DS} }
tannous2007avoiding
arxiv-181
0704.3780
Stochastic Optimization Algorithms
<|reference_start|>Stochastic Optimization Algorithms: When looking for a solution, deterministic methods have the enormous advantage that they do find global optima. Unfortunately, they are very CPU-intensive, and are useless on intractable NP-hard problems that would require thousands of years for cutting-edge computers to explore. In order to get a result, one needs to revert to stochastic algorithms, that sample the search space without exploring it thoroughly. Such algorithms can find very good results, without any guarantee that the global optimum has been reached; but there is often no other choice than using them. This chapter is a short introduction to the main methods used in stochastic optimization.<|reference_end|>
arxiv
@article{collet2007stochastic, title={Stochastic Optimization Algorithms}, author={Pierre Collet, Jean-Philippe Rennard}, journal={Rennard, J.-P., Handbook of Research on Nature Inspired Computing for Economics and Management, IGR, 2006}, year={2007}, archivePrefix={arXiv}, eprint={0704.3780}, primaryClass={cs.NE} }
collet2007stochastic
arxiv-182
0704.3835
Minimizing Unsatisfaction in Colourful Neighbourhoods
<|reference_start|>Minimizing Unsatisfaction in Colourful Neighbourhoods: Colouring sparse graphs under various restrictions is a theoretical problem of significant practical relevance. Here we consider the problem of maximizing the number of different colours available at the nodes and their neighbourhoods, given a predetermined number of colours. In the analytical framework of a tree approximation, carried out at both zero and finite temperatures, solutions obtained by population dynamics give rise to estimates of the threshold connectivity for the incomplete to complete transition, which are consistent with those of existing algorithms. The nature of the transition as well as the validity of the tree approximation are investigated.<|reference_end|>
arxiv
@article{wong2007minimizing, title={Minimizing Unsatisfaction in Colourful Neighbourhoods}, author={K. Y. Michael Wong and David Saad}, journal={J. Phys. A: Math. Theor. 41, 324023 (2008).}, year={2007}, doi={10.1088/1751-8113/41/32/324023}, archivePrefix={arXiv}, eprint={0704.3835}, primaryClass={cs.DS cond-mat.dis-nn cs.CC} }
wong2007minimizing
arxiv-183
0704.3878
A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints
<|reference_start|>A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints: A game-theoretic framework is used to study the effect of constellation size on the energy efficiency of wireless networks for M-QAM modulation. A non-cooperative game is proposed in which each user seeks to choose its transmit power (and possibly transmit symbol rate) as well as the constellation size in order to maximize its own utility while satisfying its delay quality-of-service (QoS) constraint. The utility function used here measures the number of reliable bits transmitted per joule of energy consumed, and is particularly suitable for energy-constrained networks. The best-response strategies and Nash equilibrium solution for the proposed game are derived. It is shown that in order to maximize its utility (in bits per joule), a user must choose the lowest constellation size that can accommodate the user's delay constraint. Using this framework, the tradeoffs among energy efficiency, delay, throughput and constellation size are also studied and quantified. The effect of trellis-coded modulation on energy efficiency is also discussed.<|reference_end|>
arxiv
@article{meshkati2007a, title={A Game-Theoretic Approach to Energy-Efficient Modulation in CDMA Networks with Delay Constraints}, author={Farhad Meshkati, Andrea J. Goldsmith, H. Vincent Poor and Stuart C. Schwartz}, journal={arXiv preprint arXiv:0704.3878}, year={2007}, doi={10.1109/RWS.2007.351784}, archivePrefix={arXiv}, eprint={0704.3878}, primaryClass={cs.IT cs.GT math.IT} }
meshkati2007a
arxiv-184
0704.3880
Energy-Efficient Resource Allocation in Wireless Networks with Quality-of-Service Constraints
<|reference_start|>Energy-Efficient Resource Allocation in Wireless Networks with Quality-of-Service Constraints: A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility while satisfying its QoS requirements. The user's QoS constraints are specified in terms of the average source rate and an upper bound on the average delay where the delay includes both transmission and queuing delays. The utility function considered here measures energy efficiency and is particularly suitable for wireless networks with energy constraints. The Nash equilibrium solution for the proposed non-cooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user. Using this competitive multiuser framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are studied. In addition, analytical expressions are given for users' delay profiles and the delay performance of the users at Nash equilibrium is quantified.<|reference_end|>
arxiv
@article{meshkati2007energy-efficient, title={Energy-Efficient Resource Allocation in Wireless Networks with Quality-of-Service Constraints}, author={Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz and Radu V. Balan}, journal={arXiv preprint arXiv:0704.3880}, year={2007}, doi={10.1109/TCOMM.2009.11.050638}, archivePrefix={arXiv}, eprint={0704.3880}, primaryClass={cs.IT math.IT} }
meshkati2007energy-efficient
arxiv-185
0704.3881
A Unified Approach to Energy-Efficient Power Control in Large CDMA Systems
<|reference_start|>A Unified Approach to Energy-Efficient Power Control in Large CDMA Systems: A unified approach to energy-efficient power control is proposed for code-division multiple access (CDMA) networks. The approach is applicable to a large family of multiuser receivers including the matched filter, the decorrelator, the linear minimum mean-square error (MMSE) receiver, and the (nonlinear) optimal detectors. It exploits the linear relationship that has been shown to exist between the transmit power and the output signal-to-interference-plus-noise ratio (SIR) in the large-system limit. It is shown that, for this family of receivers, when users seek to selfishly maximize their own energy efficiency, the Nash equilibrium is SIR-balanced. In addition, a unified power control (UPC) algorithm for reaching the Nash equilibrium is proposed. The algorithm adjusts the user's transmit powers by iteratively computing the large-system multiuser efficiency, which is independent of instantaneous spreading sequences. The convergence of the algorithm is proved for the matched filter, the decorrelator, and the MMSE receiver, and is demonstrated by means of simulation for an optimal detector. Moreover, the performance of the algorithm in finite-size systems is studied and compared with that of a conventional power control scheme, in which user powers depend on the instantaneous spreading sequences.<|reference_end|>
arxiv
@article{meshkati2007a, title={A Unified Approach to Energy-Efficient Power Control in Large CDMA Systems}, author={Farhad Meshkati, Dongning Guo, H. Vincent Poor and Stuart C. Schwartz}, journal={arXiv preprint arXiv:0704.3881}, year={2007}, archivePrefix={arXiv}, eprint={0704.3881}, primaryClass={cs.IT math.IT} }
meshkati2007a
arxiv-186
0704.3886
A Note on Ontology and Ordinary Language
<|reference_start|>A Note on Ontology and Ordinary Language: We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure we show that the semantics of various natural language phenomena may become nearly trivial.<|reference_end|>
arxiv
@article{saba2007a, title={A Note on Ontology and Ordinary Language}, author={Walid S. Saba}, journal={arXiv preprint arXiv:0704.3886}, year={2007}, archivePrefix={arXiv}, eprint={0704.3886}, primaryClass={cs.AI cs.CL} }
saba2007a
arxiv-187
0704.3890
An algorithm for clock synchronization with the gradient property in sensor networks
<|reference_start|>An algorithm for clock synchronization with the gradient property in sensor networks: We introduce a distributed algorithm for clock synchronization in sensor networks. Our algorithm assumes that nodes in the network only know their immediate neighborhoods and an upper bound on the network's diameter. Clock-synchronization messages are only sent as part of the communication, assumed reasonably frequent, that already takes place among nodes. The algorithm has the gradient property of [2], achieving an O(1) worst-case skew between the logical clocks of neighbors. As in the case of [3,8], the algorithm's actions are such that no constant lower bound exists on the rate at which logical clocks progress in time, and for this reason the lower bound of [2,5] that forbids constant skew between neighbors does not apply.<|reference_end|>
arxiv
@article{pussente2007an, title={An algorithm for clock synchronization with the gradient property in sensor networks}, author={Rodolfo M. Pussente, Valmir C. Barbosa}, journal={Journal of Parallel and Distributed Computing 69 (2009), 261-265}, year={2007}, doi={10.1016/j.jpdc.2008.11.001}, archivePrefix={arXiv}, eprint={0704.3890}, primaryClass={cs.DC} }
pussente2007an
arxiv-188
0704.3904
Acyclic Preference Systems in P2P Networks
<|reference_start|>Acyclic Preference Systems in P2P Networks: In this work we study preference systems natural for the Peer-to-Peer paradigm. Most of them fall in three categories: global, symmetric and complementary. All these systems share an acyclicity property. As a consequence, they admit a stable (or Pareto efficient) configuration, where no participant can collaborate with better partners than their current ones. We analyze the representation of such preference systems and show that any acyclic system can be represented with a symmetric mark matrix. This gives a method to merge acyclic preference systems and retain the acyclicity. We also consider such properties of the corresponding collaboration graph, as clustering coefficient and diameter. In particular, studying the example of preferences based on real latency measurements, we observe that its stable configuration is a small-world graph.<|reference_end|>
arxiv
@article{gai2007acyclic, title={Acyclic Preference Systems in P2P Networks}, author={Anh-Tuan Gai (INRIA Rocquencourt), Dmitry Lebedev (FT R&D), Fabien Mathieu (FT R&D), Fabien De Montgolfier (LIAFA), Julien Reynier (LIENS), Laurent Viennot (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0704.3904}, year={2007}, archivePrefix={arXiv}, eprint={0704.3904}, primaryClass={cs.DS cs.GT} }
gai2007acyclic
arxiv-189
0704.3905
Ensemble Learning for Free with Evolutionary Algorithms ?
<|reference_start|>Ensemble Learning for Free with Evolutionary Algorithms ?: Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. In the meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting and generates smaller classifier ensembles.<|reference_end|>
arxiv
@article{gagné2007ensemble, title={Ensemble Learning for Free with Evolutionary Algorithms ?}, author={Christian Gagné (INFORMATIQUE WGZ INC.), Michèle Sebag (INRIA Futurs), Marc Schoenauer (INRIA Futurs), Marco Tomassini (ISI)}, journal={Dans GECCO (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0704.3905}, primaryClass={cs.AI} }
gagné2007ensemble
arxiv-190
0704.3931
The Complexity of Model Checking Higher-Order Fixpoint Logic
<|reference_start|>The Complexity of Model Checking Higher-Order Fixpoint Logic: Higher-Order Fixpoint Logic (HFL) is a hybrid of the simply typed \lambda-calculus and the modal \mu-calculus. This makes it a highly expressive temporal logic that is capable of expressing various interesting correctness properties of programs that are not expressible in the modal \mu-calculus. This paper provides complexity results for its model checking problem. In particular we consider those fragments of HFL built by using only types of bounded order k and arity m. We establish k-fold exponential time completeness for model checking each such fragment. For the upper bound we use fixpoint elimination to obtain reachability games that are singly-exponential in the size of the formula and k-fold exponential in the size of the underlying transition system. These games can be solved in deterministic linear time. As a simple consequence, we obtain an exponential time upper bound on the expression complexity of each such fragment. The lower bound is established by a reduction from the word problem for alternating (k-1)-fold exponential space bounded Turing Machines. Since there are fixed machines of that type whose word problems are already hard with respect to k-fold exponential time, we obtain, as a corollary, k-fold exponential time completeness for the data complexity of our fragments of HFL, provided m exceeds 3. This also yields a hierarchy result in expressive power.<|reference_end|>
arxiv
@article{axelsson2007the, title={The Complexity of Model Checking Higher-Order Fixpoint Logic}, author={Roland Axelsson, Martin Lange, and Rafal Somla}, journal={Logical Methods in Computer Science, Volume 3, Issue 2 (June 29, 2007) lmcs:754}, year={2007}, doi={10.2168/LMCS-3(2:7)2007}, archivePrefix={arXiv}, eprint={0704.3931}, primaryClass={cs.LO} }
axelsson2007the
arxiv-191
0704.3969
Diversity of MIMO Multihop Relay Channels - Part I: Amplify-and-Forward
<|reference_start|>Diversity of MIMO Multihop Relay Channels - Part I: Amplify-and-Forward: In this two-part paper, we consider the multiantenna multihop relay channels in which the source signal arrives at the destination through N independent relaying hops in series. The main concern of this work is to design relaying strategies that utilize efficiently the relays in such a way that the diversity is maximized. In part I, we focus on the amplify-and-forward (AF) strategy with which the relays simply scale the received signal and retransmit it. More specifically, we characterize the diversity-multiplexing tradeoff (DMT) of the AF scheme in a general multihop channel with arbitrary number of antennas and arbitrary number of hops. The DMT is in closed-form expression as a function of the number of antennas at each node. First, we provide some basic results on the DMT of the general Rayleigh product channels. It turns out that these results have very simple and intuitive interpretation. Then, the results are applied to the AF multihop channels which is shown to be equivalent to the Rayleigh product channel, in the DMT sense. Finally, the project-and-forward (PF) scheme, a variant of the AF scheme, is proposed. We show that the PF scheme has the same DMT as the AF scheme, while the PF can have significant power gain over the AF scheme in some cases. In part II, we will derive the upper bound on the diversity of the multihop channels and show that it can be achieved by partitioning the multihop channel into AF subchannels.<|reference_end|>
arxiv
@article{yang2007diversity, title={Diversity of MIMO Multihop Relay Channels - Part I: Amplify-and-Forward}, author={Sheng Yang and Jean-Claude Belfiore}, journal={arXiv preprint arXiv:0704.3969}, year={2007}, archivePrefix={arXiv}, eprint={0704.3969}, primaryClass={cs.IT math.IT} }
yang2007diversity
arxiv-192
0705.0010
Critical phenomena in complex networks
<|reference_start|>Critical phenomena in complex networks: The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, researchers have made important steps toward understanding the qualitatively new critical phenomena in complex networks. We review the results, concepts, and methods of this rapidly developing field. Here we mostly consider two closely related classes of these critical phenomena, namely structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. We also discuss systems where a network and interacting agents on it influence each other. We overview a wide range of critical phenomena in equilibrium and growing networks including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks. We also discuss strong finite size effects in these systems and highlight open problems and perspectives.<|reference_end|>
arxiv
@article{dorogovtsev2007critical, title={Critical phenomena in complex networks}, author={S. N. Dorogovtsev and A. V. Goltsev and J. F. F. Mendes}, journal={Rev. Mod. Phys. 80, 1275 (2008)}, year={2007}, doi={10.1103/RevModPhys.80.1275}, archivePrefix={arXiv}, eprint={0705.0010}, primaryClass={cond-mat.stat-mech cs.NI math-ph math.MP physics.soc-ph} }
dorogovtsev2007critical
arxiv-193
0705.0017
Checking Equivalence of Quantum Circuits and States
<|reference_start|>Checking Equivalence of Quantum Circuits and States: Quantum computing promises exponential speed-ups for important simulation and optimization problems. It also poses new CAD problems that are similar to, but more challenging than, the related problems in classical (non-quantum) CAD, such as determining if two states or circuits are functionally equivalent. While differences in classical states are easy to detect, quantum states, which are represented by complex-valued vectors, exhibit subtle differences leading to several notions of equivalence. This provides flexibility in optimizing quantum circuits, but leads to difficult new equivalence-checking issues for simulation and synthesis. We identify several different equivalence-checking problems and present algorithms for practical benchmarks, including quantum communication and search circuits, which are shown to be very fast and robust for hundreds of qubits.<|reference_end|>
arxiv
@article{viamontes2007checking, title={Checking Equivalence of Quantum Circuits and States}, author={George F. Viamontes and Igor L. Markov and John P. Hayes}, journal={Proc. Int'l Conf. on Computer-Aided Design (ICCAD), pp. 69-74, San Jose, CA, November 2007.}, year={2007}, archivePrefix={arXiv}, eprint={0705.0017}, primaryClass={quant-ph cs.ET} }
viamontes2007checking
arxiv-194
0705.0025
Can the Internet cope with stress?
<|reference_start|>Can the Internet cope with stress?: When will the Internet become aware of itself? In this note the problem is approached by asking an alternative question: Can the Internet cope with stress? By extrapolating the psychological difference between coping and defense mechanisms a distributed software experiment is outlined which could reject the hypothesis that the Internet is not a conscious entity.<|reference_end|>
arxiv
@article{lisewski2007can, title={Can the Internet cope with stress?}, author={Andreas Martin Lisewski}, journal={arXiv preprint arXiv:0705.0025}, year={2007}, archivePrefix={arXiv}, eprint={0705.0025}, primaryClass={cs.HC cs.AI} }
lisewski2007can
arxiv-195
0705.0043
Joint Detection and Identification of an Unobservable Change in the Distribution of a Random Sequence
<|reference_start|>Joint Detection and Identification of an Unobservable Change in the Distribution of a Random Sequence: This paper examines the joint problem of detection and identification of a sudden and unobservable change in the probability distribution function (pdf) of a sequence of independent and identically distributed (i.i.d.) random variables to one of finitely many alternative pdf's. The objective is quick detection of the change and accurate inference of the ensuing pdf. Following a Bayesian approach, a new sequential decision strategy for this problem is revealed and is proven optimal. Geometrical properties of this strategy are demonstrated via numerical examples.<|reference_end|>
arxiv
@article{dayanik2007joint, title={Joint Detection and Identification of an Unobservable Change in the Distribution of a Random Sequence}, author={Savas Dayanik and Christian Goulding and H. Vincent Poor}, journal={arXiv preprint arXiv:0705.0043}, year={2007}, archivePrefix={arXiv}, eprint={0705.0043}, primaryClass={cs.IT math.IT} }
dayanik2007joint
arxiv-196
0705.0044
Reliable Memories Built from Unreliable Components Based on Expander Graphs
<|reference_start|>Reliable Memories Built from Unreliable Components Based on Expander Graphs: In this paper, memories built from components subject to transient faults are considered. A fault-tolerant memory architecture based on low-density parity-check codes is proposed and the existence of reliable memories for the adversarial failure model is proved. The proof relies on the expansion property of the underlying Tanner graph of the code. An equivalence between the Taylor-Kuznetsov (TK) scheme and the Gallager B algorithm is established and the results are extended to the independent failure model. It is also shown that the proposed memory architecture has lower redundancy compared to the TK scheme. The results are illustrated with specific numerical examples.<|reference_end|>
arxiv
@article{chilappagari2007reliable, title={Reliable Memories Built from Unreliable Components Based on Expander Graphs}, author={Shashi Kiran Chilappagari and Bane Vasic}, journal={arXiv preprint arXiv:0705.0044}, year={2007}, archivePrefix={arXiv}, eprint={0705.0044}, primaryClass={cs.IT math.IT} }
chilappagari2007reliable
arxiv-197
0705.0081
Constructions of q-Ary Constant-Weight Codes
<|reference_start|>Constructions of q-Ary Constant-Weight Codes: This paper introduces a new combinatorial construction for q-ary constant-weight codes which yields several families of optimal codes and asymptotically optimal codes. The construction reveals intimate connection between q-ary constant-weight codes and sets of pairwise disjoint combinatorial designs of various types.<|reference_end|>
arxiv
@article{chee2007constructions, title={Constructions of q-Ary Constant-Weight Codes}, author={Yeow Meng Chee and San Ling}, journal={IEEE Transactions on Information Theory, Vol. 53, No. 1, January 2007, pp. 135-146}, year={2007}, doi={10.1109/TIT.2006.887499}, archivePrefix={arXiv}, eprint={0705.0081}, primaryClass={cs.IT math.IT} }
chee2007constructions
arxiv-198
0705.0085
An efficient centralized binary multicast network coding algorithm for any cyclic network
<|reference_start|>An efficient centralized binary multicast network coding algorithm for any cyclic network: We give an algorithm for finding network encoding and decoding equations for error-free multicasting networks with multiple sources and sinks. The algorithm given is efficient (polynomial complexity) and works on any kind of network (acyclic, link cyclic, flow cyclic, or even in the presence of knots). The key idea will be the appropriate use of the delay (both natural and additional) during the encoding. The resulting code will always work with finite delay with binary encoding coefficients.<|reference_end|>
arxiv
@article{diez2007an, title={An efficient centralized binary multicast network coding algorithm for any cyclic network}, author={Angela I. Barbero Diez and Oyvind Ytrehus}, journal={arXiv preprint arXiv:0705.0085}, year={2007}, archivePrefix={arXiv}, eprint={0705.0085}, primaryClass={cs.IT math.IT} }
diez2007an
arxiv-199
0705.0086
About the domino problem in the hyperbolic plane, a new solution: complement
<|reference_start|>About the domino problem in the hyperbolic plane, a new solution: complement: In this paper, we complete the construction of paper arXiv:cs.CG/0701096v2. Together with the proof contained in arXiv:cs.CG/0701096v2, this paper definitively proves that the general problem of tiling the hyperbolic plane with {\it \`a la} Wang tiles is undecidable.<|reference_end|>
arxiv
@article{margenstern2007about, title={About the domino problem in the hyperbolic plane, a new solution: complement}, author={Maurice Margenstern}, journal={M. Margenstern, "The domino problem of the hyperbolic plane is undecidable", Theoretical Computer Science, vol. 407, (2008), 29-84}, year={2007}, doi={10.1016/j.tcs.2008.04.038}, archivePrefix={arXiv}, eprint={0705.0086}, primaryClass={cs.CG cs.DM} }
margenstern2007about
arxiv-200
0705.0123
An Energy Efficiency Perspective on Training for Fading Channels
<|reference_start|>An Energy Efficiency Perspective on Training for Fading Channels: In this paper, the bit energy requirements of training-based transmission over block Rayleigh fading channels are studied. Pilot signals are employed to obtain the minimum mean-square-error (MMSE) estimate of the channel fading coefficients. Energy efficiency is analyzed in the worst-case scenario where the channel estimate is assumed to be perfect and the error in the estimate is considered as another source of additive Gaussian noise. It is shown that the bit energy requirement grows without bound as the snr goes to zero, and the minimum bit energy is achieved at a nonzero snr value below which one should not operate. The effect of the block length on both the minimum bit energy and the snr value at which the minimum is achieved is investigated. Flash training schemes are analyzed and shown to improve the energy efficiency in the low-snr regime. Energy efficiency analysis is also carried out when peak power constraints are imposed on pilot signals.<|reference_end|>
arxiv
@article{gursoy2007an, title={An Energy Efficiency Perspective on Training for Fading Channels}, author={Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:0705.0123}, year={2007}, doi={10.1109/ISIT.2007.4557387}, archivePrefix={arXiv}, eprint={0705.0123}, primaryClass={cs.IT math.IT} }
gursoy2007an