source | target
---|---
[
"abstract: We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation — that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like call-cc and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand.",
"@cite_1: Go back to An-fang, the Peace Square at An-Fang, the Beginning Place at An-Fang, where all things start (…) An-Fang was near a city, the only living city with a pre-atomic name (…) The headquarters of the People Programmer was at An-Fang, and there the mistake happened: A ruby trembled. Two tourmaline nets failed to rectify the laser beam. A diamond noted the error. Both the error and the correction went into the general computer. Cordwainer Smith The Dead Lady of Clown Town, 1964.",
"@cite_2: We show that Kleene's theory of unimonotone functions strictly relates to the theory of sequentiality originated by the full abstraction problem for PCF. Unimonotone functions are defined via a class of oracles, which turn out to be alternative descriptions of a subclass of Berry-Curien's sequential algorithms."
] | Sequential algorithms turned out to be quite central in the study of sequentiality. First, let us mention that Kleene has developed (for lower types) similar notions @cite_1 , under the nice name of oracles, in his late works on the semantics of higher order recursion theory (see @cite_2 for a detailed comparison). |
[
"abstract: We offer a short tour into the interactive interpretation of sequential programs. We emphasize streamlike computation — that is, computation of successive bits of information upon request. The core of the approach surveyed here dates back to the work of Berry and the author on sequential algorithms on concrete data structures in the late seventies, culminating in the design of the programming language CDS, in which the semantics of programs of any type can be explored interactively. Around one decade later, two major insights of Cartwright and Felleisen on one hand, and of Lamarche on the other hand gave new, decisive impulses to the study of sequentiality. Cartwright and Felleisen observed that sequential algorithms give a direct semantics to control operators like call-cc and proposed to include explicit errors both in the syntax and in the semantics of the language PCF. Lamarche (unpublished) connected sequential algorithms to linear logic and games. The successful program of games semantics has spanned over the nineties until now, starting with syntax-independent characterizations of the term model of PCF by Abramsky, Jagadeesan, and Malacaria on one hand, and by Hyland and Ong on the other hand.",
"@cite_1: We present a cartesian closed category of dI-domains with coherence and strongly stable functions which provides a new model of PCF, where terms are interpreted by functions and where, at first order, all functions are sequential. We show how this model can be refined in such a way that the theory it induces on the terms of PCF be strictly finer than the theory induced by the Scott model of continuous functions.",
"@cite_2: We prove that, in the hierarchy of simple types based on the type of natural numbers, any finite strongly stable function is equal to the application of the semantics of a PCF-definable functional to some strongly stable (generally not PCF-definable) functionals of type two. Applying a logical relation technique, we derive from this result that the strongly stable model of PCF is the extensional collapse of its sequential algorithms model.",
"@cite_3: In the previous chapter, we saw how the model PC offers a ‘maximal’ class of partial computable functionals strictly extending SF (in the sense of the poset ( J ( N _ ) ) of Subsection 3.6.4). In the present chapter, we show that SF can also be extended in a very different direction to yield another class SR of ‘computable’ functionals which is in some sense incompatible with PC. This class was first identified by Bucciarelli and Ehrhard [45] as the class of strongly stable functionals; later work by Ehrhard [69], van Oosten [294] and Longley [176] established the computational significance of these functionals, investigated their theory in some detail, and provided a range of alternative characterizations."
] | Two important models of functions constructed since then turned out to be extensional collapses of the sequential algorithms model (the hereditary quotient equating sequential algorithms that compute the same function, i.e., in the affine case, two algorithms @math and @math such that @math ): Bucciarelli and Ehrhard's model of strongly stable functions @cite_1 @cite_2 , and Longley's model of sequentially realizable functionals @cite_3 . The first model arose from an algebraic characterization of sequential (first-order) functions that carries over to all types. The second is a realizability model over a combinatory algebra in which the interaction at work in sequential algorithms is encoded. |
[
"abstract: in this paper we describe a method which allows agents to dynamically select protocols and roles when they need to execute collaborative tasks",
"@cite_1: This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societ al) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system).",
"@cite_2: To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents."
] | Protocol selection in the design of agent interactions is generally done at design time. Indeed, most agent-oriented design methodologies ( @cite_1 and @cite_2 , to quote a few) make designers decide which role agents should play in each single interaction. However, dynamic behaviour and openness in MAS demand greater flexibility. |
[
"abstract: in this paper we describe a method which allows agents to dynamically select protocols and roles when they need to execute collaborative tasks",
"@cite_1: Coordination of agent activities is a key problem in multiagent systems. Set in a larger decision theoretic context, the existence of coordination problems leads to difficulty in evaluating the utility of a situation. This in turn makes defining optimal policies for sequential decision processes problematic. We propose a method for solving sequential multi-agent decision problems by allowing agents to reason explicitly about specific coordination mechanisms. We define an extension of value iteration in which the system's state space is augmented with the state of the coordination mechanism adopted, allowing agents to reason about the short and long term prospects for coordination, the long term consequences of (mis)coordination, and make decisions to engage or avoid coordination problems based on expected value. We also illustrate the benefits of mechanism generalization.",
"@cite_2: This paper presents a framework that enables autonomous agents to dynamically select the mechanism they employ in order to coordinate their inter-related activities. Adopting this framework means coordination mechanisms move from the realm of being imposed upon the system at design time, to something that the agents select at run-time in order to fit their prevailing circumstances and their current coordination needs. Empirical analysis is used to evaluate the effect of various design alternatives for the agent's decision making mechanisms and for the coordination mechanisms themselves."
] | To date, there have been some efforts to overcome this limitation. One line of work introduces more flexibility in agents' coordination, but it only applies to the planning mechanisms of the individual agents. @cite_1 also proposes a framework based on multi-agent Markov decision processes. Rather than identifying the coordination mechanism best suited to a situation, this work deals with optimal reasoning within the context of a given coordination mechanism. @cite_2 proposes a framework that enables autonomous agents to dynamically select the mechanism they employ to coordinate their inter-related activities. Using this framework, agents select their coordination mechanisms by reasoning about the rewards they can obtain from executing collaborative tasks, as well as the probability that these tasks succeed. |
[
"abstract: This report presents Jartege, a tool which allows random generation of unit tests for Java classes specified in JML. JML (Java Modeling Language) is a specification language for Java which allows one to write invariants for classes, and pre- and postconditions for operations. As in the JML-JUnit tool, we use JML specifications on the one hand to eliminate irrelevant test cases, and on the other hand as a test oracle. Jartege randomly generates test cases, which consist of a sequence of constructor and method calls for the classes under test. The random aspect of the tool can be parameterized by associating weights to classes and operations, and by controlling the number of instances which are created for each class under test. The practical use of Jartege is illustrated by a small case study.",
"@cite_1: Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language's runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., pre- and postconditions). This makes the programmer's task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools."
] | Our work has been widely inspired by the JML-JUnit approach @cite_1 . The JML-JUnit tool generates test cases for a method, each consisting of a combination of calls of this method with various parameter values. The tester must supply the object invoking the method and the parameter values. With this approach, interesting values can easily be forgotten by the tester. Moreover, as a test case consists of only one method call, it is not possible to detect errors which result from several calls of different methods. Finally, the JML-JUnit approach compels the user to construct the test data, which may require calling several constructors. Our approach thus has the advantage of being more automatic, and of being able to detect more potential errors. |
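The weighted random generation of call sequences described above can be sketched in a few lines. This is a minimal illustration of the idea (the operation names and weights are invented for the example), not Jartege's actual Java implementation:

```python
import random

def generate_test_case(operations, weights, length, seed=None):
    """A test case is a random sequence of constructor/method calls;
    the weights steer how often each operation is picked."""
    rng = random.Random(seed)
    return rng.choices(list(operations), weights=weights, k=length)

# Hypothetical class under test with four operations; heavier weights
# on deposit/withdraw make state-changing calls more frequent.
calls = generate_test_case(
    ["new Account()", "deposit", "withdraw", "getBalance"],
    weights=[1, 3, 3, 1], length=6, seed=42)
```

Running the generated sequence against a runtime assertion checker (here, JML's) then serves as the oracle, as in the JML-JUnit setting.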
[
"abstract: We propose a simple distributed hash table called ReCord, which is a generalized version of Randomized-Chord and offers improved tradeoffs in performance and topology maintenance over existing P2P systems. ReCord is scalable and can be easily implemented as an overlay network, and offers a good tradeoff between the node degree and query latency. For instance, an @math -node ReCord with @math node degree has an expected latency of @math hops. Alternatively, it can also offer @math hops latency at a higher cost of @math node degree. Meanwhile, simulations of the dynamic behaviors of ReCord are studied.",
"@cite_1: Consider a set of shared objects in a distributed network, where several copies of each object may exist at any given time. To ensure both fast access to the objects as well as efficient utilization of network resources, it is desirable that each access request be satisfied by a copy \"close\" to the requesting node. Unfortunately, it is not clear how to efficiently achieve this goal in a dynamic, distributed environment in which large numbers of objects are continuously being created, replicated, and destroyed. In this paper, we design a simple randomized algorithm for accessing shared objects that tends to satisfy each access request with a nearby copy. The algorithm is based on a novel mechanism to maintain and distribute information about object locations, and requires only a small amount of additional memory at each node. We analyze our access scheme for a class of cost functions that captures the hierarchical nature of wide-area networks. We show that under the particular cost model considered: (i) the expected cost of an individual access is asymptotically optimal, and (ii) if objects are sufficiently large, the memory used for objects dominates the additional memory used by our algorithm with high probability. We also address dynamic changes in both the network as well as the set of object copies.",
"@cite_2: This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer ap- plications. Pastry performs application-level routing and object location in a po- tentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a to scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.",
"@cite_3: In today’s chaotic network, data and services are mobile and replicated widely for availability, durability, and locality. Components within this infrastructure interact in rich and complex ways, greatly stressing traditional approaches to name service and routing. This paper explores an alternative to traditional approaches called Tapestry. Tapestry is an overlay location and routing infrastructure that provides location-independent routing of messages directly to the closest copy of an object or service using only point-to-point links and without centralized resources. The routing and directory information within this infrastructure is purely soft state and easily repaired. Tapestry is self-administering, faulttolerant, and resilient under load. This paper presents the architecture and algorithms of Tapestry and explores their advantages through a number of experiments."
] | Plaxton et al. @cite_1 proposed a distributed routing protocol based on hypercubes for a static network with a given collection of nodes. Plaxton's algorithm locates shared resources on an overlay network in which each node maintains only a small routing table. Pastry @cite_2 and Tapestry @cite_3 apply Plaxton's scheme in a dynamic distributed environment. The difference between them is that Pastry uses a prefix-based routing scheme, whereas Tapestry uses a suffix-based one. The number of bits per digit in both Tapestry and Pastry can be reconfigured, but it remains fixed at run-time. Both Pastry and Tapestry can build the overlay topology using proximity neighbor selection. However, it is still unclear whether there is any better approach to achieving globally effective routing. |
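The digit-matching step underlying this style of routing can be sketched as follows. This is an illustrative toy (node identifiers as hex strings, function names our own), not the actual Pastry or Tapestry code:

```python
def shared_prefix_len(a, b):
    """Number of leading digits two identifiers share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(key, routing_table):
    """Pick the neighbor whose identifier shares the longest prefix with
    the key (ties broken by numeric closeness), in the spirit of
    Plaxton/Pastry prefix routing."""
    return max(routing_table,
               key=lambda nid: (shared_prefix_len(nid, key),
                                -abs(int(nid, 16) - int(key, 16))))

# For key "b2c4", "b2ff" shares two digits and wins over "b000" and "c123".
hop = next_hop("b2c4", ["b000", "b2ff", "c123"])
```

Each hop fixes at least one more digit of the key, which is why a routing table with O(log n) entries suffices for O(log n)-hop lookups.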
[
"abstract: We propose a simple distributed hash table called ReCord, which is a generalized version of Randomized-Chord and offers improved tradeoffs in performance and topology maintenance over existing P2P systems. ReCord is scalable and can be easily implemented as an overlay network, and offers a good tradeoff between the node degree and query latency. For instance, an @math -node ReCord with @math node degree has an expected latency of @math hops. Alternatively, it can also offer @math hops latency at a higher cost of @math node degree. Meanwhile, simulations of the dynamic behaviors of ReCord are studied.",
"@cite_1: Even though they were introduced only a few years ago, peer-to-peer (P2P) filesharing systems are now one of the most popular Internet applications and have become a major source of Internet traffic. Thus, it is extremely important that these systems be scalable. Unfortunately, the initial designs for P2P systems have significant scaling problems; for example, Napster has a centralized directory service, and Gnutella employs a flooding based search mechanism that is not suitable for large systems."
] | It is difficult to say which of the DHTs proposed above is "best". Each routing algorithm offers some insight on routing in overlay networks. One appropriate strategy is to combine these insights and formulate an even better scheme @cite_1 . |
[
"abstract: Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required.",
"@cite_1: A proof of Bell's theorem using two maximally entangled states of two qubits is presented. It exhibits a similar logical structure to Hardy's argument of nonlocality without inequalities''. However, it works for 100 of the runs of a certain experiment. Therefore, it can also be viewed as a Greenberger-Horne-Zeilinger-like proof involving only two spacelike separated regions.",
"@cite_2: Bell’s theorem [1] refutes local theories based on Einstein, Podolsky, and Rosen’s (EPR’s) “elements of reality” [2]. A recently introduced proof without inequalities [3] presents the same logical structure as that of Hardy’s proof [4], but exhibits a greater contradiction between EPR local elements of reality and quantum mechanics. Here a simpler version of the proof in [3] will be introduced. This new version parallels Mermin’s reformulation [5] of Greenberger, Horne, and Zeilinger’s (GHZ’s) proof [6] and, besides being simpler, it emphasizes the fact that [3] is also an “all versus nothing” [7] or GHZ-type proof of Bell’s theorem, albeit with only two observers. In addition, this new approach will allow us to derive an inequality between correlation functions which is violated by quantum mechanics. Moreover, this new version will also constitute the basis for a new stateindependent proof of the Kochen-Specker (KS) theorem [8]. The whole set of new results provides a wider perspective on the relations between the most relevant proofs of no hidden variables. Consider four qubits, labeled 1, 2, 3, 4, prepared in the state jc1234 1 j0011 2 j0110 2 j1001 1 j1100 , (1) which, as can be easily checked, is the product of two singlet states, jc 2 13 ≠j c 2 24. Let us suppose that qubits 1 and 2 fly apart from qubits 3 and 4, and that an observer, Alice, performs measurements on qubits 1 and 2, while in a spacelike separated region a second observer, Bob, performs measurements on qubits 3"
] | There are other pseudo-telepathy games related to the magic square game. Adán Cabello's game @cite_1 @cite_2 does not resemble the magic square game at first sight. However, closer analysis reveals that the two games are completely equivalent. |
[
"abstract: Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required.",
"@cite_1: A proof of Bell’s theorem without inequalities and involving only two observers is given by suitably extending a proof of the Bell-Kochen-Specker theorem due to Mermin. This proof is generalized to obtain an inequality-free proof of Bell’s theorem for a set of n Bell states (with n odd) shared between two distant observers. A generalized CHSH inequality is formulated for n Bell states shared symmetrically between two observers and it is shown that quantum mechanics violates this inequality by an amount that grows exponentially with increasing n."
] | Also, Aravind has generalized his own magic square idea @cite_1 to a two-player pseudo-telepathy game in which the players share @math Bell states, @math being an arbitrary odd number larger than 1. |
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: Consider a one-dimensional independent bond percolation model withpj denoting the probability of an occupied bond between integer sitesi andi±j,j≧1. Ifpj is fixed forj≧2 and ( j )j2pj>1, then (unoriented) percolation occurs forp1 sufficiently close to 1. This result, analogous to the existence of spontaneous magnetization in long range one-dimensional Ising models, is proved by an inductive series of bounds based on a renormalization group approach using blocks of variable size. Oriented percolation is shown to occur forp1 close to 1 if ( j )jspj>0 for somes<2. Analogous results are valid for one-dimensional site-bond percolation models.",
"@cite_2: The problem of long-range percolation in one dimension is proposed. The authors consider a one-dimensional bond percolation system with bonds connecting an infinite number of neighbours where the occupation probability for the nth nearest-neighbour bond pn varies as p1 ns. Using the transfer-matrix method, they find that when s>2 only the short-range percolation exists; namely the system percolates only when p1=1. A transition to long-range percolation is found at s=2 where the percolation threshold drops suddenly from the short-range value p1c=1 to the long-range value p1c=0.",
"@cite_3: We consider one dimensional percolation models for which the occupation probability of a bond −Kx,y, has a slow power decay as a function of the bond's length. For independent models — and with suitable reformulations also for more general classes of models, it is shown that: i) no percolation is possible if for short bondsKx,y≦p =1. This dichotomy resembles one for the magnetization in 1 |x−y|2 Ising models which was first proposed by Thouless and further supported by the renormalization group flow equations of Anderson, Yuval, and Hamann. The proofs of the above percolation phenomena involve (rigorous) renormalization type arguments of a different sort.",
"@cite_4: We rigorously establish the existence of an intermediate ordered phase in one-dimensional 1 |x−y|2 percolation, Ising and Potts models. The Ising model truncated two-point function has a power law decay exponent θ which ranges from its low (and high) temperature value of two down to zero as the inverse temperature and nearest neighbor coupling vary. Similar results are obtained for percolation and Potts models.",
"@cite_5: We study the behavior of the random walk on the infinite cluster of independent long-range percolation in dimensions d= 1,2, where x and y are connected with probability ( ). We show that if d s>2d, then there is no infinite cluster at criticality. This result is extended to the free random cluster model. A second corollary is that when d≥& 2 and d>s>2d we can erase all long enough bonds and still have an infinite cluster. The proof of recurrence in two dimensions is based on general stability results for recurrence in random electrical networks. In particular, we show that i.i.d. conductances on a recurrent graph of bounded degree yield a recurrent electrical network."
] | Long-range percolation, of which our model is an example, originated in the mathematical-physics literature as a model that exhibits a phase transition even in spatial dimension one (e.g., Newman and Schulman @cite_1 , Schulman @cite_2 , Aizenman and Newman @cite_3 , Imbrie and Newman @cite_4 ). It soon became clear that @math and @math are two distinguished values; for @math the model is essentially of mean-field (or complete-graph) type, while for @math the behavior is more or less that of nearest-neighbor percolation. The regime @math turned out to be quite interesting; indeed, it is the only general class of percolation models with Euclidean (or amenable) geometry for which one can prove the absence of percolation at the percolation threshold (Berger @cite_5 ). In all dimensions, the model with @math has a natural continuum scaling limit. |
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"@cite_2: Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"@cite_3: The uniform spanning forest (USF) in ℤ d is the weak limit of random, uniformly chosen, spanning trees in [−n, n] d . Pemantle [11] proved that the USF consists a.s. of a single tree if and only if d ≤ 4. We prove that any two components of the USF in ℤ d are adjacent a.s. if 5 ≤ d ≤ 8, but not if d ≥ 9. More generally, let N(x, y) be the minimum number of edges outside the USF in a path joining x and y in ℤ d . Then @math",
"@cite_4: We consider the following long-range percolation model: an undirected graph with the node set 0, 1, ..., N d, has edges (x, y) selected with probability ≈ β ||x -y||s if ||x - y|| ' η1 > η2 > 1, it is at most Nη2 when s = 2d, and is at least Nη1 when d = 1, s = 2, β > 1 or when s > 2d. We also provide a simple proof that the diameter is at most log O(1) N with high probability, when d > s > 2d, established previously in [2]."
] | Recently, long-range percolation has been invoked as a fruitful source of graphs with non-trivial growth properties. Our interest was stirred by the work of Benjamini and Berger @cite_1 , who proposed (and studied) long-range percolation as a model of social networks. It is in this context that the scaling of the graph distance, and the volume growth, are of particular interest. Thanks to the numerous contributions that followed @cite_1 , this scaling is now known for most values of @math and @math . Explicitly, for @math , a corollary to the main result of Benjamini, Kesten, Peres and Schramm @cite_3 asserts that almost surely. As @math , the right-hand side tends to infinity and so, at @math , we expect @math . And, indeed, the precise growth rate in this case has been established by Coppersmith, Gamarnik and Sviridenko @cite_4 , where @math means that the ratio of the left- and right-hand sides is a random variable bounded away from zero and infinity with probability tending to one. |
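The diameter exponent from the abstract can be made concrete with a small numerical sketch (the function name `diameter_exponent` is ours, not from the paper), evaluating Delta from the formula 1/Delta = log2(2d/s) that governs the (log L)^(Delta+o(1)) scaling for s in (d, 2d):

```python
import math

def diameter_exponent(s, d):
    """Delta such that the box diameter scales like (log L)^(Delta + o(1)),
    valid in the regime d < s < 2d, with 1/Delta = log2(2d/s)."""
    assert d < s < 2 * d, "the polylog regime requires s in (d, 2d)"
    return 1.0 / math.log2(2.0 * d / s)

# As s increases toward 2d, Delta blows up, consistent with the
# polylogarithmic scaling breaking down at s = 2d.
delta_low = diameter_exponent(2.5, 2)   # s well inside (d, 2d)
delta_high = diameter_exponent(3.9, 2)  # s close to 2d
```

Note that Delta tends to 1 as s decreases to d, matching the logarithmic distance scaling of the s < d regime at the boundary.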
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"@cite_2: We consider the following long-range percolation model: an undirected graph with the node set 0, 1, ..., N d, has edges (x, y) selected with probability ≈ β ||x -y||s if ||x - y|| ' η1 > η2 > 1, it is at most Nη2 when s = 2d, and is at least Nη1 when d = 1, s = 2, β > 1 or when s > 2d. We also provide a simple proof that the diameter is at most log O(1) N with high probability, when d > s > 2d, established previously in [2]."
] | For @math , the present paper states @math . Here we note that @math as @math which, formally, is in agreement with . For @math we in turn have @math and so, at @math , a polylogarithmic growth is no longer sustainable. Instead, for the case of the decay @math one expects that where @math varies through @math as @math sweeps through @math . This claim is supported by upper and lower bounds in somewhat restricted one-dimensional cases (Benjamini and Berger @cite_1 , Coppersmith, Gamarnik and Sviridenko @cite_2 ). However, even the existence of a sharp exponent @math has been elusive so far. |
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: Bounds for the diameter and expansion of the graphs created by long-range percolation on the cycle ℤ Nℤ are given. © 2001 John Wiley & Sons, Inc. Random Struct. Alg., 19: 102–111, 2001",
"@cite_2: We consider long-range percolation in dimension @math , where distinct sites @math and @math are connected with probability @math . Assuming that @math is translation invariant and that @math with @math , we show that the graph distance is at least linear with the Euclidean distance.",
"@cite_3: We prove large deviation estimates at the correct order for the graph distance of two sites lying in the same cluster of an independent percolation process. We improve earlier results of Gartner and Molchanov and Grimmett and Marstrand and answer affirmatively a conjecture of Kozlov."
] | For @math one expects @cite_1 the same behavior as for the original graph. And indeed, the linear asymptotics has been established by Berger @cite_2 . For the nearest-neighbor percolation case, this statement goes back to the work of Antal and Pisztora @cite_3 . |
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: We study the behavior of the random walk on the infinite cluster of independent long-range percolation in dimensions d= 1,2, where x and y are connected with probability ( ). We show that if d s>2d, then there is no infinite cluster at criticality. This result is extended to the free random cluster model. A second corollary is that when d≥& 2 and d>s>2d we can erase all long enough bonds and still have an infinite cluster. The proof of recurrence in two dimensions is based on general stability results for recurrence in random electrical networks. In particular, we show that i.i.d. conductances on a recurrent graph of bounded degree yield a recurrent electrical network.",
"@cite_2: We provide an estimate, sharp up to poly-logarithmic factors, of the asymptotic almost sure mixing time of the graph created by long-range percolation on the cycle of length N ( @math ). While it is known that the asymptotic almost sure diameter drops from linear to poly-logarithmic as the exponent s decreases below 2 [4, 9], the asymptotic almost sure mixing time drops from N2 only to Ns-1 (up to poly-logarithmic factors)."
] | Further motivation comes from the recent interest in diffusive properties of graphs arising via long-range percolation. An early work in this respect was that of Berger @cite_1 who characterized regimes of recurrence and transience for the simple random walk on such graphs. Benjamini, Berger and Yadin @cite_2 later showed that the mixing time @math of the random walk on @math in @math scales like with an apparent jump in the exponent when @math passes through 2. Misumi found estimates on the effective resistance in @math that exhibit a similar transition. |
[
"abstract: We study the asymptotic growth of the diameter of a graph obtained by adding sparse “long” edges to a square box in article amsmath,amsfonts empty . We focus on the cases when an edge between x and y is added with probability decaying with the Euclidean distance as |x − y|−s+o(1) when |x − y| → ∞. For s ∈ (d, 2d) we show that the graph diameter for the graph reduced to a box of side L scales like (log L)Δ+o(1) where Δ−1 := log2(2d s). In particular, the diameter grows about as fast as the typical graph distance between two vertices at distance L. We also show that a ball of radius r in the intrinsic metric on the (infinite) graph will roughly coincide with a ball of radius exp r1 Δ+o(1) in the Euclidean metric. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 39, 210-227, 2011 (Reproduction, by any means, of the entire article for non-commercial purposes is permitted without charge.)",
"@cite_1: In this paper, we derive upper bounds for the heat kernel of the simple random walk on the infinite cluster of a supercritical long range percolation process. For any @math and for any exponent @math giving the rate of decay of the percolation process, we show that the return probability decays like @math up to logarithmic corrections, where @math denotes the time the walk is run. Moreover, our methods also yield generalized bounds on the spectral gap of the dynamics and on the diameter of the largest component in a box. Besides its intrinsic interest, the main result is needed for a companion paper studying the scaling limit of simple random walk on the infinite cluster."
] | Very recently, precise bounds for the heat kernel and spectral gap of such random walks have been derived by Crawford and Sly @cite_1 . These are claimed to lead to the proof that the law of such random walks scales to @math -stable processes for @math in @math and @math in @math . For @math on the increasing side of these regimes, the random walk is expected to scale to Brownian motion. |
[
"abstract: We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Holder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps.",
"@cite_1: Subshifts of finite type a key symbolic model smooth uniformly expanding dynamics piecewise expanding systems hyperbolic systems."
] | Let us now mention some other results concerning estimates on decay of correlations for non-Hölder observables. Most of these are stated in the context of one-sided finite alphabet shift maps, or subshifts of finite type. (For a comprehensive discussion of shift maps and equilibrium measures, we suggest the book of Baladi, @cite_1 .) Shift maps are relatively simple dynamical systems, but are often used to model more complicated systems via a semi-conjugacy, in much the same way that each of the examples we consider can be represented by a suitable (see ). When a system @math being coded has an invariant measure @math which is absolutely continuous with respect to Lebesgue measure, @math is an equilibrium measure for the potential @math , where @math is the Jacobian with respect to Lebesgue measure. Most results for shift maps work with an equilibrium measure given by a potential @math which is Hölder continuous (in terms of the usual metric on shift spaces - two sequences are said to be distance @math apart if they agree for exactly the first @math symbols). This assumption corresponds to assuming good distortion for @math . |
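The metric on shift spaces mentioned parenthetically in this row is easy to make concrete. The following is a small illustrative sketch (function and variable names are ours):

```python
def shift_metric(x, y):
    """Distance 2**(-n) between two symbol sequences, where n is the
    number of initial symbols on which they agree (for truly infinite
    sequences, equal sequences have distance 0 in the limit)."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return 2.0 ** (-n)

# a potential depending only on finitely many initial symbols is
# locally constant, hence trivially Hölder continuous in this metric
phi = lambda x: 1.0 if x[0] == 1 else 0.0

print(shift_metric((0, 1, 1, 0), (0, 1, 0, 0)))  # 0.25: agreement on first 2 symbols
```

A Hölder potential in this metric is one with |φ(x) − φ(y)| ≤ C · d(x, y)^α, i.e. its variation over n-cylinders decays exponentially in n, which is exactly the "good distortion" assumption referred to above.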
[
"abstract: We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Holder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps.",
"@cite_1: Resume On etudie la vitesse de convergence vers l'etat d'equilibre pour des dynamiques markoviennes non holderiennes. On obtient une estimation de la vitesse de melange sur un sous-espace B dense dans l'espace des fonctions continues. En outre, on montre que le spectre de l'operateur de Perron-Frobenius, restreint aB, est un disque ferme dont chaque point est une valeur propre. Ceci implique que la vitesse de convergence vers l'etat d'equilibre ne peut pas etre exponentielle.",
"@cite_2: We present an upper bound on the mixing rate of the equilibrium state of a dynamical system defined by the one-sided shift and a non Holder potential of summable variations. The bound follows from an estimation of the relaxation speed of chains with complete connections with summable decay, which is obtained via a explicit coupling between pairs of chains with different histories."
] | There have been various results concerned primarily with weakening the assumption on the regularity of @math , and obtaining (slower) upper bounds for the rate of mixing with respect to the corresponding equilibrium measures. Kondah, Maume and Schmitt ( @cite_1 ) used a method of Birkhoff cones and projective metrics, Bressaud, Fernandez and Galves ( @cite_2 ) used a coupling method (different from the one described here), with estimates given in terms of , and Pollicott ( ) introduced a method involving composing transfer operators with conditional expectations. Each of these results has slightly different assumptions and gives slightly different estimates, but in each case a number of different classes of potentials are considered, and estimates are given for observables of some similar regularity to (usually than) the potential. In particular, in all three examples polynomial mixing is given for a potential and observables with variations decaying at suitable polynomial rates. |
[
"abstract: We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Holder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual. Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps.",
"@cite_1: In this note we present an axiomatic approach to the decay of correlations for maps of arbitrary dimension with indifferent periodic points. As applications, we apply our results to the well-known Manneville–Pomeau equation and the inhomogeneous diophantine approximation algorithm."
] | Finally, we mention a result which applies directly to certain non-uniformly expanding systems, rather than to a symbolic space or tower. Pollicott and Yuri ( @cite_1 ) consider a class of maps of arbitrary dimension with a single indifferent periodic orbit and a given Markov structure, including in particular the Manneville-Pomeau interval maps. The class of observables considered is dynamically defined; each observable is required to be Lipschitz with respect to a Markov partition corresponding to some induced map, chosen to have good distortion properties. This class includes all functions which are Lipschitz with respect to the manifold, and while some estimates are weaker than comparable results for Hölder observables, bounds are obtained for some observables which cannot be dealt with at all by our methods, such as certain unbounded functions. |
[
"abstract: In recent years, the amount of information on the Internet has increased exponentially developing great interest in selective information dissemination systems. The publish subscribe paradigm is particularly suited for designing systems for routing information and requests according to their content throughout wide-area network of brokers. Current publish subscribe systems use limited syntax-based content routing but since publishers and subscribers are anonymous and decoupled in time, space and location, often over wide-area network boundary, they do not necessarily speak the same language. Consequently, adding semantics to current publish subscribe systems is important. In this paper we identify and examine the issues in developing semantic-based content routing for publish subscribe broker networks.",
"@cite_1: A method for integrating separately developed information resources that overcomes incompatibilities in syntax and semantics and permits the resources to be accessed and modified coherently is described. The method provides logical connectivity among the information resources via a semantic service layer that automates the maintenance of data integrity and provides an approximation of global data integration across systems. This layer is a fundamental part of the Carnot architecture, which provides tools for interoperability across global enterprises. >",
"@cite_2: There has been an explosion in the types, availability and volume of data accessible in an information system, thanks to the World Wide Web (the Web) and related inter-networking technologies. In this environment, there is a critical need to replace or complement earlier database integration approaches and current browsing and keyword-based techniques with concept-based approaches. Ontologies are increasingly becoming accepted as an important part of any concept or semantics based solution, and there is increasing realization that any viable solution will need to support multiple ontologies that may be independently developed and managed. In particular, we consider the use of concepts from pre-existing real world domain ontologies for describing the content of the underlying data repositories. The most challenging issue in this approach is that of vocabulary sharing, which involves dealing with the use of different terms or concepts to describe similar information. In this paper, we describe the architecture, design and implementation of the OBSERVER system. Brokering across the domain ontologies is enabled by representing and utilizing interontology relationships such as (but not limited to) synonyms, hyponyms and hypernyms across terms in different ontologies. User queries are rewritten by using these relationships to obtain translations across ontologies. Well established metrics like precision and recall based on the extensions underlying the concepts are used to estimate the loss of information, if any.",
"@cite_3: Large organizations need to exchange information among many separately developed systems. In order for this exchange to be useful, the individual systems must agree on the meaning of their exchanged data. That is, the organization must ensure semantic interoperability . This paper provides a theory of semantic values as a unit of exchange that facilitates semantic interoperability betweeen heterogeneous information systems. We show how semantic values can either be stored explicitly or be defined by environments . A system architecture is presented that allows autonomous components to share semantic values. The key component in this architecture is called the context mediator , whose job is to identify and construct the semantic values being sent, to determine when the exchange is meaningful, and to convert the semantic values to the form required by the receiver. Our theory is then applied to the relational model. We provide an interpretation of standard SQL queries in which context conversions and manipulations are transparent to the user. We also introduce an extension of SQL, called Context-SQL (C-SQL), in which the context of a semantic value can be explicitly accessed and updated. Finally, we describe the implementation of a prototype context mediator for a relational C-SQL system."
] | Some systems @cite_1 use inference engines to discover semantic relationships between data from ontology representations. Inference engines usually have specialized query languages distinct from the language used to retrieve data, so user queries have to be either expressed in, or translated into, the language of the inference engine. The ontology is either a global (i.e., domain-independent) ontology or a domain-specific one (i.e., covering only a single domain). Domain-specific ontologies are smaller and more commonly found than global ontologies because they are easier to specify. Additionally, there are systems that use mapping functions exclusively and do not have inference engines @cite_2 @cite_3 . In these systems, mapping functions serve the role of an inference engine. |
[
"abstract: In recent years, the amount of information on the Internet has increased exponentially developing great interest in selective information dissemination systems. The publish subscribe paradigm is particularly suited for designing systems for routing information and requests according to their content throughout wide-area network of brokers. Current publish subscribe systems use limited syntax-based content routing but since publishers and subscribers are anonymous and decoupled in time, space and location, often over wide-area network boundary, they do not necessarily speak the same language. Consequently, adding semantics to current publish subscribe systems is important. In this paper we identify and examine the issues in developing semantic-based content routing for publish subscribe broker networks.",
"@cite_1: Semantic Web Services are a promising combination of Semantic Web and Web service technology, aiming at providing means of automatically executing, discovering and composing semantically marked-up Web services. We envision peer-to-peer networks which allow for carrying out searches in real-time on permanently reconfiguring networks to be an ideal infrastructure for deploying a network of Semantic Web Service providers. However, P2P networks evolving in an unorganized manner suffer from serious scalability problems, limiting the number of nodes in the network, creating network overload and pushing search times to unacceptable limits. We address these problems by imposing a deterministic shape on P2P networks: We propose a graph topology which allows for very efficient broadcast and search, and we provide an efficient topology construction and maintenance algorithm which, crucial to symmetric peer-to-peer networks, does neither require a central server nor super nodes in the network. We show how our scheme can be made even more efficient by using a globally known ontology to determine the organization of peers in the graph topology, allowing for efficient concept-based search.",
"@cite_2: Metadata for the World Wide Web is important, but metadata for Peer-to-Peer (P2P) networks is absolutely crucial. In this paper we discuss the open source project Edutella which builds upon metadata standards defined for the WWW and aims to provide an RDF-based metadata infrastructure for P2P applications, building on the recently announced JXTA Framework. We describe the goals and main services this infrastructure will provide and the architecture to connect Edutella Peers based on exchange of RDF metadata. As the query service is one of the core services of Edutella, upon which other services are built, we specify in detail the Edutella Common Data Model (ECDM) as basis for the Edutella query exchange language (RDF-QEL-i) and format implementing distributed queries over the Edutella network. Finally, we shortly discuss registration and mediation services, and introduce the prototype and application scenario for our current Edutella aware peers."
] | To improve scalability, peer-to-peer database systems are looking in the direction of semantic routing. HyperCuP @cite_1 uses a common ontology to dynamically cluster peers based on the data they contain. A cluster is identified by a concept in the ontology more general than those associated with its members. Concepts in the ontology map to cluster addresses, so a node can determine the appropriate route for a query by looking up more general concepts of the query terms in the concept hierarchy. Edutella @cite_2 uses query hubs (functionally similar to brokers) to collect user metadata and present the peer-to-peer network as a virtual database which users query. All queries are routed through a query hub, which forwards each query only to those nodes that can answer it. |
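The concept-to-cluster lookup described for HyperCuP in this row can be sketched as follows. The ontology, cluster names, and the `route` function here are invented for illustration, not taken from the HyperCuP paper:

```python
# hypothetical concept hierarchy: child concept -> more general parent concept
PARENT = {
    "jazz": "music",
    "rock": "music",
    "music": "media",
    "film": "media",
}

# clusters are addressed by the more general concepts they were built around
CLUSTER_OF = {"music": "cluster-A", "media": "cluster-B"}

def route(concept):
    """Walk up the concept hierarchy until a concept with a cluster
    address is found; the query is forwarded to that cluster."""
    while concept is not None:
        if concept in CLUSTER_OF:
            return CLUSTER_OF[concept]
        concept = PARENT.get(concept)
    return None

print(route("jazz"))  # cluster-A: 'jazz' generalizes to 'music'
```

The point of the design is that routing state grows with the ontology, not with the number of peers: a node only needs the concept hierarchy and the concept-to-cluster map to pick a next hop.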
[
"abstract: Cyclic debugging requires repeatable executions. As non-deterministic or real-time systems typically do not have the potential to provide this, special methods are required. One such method is replay, a process that requires monitoring of a running system and logging of the data produced by that monitoring. We shall discuss the process of preparing the replay, a part of the process that has not been very well described before.",
"@cite_1: To support incremental replay of message-passing applications, processes must periodically checkpoint and the content of some messages must be logged, to break dependencies of the current state of the execution on past events. The paper presents a new adaptive logging algorithm that dynamically decides whether to log a message based on dependencies the incoming message introduces on past events of the execution. The paper discusses the implementation issues of the algorithm and evaluates its performances on several applications, showing how it improves previously known schemes.",
"@cite_2: Abstract For testing of sequential software it is usually sufficient to provide the same input (and program state) in order to reproduce the output. For real-time systems (RTS), on the other hand, we need also to control, or observe, the timing and order of the inputs. If the system additionally is multitasking, we also need to take timing and the concurrency of the executing tasks into account. In this paper we present a method for deterministic testing of multitasking RTS, which allows explorative investigations of real-time system behavior. The method includes an analysis technique that given a set of tasks and a schedule derives all execution orderings that can occur during run-time. These orderings correspond to the possible inter-leavings of the executing tasks. The method also includes a testing strategy that using the derived execution orderings can achieve deterministic, and even reproducible, testing of RTS. Since, each execution ordering can be regarded as a sequential program, it becomes possible to use techniques for testing of sequential software in testing multitasking real-time system software. We also show how this analysis and testing strategy can be extended to encompass distributed computations, communication latencies and the effects of global clock synchronization. The number of execution orderings is an objective measure of the testability of a system since it indicates how many behaviors the system can exhibit during runtime. In the pursuit of finding errors we must thus cover all these execution orderings. The fewer the orderings the better the testability."
] | Zambonelli and Netzer @cite_1 proposed a method that, by taking on-line decisions on whether or not to log a monitored event, deviates from the strict FIFO solution. However, logging will sometimes burden the system with jitter in the execution time, as will the logging algorithm itself. As larger jitter forces more extensive validation efforts @cite_2 , an increase in jitter is counterproductive to the validation effort. |
[
"abstract: Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth.",
"@cite_1: The current Internet infrastructure has very few built-in protection mechanisms, and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability to both denial of service (DoS) attacks and flash crowds in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow, nor to a general increase in traffic, but to a well-defined subset of the traffic --- an aggregate. This paper proposes mechanisms for detecting and controlling such high bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms."
] | In @cite_1 , Mahajan et al. propose mechanisms for detecting and controlling high-bandwidth traffic aggregates. One part of their work discusses how a node determines whether it is congested and how it identifies the aggregate(s) responsible for the congestion. In contrast, we start from the point where the node has identified the undesired flow(s). In that sense, their work and ours are complementary. Another part of their work discusses how much to rate-limit an annoying aggregate due to a DoS attack or a flash crowd. In contrast, our mechanism focuses on DoS attack traffic and attempts to limit it to rate @math . We believe that DoS attacks should be addressed separately from flash crowds: flash crowd aggregates are created by legitimate traffic, so it makes sense to rate-limit them instead of completely blocking them. On the contrary, DoS attack traffic aims at disrupting the victim's operation, so it makes sense to block it. Blocking a traffic flow is simpler and cheaper than rate-limiting it. Moreover, DoS attack traffic is generated by malicious compromised nodes and therefore demands a more intelligent defense mechanism. |
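The contrast drawn in this row, blocking versus rate-limiting, can be made concrete with a sketch. This is not the AITF or pushback mechanism itself; the class names and the explicit-clock token bucket are our illustrative assumptions. Note that blocking needs only a set-membership test, while rate-limiting needs mutable per-flow state updated on every packet:

```python
class BlockFilter:
    """Drop all traffic from flagged flows: a single set-membership test."""
    def __init__(self, blocked_flows):
        self.blocked = set(blocked_flows)

    def admit(self, flow_id):
        return flow_id not in self.blocked


class TokenBucket:
    """Limit one flow to `rate` packets/second with bursts up to `burst`;
    requires per-flow token state and a clock reading per packet."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def admit(self, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


blocker = BlockFilter({"attacker"})
print(blocker.admit("attacker"), blocker.admit("client"))  # False True

tb = TokenBucket(rate=1.0, burst=2)
print([tb.admit(t) for t in (0.0, 0.0, 0.0, 5.0)])  # [True, True, False, True]
```

The extra state and arithmetic in `TokenBucket` is what makes rate-limiting more expensive per filter than outright blocking in a router's fast path.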
[
"abstract: Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth.",
"@cite_1: Denial of service (DoS) attack on the Internet has become a pressing problem. In this paper, we describe and evaluate route-based distributed packet filtering (DPF), a novel approach to distributed DoS (DDoS) attack prevention. We show that DPF achieves proactiveness and scalability, and we show that there is an intimate relationship between the effectiveness of DPF at mitigating DDoS attack and power-law network topology.The salient features of this work are two-fold. First, we show that DPF is able to proactively filter out a significant fraction of spoofed packet flows and prevent attack packets from reaching their targets in the first place. The IP flows that cannot be proactively curtailed are extremely sparse so that their origin can be localized---i.e., IP traceback---to within a small, constant number of candidate sites. We show that the two proactive and reactive performance effects can be achieved by implementing route-based filtering on less than 20 of Internet autonomous system (AS) sites. Second, we show that the two complementary performance measures are dependent on the properties of the underlying AS graph. In particular, we show that the power-law structure of Internet AS topology leads to connectivity properties which are crucial in facilitating the observed performance effects."
] | In @cite_1 , Park and Lee propose DPF (Distributed Packet Filtering), a distributed ingress-filtering mechanism for proactively blocking spoofed flows. In contrast, AITF aims at blocking undesired -- including spoofed -- flows as close as possible to their sources; thus, it cannot be replaced by DPF. On the other hand, DPF blocks most spoofed flows before they reach their destination, i.e., DPF is proactive, whereas AITF is reactive. In that sense, DPF and AITF are complementary. |
[
"abstract: Denial of Service (DoS) attacks are one of the most challenging threats to Internet security. An attacker typically compromises a large number of vulnerable hosts and uses them to flood the victim's site with malicious traffic, clogging its tail circuit and interfering with normal traffic. At present, the network operator of a site under attack has no other resolution but to respond manually by inserting filters in the appropriate edge routers to drop attack traffic. However, as DoS attacks become increasingly sophisticated, manual filter propagation becomes unacceptably slow or even infeasible. In this paper, we present Active Internet Traffic Filtering, a new automatic filter propagation protocol. We argue that this system provides a guaranteed, significant level of protection against DoS attacks in exchange for a reasonable, bounded amount of router resources. We also argue that the proposed system cannot be abused by a malicious node to interfere with normal Internet operation. Finally, we argue that it retains its efficiency in the face of continued Internet growth.",
"@cite_1: Denial of service (DoS) attacks continue to threaten the reliability of networking systems. Previous approaches for protecting networks from DoS attacks are reactive in that they wait for an attack to be launched before taking appropriate measures to protect the network. This leaves the door open for other attacks that use more sophisticated methods to mask their traffic.We propose an architecture called Secure Overlay Services (SOS) that proactively prevents DoS attacks, geared toward supporting Emergency Services or similar types of communication. The architecture is constructed using a combination of secure overlay tunneling, routing via consistent hashing, and filtering. We reduce the probability of successful attacks by (i) performing intensive filtering near protected network edges, pushing the attack point perimeter into the core of the network, where high-speed routers can handle the volume of attack traffic, and (ii) introducing randomness and anonymity into the architecture, making it difficult for an attacker to target nodes along the path to a specific SOS-protected destination.Using simple analytical models, we evaluate the likelihood that an attacker can successfully launch a DoS attack against an SOS-protected network. Our analysis demonstrates that such an architecture reduces the likelihood of a successful attack to minuscule levels."
] | In @cite_1 Keromytis et al. propose SOS (Secure Overlay Services), an architecture for proactively protecting against DoS attacks the communication between a pre-determined location and a specific set of users who have authorized access to communicate with that location. In contrast, AITF addresses the more general problem of protecting any location accessible to all Internet users against DoS attacks. |
[
"abstract: The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"@cite_1: We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"@cite_2: We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance."
] | To see what problems MIPs model, note, from constraints (i) and (iii) of MIPs, that for all @math , any feasible solution will make the set @math have precisely one 1, with all other elements being 0; MIPs thus model many ``choice'' scenarios. Consider, e.g., global routing in VLSI gate arrays @cite_1 . Given are an undirected graph @math , a function @math , and @math , a set @math of paths in @math , each connecting @math to @math ; we must connect each @math with @math using exactly one path from @math , so that the maximum number of times that any edge in @math is used is minimized--an MIP formulation is obvious, with @math being the indicator variable for picking the @math th path in @math . This problem, the vector-selection problem of @cite_1 , and the discrepancy-type problems of , are all modeled by MIPs; many MIP instances, e.g., global routing, are NP-hard. |
[
"abstract: The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"@cite_1: We study problems in multiobjective optimization, in which solutions to a combinatorial optimization problem are evaluated with respect to several cost criteria, and we are interested in the trade-off between these objectives (the so-called Pareto curve). We point out that, under very general conditions, there is a polynomially succinct curve that spl epsiv -approximates the Pareto curve, for any spl epsiv >0. We give a necessary and sufficient condition under which this approximate Pareto curve can be constructed in time polynomial in the size of the instance and 1 spl epsiv . In the case of multiple linear objectives, we distinguish between two cases: when the underlying feasible region is convex, then we show that approximating the multi-objective problem is equivalent to approximating the single-objective problem. If however the feasible region is discrete, then we point out that the question reduces to an old and recurrent one: how does the complexity of a combinatorial optimization problem change when its feasible region is intersected with a hyperplane with small coefficients; we report some interesting new findings in this domain. Finally, we apply these concepts and techniques to formulate and solve approximately a cost-time-quality trade-off for optimizing access to the World-Wide Web, in a model first studied by (1996) (which was actually the original motivation for this work)."
] | Next, there is growing interest in multi-criteria optimization, since different participating individuals and/or organizations may have different objective functions in a given problem instance; see, e.g., @cite_1 . Motivated by this, we study multi-criteria optimization in the setting of covering problems: |
[
"abstract: The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"@cite_1: We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"@cite_2: We consider the problem of approximating an integer program by first solving its relaxation linear program and \"rounding\" the resulting solution. For several packing problems, we prove probabilistically that there exists an integer solution close to the optimum of the relaxation solution. We then develop a methodology for converting such a probabilistic existence proof to a deterministic approximation algorithm. The methodology mimics the existence proof in a very strong sense."
] | Given an ILP, we can find an optimal solution @math to its LP relaxation efficiently, but need to round fractional entries in @math to integers. The idea of randomized rounding is: given a real @math , round @math to @math with probability @math , and round @math to @math with probability @math . This has the nice property that the mean outcome is @math . Starting with this idea, the analysis of @cite_1 produces an integral solution of value at most @math for MIPs (though phrased a bit differently); this is derandomized in @cite_2 . But this does not exploit the sparsity of @math ; the previously-mentioned result of produces an integral solution of value at most @math . |
[
"abstract: The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"@cite_1: We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"@cite_2: This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms for solving these problems are based on general linear programming techniques. The techniques developed in this paper greatly outperform the general methods in many applications, and are extensions of a method previously applied to find approximate solutions to multicommodity flow problems. Our algorithm is a Lagrangian relaxation technique; an important aspect of our results is that we obtain a theoretical analysis of the running time of a Lagrangian relaxation-based algorithm.We give several applications of our algorithms. The new approach yields several orders of magnitude of improvement over the best previously known running times for algorithms for the scheduling of unrelated parallel machines in both the preemptive and the nonpreemptive models, for the job shop problem, for the Held and Karp bound for the traveling salesman problem, for the cutting-stock problem, for the network embedding problem, and for the minimum-cost multicommodity flow problem.",
"@cite_3: Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known.",
"@cite_4: We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"@cite_5: Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known."
] | For CIPs, the idea is to solve the LP relaxation, scale the components of @math suitably, and then perform randomized rounding; see for the details. Starting with this idea, the work of @cite_1 leads to certain approximation bounds; similar bounds are achieved through different means by Plotkin, Shmoys & Tardos @cite_2 . Work of this author @cite_3 improved upon these results by observing a ``correlation'' property of CIPs, getting an approximation ratio of @math . Thus, while the work of @cite_1 gives a general approximation bound for MIPs, the result of gives good results for sparse MIPs. For CIPs, the current-best results are those of @cite_3 ; however, no better results were known for sparse CIPs. |
[
"abstract: The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs.",
"@cite_1: We give a worst-case analysis for two greedy heuristics for the integer programming problem minimize cx , Ax (ge) b , 0 (le) x (le) u , x integer, where the entries in A, b , and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral, the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics.",
"@cite_2: Worst-case bounds are given on the performance of the greedy heuristic for a continuous version of the set covering problem. This generalizes results of Chvatal, Johnson and Lovasz for the 0-1 covering problem. The results for the greedy heuristic and for other heuristics are obtained by treating the covering problem as a limiting case of a generalized location problem for which worst-case results are known. An alternative approach involving dual greedy heuristics leads also to worst-case bounds for continuous packing problems.",
"@cite_3: We give a worst-case analysis for two greedy heuristics for the integer programming problem minimize cx , Ax (ge) b , 0 (le) x (le) u , x integer, where the entries in A, b , and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral, the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics.",
"@cite_4: Worst-case bounds are given on the performance of the greedy heuristic for a continuous version of the set covering problem. This generalizes results of Chvatal, Johnson and Lovasz for the 0-1 covering problem. The results for the greedy heuristic and for other heuristics are obtained by treating the covering problem as a limiting case of a generalized location problem for which worst-case results are known. An alternative approach involving dual greedy heuristics leads also to worst-case bounds for continuous packing problems.",
"@cite_5: Worst-case bounds are given on the performance of the greedy heuristic for a continuous version of the set covering problem. This generalizes results of Chvatal, Johnson and Lovasz for the 0-1 covering problem. The results for the greedy heuristic and for other heuristics are obtained by treating the covering problem as a limiting case of a generalized location problem for which worst-case results are known. An alternative approach involving dual greedy heuristics leads also to worst-case bounds for continuous packing problems."
] | A key corollary of our results is that for families of instances of CIPs, we get a good ( @math or @math ) integrality gap if @math grows at least as fast as @math . Bounds on the result of a greedy algorithm for CIPs relative to the optimal solution are known @cite_1 @cite_2 . Our bound improves that of @cite_1 and is incomparable with @cite_2 ; for any given @math , @math , and the unit vector @math , our bound improves on @cite_2 if @math is more than a certain threshold. As it stands, randomized rounding produces such improved solutions for several CIPs only with a very low, sometimes exponentially small, probability. Thus, it often does not imply a randomized algorithm. To this end, we generalize Raghavan's method of pessimistic estimators to derive an algorithmic (polynomial-time) version of our results for CIPs, in . |
[
"abstract: We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that phase transition on the expected quality of the solution exists as the amount of communication between agents increases.",
"@cite_1: 1. Conceptual outline of current evolutionary theory PART I: ADAPTATION ON THE EDGE OF CHAOS 2. The structure of rugged fitness landscapes 3. Biological implications of rugged fitness landscapes 4. The structure of adaptive landscapes underlying protein evolution 5. Self organization and adaptation in complex systems 6. Coevolving complex systems PART II: THE CRYSTALLIZATION OF LIFE 7. The origins of life: a new view 8. The origin of a connected metabolism 9. Autocatalytic polynucleotide systems: hypercycles, spin glasses and coding 10. Random grammars PART III: ORDER AND ONTOGENY 11. The architecture of genetic regulatory circuits and its evolution 12. Differentiation: the dynamical behaviors of genetic regulatory networks 13. Selection for gene expression in cell type 14. Morphology, maps and the spatial ordering of integrated tissues"
] | There is ongoing work in the field of complexity that attempts to study the dynamics of complex adaptive systems @cite_1 . Our approach is based on ideas borrowed from the use of NK landscapes for the analysis of co-evolving systems. As such, we are using some of the results from that field. However, complexity theory is more concerned with explaining the dynamic behavior of existing systems, while we are more concerned with the engineering of multiagent systems for distributed service allocation. |
[
"abstract: We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that phase transition on the expected quality of the solution exists as the amount of communication between agents increases.",
"@cite_1: This paper surveys the emerging science of how to design a “COllective INtelligence” (COIN). A COIN is a large multi-agent system where: i) There is little to no centralized communication or control. ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the dynamics of the overall system, followed by detailed hand-tuning of the interactions between the components to ensure that they “cooperate” as far as the world utility is concerned. This approach is labor-intensive, often results in highly nonrobust systems, and usually results in design techniques that have limited applicability. In contrast, we wish to solve the COIN design problem implicitly, via the “adaptive” character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess’s paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur’s El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. 
Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology."
] | The Collective Intelligence (COIN) framework @cite_1 shares many of the same goals as our research. They start with a global utility function from which they derive the reward functions for each agent. The agents are assumed to use some form of reinforcement learning. They show that the global utility is maximized when using their prescribed reward functions. They do not, however, consider how agent communication might affect the individual agent's utility landscape. |
[
"abstract: We present a method for solving service allocation problems in which a set of services must be allocated to a set of agents so as to maximize a global utility. The method is completely distributed so it can scale to any number of services without degradation. We first formalize the service allocation problem and then present a simple hill-climbing, a global hill-climbing, and a bidding-protocol algorithm for solving it. We analyze the expected performance of these algorithms as a function of various problem parameters such as the branching factor and the number of agents. Finally, we use the sensor allocation problem, an instance of a service allocation problem, to show the bidding protocol at work. The simulations also show that phase transition on the expected quality of the solution exists as the amount of communication between agents increases.",
"@cite_1: Abstract We envision a future in which the global economy and the Internet will merge, evolving into an information economy bustling with billions of economically motivated software agents that exchange information goods and services with humans and other agents. Economic software agents will differ in important ways from their human counterparts, and these differences may have significant beneficial or harmful effects upon the global economy. It is therefore important to consider the economic incentives and behaviors of economic software agents, and to use every available means to anticipate their collective interactions. We survey research conducted by the Information Economies group at IBM Research aimed at understanding collective interactions among agents that dynamically price information goods or services. In particular, we study the potential impact of widespread shopbot usage on prices, the price dynamics that may ensue from various mixtures of automated pricing agents (or “pricebots”), the potential use of machine-learning algorithms to improve profits, and more generally the interplay among learning, optimization, and dynamics in agent-based information economies. These studies illustrate both beneficial and harmful collective behaviors that can arise in such systems, suggest possible cures for some of the undesired phenomena, and raise fundamental theoretical issues, particularly in the realms of multi-agent learning and dynamic optimization.",
"@cite_2: Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multi-agent systems can use to support overall system behavior significantly more complex than the behavior of the individuals agents. Copyright Kluwer Academic Publishers 1997"
] | The task allocation problem has been studied in , but the service allocation problem we present in this paper has received very little attention. There is also work being done on the analysis of the dynamics of multiagent systems for other domains such as e-commerce @cite_1 and automated manufacturing @cite_2 . It is possible that extensions to our approach will shed some light on the dynamics of these domains. |
[
"abstract: Let HN denote the problem of determining whether a system of multivariate polynomials with integer coefficients has a complex root. It has long been known that HN in P implies P=NP and, thanks to recent work of Koiran, it is now known that the truth of the Generalized Riemann Hypothesis (GRH) yields the implication that HN not in NP implies P is not equal to NP. We show that the assumption of GRH in the latter implication can be replaced by either of two more plausible hypotheses from analytic number theory. The first is an effective short interval Prime Ideal Theorem with explicit dependence on the underlying field, while the second can be interpreted as a quantitative statement on the higher moments of the zeroes of Dedekind zeta functions. In particular, both assumptions can still hold even if GRH is false. We thus obtain a new application of Dedekind zero estimates to computational algebraic geometry. Along the way, we also apply recent explicit algebraic and analytic estimates, some due to Silberman and Sombra, which may be of independent interest.",
"@cite_1: The parallel arithmetic complexities of matrix inversion, solving systems of linear equations, computing determinants and computing the characteristic polynomial of a matrix are shown to have the same growth rate. Algorithms are given that compute these problems in @math steps using a number of processors polynomial in n. (n is the order of the matrix of the problem.)"
] | This result immediately implies that @math can be solved by solving a linear system with @math variables and equations over the rationals (with total bit-size @math ), thus easily yielding @math . That @math then follows immediately from the fact that linear algebra can be efficiently parallelized @cite_1 . |
[
"abstract: The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.",
"@cite_1: Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scientific topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary.",
"@cite_2: The number of research publications in various disciplines is growing exponentially. Researchers and scientists are increasingly finding themselves in the position of having to quickly understand large amounts of technical material. In this paper we present the first steps in producing an automatically generated, readily consumable, technical survey. Specifically we explore the combination of citation information and summarization techniques. Even though prior work (, 2006) argues that citation text is unsuitable for summarization, we show that in the framework of multi-document survey creation, citation texts can play a crucial role.",
"@cite_3: The old Asian legend about the blind men and the elephant comes to mind when looking at how different authors of scientific papers describe a piece of related prior work. It turns out that different citations to the same paper often focus on different aspects of that paper and that neither provides a full description of its full set of contributions. In this article, we will describe our investigation of this phenomenon. We studied citation summaries in the context of research papers in the biomedical domain. A citation summary is the set of citing sentences for a given article and can be used as a surrogate for the actual article in a variety of scenarios. It contains information that was deemed by peers to be important. Our study shows that citation summaries overlap to some extent with the abstracts of the papers and that they also differ from them in that they focus on different aspects of these papers than do the abstracts. In addition to this, co-cited articles (which are pairs of articles cited by another article) tend to be similar. We show results based on a lexical similarity metric called cohesion to justify our claims. © 2008 Wiley Periodicals, Inc."
] | In the pilot task, we focus on citations and the text spans they cite in the original article. The importance of citations for summarization is discussed in @cite_1 , which compared summaries based on three different sources: only the reference article, only the abstract, and only the citations. The best results were obtained with citations. @cite_2 also showed that the information in citations differs from what can be gleaned from the abstract or reference article alone. However, it is cautioned that citations often focus on very specific aspects of a paper @cite_3 . |
[
"abstract: The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.",
"@cite_1: Citations play an essential role in navigating academic literature and following chains of evidence in research. With the growing availability of large digital archives of scientific papers, the automated extraction and analysis of citations is becoming increasingly relevant. However, existing approaches to citation extraction still fall short of the high accuracy required to build more sophisticated and reliable tools for citation analysis and corpus navigation. In this paper, we present techniques for high accuracy extraction of citations and references from academic papers. By collecting multiple sources of evidence about entities from documents, and integrating citation extraction, reference segmentation, and citation-reference matching, we are able to significantly improve performance in subtasks including citation identification, author named entity recognition, and citation-reference matching. Applying our algorithm to previously-unseen documents, we demonstrate high F-measure performance of 0.980 for citation extraction, 0.983 for author named entity recognition, and 0.948 for citation-reference matching.",
"@cite_2: Scientific papers revolve around citations, and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse. In this paper, we introduce the scientific attribution task, which links different linguistic expressions to citations. We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning, a rhetorical classification task.",
"@cite_3: In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency."
] | Because of this recognized importance of citation information, research has also been done on properly tagging or marking the actual citation. Powley and Dale @cite_1 give insight into recognizing text that is a citation. Siddharthan and Teufel demonstrate how this is useful in reducing the noise when comparing citation text to reference text @cite_2 . They also introduce "scientific attribution", which can help in discourse classification. The importance of discourse classification is further developed in @cite_3 : the authors show how identifying discourse facets helps produce coherent summaries. |
[
"abstract: The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.",
"@cite_1: To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focuses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20% of the original can be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summaries. The trends in our results are in agreement with those of Edmundson, who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus.",
"@cite_2: In this paper, a method based on part-of-speech tagging (PoS) is used to extract bibliographic reference structure. This method operates on a roughly structured ASCII file, produced by OCR. Because of the heterogeneity of the reference structure, the method acts in a bottom-up way, without an a priori model, gathering structural elements from basic tags to sub-fields and fields. Significant tags are first grouped in homogeneous classes according to their grammar categories and then reduced to canonical forms corresponding to record fields: \"authors\", \"title\", \"conference name\", \"date\", etc. Non-labelled tokens are integrated into one field or another by either applying PoS correction rules or using a structure model generated from well-detected records. The designed prototype operates with great satisfaction on different record layouts and character recognition qualities. Without manual intervention, 96.6% of words are correctly attributed, and about 75.9% of the 2500 references are completely segmented.",
"@cite_3: To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focuses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20% of the original can be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summaries. The trends in our results are in agreement with those of Edmundson, who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus."
] | The choice of proper features is very important in handling citation text. Previous research @cite_1 @cite_2 gives insight into these features. We find in @cite_1 an in-depth analysis of the usefulness of certain features. As a result, we have used it to guide our selection of which features to include. |
[
"abstract: The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system’s performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.",
"@cite_1: Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as a Markov Random Field tuned to detect the patterns that context data create, and employ a Belief Propagation mechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.",
"@cite_2: A citing sentence is one that appears in a scientific article and cites previous work. Citing sentences have been studied and used in many applications. For example, they have been used in scientific paper summarization, automatic survey generation, paraphrase identification, and citation function classification. Citing sentences that cite multiple papers are common in scientific writing. This observation should be taken into consideration when using citing sentences in applications. For instance, when a citing sentence is used in a summary of a scientific paper, only the fragments of the sentence that are relevant to the summarized paper should be included in the summary. In this paper, we present and compare three different approaches for identifying the fragments of a citing sentence that are related to a given target reference. Our methods are: word classification, sequence labeling, and segment classification. Our experiments show that segment classification achieves the best results."
] | In addition to these features, we have to consider that multiple citation markers may be present in a sentence. Thus, only certain parts of a sentence may be relevant to identifying the target of a particular citation marker. Qazvinian and Radev @cite_1 share an approach to find the fragment of a sentence that applies to a citation, especially in the case of sentences with multiple citation markers. The research of Abu-Jbara and Radev @cite_2 further argues that a fragment need not always be contiguous. |
[
"abstract: The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the coupling module, which consists of two branches. One branch adopts position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization schemes to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e., a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Code will be made publicly available.",
"@cite_1: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"@cite_2: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"@cite_3: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X, and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single-stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.",
"@cite_4: How can a single fully convolutional neural network (FCN) perform on object detection? We introduce DenseBox, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences through all locations and scales of an image. Our contribution is two-fold. First, we show that a single FCN, if designed and optimized carefully, can detect multiple different objects extremely accurately and efficiently. Second, we show that when incorporating landmark localization during multi-task learning, DenseBox further improves object detection accuracy. We present experimental results on public benchmark datasets, including MALF face detection and KITTI car detection, that indicate our DenseBox is the state-of-the-art system for detecting challenging objects such as faces and cars.",
"@cite_5: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101."
] | In order to leverage the great success of deep neural networks for image classification @cite_1 , numerous object detection methods based on deep learning have been proposed @cite_2 . Although there are end-to-end detection frameworks, like SSD @cite_3 , YOLO, and DenseBox @cite_4 , region-based systems (Fast/Faster R-CNN and R-FCN) still dominate the detection accuracy on generic benchmarks @cite_5 . |
[
"abstract: While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way, with a very large number of possible states (e.g. jogging fast through downtown Sydney in heavy rain at night, tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery.",
"@cite_1: In this paper, we present Smart Diary, a novel smartphone based framework that analyzes mobile sensing data to infer, predict, and summarize people's daily activities, such as their behavioral patterns and life styles. Such activities are then used as the basis for knowledge representation, which generates personal digital diaries in an automatic manner. As users do not need to intentionally participate into this process, Smart Diary is able to make inferences and predictions based on a wide range of information sources, such as the phones' sensor readings, locations, and interaction history with the users, by integrating such information into a sustainable mining model. This model is specifically developed to handle heterogeneous and noisy sensing data, and is made to be extensible in that users can define their own logic rules to express short-term, mid-term, and long-term event patterns and predictions. Our evaluation results are based on the Android platform, and they demonstrate that the Smart Diary framework can provide accurate and easy-to-read diaries for end users without their interventions.",
"@cite_2: There are many studies that collect and store life logs for personal memory. This paper explains how a system can create someone's life log in an inexpensive way to share daily life events with family or friends through social networks or messaging. In the modern world, where people are usually busier than ever, family members are geographically distributed due to the globalization of companies, and humans are inundated with more information than they can process, ambient communication through mobile media or internet-based communication can provide rich social connections to friends and family. People can stay ubiquitously connected to the loved ones they care about by sharing awareness information in a passive way. For users who wish to have a persistent existence in a virtual world - to let their friends know about their current activity or to inform their caretakers - new technology is needed. Research that aims to bridge real life and virtual worlds (e.g., Second Life, Facebook) to simulate virtual living or log daily events, while challenging and promising, is currently rare. Only very recently has the mapping of real-world activities to virtual worlds been attempted, by processing multiple sensors' data along with inference logic for real-world activities. Detecting or inferring human activity using such simple sensor data is often inaccurate, insufficient, and expensive. Hence, this paper proposes to infer human activity from environmental sound cues and common-sense knowledge, which is an inexpensive alternative to approaches based on other sensors (e.g., accelerometers). Because of their ubiquity, we believe that mobile phones or hand-held devices (HHD) are ideal channels to achieve a seamless integration between the physical and virtual worlds. Therefore, the paper presents a prototype to log daily events with a mobile phone based application by inferring activities from environmental sound cues. 
To the best of our knowledge, this system pioneers the use of environmental sound based activity recognition in mobile computing to reflect one's real-world activity in virtual worlds.",
"@cite_3: Knowing the users' personality can be a strategic advantage for the design of adaptive and personalized user interfaces. In this paper, we present the results of a first trial conducted with the aim of inferring people's personality traits based on their mobile phone call behavior. Initial findings corroborate the efficacy of using call detail records (CDR) and Social Network Analysis (SNA) of the call graph to infer the Big Five personality factors. On-going work includes a large-scale study that shall refine the accuracy of the models with a reduced margin of error.",
"@cite_4: In many cases, visitors come to a museum in small groups. In such cases, the visitors’ social context has an impact on their museum visit experience. Knowing the social context may allow a system to provide socially aware services to the visitors. Evidence of the social context can be gained from observing and monitoring the visitors’ social behavior. However, automatic identification of a social context requires, on the one hand, identifying typical social behavior patterns and, on the other, using relevant sensors that measure various signals and reason about them to detect the visitors’ social behavior. We present such typical social behavior patterns of visitor pairs, identified by observation, and then the instrumentation, detection process, reasoning, and analysis of measured signals that enable us to detect the visitors’ social behavior. Simple sensor data, such as proximity to other visitors, proximity to museum points of interest, and visitor orientation, are used to detect social synchronization, attention to the social companion, and interest in museum exhibits. The presented approach may allow future research to offer adaptive services to museum visitors based on their social context, to better support their group visit experience.",
"@cite_5: SenSay is a context-aware mobile phone that adapts to dynamically changing environmental and physiological states. In addition to manipulating ringer volume, vibration, and phone alerts, SenSay can provide remote callers with the ability to communicate the urgency of their calls, make call suggestions to users when they are idle, and provide the caller with feedback on the current status of the SenSay user. A number of sensors including accelerometers, light, and microphones are mounted at various points on the body to provide data about the user’s context. A decision module uses a set of rules to analyze the sensor data and manage a state machine composed of uninterruptible, idle, active and normal states. Results from our threshold analyses show a clear delineation can be made among several user states by examining sensor data trends. SenSay augments its contextual knowledge by tapping into applications such as electronic calendars, address books, and task lists."
] | The type of contextual recommendations that can be made is shaped by the sensors and signal processing used. Nowadays it is possible to accurately detect activities such as biking, driving, running, or walking based on smartphone sensors @cite_5 , or based on environmental sound cues @cite_2 . It is also possible to detect personality traits based on phone call patterns and the user's social network data @cite_3 . Similarly, interest in an object can be inferred from ambient noise levels and the positions of people and objects in relation to each other @cite_4 . In the SenSay system, phone settings and preferences are set based on detected environmental and physiological states @cite_5 . |
[
"abstract: While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way, with a very large number of possible states (e.g. jogging fast through downtown Sydney in heavy rain at night, tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery.",
"@cite_1: As the World Wide Web becomes a large source of digital music, music recommendation systems are in great demand. There are several music recommendation systems for both commercial and academic use, which treat the user preference as fixed. However, since the music preferred by a user may change depending on the context, the conventional systems have inherent problems. This paper proposes a context-aware music recommendation system (CA-MRS) that exploits a fuzzy system, Bayesian networks, and utility theory in order to recommend appropriate music with respect to the context. We have analyzed the recommendation process and performed a subjective test to show the usefulness of the proposed system.",
"@cite_2: In this paper, we present a new heartbeat- and preference-aware music recommendation system. The system not only recommends a music playlist based on the user’s music preference, but also generates the playlist based on the user’s heartbeat. If the user’s heartbeat is higher than the normal range, which is 60-100 beats per minute (age 18 and over) or 70-100 beats per minute (age 6-18), the system generates a user-preferred playlist using a Markov decision process to bring the user’s heartbeat back to the normal range with minimum time cost; if the user’s heartbeat is normal, the system generates a user-preferred playlist to keep the heartbeat within the normal range; if the user’s heartbeat is lower than the normal range, the system generates a user-preferred playlist using a Markov decision process to lift the heartbeat back to the normal range with minimum time cost.",
"@cite_3: The amount of music consumed while on the move has been spiraling during the past couple of years, which requests for intelligent music recommendation techniques. In this demo paper, we introduce a context-aware mobile music player named \"Mobile Music Genius\" (MMG), which seamlessly adapts the music playlist on the fly, according to the user context. It makes use of a comprehensive set of features derived from sensor data, spatiotemporal information, and user interaction to learn which kind of music a listeners prefers in which context. We describe the automatic creation and adaptation of playlists and present results of a study that investigates the capabilities of the gathered user context features to predict the listener's music preference.",
"@cite_4: Mobile devices such as smart phones are becoming popular, and real-time access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"@cite_5: We present a system to automatically generate soundtracks for user-generated outdoor videos (UGV) based on concurrently captured contextual sensor information with mobile apps for the ACM Multimedia 2012 Google challenge: Automatic Music Video Generation. Our method addresses the use case of making \"a video much more attractive for sharing by adding a matching soundtrack to it.\" Our system correlates viewable scene information from sensors with geographic contextual tags from OpenStreetMap. The co-occurrence of geo-tags and mood tags is investigated for a set of categories from the web site Foursquare.com, and a mapping from geo-tags to mood tags is obtained. Finally, a music retrieval component returns music based on matching mood tags. The experimental results show that our system can successfully create soundtracks that are related to the mood and situation of UGVs and therefore enhance the enjoyment of viewers. Our system sends only sensor data to a cloud service and is therefore bandwidth efficient since video data does not need to be transmitted for analysis."
] | With improvements in smartphone technology, there is a lot of potential for using rich contextual information to improve recommendations, in particular considering that people prefer to listen to different music in different contexts. Among the first to propose a context-aware music recommendation system are @cite_1 . They used weather data (from sensors and external data sources) and user information to predict the appropriate music genre, tempo, and mood. Music can also be recommended based on the user's heartbeat, to bring its rate back to a normal level @cite_2 ; on activities detected automatically (e.g. running, walking, sleeping, working, studying, and shopping) @cite_3 ; on driving style, road type, landscape, sleepiness, traffic conditions, mood, weather, and natural phenomena @cite_4 ; and on emotional state, to help transition to a desired state. Soundtracks have also been recommended for smartphone videos based on location (using GPS and compass data for orientation) and extra information from third-party services such as Foursquare @cite_5 . |
[
"abstract: While prior work on context-based music recommendation focused on a fixed set of contexts (e.g. walking, driving, jogging), we propose to use multiple sensors and external data sources to describe momentary (ephemeral) context in a rich way, with a very large number of possible states (e.g. jogging fast in downtown Sydney under heavy rain at night while tired and angry). With our approach, we address the problems which current approaches face: 1) a limited ability to infer context from missing or faulty sensor data; 2) an inability to use contextual information to support novel content discovery.",
"@cite_1: A goal for the creation and improvement of music recommendation is to retrieve users' preferences and select music adapted to those preferences. Although existing research has achieved a certain degree of success and inspired further progress, the problems of cold-start recommendation and the limitation to similar music have been pointed out. Hence we incorporate the concept of serendipity using 'renso' alignments over Linked Data to satisfy users' music playing needs. We first collect music-related data from Last.fm, Yahoo! Local, Twitter and LyricWiki, and then create the 'renso' relation on the Music Linked Data. Our system proposes a way of finding suitable but novel music according to the users' contexts. Finally, preliminary experiments confirm the balance of accuracy and serendipity of the music recommendation."
] | These examples use sensors and external data sources for music recommendation. Some of these context-aware music discovery systems recommend not just relevant, but new music to users @cite_1 . Our contribution is to combine rich context in a way that is a) fault tolerant, and b) aims to facilitate music discovery, by constructing a momentary ephemeral context. |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5) to the ImageNet-supervised counterpart (24.4) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd",
"@cite_2: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"@cite_3: There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"@cite_4: We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.",
"@cite_5: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"@cite_6: Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. A greedy layer-wise unsupervised learning algorithm was recently introduced for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.",
"@cite_7: While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.",
"@cite_8: We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.",
"@cite_9: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
] | Unsupervised learning of visual representations is a research area of particular interest. Approaches to unsupervised learning can be roughly categorized into two main streams: (i) generative models, and (ii) self-supervised learning. Earlier methods for generative models include Auto-Encoders @cite_1 @cite_2 @cite_3 @cite_4 and Restricted Boltzmann Machines (RBMs) @cite_5 @cite_6 @cite_7 . For example, Le et al. @cite_4 trained a multi-layer auto-encoder on a large-scale dataset of YouTube videos: although no label is provided, some neurons in high-level layers can recognize cats and human faces. Recent generative models such as Generative Adversarial Networks @cite_9 and Variational Auto-Encoders are capable of generating more realistic images. The generated examples, or the neural networks that learn to generate examples, can be exploited to learn representations of data. |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compared to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5) to the ImageNet-supervised counterpart (24.4) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"@cite_2: We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"@cite_3: Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"@cite_4: The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"@cite_5: Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, which does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.",
"@cite_6: Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion, and more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5 ). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.",
"@cite_7: This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"@cite_8: Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity priors. We establish a connection between slow feature learning and metric learning. Using this connection we define \"temporal coherence\" -- a criterion which can be used to set hyper-parameters in a principled and automated manner. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labels.",
"@cite_9: Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"@cite_10: Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"@cite_11: Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, which does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.",
"@cite_12: This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"@cite_13: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"@cite_14: Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"@cite_15: We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.",
"@cite_16: We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"@cite_17: We investigate and improve self-supervision as a drop-in replacement for ImageNet pretraining, focusing on automatic colorization as the proxy task. Self-supervised training has been shown to be more promising for utilizing unlabeled data than other, traditional unsupervised learning methods. We build on this success and evaluate the ability of our self-supervised network in several contexts. On VOC segmentation and classification tasks, we present results that are state-of-the-art among methods not using ImageNet labels for pretraining representations. Moreover, we present the first in-depth analysis of self-supervision via colorization, concluding that formulation of the loss, training details and network architecture play important roles in its effectiveness. This investigation is further expanded by revisiting the ImageNet pretraining paradigm, asking questions such as: How much training data is needed? How many labels are needed? How much do features change when fine-tuned? We relate these questions back to self-supervision by showing that colorization provides a similarly powerful supervisory signal as various flavors of ImageNet pretraining.",
"@cite_18: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"@cite_19: Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"@cite_20: We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks."
] | Self-supervised learning is another popular stream for learning invariant features. Visual invariance can be captured from the same instance or scene observed across a sequence of video frames @cite_16 @cite_2 @cite_3 @cite_15 @cite_5 @cite_6 @cite_7 @cite_8 . For example, Wang and Gupta @cite_16 leverage tracking of objects in videos to learn visual invariance within individual objects; Jayaraman and Grauman @cite_3 train a Siamese network to model the ego-motion between two frames of a scene; Mathieu et al. @cite_5 propose to learn representations by predicting future frames; Pathak et al. @cite_7 train a network to segment foreground objects, where the masks are acquired via motion cues. On the other hand, common characteristics of different object instances can also be mined from data @cite_13 @cite_14 @cite_15 @cite_16 @cite_17 . For example, relative positions of image patches @cite_13 may reflect feasible spatial layouts of objects; possible colors can be inferred @cite_14 @cite_15 if the networks can relate colors to object appearances. Rather than relying on temporal changes in video, these methods are able to exploit still images. |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.",
"@cite_2: Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.",
"@cite_3: This paper addresses the well-established problem of unsupervised object discovery with a novel method inspired by weakly-supervised approaches. In particular, the ability of an object patch to predict the rest of the object (its context) is used as supervisory signal to help discover visually consistent object clusters. The main contributions of this work are: 1) framing unsupervised clustering as a leave-one-out context prediction task; 2) evaluating the quality of context prediction by statistical hypothesis testing between thing and stuff appearance models; and 3) an iterative region prediction and context alignment approach that gradually discovers a visual object cluster together with a segmentation mask and fine-grained correspondences. The proposed method outperforms previous unsupervised as well as weakly-supervised object discovery approaches, and is shown to provide correspondences detailed enough to transfer keypoint annotations.",
"@cite_4: Given a large dataset of images, we seek to automatically determine the visually similar object and scene classes together with their image segmentation. To achieve this we combine two ideas: (i) that a set of segmented objects can be partitioned into visual object classes using topic discovery models from statistical text analysis; and (ii) that visual object classes can be used to assess the accuracy of a segmentation. To tie these ideas together we compute multiple segmentations of each image and then: (i) learn the object classes; and (ii) choose the correct segmentations. We demonstrate that such an algorithm succeeds in automatically discovering many familiar objects in a variety of image datasets, including those from Caltech, MSRC and LabelMe.",
"@cite_5: We seek to discover the object categories depicted in a set of unlabelled images. We achieve this using a model developed in the statistical text literature: probabilistic latent semantic analysis (pLSA). In text analysis, this is used to discover topics in a corpus using the bag-of-words document representation. Here we treat object categories as topics, so that an image containing instances of several categories is modeled as a mixture of topics. The model is applied to images by using a visual analogue of a word, formed by vector quantizing SIFT-like region descriptors. The topic discovery approach successfully translates to the visual domain: for a small set of objects, we show that both the object categories and their approximate spatial layout are found without supervision. Performance of this unsupervised method is compared to the supervised approach of (2003) on a set of unseen images containing only one object per image. We also extend the bag-of-words vocabulary to include 'doublets' which encode spatially local co-occurring regions. It is demonstrated that this extended vocabulary gives a cleaner image segmentation. Finally, the classification and segmentation methods are applied to a set of images containing multiple objects per image. These results demonstrate that we can successfully build object class models from an unsupervised analysis of images.",
"@cite_6: Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that \"similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distance measure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.",
"@cite_7: Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually use conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as they naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperforms the conventional visual features by about 10%, obtaining the best reported performance in the literature.",
"@cite_8: Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models."
] | Our work is also closely related to mid-level patch clustering @cite_1 @cite_2 @cite_3 and unsupervised discovery of semantic classes @cite_4 @cite_5 as we attempt to find reliable clusters in our affinity graph. In addition, the ranking function used in this paper is related to deep metric learning with Siamese architectures @cite_6 @cite_7 @cite_8 . |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"@cite_2: Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
] | Our generic framework can be instantiated with any two self-supervised methods that can respectively learn inter- and intra-instance invariance. In this paper we adopt Doersch et al.'s @cite_1 context prediction method to build inter-instance invariance, and Wang and Gupta's @cite_2 tracking method to build intra-instance invariance. We analyze their behaviors as follows. |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations."
] | The context prediction task in @cite_1 randomly samples a patch (blue in Figure ) and one of its eight neighbors (red), and trains the network to predict their relative position, defined as an 8-way classification problem. In the first two examples in Figure , the context prediction model is able to predict that the "leg" patch is below the "face" patch of the cat, indicating that the model has learned some commonality of spatial layout from the training data. However, the model would fail if the pose, viewpoint, or deformation of the object is changed drastically, e.g., in the third example of Figure --- unless the dataset is diversified and large enough to include gradually changing poses, it is hard for the models to learn that the changed pose can be of the same object type. |
[
"abstract: Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"@cite_1: Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"@cite_2: Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to the same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
] | On the other hand, these changes can be more successfully captured by the visual tracking method presented in @cite_1 , e.g., see A A' and B B' in Figure . But by tracking an identical instance we cannot associate different instances of the same semantics. Thus we expect the representations learned in @cite_1 to be weak in handling the variations between different objects in the same category. |
[
"abstract: Unsupervised domain translation has recently achieved impressive performance with rapidly developed generative adversarial network (GAN) and availability of sufficient training data. However, existing domain translation frameworks form in a disposable way where the learning experiences are ignored. In this work, we take this research direction toward unsupervised meta domain translation problem. We propose a meta translation model called MT-GAN to find parameter initialization of a conditional GAN, which can quickly adapt for a new domain translation task with limited training samples. In the meta-training procedure, MT-GAN is explicitly fine-tuned with a primary translation task and a synthesized dual translation task. Then we design a meta-optimization objective to require the fine-tuned MT-GAN to produce good generalization performance. We demonstrate effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks. We show that our proposed approach significantly outperforms the existing domain translation methods when using no more than @math training samples in each image domain.",
"@cite_1: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"@cite_2: Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"@cite_3: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"@cite_4: Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for task of facial expression synthesis. The most successful architecture is StarGAN, that conditions GANs’ generation process with images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper, we introduce a novel GAN conditioning scheme based on Action Units (AU) annotations, which describes in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combine several of them. Additionally, we propose a fully unsupervised strategy to train the model, that only requires images annotated with their activated AUs, and exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation show that our approach goes beyond competing conditional generators both in the capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements, as in the capacity of dealing with images in the wild.",
"@cite_5: We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.",
"@cite_6: We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"@cite_7: Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting."
] | In recent years, the generative adversarial network (GAN) @cite_1 has attracted wide interest in generative modeling. In a GAN, a generator is trained to produce fake but plausible images, while a discriminator is trained to distinguish between real and fake images. The conditional generative adversarial network (CGAN) @cite_2 is the conditional version of the GAN, in which the generator is fed a noise vector together with additional data (e.g., class labels) that conditions both the generator and the discriminator. The deep convolutional generative adversarial network (DCGAN) @cite_3 is an extensive exploration of convolutional neural network architectures in GANs and contributes to improving the quality of image synthesis. GANs have been successfully applied to many image generation tasks @cite_4 @cite_5 @cite_6 @cite_7 . Our method adopts the adversarial loss to render the images produced by the generators realistic in the target domain and to improve the meta-learners' generalization through meta-training. |
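The two-player objective described above has a closed-form optimum: when the generator matches the data distribution, the best discriminator outputs ½ on every sample and the value function V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))] equals −log 4. A minimal NumPy sketch of this check (illustrative only; `gan_value` is a hypothetical helper, not code from @cite_1):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the theoretical optimum the discriminator outputs 1/2 on every sample,
# and the value function equals -log 4.
v_opt = gan_value(np.full(1000, 0.5), np.full(1000, 0.5))

# A discriminator that separates real (0.9) from fake (0.1) achieves a
# strictly higher value, which is what adversarial training exploits.
v_sep = gan_value(np.full(1000, 0.9), np.full(1000, 0.1))
```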
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as a lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general-part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."
] | Early work on lane detection and departure warning systems dates back to the 1990s. Previously proposed methods in this area can be classified as low-level image-feature based, machine/deep learning (DL) based, or a hybrid of the two. The most widely used LDW systems are either vision-based (e.g., histogram analysis, Hough transformation) or, more recently, DL-based. In general, vision-based and DL lane detection systems start by capturing images using a selected type of sensor, pre-processing the image, and then detecting and tracking lane lines. While many types of sensors have been proposed for capturing lane images, such as radar, laser range finders, lidar, and active infrared, the most widely used device is a mobile camera. An alternative to vision- and DL-based systems is the use of global positioning systems (GPS) combined with geographic information systems (GIS) @cite_1 . However, current GPS-based LDW can be unreliable, mainly because of the often poor reliability and resolution of GPS location and speed measurements, signal loss (e.g., in covered areas), and inaccurate map databases. Due to these limitations, most modern research in LDW utilizes neural-network-based solutions in some form. |
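The detect-then-track pipeline described above typically smooths noisy per-frame lane measurements with a Kalman filter (the surveyed paper uses a fix-lag variant). The following is a hedged 1-D sketch of the basic predict/update cycle tracking a constant lane-boundary offset; the static state model, noise values, and names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def kalman_step(x, P, z, q=1e-4, r=0.04):
    """One predict/update cycle of a 1-D Kalman filter with a static state model."""
    # Predict: the state is assumed constant; uncertainty grows by process noise q.
    P = P + q
    # Update: blend the prediction with measurement z using the Kalman gain K.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

rng = np.random.default_rng(0)
true_offset = 3.2          # assumed lane-boundary offset (metres)
x, P = 0.0, 1.0            # initial guess and its variance
for _ in range(200):
    z = true_offset + rng.normal(0.0, 0.2)   # noisy per-frame detection
    x, P = kalman_step(x, P, z)
```

After a couple of hundred frames the estimate settles near the true offset with a variance far below the per-frame measurement noise.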
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.",
"@cite_2: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"@cite_3: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn."
] | Neural networks have been a subject of investigation in the autonomous vehicle field for some time. Among the very first attempts to use a neural network for vehicle navigation, ALVINN @cite_1 is considered a pioneer and one of the most influential works. The model comprises a shallow neural network that predicts driving actions from images captured by a forward-facing camera mounted on board a vehicle travelling on roads with few obstacles, and it pointed to the potential of neural networks for autonomous navigation. More recently, advances in object detection such as the contributions of DL and the region-based convolutional neural network (R-CNN) @cite_2 in combination with the Region Proposal Network (RPN) @cite_3 have produced models such as Mask R-CNN that provide state-of-the-art predictions. New trends in neural network object detection include segmentation, which we apply in our model as an estimator for LDW. |
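Region-proposal detectors such as R-CNN @cite_2 and the RPN @cite_3 rank and deduplicate candidate boxes by their overlap. A self-contained sketch of the standard intersection-over-union (IoU) measure they rely on (a generic illustration, not code from either paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes yield no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes offset by one unit overlap in a 1x1 region: IoU = 1 / (4 + 4 - 1).
overlap = iou((0.0, 0.0, 2.0, 2.0), (1.0, 1.0, 3.0, 3.0))
```

Proposals whose IoU with a higher-scoring box exceeds a threshold are suppressed; the same measure defines positive/negative anchors during RPN training.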
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: Statistics show that worldwide motor vehicle collisions lead to significant deaths and disabilities as well as substantial financial costs to both society and the individuals involved. Unintended lane departure is a leading cause of road fatalities by the collision. To reduce the number of traffic accidents and to improve driver’s safety, the lane departure warning (LDW) system has emerged as a promising tool. Vision-based lane detection and departure warning system has been investigated over two decades. During this period, many different problems related to lane detection and departure warning have been addressed. This paper provides an overview of current LDW system, describing in particular pre-processing, lane models, lane detection techniques and departure warning system.",
"@cite_2: This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increment road safety. Based on a full-custom massively parallel hardware, it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both left and right stereo images; the left is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used for the detection of free-space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control-panel to give visual feedbacks to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds up to 80 km h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.",
"@cite_3: This paper presents a lane-departure identification (LDI) system of a traveling vehicle on a structured road with lane marks. As is the case with modified version of the previous EDF-based LDI approach [J.W. Lee, A machine vision system for lane-departure detection, CVIU 86 (2002) 52-78], the new system increases the number of lane-related parameters and introduces departure ratios to determine the instant of lane departure and a linear regression (LR) to minimize wrong decisions due to noise effects. To enhance the robustness of LDI, we conceive of a lane boundary pixel extractor (LBPE) capable of extracting pixels expected to be on lane boundaries. Then, the Hough transform utilizes the pixels from the LBPE to provide the lane-related parameters such as an orientation and a location parameter. The fundamental idea of the proposed LDI is based on an observation that the ratios of orientations and location parameters of left- and right-lane boundaries are equal to one as far as the optical axis of a camera mounted on a vehicle is coincident with the center of lane. The ratios enable the lane-related parameters and the symmetrical property of both lane boundaries to be connected. In addition, the LR of the lane-related parameters of a series of successive images plays the role of determining the trend of a vehicle's traveling direction and the error of the LR is used to avoid a wrong LDI. We show the efficiency of the proposed LDI system with some real images.",
"@cite_4: This paper presents a feature-based machine vision system for estimating lane-departure of a traveling vehicle on a road. The system uses edge information to define an edge distribution function (EDF), the histogram of edge magnitudes with respect to edge orientation angle. The EDF enables the edge-related information and the lane-related information to be connected. Examining the EDF by the shape parameters of the local maxima and the symmetry axis results in identifying whether a change in the traveling direction of a vehicle has occurred. The EDF minimizes the effect of noise and the use of heuristics, and eliminates the task of localizing lane marks. The proposed system enhances the adaptability to cope with the random and dynamic environment of a road scene and leads to a reliable lane-departure warning system."
] | Image-feature based lane detection is a well-researched area of computer vision @cite_1 . The majority of existing image-based methods use detected lane line features such as colors, gray-scale intensities, and textural information to perform edge detection. This approach is very sensitive to illumination and environmental conditions. In the Generic Obstacle and Lane Detection (GOLD) system proposed by Bertozzi and Broggi @cite_2 , lane detection used inverse perspective mapping to remove the perspective effect and horizontal black-white-black transitions. Their methodology was able to locate lane markings even in the presence of shadows or other artifacts in about 95% of cases. In 2005, Lee and Yi @cite_3 introduced the use of the Sobel operator plus non-local maximum suppression (NLMS). Their system built upon the method previously proposed by Lee @cite_4 , which introduced a linear lane model and the edge distribution function (EDF), adding a lane boundary pixel extractor (LBPE) plus the Hough transform. The model was able to overcome the weak points of the EDF-based lane-departure identification (LDI) system by increasing the number of lane-related parameters. The LBPE improved the robustness of lane detection by minimizing missed detections and false positives (FPs) through linear regression analysis. Despite these improvements, the model performed poorly at detecting curved lanes. |
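The Sobel-gradient and edge distribution function (EDF) machinery discussed above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumed conventions (3×3 Sobel kernels, 18 orientation bins over 0-180°), not the cited systems' code:

```python
import numpy as np

def sobel_gradients(img):
    """Convolve with 3x3 Sobel kernels; return gradient magnitude and angle (deg)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx))

def edge_distribution_function(mag, ang, bins=18):
    """EDF: histogram of edge magnitudes with respect to orientation angle."""
    hist, _ = np.histogram(ang % 180.0, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist

# Synthetic image with a vertical step edge: all gradient energy is horizontal,
# so the EDF concentrates its mass in the first orientation bin.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag, ang = sobel_gradients(img)
edf = edge_distribution_function(mag, ang)
```

Examining the local maxima and symmetry of such a histogram is what lets the EDF-based systems infer lane orientation without localizing lane marks explicitly.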
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as a lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general-part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."
] | Some of the low-level image-feature based models include an initial layer that normalizes illumination across consecutive images; other methods rely on filters or statistical models such as random sample consensus (RANSAC) @cite_1 . Lately, approaches have incorporated machine learning, more specifically deep learning, to increase image quality before detection is conducted. However, image-feature based approaches require continuous lane detections and often fail to detect lanes when edges and colors are not clearly delineated (noisy), leaving local image features uncaptured. End-to-end learning with deep neural networks substantially improves model robustness to noisy images or roadway features by learning useful features in the deeper convolutional layers. |
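As a concrete illustration of the RANSAC step mentioned above, the following hypothetical sketch fits a lane-boundary line y = a·x + b by repeatedly sampling two points and keeping the model with the largest inlier count; the threshold, iteration count, and data are made-up values, not parameters from @cite_1:

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b by RANSAC: fit a line to 2 random points per iteration
    and keep the model supported by the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # skip degenerate vertical samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int(np.sum(residuals < thresh))
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# A clean lane-boundary line y = 2x + 1 plus a few gross outliers:
xs = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([xs, 2.0 * xs + 1.0])
outliers = np.array([[0.1, 9.0], [0.5, -4.0], [0.9, 7.0]])
(a, b), n_in = ransac_line(np.vstack([pts, outliers]))
```

Because every model is scored on its inlier support rather than a least-squares residual, the gross outliers never pull the fitted line away from the true boundary.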
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: In this paper, we introduce a robust lane detection method based on the combined convolutional neural network (CNN) with random sample consensus (RANSAC) algorithm. At first, we calculate edges in an image using a hat shape kernel and then detect lanes using the CNN combined with the RANSAC. If the road scene is simple, we can easily detect the lane by using the RANSAC algorithm only. But if the road scene is complex and includes roadside trees, fence, or intersection etc., then it is hard to detect lanes robustly because of noisy edges. To alleviate that problem, we use CNN in the lane detection before and after applying the RANSAC algorithm. In training process of CNN, input data consist of edge images in a region of interest (ROI) and target data become the images that have only drawn real white color lane in black background. The CNN structure consists of 8 layers with 3 convolutional layers, 2 subsampling layers and multi-layer perceptron (MLP) including 3 fully-connected layers. Convolutional and subsampling layers are hierarchically arranged and their arrangement represents a deep structure in deep learning. As a result, proposed lane detection algorithm successfully eliminates noise lines and the performance is found to be better than other formal line detection algorithms such as RANSAC and hough transform.",
"@cite_2: Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"@cite_3: Accurate traffic sign detection, from vehicle-mounted cameras, is an important task for autonomous driving and driver assistance. It is a challenging task especially when the videos acquired from mobile cameras on portable devices are low-quality. In this paper, we focus on naturalistic videos captured from vehicle-mounted cameras. It has been shown that Region-based Convolutional Neural Networks provide high accuracy rates in object detection tasks. Yet, they are computationally expensive, and often require a GPU for faster training and processing. In this paper, we present a new method, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of much faster training and testing, and comparable or better performance without requiring specialized processors. Our test videos cover a range of different weather and daytime scenarios. The experimental results show the promise of the proposed method and a faster performance compared to the other detectors."
] | To create lane detection models that are robust to environmental variation (e.g., illumination, weather) and road variation (e.g., clarity of lane markings), CNNs have become an increasingly popular choice. Lane detection in images such as those shown in Fig. (a-d) is nearly impossible without a CNN. Kim and Lee @cite_1 combined a CNN with the RANSAC algorithm to detect lane edges in complex scenes that include roadside trees, fences, or intersections. In their method, the CNN was primarily used to enhance images. In @cite_2 , the authors showed how existing CNNs can be used to perform lane detection while running at the frame rates required for a real-time system. Similarly, @cite_3 discussed how the difficulties of detecting traffic signs in low-quality noisy videos were overcome using an aggregate channel features (ACF) and chain code histogram based model together with a CNN model, more specifically Fast R-CNN. |
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm used a Mask-RCNN based lane detection model as pre-processor. Recently, deep learning-based models provide state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes lane segmentation mask for detection and Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: In this paper, we propose a Dual-View Convolutional Neutral Network (DVCNN) framework for lane detection. First, to improve the low precision ratios of literature works, a novel DVCNN strategy is designed where the front-view image and the top-view one are optimized simultaneously. In the front-view image, we exclude false detections including moving vehicles, barriers and curbs, while in the top-view image non-club-shaped structures are removed such as ground arrows and words. Second, we present a weighted hat-like filter which not only recalls potential lane line candidates, but also alleviates the disturbance of the gradual textures and reduces most false detections. Third, different from other methods, a global optimization function is designed where the lane line probabilities, lengths, widths, orientations and the amount are all taken into account. After the optimization, the optimal combination composed of true lane lines can be explored. Experiments demonstrate that our algorithm is more accurate and robust than the state-of-the-art."
] | More recently, the authors of @cite_1 used a Dual-View Convolutional Neural Network (DVCNN) with a hat-like filter, optimizing the front-view and top-view images simultaneously. The hat-like filter extracts all potential lane line candidates, thus removing most false positives (FPs). In the front-view image, FPs such as moving vehicles, barriers, and curbs were excluded, while in the top-view image, structures other than lane lines, such as ground arrows and words, were also removed. |
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN-based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art performance for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and the chances of drivers bringing vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will perform a lane-departure behavior or a correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable to drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13% on average at 1-s prediction time."
] | The objective of Lane Departure Prediction (LDP) is to predict whether the driver is likely to leave the lane, with the goal of warning drivers in advance of the lane departure so that they may correct the error before it occurs (avoiding a potential collision). This improves on LDW systems, which simply alert the driver after the error has occurred. LDP algorithms can be classified into one of three categories: vehicle-variable-based, vehicle-position estimation, and detection of the lane boundary using real-time captured road images; all of these rely on real-time captured images @cite_1 . |
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN-based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art performance for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: In this paper, a technique for the identification of the unwanted lane departure of a traveling vehicle on a road is proposed. A piecewise linear stretching function (PLSF) is used to improve the contrast level of the region of interest (ROI). Lane markings on the road are detected by dividing the ROI into two subregions and applying the Hough transform in each subregion independently. This segmentation approach improves the computational time required for lane detection. For lane departure identification, a distance-based departure measure is computed at each frame, and a warning message is issued to the driver when this measure exceeds a threshold. The novelty of the proposed algorithm is the identification of the lane departure using only three lane-related parameters based on the Euclidean distance transform to estimate the departure measure. The use of the Euclidean distance transform in combination with the PLSF keeps the false alarm rate around 3% and the lane detection rate above 97% under various lighting conditions. Experimental results indicate that the proposed system can detect lane boundaries in the presence of several image artifacts, such as lighting changes, poor lane markings, and occlusions by a vehicle, and it issues an accurate lane departure warning in a short time interval. The proposed technique demonstrates its efficiency on several real video sequences.",
"@cite_2: The main goal of this paper is to develop a distance to line crossing (DLC) based computation of time to line crossing (TLC). Different computation methods with increasing complexity are provided. A discussion develops the influence of assumptions generally assumed for approximation. A sensitivity analysis with respect to vehicle parameters and positioning is performed. For TLC computation, both straight and curved vehicle paths are considered. The road curvature being another important variable considered in the proposed computations, an observer for its estimation is then proposed. An evaluation over a digitalized test track is first performed. Real data are then collected through an experiment carried out in test tracks with the equipped prototype vehicle. Based on these real data, TLC is then computed with the theoretically proposed methods. The obtained results outlined the necessity to take into consideration vehicle dynamics to use the TLC as a lane departure indicator.",
"@cite_3: Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and the chances of drivers bringing vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will perform a lane-departure behavior or a correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable to drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13% on average at 1-s prediction time."
] | The TLC model has been extensively used in production vehicles @cite_1 . TLC systems evaluate the lane and vehicle state using vision-based equipment and perform TLC calculations online using a variety of algorithms. A TLC threshold is used to trigger an alert to the driver. Different computational methods are used depending on the road geometry and vehicle type. Among these, the most common approach is to predict the road boundary and the vehicle trajectory, and then calculate the time of their intersection at the current driving speed. On roads with small curvature, the TLC can be computed as the ratio of lateral distance to lateral velocity, or from the distance to line crossing @cite_2 . Studies suggest that TLC methods tend to have a higher false alarm rate (FAR) when the vehicle is driven close to the lane boundary @cite_3 . |
[
"abstract: In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN-based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art performance for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes a lane segmentation mask for detection and a Fix-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.",
"@cite_1: Misunderstanding of driver correction behaviors is the primary reason for false warnings of lane-departure-prediction systems. We proposed a learning-based approach to predict unintended lane-departure behaviors and the chances of drivers bringing vehicles back to the lane. First, a personalized driver model for lane-departure and lane-keeping behavior is established by combining the Gaussian mixture model and the hidden Markov model. Second, based on this model, we developed an online model-based prediction algorithm to predict the forthcoming vehicle trajectory and judge whether the driver will perform a lane-departure behavior or a correction behavior. We also develop a warning strategy based on the model-based prediction algorithm that allows the lane-departure warning system to be acceptable to drivers according to the predicted trajectory. In addition, the naturalistic driving data of ten drivers were collected to train the personalized driver model and validate this approach. We compared the proposed method with a basic time-to-lane-crossing (TLC) method and a TLC-directional sequence of piecewise lateral slopes (TLC-DSPLS) method. Experimental results show that the proposed approach can reduce the false-warning rate to 3.13% on average at 1-s prediction time."
] | @cite_1 proposed an online learning-based approach to predict unintended lane-departure behaviors (LDB) based on a personalized driver model (PDM) and a hidden Markov model (HMM). The PDM describes the driver's lane-keeping and lane-departure behaviors using a Gaussian mixture model (GMM), a joint probability density distribution over vehicle speed, relative yaw angle, relative yaw rate, lateral displacement, and road curvature. The PDM can thus capture the characteristics of an individual's driving style. In combination with an HMM to estimate the vehicle's lateral displacement, they were able to reduce the FAR by 3.07. |
[
"abstract: Heterogeneity in wireless network architectures (i.e., the coexistence of 3G, LTE, 5G, WiFi, etc.) has become a key component of current and future generation cellular networks. Simultaneous aggregation of each client's traffic across multiple such radio access technologies (RATs) base stations (BSs) can significantly increase the system throughput, and has become an important feature of cellular standards on multi-RAT integration. Distributed algorithms that can realize the full potential of this aggregation are thus of great importance to operators. In this paper, we study the problem of resource allocation for multi-RAT traffic aggregation in HetNets (heterogeneous networks). Our goal is to ensure that the resources at each BS are allocated so that the aggregate throughput achieved by each client across its RATs satisfies a proportional fairness (PF) criterion. In particular, we provide a simple distributed algorithm for resource allocation at each BS that extends the PF allocation algorithm for a single BS. Despite its simplicity and lack of coordination across the BSs, we show that our algorithm converges to the desired PF solution and provide (tight) bounds on its convergence speed. We also study the characteristics of the optimal solution and use its properties to prove the optimality of our algorithm's outcomes.",
"@cite_1: We consider non-cooperative mobiles, each faced with the problem of which subset of WLANs access points (APs) to connect and multihome to, and how to split its traffic among them. Considering the many users regime, we obtain a potential game model and study its equilibrium. We obtain pricing for which the total throughput is maximized at equilibrium and study the convergence to equilibrium under various evolutionary dynamics. We also study the case where the Internet service provider (ISP) could charge prices greater than that of the cost price mechanism and show that even in this case multihoming is desirable.",
"@cite_2: We consider network utility maximization problems over heterogeneous cellular networks (HetNets) that permit dual connectivity. Dual connectivity (DC) is a feature that targets emerging practical HetNet deployments that will comprise non-ideal (higher latency) connections between transmission nodes, and has been recently introduced to the LTE-Advanced standard. DC allows for a user to be simultaneously served by a macro node as well as one other (typically micro or pico) node and requires relatively coarser level coordination among serving nodes. For such a DC enabled HetNet we comprehensively analyze the problem of determining an optimal user association that maximizes the weighted sum rate system utility subject to per-user rate constraints, over all feasible associations. Here, in any feasible association each user can be associated with (i.e., configured to receive data from) any one macro node (in a given set of macro nodes) and any one pico node that lies in the chosen macro node's coverage area. We show that, remarkably, this problem can be cast as a non-monotone submodular set function maximization problem, which allows us to construct a constant-factor approximation algorithm. We then consider the proportional fairness (PF) system utility and characterize the PF optimal resource allocation. This enables us to construct an efficient algorithm to determine an association that is optimal up to an additive constant. We then validate the performance of our algorithms via numerical results.",
"@cite_3: The traffic load of wireless LANs is often unevenly distributed among the access points (APs), which results in unfair bandwidth allocation among users. We argue that the load imbalance and consequent unfair bandwidth allocation can be greatly reduced by intelligent association control. In this paper, we present an efficient solution to determine the user-AP associations for max-min fair bandwidth allocation. We show the strong correlation between fairness and load balancing, which enables us to use load balancing techniques for obtaining optimal max-min fair bandwidth allocation. As this problem is NP-hard, we devise algorithms that achieve constant-factor approximation. In our algorithms, we first compute a fractional association solution, in which users can be associated with multiple APs simultaneously. This solution guarantees the fairest bandwidth allocation in terms of max-min fairness. Then, by utilizing a rounding method, we obtain the integral solution from the fractional solution. We also consider time fairness and present a polynomial-time algorithm for optimal integral solution. We further extend our schemes for the on-line case where users may join and leave dynamically. Our simulations demonstrate that the proposed algorithms achieve close to optimal load balancing (i.e., max-min fairness) and they outperform commonly used heuristics.",
"@cite_4: In multi-rate wireless LANs, throughput-based fair bandwidth allocation can lead to drastically reduced aggregate throughput. To balance aggregate throughput while serving users in a fair manner, proportional fair or time-based fair scheduling has been proposed to apply at each access point (AP). However, since a realistic deployment of wireless LANs can consist of a network of APs, this paper considers proportional fairness in this much wider setting. Our technique is to intelligently associate users with APs to achieve optimal proportional fairness in a network of APs. We propose two approximation algorithms for periodical offline optimization. Our algorithms are the first approximation algorithms in the literature with a tight worst-case guarantee for the NP-hard problem. Our simulation results demonstrate that our algorithms can obtain an aggregate throughput which can be as much as 2.3 times more than that of the max-min fair allocation in 802.11b. While maintaining aggregate throughput, our approximation algorithms outperform the default user-AP association method in the 802.11b standard significantly in terms of fairness."
] | Single-RAT Multi-BS Communication. Prior works have studied the problem of traffic aggregation when a client can simultaneously communicate with multiple BSs of the same technology. For example, @cite_1 uses game theory to model selfish traffic splitting by each client in WLANs. In HetNets, on the other hand, the resource allocation problem is primarily addressed at the BS side; @cite_2 proposes an approximation algorithm to address the problem of client association and traffic splitting in LTE dual connectivity (DC). Our algorithm (AFRA) goes beyond this and other related work by guaranteeing optimal resource allocation for any number of RATs and BSs. Other works have developed centralized client association algorithms to achieve max-min fairness @cite_3 and proportional fairness @cite_4 in multi-rate WLANs. In contrast, the problem of resource allocation in HetNets needs to be solved in a fully distributed manner. |
[
"abstract: Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages AT^0, AT^1 and AT^2, which handle three types of defeasibilities in action domains, namely defeasible constraints, defeasible observations, and actions with defeasible and abnormal effects, respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of the A language, but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages.",
"@cite_1: Recent research on reasoning about action has shown that the traditional logic form of domain constraints is problematic to represent ramifications of actions that are related to causality of domains. To handle this problem properly, as proposed by some researchers, it is necessary to describe causal relations of domains explicitly in action theories. In this paper, we address this problem from a new point of view. Specifically, unlike other researchers viewing causal relations as some kind of inference rules, we distinguish causal relations between defeasible and non-defeasible cases. It turns out that a causal theory in our formalism can be specified by using Reiter's default logic. Based on this idea, we propose a causality-based minimal change approach for representing effects of actions, and argue that our approach provides more plausible solutions for the ramification and qualification problems compared with other related work. We also describe a logic programming approximation to compute causal theories of actions which provides an implementational basis for our approach.",
"@cite_2: Recent research on reasoning about action has shown that the traditional logic form of domain constraints is problematic to represent ramifications of actions that are related to causality of domains. To handle this problem properly, as proposed by some researchers, it is necessary to describe causal relations of domains explicitly in action theories. In this paper, we address this problem from a new point of view. Specifically, unlike other researchers viewing causal relations as some kind of inference rules, we distinguish causal relations between defeasible and non-defeasible cases. It turns out that a causal theory in our formalism can be specified by using Reiter's default logic. Based on this idea, we propose a causality-based minimal change approach for representing effects of actions, and argue that our approach provides more plausible solutions for the ramification and qualification problems compared with other related work. We also describe a logic programming approximation to compute causal theories of actions which provides an implementational basis for our approach.",
"@cite_3: The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.",
"@cite_4: Ginsberg and Smith [6, 7] propose a new method for reasoning about action, which they term a possible worlds approach (PWA). The PWA is an elegant, simple, and potentially very powerful domain-independent technique that has proven fruitful in other areas of AI [13, 5]. In the domain of reasoning about action, Ginsberg and Smith offer the PWA as a solution to the frame problem (What facts about the world remain true when an action is performed?) and its dual, the ramification problem [3] (What facts about the world must change when an action is performed?). In addition, Ginsberg and Smith offer the PWA as a solution to the qualification problem (When is it reasonable to assume that an action will succeed?), and claim for the PWA computational advantages over other approaches such as situation calculus. Here and in [16] I show that the PWA fails to solve the frame, ramification, and qualification problems, even with additional simplifying restrictions not imposed by Ginsberg and Smith. The cause of the failure seems to be a lack of distinction in the PWA between the state of the world and the description of the state of the world. I introduce a new approach to reasoning about action, called the possible models approach, and show that the possible models approach works as well as the PWA on the examples of [6, 7] but does not suffer from its deficiencies."
] | An early effort on handling defeasible causal rules in reasoning about action was the author's previous work @cite_1 , in which the author identified the restrictions of McCain and Turner's causal theory of actions and claimed that, in general, a causal rule should be treated as a defeasible rule in order to solve the ramification problem properly. In @cite_1 , these constraints simply correspond to the defaults @math and @math respectively. By combining Reiter's default logic @cite_3 and Winslett's PMA @cite_4 , the author developed a causality-based minimal change principle for reasoning about action and change which subsumes McCain and Turner's causal theory. |
[
"abstract: Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages AT^0, AT^1 and AT^2, which handle three types of defeasibilities in action domains, namely defeasible constraints, defeasible observations, and actions with defeasible and abnormal effects, respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of the A language, but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages.",
"@cite_1: Recent research on reasoning about action has shown that the traditional logic form of domain constraints is problematic to represent ramifications of actions that are related to causality of domains. To handle this problem properly, as proposed by some researchers, it is necessary to describe causal relations of domains explicitly in action theories. In this paper, we address this problem from a new point of view. Specifically, unlike other researchers viewing causal relations as some kind of inference rules, we distinguish causal relations between defeasible and non-defeasible cases. It turns out that a causal theory in our formalism can be specified by using Reiter's default logic. Based on this idea, we propose a causality-based minimal change approach for representing effects of actions, and argue that our approach provides more plausible solutions for the ramification and qualification problems compared with other related work. We also describe a logic programming approximation to compute causal theories of actions which provides an implementational basis for our approach."
] | Although the work presented in @cite_1 provided a natural way to represent causality in reasoning about action, this action theory had several restrictions. First, due to technical restrictions, only normal defaults or defaults without justifications are suitable forms for representing causal rules in problem domains. Second, this action theory did not handle the other two major defeasibilities: defeasible observations and actions with defeasible and abnormal effects. |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"@cite_2: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"@cite_3: The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But, scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale-up our compressed network encoding where network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very-large networks due to the high-dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.",
"@cite_4: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"@cite_5: Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles.",
"@cite_6: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"@cite_7: Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solver ...",
"@cite_8: The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection. The Arcade Learning Environment (ALE) provides a set of Atari games that represent a useful benchmark set of such applications. A recent breakthrough in combining model-free reinforcement learning with deep learning, called DQN, achieves the best real-time agents thus far. Planning-based approaches achieve far higher scores than the best model-free approaches, but they exploit information that is not available to human players, and they are orders of magnitude slower than needed for real-time play. Our main goal in this work is to build a better real-time Atari game playing agent than DQN. The central idea is to use the slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play. We proposed new agents based on this idea and show that they outperform DQN.",
"@cite_9: Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.",
"@cite_10: We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.",
"@cite_11: Inspired by how humans learn dynamic motor skills through a progressive process of coaching and practices, we introduce an intuitive and interactive framework for developing dynamic controllers. The user only needs to provide a primitive initial controller and high-level, human-readable instructions as if s/he is coaching a human trainee, while the character has the ability to interpret the abstract instructions, accumulate the knowledge from the coach, and improve its skill iteratively. We introduce “control rigs” as an intermediate layer of control module to facilitate the mapping between high-level instructions and low-level control variables. Control rigs also utilize the human coach's knowledge to reduce the search space for control optimization. In addition, we develop a new sampling-based optimization method, Covariance Matrix Adaptation with Classification (CMA-C), to efficiently compute control rig parameters. Based on the observation of human ability to “learn from failure”, CMA-C utilizes the failed simulation trials to approximate an infeasible region in the space of control rig parameters, resulting in a faster convergence for the CMA optimization. We demonstrate the design process of complex dynamic controllers using our framework, including precision jumps, turnaround jumps, monkey vaults, drop-and-rolls, and wall-backflips.",
"@cite_12: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"@cite_13: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website."
] | Learning to Navigate. Navigation has traditionally been approached by either employing supervised learning (SL) methods @cite_13 @cite_3 or reinforcement learning (RL) methods @cite_4 @cite_5 @cite_6 . Furthermore, combinations of the two have been proposed in an effort to leverage the advantages of both techniques, e.g. to increase sample efficiency for RL methods @cite_7 @cite_8 @cite_9 @cite_10 . For the case of controlling physics-driven vehicles, SL can be advantageous when acquiring labeled data is not too costly or inefficient, and it has achieved relative success in the field of autonomous driving, among other applications, in recent years @cite_13 . However, the use of neural networks for SL in autonomous driving goes back to much earlier work . |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)."
] | In the work of @cite_1 , a deep neural network (DNN) is trained to map recorded camera views to 3-DoF steering commands (steering wheel angle, throttle, and brake). Seventy-two hours of human-driven training data were tediously collected from a forward-facing camera and augmented with two additional views to provide data for simulated drifting and corrective maneuvering. The simulated and on-road results of this pioneering work demonstrate the ability of a DNN to learn (end-to-end) the control process of a self-driving car from raw video data. |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"@cite_2: The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But, scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale-up our compressed network encoding where network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very-large networks due to the high-dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.",
"@cite_3: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"@cite_4: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"@cite_5: Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper we extend the approach in [16]. The Max-Pooling Convolutional Neural Network (MPCNN) compressor is evolved online, maximizing the distances between normalized feature vectors computed from the images collected by the recurrent neural network (RNN) controllers during their evaluation in the environment. These two interleaved evolutionary searches are used to find MPCNN compressors and RNN controllers that drive a race car in the TORCS racing simulator using only visual input."
] | Similar to our work but for cars, @cite_1 use TORCS (The Open Racing Car Simulator) to train a DNN to drive at casual speeds through a course and properly pass or follow other vehicles in its lane. This work builds on earlier work using TORCS, which focused on keeping the car on a track @cite_3 . In contrast to our work, the vehicle controls to be predicted in the work of @cite_1 are limited, since only a small discrete set of expected control outputs is available: turn-left, turn-right, throttle, and brake. Recently, TORCS has also been used successfully in several RL approaches to autonomous car driving @cite_4 @cite_5 ; however, in these cases, RL was used to teach the agent to drive specific tracks or all available tracks, rather than to learn to drive never-before-seen tracks. |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects."
] | @cite_1 trained a network on autonomous car datasets and then deployed it to control a drone. For this, they used full supervision, providing image and measured steering angle pairs from pre-collected datasets and collecting their own dataset of image and binary obstacle indication pairs. While they demonstrate successful transfer to other environments, their approach does not model and exploit the full six degrees of freedom available. It also focuses on slow and safe navigation rather than optimizing for speed, as is the case in racing. Finally, since their network is fairly complex, they report an inference speed of 20 FPS (CPU) for remote processing, which is more than three times lower than the estimated frame rate of our proposed method with on-board processing, and more than 27 times lower than our method running remotely on a GPU. |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: We present a Model-Predictive Control (MPC) system for online synthesis of interactive and physically valid character motion. Our system enables a complex (36-DOF) 3D human character model to balance in a given pose, dodge projectiles, and improvise a get up strategy if forced to lose balance, all in a dynamic and unpredictable environment. Such contact-rich, predictive and reactive motions have previously only been generated offline or using a handcrafted state machine or a dataset of reference motions, which our system does not require. For each animation frame, our system generates trajectories of character control parameters for the near future --- a few seconds --- using Sequential Monte Carlo sampling. Our main technical contribution is a multimodal, tree-based sampler that simultaneously explores multiple different near-term control strategies represented as parameter splines. The strategies represented by each sample are evaluated in parallel using a causal physics engine. The best strategy, as determined by an objective function measuring goal achievement, fluidity of motion, etc., is used as the control signal for the current frame, but maintaining multiple hypotheses is crucial for adapting to dynamically changing environments.",
"@cite_2: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"@cite_3: We present a novel, general-purpose Model-Predictive Control (MPC) algorithm that we call Control Particle Belief Propagation (C-PBP). C-PBP combines multimodal, gradient-free sampling and a Markov Random Field factorization to effectively perform simultaneous path finding and smoothing in high-dimensional spaces. We demonstrate the method in online synthesis of interactive and physically valid humanoid movements, including balancing, recovery from both small and extreme disturbances, reaching, balancing on a ball, juggling a ball, and fully steerable locomotion in an environment with obstacles. Such a large repertoire of movements has not been demonstrated before at interactive frame rates, especially considering that all our movement emerges from simple cost functions. Furthermore, we abstain from using any precomputation to train a control policy offline, reference data such as motion capture clips, or state machines that break the movements down into more manageable subtasks. Operating under these conditions enables rapid and convenient iteration when designing the cost functions.",
"@cite_4: Inspired by how humans learn dynamic motor skills through a progressive process of coaching and practices, we introduce an intuitive and interactive framework for developing dynamic controllers. The user only needs to provide a primitive initial controller and high-level, human-readable instructions as if s/he is coaching a human trainee, while the character has the ability to interpret the abstract instructions, accumulate the knowledge from the coach, and improve its skill iteratively. We introduce “control rigs” as an intermediate layer of control module to facilitate the mapping between high-level instructions and low-level control variables. Control rigs also utilize the human coach's knowledge to reduce the search space for control optimization. In addition, we develop a new sampling-based optimization method, Covariance Matrix Adaptation with Classification (CMA-C), to efficiently compute control rig parameters. Based on the observation of human ability to “learn from failure”, CMA-C utilizes the failed simulation trials to approximate an infeasible region in the space of control rig parameters, resulting in a faster convergence for the CMA optimization. We demonstrate the design process of complex dynamic controllers using our framework, including precision jumps, turnaround jumps, monkey vaults, drop-and-rolls, and wall-backflips.",
"@cite_5: In a glance, we can perceive whether a stack of dishes will topple, a branch will support a child’s weight, a grocery bag is poorly packed and liable to tear or crush its contents, or a tool is firmly attached to a table or free to be lifted. Such rapid physical inferences are central to how people interact with the world and with each other, yet their computational underpinnings are poorly understood. We propose a model based on an “intuitive physics engine,” a cognitive mechanism similar to computer engines that simulate rich physics in video games and graphics, but that uses approximate, probabilistic simulations to make robust and fast inferences in complex natural scenes where crucial information is unobserved. This single model fits data from five distinct psychophysical tasks, captures several illusions and biases, and explains core aspects of human mental models and common-sense reasoning that are instrumental to how humans understand their everyday world.",
"@cite_6: In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel to training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model is able to successfully annotate even highly challenging real scenes. Importantly, our trained network is able to understand noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, our neural network is only moderately deep and computes class, pose and position in tandem, so the overall run-time is significantly faster than existing methods, estimating all output parameters simultaneously in parallel.",
"@cite_7: We introduce a new approach for recognizing and reconstructing 3D objects in images. Our approach is based on an analysis by synthesis strategy. A forward synthesis model constructs possible geometric interpretations of the world, and then selects the interpretation that best agrees with the measured visual evidence. The forward model synthesizes visual templates defined on invariant (HOG) features. These visual templates are discriminatively trained to be accurate for inverse estimation. We introduce an efficient \"brute-force\" approach to inference that searches through a large number of candidate reconstructions, returning the optimal one. One benefit of such an approach is that recognition is inherently (re)constructive. We show state of the art performance for detection and reconstruction on two challenging 3D object recognition datasets of cars and cuboids.",
"@cite_8: Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching."
] | Simulation. As mentioned earlier, generating diverse 'natural' training data for sequential decision making through SL is tedious. Generating additional data for exploration purposes (i.e. in scenarios where both input and output pairs have to be generated) is even more so. Therefore, a lot of attention from the community has been given to simulators (or games) as a source of such data. In fact, a broad range of recent work has exploited them for these types of learning, namely in animation and motion planning @cite_2 @cite_6 @cite_4 , scene understanding @cite_6 , pedestrian detection , and identification of 2D/3D objects @cite_7 @cite_8 . For instance, one line of work used Unity, a video game engine similar to Unreal Engine, to teach a bird how to fly in simulation. |
[
"abstract: Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient on-board processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"@cite_1: Purpose – The purpose of this paper is to present the development of hardware‐in‐the‐loop simulation (HILS) for visual target tracking of an octorotor unmanned aerial vehicle (UAV) with onboard computer vision.Design methodology approach – HILS for visual target tracking of an octorotor UAV is developed by integrating real embedded computer vision hardware and camera to software simulation of the UAV dynamics, flight control and navigation systems run on Simulink. Visualization of the visual target tracking is developed using FlightGear. The computer vision system is used to recognize and track a moving target using feature correlation between captured scene images and object images stored in the database. Features of the captured images are extracted using speed‐up robust feature (SURF) algorithm, and subsequently matched with features extracted from object image using fast library for approximate nearest neighbor (FLANN) algorithm. Kalman filter is applied to predict the position of the moving target on...",
"@cite_2: In this chapter we present a modular Micro Aerial Vehicle (MAV) simulation framework, which enables a quick start to perform research on MAVs. After reading this chapter, the reader will have a ready to use MAV simulator, including control and state estimation. The simulator was designed in a modular way, such that different controllers and state estimators can be used interchangeably, while incorporating new MAVs is reduced to a few steps. The provided controllers can be adapted to a custom vehicle by only changing a parameter file. Different controllers and state estimators can be compared with the provided evaluation framework. The simulation framework is a good starting point to tackle higher level tasks, such as collision avoidance, path planning, and vision based problems, like Simultaneous Localization and Mapping (SLAM), on MAVs. All components were designed to be analogous to its real world counterparts. This allows the usage of the same controllers and state estimators, including their parameters, in the simulation as on the real MAV.",
"@cite_3: Developing and testing algorithms for autonomous vehicles in real world is an expensive and time consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms and software protocols. In addition, the modular design enables various components to be easily usable independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.",
"@cite_4: Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"@cite_5: In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https: ivul.kaust.edu.sa Pages pub-benchmark-simulator-uav.aspx.)."
] | Moreover, there is another line of work that uses hardware-in-the-loop (HIL) simulation. Examples include JMAVSim @cite_1 , which was used to develop and evaluate controllers, and RotorS @cite_2 , which was used to study visual servoing. The visual quality of most HIL simulators is very basic and far from photo-realistic, with the exception of AirSim @cite_3 . While there are several established simulators for aerial platforms, such as RealFlight, FlightGear, or X-Plane, they have notable limitations: in contrast to Unreal Engine, advanced shading and post-processing settings are not available, and the selection of assets and textures is limited. Recent work @cite_4 highlights how modern game engines can be used to generate photo-realistic training datasets with pixel-accurate segmentation masks. The goal of this work is to build an automated UAV flying system (based on imitation learning) that can be transitioned relatively easily from the simulated world to the real one. Therefore, we choose Sim4CV @cite_5 as our simulator; it uses the open-source game engine UE4 and provides a full software-in-the-loop UAV simulation. The simulator also provides great flexibility in terms of assets, textures, and communication interfaces. |
[
"abstract: Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54 top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.",
"@cite_1: Various schemes for combining genetic algorithms and neural networks have been proposed and tested in recent years, but the literature is scattered among a variety of journals, proceedings and technical reports. Activity in this area is clearly increasing. The authors provide an overview of this body of literature drawing out common themes and providing, where possible, the emerging wisdom about what seems to work and what does not. >",
"@cite_2: Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.",
"@cite_3: The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.",
"@cite_4: Despite the success of CNNs, selecting the optimal architecture for a given task remains an open problem. Instead of aiming to select a single optimal architecture, we propose a \"fabric\" that embeds an exponentially large number of architectures. The fabric consists of a 3D trellis that connects response maps at different layers, scales, and channels with a sparse homogeneous local connectivity pattern. The only hyper-parameters of a fabric are the number of channels and layers. While individual architectures can be recovered as paths, the fabric can in addition ensemble all embedded architectures together, sharing their weights where their paths overlap. Parameters can be learned using standard methods based on back-propagation, at a cost that scales linearly in the fabric size. We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset.",
"@cite_5: Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.",
"@cite_6: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"@cite_7: At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"@cite_8: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond."
] | Early works, dating from the @math s, made efforts to automate neural network design, often searching for good architectures with genetic or other evolutionary algorithms @cite_1 @cite_2 @cite_3 @cite_7 @cite_5 . Nevertheless, to the best of our knowledge, these works could not perform competitively with hand-crafted networks. Recent works, Neural Architecture Search (NAS) @cite_6 and MetaQNN @cite_7 , adopted reinforcement learning to automatically search for a good network architecture. Although they yield good performance on small datasets such as CIFAR- @math and CIFAR- @math , directly applying MetaQNN or NAS to architecture design on large datasets like ImageNet @cite_8 is computationally expensive because the search space is huge. Besides, the networks generated by these methods are task-specific or dataset-specific; that is, they cannot be transferred well to other tasks or to datasets with different input sizes. For example, a network designed for CIFAR- @math cannot be generalized to ImageNet. |
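The reinforcement-learning search that MetaQNN-style methods perform can be pictured with a deliberately tiny sketch: an agent chooses layer types one at a time with epsilon-greedy Q-learning, and the trained network's validation accuracy arrives as a single delayed reward. The action set, depth limit, annealing schedule, and toy reward below are hypothetical stand-ins, not the papers' actual setup.

```python
import random

# Hypothetical action space: each action appends a layer type.
ACTIONS = ["conv3x3", "conv5x5", "maxpool", "fc", "terminate"]
MAX_DEPTH = 4

def select_action(Q, state, epsilon):
    """Epsilon-greedy choice of the next layer type."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_learning_search(evaluate, episodes=200, alpha=0.1):
    """Search over layer sequences; `evaluate` maps an architecture to a
    reward (standing in for validation accuracy after training)."""
    Q, best, best_r = {}, None, float("-inf")
    for ep in range(episodes):
        eps = max(0.1, 1.0 - ep / episodes)  # anneal exploration
        state, arch = (), []
        while len(arch) < MAX_DEPTH:
            a = select_action(Q, state, eps)
            arch.append(a)
            if a == "terminate":
                break
            state = tuple(arch)
        r = evaluate(arch)  # delayed reward for the whole architecture
        s = ()  # back up the terminal reward along the chosen path
        for a in arch:
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r - Q.get((s, a), 0.0))
            s = s + (a,)
        if r > best_r:
            best, best_r = list(arch), r
    return best, best_r

# Toy reward: prefer architectures with many 3x3 convolutions.
random.seed(0)
best, best_r = q_learning_search(lambda arch: arch.count("conv3x3"))
```

In the real methods, `evaluate` is a full training run, which is exactly why the search becomes expensive on large datasets.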
[
"abstract: Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54 top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.",
"@cite_1: We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"@cite_2: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.",
"@cite_3: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"@cite_4: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"@cite_5: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers."
] | Instead, our approach aims to design a network block architecture with an efficient search method, using a distributed asynchronous Q-learning framework as well as an early-stop strategy. The block design follows modern convolutional neural networks such as Inception @cite_1 @cite_2 @cite_3 and ResNet @cite_4 @cite_5 . Inception-based networks construct their blocks via a hand-crafted multi-level feature extractor strategy that computes @math , @math , and @math convolutions, while ResNet uses residual blocks with shortcut connections, which make it easier to represent the identity mapping and thereby allow very deep networks. The blocks automatically generated by our approach have similar structures: some contain shortcut connections and inception-like multi-branch combinations. We will discuss the details in . |
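The shortcut idea behind a residual block can be sketched minimally. This is an illustration of y = relu(F(x) + x) with a plain linear map standing in for ResNet's convolutions; it is not the papers' actual implementation.

```python
import numpy as np

def conv_like(x, w):
    # A plain linear map stands in for a convolution to keep the sketch tiny.
    return x @ w

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the shortcut carries the identity, so the
    weights only need to model the residual F."""
    out = relu(conv_like(x, w1))
    out = conv_like(out, w2)
    return relu(out + x)  # identity shortcut connection

# With zero weights the block reduces to the identity (for non-negative x),
# which is why very deep stacks of such blocks stay easy to optimize.
x = np.array([[1.0, 2.0]])
w_zero = np.zeros((2, 2))
print(residual_block(x, w_zero, w_zero))  # → [[1. 2.]]
```

Because the block's output defaults to its input, adding more such blocks never has to re-learn the identity mapping, which is the property the target paragraph refers to.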
[
"abstract: Convolutional neural networks have gained a remarkable success in computer vision. However, most usable network architectures are hand-crafted and usually require expertise and elaborate design. In this paper, we provide a block-wise network generation pipeline called BlockQNN which automatically builds high-performance networks using the Q-Learning paradigm with epsilon-greedy exploration strategy. The optimal network block is constructed by the learning agent which is trained sequentially to choose component layers. We stack the block to construct the whole auto-generated network. To accelerate the generation process, we also propose a distributed asynchronous framework and an early stop strategy. The block-wise generation brings unique advantages: (1) it performs competitive results in comparison to the hand-crafted state-of-the-art networks on image classification, additionally, the best network generated by BlockQNN achieves 3.54 top-1 error rate on CIFAR-10 which beats all existing auto-generate networks. (2) in the meanwhile, it offers tremendous reduction of the search space in designing networks which only spends 3 days with 32 GPUs, and (3) moreover, it has strong generalizability that the network built on CIFAR also performs well on a larger-scale ImageNet dataset.",
"@cite_1: Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective view in which the goal is to build self-adaptive learners (i.e. learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge). The second part provides a survey of meta-learning as reported by the machine-learning literature. We find that, despite different views and research lines, a question remains constant: how can we exploit knowledge about learning (i.e. meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues being the subject of intensive research.",
"@cite_2: This paper introduces the application of gradient descent methods to meta-learning. The concept of \"meta-learning\", i.e. of a system that improves or discovers a learning algorithm, has been of interest in machine learning for decades because of its appealing applications. Previous meta-learning approaches have been based on evolutionary methods and, therefore, have been restricted to small models with few free parameters. We make meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems. Our system derived complex well-performing learning algorithms from scratch. In this paper we also show that our approach performs non-stationary time series prediction.",
"@cite_3: The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art."
] | Another line of related work includes hyper-parameter optimization, meta-learning @cite_1 , and learning-to-learn methods @cite_2 @cite_3 . However, the goal of these works is to use meta-data to improve the performance of existing algorithms, such as finding the optimal learning rate of an optimization method or the optimal number of hidden layers for constructing the network. In this paper, we instead focus on learning the entire topological architecture of network blocks to improve performance. |
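The epsilon-greedy layer-selection step described in the BlockQNN abstract above can be sketched in a few lines. This is a minimal illustration of the exploration rule only; the function name and the flat Q-value list are assumptions for exposition, not BlockQNN's actual code.

```python
import random

def epsilon_greedy_choice(q_values, epsilon, rng=random):
    """Epsilon-greedy selection over candidate component layers:
    with probability epsilon explore a uniformly random layer,
    otherwise exploit the layer with the highest estimated Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    # argmax over estimated Q-values (ties broken by lowest index)
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

Annealing epsilon from 1 toward 0 over training shifts the agent from exploration to exploitation, the usual schedule in Q-learning-based search.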
[
"abstract: Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data in order to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.",
"@cite_1: Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but has been shown to be sensitive to system changes at test time. As a result, building deep RL agents that generalize has become an active research area. Our aim is to catalyze and streamline community-wide progress on this problem by providing the first benchmark and a common experimental protocol for investigating generalization in RL. Our benchmark contains a diverse set of environments and our evaluation methodology covers both in-distribution and out-of-distribution generalization. To provide a set of baselines for future research, we conduct a systematic evaluation of deep RL algorithms, including those that specifically tackle the problem of generalization."
] | @cite_1 present an empirical study of generalization in Deep-RL, testing the interpolation and extrapolation performance of state-of-the-art algorithms when varying simulation parameters in control tasks. The authors provide an experimental assessment of generalization under varying training and testing distributions. Our work extends theirs by providing results for the case where the training distribution parameters are learned and change during policy training. |
[
"abstract: Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data in order to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.",
"@cite_1: We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at this https URL"
] | @cite_1 propose training policies on a distribution of simulators, whose parameters are fit to real-world data. Their proposed algorithm switches back and forth between optimizing the policy under the DR distribution and updating the DR distribution by minimizing the discrepancy between simulated and real world trajectories. In contrast, we aim to learn policies that maximize performance over a diverse distribution of environments where the task is feasible, as a way of minimizing the interactions with the real robot system. |
[
"abstract: Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data in order to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.",
"@cite_1: Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning adaptation.",
"@cite_2: Conditional Value at Risk (CVaR) is a prominent risk measure that is being used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the CVaR gradient, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risk-sensitive controller for the game of Tetris."
] | @cite_1 propose a related approach for learning robust policies over a distribution of simulator models. The proposed approach, based on the @math -percentile conditional value at risk (CVaR) objective @cite_2 , improves the policy performance on the small proportion of environments where the policy performs the worst. The authors propose an algorithm that updates the distribution of simulation models to maximize the likelihood of real-world trajectories, via Bayesian inference. The combination of worst-case performance optimization and Bayesian updates ensures that the resulting policy is robust to errors in the estimation of the simulation model parameters. Our method can be combined with the CVaR objective to encourage diversity of the learned DR distribution. |
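The percentile CVaR objective mentioned above has a simple empirical form: average the returns in the worst alpha-fraction of sampled environments. The sketch below illustrates that risk measure only, under the lower-tail-averaging convention; it is not the likelihood-ratio gradient estimator proposed in @cite_2.

```python
def empirical_cvar(returns, alpha):
    """Empirical conditional value at risk (lower tail): the mean of
    the worst alpha-fraction of sampled returns. Optimizing this
    focuses learning on the environments where the policy does worst."""
    k = max(1, int(len(returns) * alpha))  # size of the worst tail
    worst = sorted(returns)[:k]
    return sum(worst) / len(worst)
```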
[
"abstract: Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data in order to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.",
"@cite_1: Policy gradient methods have been successfully applied to a variety of reinforcement learning tasks. However, while learning in a simulator, these methods do not utilise the opportunity to improve learning by adjusting certain environment variables: unobservable state features that are randomly determined by the environment in a physical setting, but that are controllable in a simulator. This can lead to slow learning, or convergence to highly suboptimal policies. In this paper, we present contextual policy optimisation (CPO). The central idea is to use Bayesian optimisation to actively select the distribution of the environment variable that maximises the improvement generated by each iteration of the policy gradient method. To make this Bayesian optimisation practical, we contribute two easy-to-compute low-dimensional fingerprints of the current policy. We apply CPO to a number of continuous control tasks of varying difficulty and show that CPO can efficiently learn policies that are robust to significant rare events, which are unlikely to be observable under random sampling but are key to learning good policies."
] | Related to learning the DR distribution, @cite_1 propose using Bayesian Optimization (BO) to update the simulation model distribution. This is done by evaluating the improvement over the current policy obtained by running a policy gradient algorithm on data sampled from the current simulator distribution. The parameters of the simulator distribution for the next iteration are then selected to maximize this improvement. |
[
"abstract: Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data in order to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough so that experience similar to the target robot system is observed during training, while addressing the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies results in improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.",
"@cite_1: Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While in principle meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness in sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.",
"@cite_2: We present a new method of learning control policies that successfully operate under unknown dynamic models. We create such policies by leveraging a large number of training examples that are generated using a physical simulator. Our system is made of two components: a Universal Policy (UP) and a function for Online System Identification (OSI). We describe our control policy as universal because it is trained over a wide array of dynamic models. These variations in the dynamic model may include differences in mass and inertia of the robots' components, variable friction coefficients, or unknown mass of an object to be manipulated. By training the Universal Policy with this variation, the control policy is prepared for a wider array of possible conditions when executed in an unknown environment. The second part of our system uses the recent state and action history of the system to predict the dynamics model parameters mu. The value of mu from the Online System Identification is then provided as input to the control policy (along with the system state). Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment. We have evaluated the performance of this system on a variety of tasks, including the problem of cart-pole swing-up, the double inverted pendulum, locomotion of a hopper, and block-throwing of a manipulator. UP-OSI is effective at these tasks across a wide range of dynamic models. Moreover, when tested with dynamic models outside of the training range, UP-OSI outperforms the Universal Policy alone, even when UP is given the actual value of the model dynamics. In addition to the benefits of creating more robust controllers, UP-OSI also holds out promise of narrowing the Reality Gap between simulated and real physical systems."
] | @cite_1 also use context-conditioned policies, where the context is implicitly encoded into a vector @math . During the training phase, their proposed algorithm improves the performance of the policy while learning a probabilistic mapping from trajectory data to context vectors. At test time, the learned mapping is used for online inference of the context vector. This is similar in spirit to the Universal Policies with Online System Identification method @cite_2 , which instead uses deterministic context inference with an explicit context encoding. Again, these methods use a fixed DR distribution and could benefit from adapting it during training, as we propose in this work. |
[
"abstract: Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low-rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers. In this paper, we propose a Truncated CauchyNMF loss that handles outliers by truncating large errors, and develop a Truncated CauchyNMF to robustly learn the subspace on noisy datasets contaminated by outliers. We theoretically analyze the robustness of Truncated CauchyNMF compared with competing models and theoretically prove that Truncated CauchyNMF has a generalization bound which converges at a rate of order @math , where @math is the sample size. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning robust subspaces.",
"@cite_1: Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.",
"@cite_3: Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.",
"@cite_4: Let A be a real m×n matrix with m≧n. It is well known (cf. [4]) that @math (1) where @math The matrix U consists of n orthonormalized eigenvectors associated with the n largest eigenvalues of AA T , and the matrix V consists of the orthonormalized eigenvectors of A T A. The diagonal elements of ∑ are the non-negative square roots of the eigenvalues of A T A; they are called singular values. We shall assume that @math Thus if rank(A)=r, σ r+1 = σ r+2=⋯=σ n = 0. The decomposition (1) is called the singular value decomposition (SVD).",
"@cite_5: Nonnegative matrix factorization (NMF) approximates a given data matrix as a product of two low-rank nonnegative matrices, usually by minimizing the L2 or the KL distance between the data matrix and the matrix product. This factorization was shown to be useful for several important computer vision applications. We propose here two new NMF algorithms that minimize the Earth mover's distance (EMD) error between the data and the matrix product. The algorithms (EMD NMF and bilateral EMD NMF) are iterative and based on linear programming methods. We prove their convergence, discuss their numerical difficulties, and propose efficient approximations. Naturally, the matrices obtained with EMD NMF are different from those obtained with L2-NMF. We discuss these differences in the context of two challenging computer vision tasks, texture classification and face recognition, perform actual NMF-based image segmentation for the first time, and demonstrate the advantages of the new methods with common benchmarks."
] | Traditional NMF @cite_1 assumes that the noise obeys a Gaussian distribution and derives the following squared @math -norm based objective function: @math , where @math signifies the matrix Frobenius norm. NMF is commonly solved using the multiplicative update rule (MUR, @cite_1 ). Because of the nice mathematical properties of the squared @math -norm and the efficiency of MUR, NMF has been extended to various applications @cite_3 @cite_4 @cite_5 . However, NMF and its extensions are non-robust because the @math -norm is sensitive to outliers. |
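The multiplicative update rule (MUR) referenced above has a compact form for the squared Frobenius objective. The sketch below follows the classical Lee-Seung updates of @cite_1, not the Truncated CauchyNMF algorithm of this paper; the small eps added to the initializers and denominators is an implementation assumption to avoid division by zero.

```python
import numpy as np

def nmf_mur(V, rank, iters=1000, eps=1e-9, seed=0):
    """Frobenius-norm NMF by multiplicative updates: each step
    rescales H and W elementwise so that ||V - WH||_F^2 is
    non-increasing while both factors stay non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because every update is a non-negative rescaling, non-negativity of W and H is preserved automatically, which is the property that makes MUR so convenient in the extensions cited above.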
[
"abstract: Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low-rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers. In this paper, we propose a Truncated CauchyNMF loss that handles outliers by truncating large errors, and develop a Truncated CauchyNMF to robustly learn the subspace on noisy datasets contaminated by outliers. We theoretically analyze the robustness of Truncated CauchyNMF compared with competing models and theoretically prove that Truncated CauchyNMF has a generalization bound which converges at a rate of order @math , where @math is the sample size. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on the datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning robust subspaces.",
"@cite_1: Non-negative matrix factorization (NMF) is a recently popularized technique for learning parts-based, linear representations of non-negative data. The traditional NMF is optimized under the Gaussian noise or Poisson noise assumption, and hence not suitable if the data are grossly corrupted. To improve the robustness of NMF, a novel algorithm named robust nonnegative matrix factorization (RNMF) is proposed in this paper. We assume that some entries of the data matrix may be arbitrarily corrupted, but the corruption is sparse. RNMF decomposes the non-negative data matrix as the summation of one sparse error matrix and the product of two non-negative matrices. An efficient iterative approach is developed to solve the optimization problem of RNMF. We present experimental results on two face databases to verify the effectiveness of the proposed method."
] | Zhang et al. @cite_1 assumed that the dataset contains both Laplace-distributed noise and Gaussian-distributed noise and proposed an @math -norm regularized Robust NMF (RNMF- @math ) as follows: @math , where @math is a positive constant that trades off the sparsity of @math . Similar to @math -NMF, RNMF- @math is less sensitive to outliers than NMF, but both are non-robust to large numbers of outliers because the @math -minimization model has a low breakdown point. Moreover, it is non-trivial to determine the tradeoff parameter @math . |
[
"abstract: We introduce a global Landau-Ginzburg model which is mirror to several toric Deligne-Mumford stacks and describe the change of the Gromov-Witten theories under discrepant transformations. We prove a formal decomposition of the quantum cohomology D-modules (and of the all-genus Gromov-Witten potentials) under a discrepant toric wall-crossing. In the case of weighted blowups of weak-Fano compact toric stacks along toric centres, we show that an analytic lift of the formal decomposition corresponds, via the Gamma-integral structure, to an Orlov-type semiorthogonal decomposition of topological K-groups. We state a conjectural functoriality of Gromov-Witten theories under discrepant transformations in terms of a Riemann-Hilbert problem.",
"@cite_1: We study B-branes in two-dimensional N=(2,2) anomalous models, and their behaviour as we vary bulk parameters in the quantum Kahler moduli space. We focus on the case of (2,2) theories defined by abelian gauged linear sigma models (GLSM). We use the hemisphere partition function as a guide to find how B-branes split in the IR into components supported on Higgs, mixed and Coulomb branches: this generalizes the band restriction rule of Herbst-Hori-Page to anomalous models. As a central example, we work out in detail the case of GLSMs for Hirzebruch-Jung resolutions of cyclic surface singularities. In these non-compact models we explain how to compute and regularize the hemisphere partition function for a brane with compact support, and check that its Higgs branch component explicitly matches with the geometric central charge of an object in the derived category.",
"@cite_2: We study B-type D-branes in linear sigma models with Abelian gauge groups. The most important finding is the grade restriction rule. It classifies representations of the gauge group on the Chan-Paton factor, which can be used to define a family of D-branes over a region of the Kahler moduli space that connects special points of different character. As an application, we find a precise, transparent relation between D-branes in various geometric phases as well as free orbifold and Landau-Ginzburg points. The result reproduces and unifies many of the earlier mathematical results on equivalences of D-brane categories, including the McKay correspondence and Orlov's construction."
] | Recently, Clingempeel-Le Floch-Romo @cite_1 compared the hemisphere partition functions (which in our language correspond to certain solutions of the quantum D-modules) of the cyclic quotient singularities @math and their Hirzebruch-Jung resolutions. They discussed the relation to a semiorthogonal decomposition of the derived categories, extending the work of Herbst-Hori-Page @cite_2 to the anomalous (discrepant) case. Their examples are complementary to ours: the Hirzebruch-Jung resolutions are type (II-ii) discrepant transformations, whereas the transformations in Theorem are of type (II-i) or (III) (see Remark for these types). |
[
"abstract: We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas.",
"@cite_1: Abstract We present an algorithm that generates all maximal independent sets of a graph in lexicographic order, with only polynomial delay between the output of two successive independent sets. We also show that there is no polynomial-delay algorithm for generating all maximal independent sets in reverse lexicographic order, unless P=NP.",
"@cite_2: A clique is a maximal complete subgraph of a graph. The maximum number of cliques possible in a graph withn nodes is determined. Also, bounds are obtained for the number of different sizes of cliques possible in such a graph.",
"@cite_3: In this paper we describe and analyze an improved algorithm for deciding the 3-Colourability problem. If G is a simple graph on n vertices then we will show that this algorithm tests a graph for 3-Colourability, i.e. an assignment of three colours to the vertices of G such that two adjacent vertices obtain different colours, in less than O(1.415n) steps."
] | For three-coloring, we know of several relevant references. Lawler is primarily concerned with the general chromatic number, but he also gives the following very simple algorithm for 3-coloring: for each maximal independent set, test whether the complement is bipartite. The maximal independent sets can be listed with polynomial delay @cite_1 , and there are at most @math such sets @cite_2 , so this algorithm takes time @math . Schiermeyer @cite_3 gives a complicated algorithm for solving 3-colorability in time @math , based on the following idea: if there is one vertex @math of degree @math then the graph is 3-colorable iff @math is bipartite, and the problem is easily solved. Otherwise, Schiermeyer performs certain reductions involving maximal independent sets that attempt to increase the degree of @math while partitioning the problem into subproblems, at least one of which will remain solvable. Our @math bound significantly improves both of these results. |
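Lawler's simple algorithm quoted above can be sketched directly: enumerate the maximal independent sets and test whether the complement of each is bipartite. This is a minimal illustration, not the cited implementations; it brute-forces all subsets (the polynomial-delay enumeration of @cite_1 is what makes the stated time bound possible), and all function names are ours.

```python
def build_adj(n, edges):
    # adjacency sets for vertices 0..n-1
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def is_independent(S, adj):
    return all(v not in adj[u] for u in S for v in S)

def maximal_independent_sets(n, adj):
    # brute force over all subsets, for illustration only
    for mask in range(1 << n):
        S = {v for v in range(n) if mask >> v & 1}
        if not S or not is_independent(S, adj):
            continue
        # maximal: no vertex outside S can be added
        if any(v not in S and is_independent(S | {v}, adj) for v in range(n)):
            continue
        yield S

def is_bipartite(vertices, adj):
    # BFS 2-coloring restricted to the induced subgraph on `vertices`
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in vertices:
                    continue
                if v in color:
                    if color[v] == color[u]:
                        return False
                else:
                    color[v] = 1 - color[u]
                    stack.append(v)
    return True

def three_colorable(n, edges):
    # G is 3-colorable iff some maximal independent set (one color class)
    # has a bipartite complement (the other two classes)
    adj = build_adj(n, edges)
    return any(is_bipartite(set(range(n)) - S, adj)
               for S in maximal_independent_sets(n, adj))
```

On a 5-cycle (3-chromatic) this returns True; on the complete graph K4 it returns False, since every maximal independent set is a single vertex and the remaining triangle is not bipartite.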
[
"abstract: We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas.",
"@cite_1: We show how the results of Karger, Motwani, and Sudan (1994) and Blum (1994) can be combined in a natural manner to yield a polynomial-time algorithm for O(n314)-coloring any n-node 3-colorable graph. This improves on the previous best bound of O(n14) colors (, 1994).",
"@cite_2: Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.",
"@cite_3: This paper describes a randomised algorithm for the NP-complete problem of 3-colouring the vertices of a graph. The method is based on a model of repulsion in interacting particle systems. Although it seems to work well on most random inputs there is a critical phenomenon apparent reminiscent of critical behaviour in other areas of statistical mechanics.",
"@cite_4: We present a simple generation procedure which turns out to be an effective source of very hard cases for graph 3-colorability. The graphs distributed according to this generation procedure are much denser in very hard cases than previously reported for the same problem size. The coloring cost for these instances is also orders of magnitude bigger. This ability is issued from the fact that the procedure favors-inside the class of graphs with given connectivity and free of 4-cliques-the generation of graphs with relatively few paths of length three (that we call 3-paths). There is a critical value of the ratio between the number of 3-paths and the number of edges, independent of the number of nodes, which separates the graphs having the same connectivity in two regions: one contains almost all graphs free of 4-cliques while the other contains almost no such graphs. The generated very hard cases are near this phase transition, and have a regular structure, witnessed by the low variance in node degrees, as opposite to the random graphs. This regularity in the graph structure seems to confuse the coloring algorithm by inducing an uniform search space, with no clue for the search."
] | There has also been some related work on approximate or heuristic 3-coloring algorithms. Blum and Karger @cite_1 show that any 3-chromatic graph can be colored with @math colors in polynomial time. Alon and Kahale @cite_2 describe a technique for coloring random 3-chromatic graphs in expected polynomial time, and Petford and Welsh @cite_3 present a randomized algorithm for 3-coloring graphs that works well empirically on random graphs, although they prove no bounds on its running time. Finally, Vlasie @cite_4 has described a class of instances which (unlike random 3-chromatic graphs) are difficult to color. |
[
"abstract: In this paper, we present a new methodology for developing systematic and automatic test generation algorithms for multipoint protocols. These algorithms attempt to synthesize network topologies and sequences of events that stress the protocol's correctness or performance. This problem can be viewed as a domain-specific search problem that suffers from the state space explosion problem. One goal of this work is to circumvent the state space explosion problem utilizing knowledge of network and fault modeling, and multipoint protocols. The two approaches investigated in this study are based on forward and backward search techniques. We use an extended finite state machine (FSM) model of the protocol. The first algorithm uses forward search to perform reduced reachability analysis. Using domain-specific information for multicast routing over LANs, the algorithm complexity is reduced from exponential to polynomial in the number of routers. This approach, however, does not fully automate topology synthesis. The second algorithm, the fault-oriented test generation, uses backward search for topology synthesis and uses backtracking to generate event sequences instead of searching forward from initial states. Using these algorithms, we have conducted studies for correctness of the multicast routing protocol PIM. We propose to extend these algorithms to study end-to-end multipoint protocols using a virtual LAN that represents delays of the underlying multicast distribution tree.",
"@cite_1: In order for the nodes of a distributed computer network to communicate, each node must have information about the network's topology. Since nodes and links sometimes crash, a scheme is needed to update this information. One of the major constraints on such a topology information scheme is that it may not involve a central controller. The Topology Information Protocol that was implemented on the MERIT Computer Network is presented and explained; this protocol is quite general and could be implemented on any computer network. It is based on Baran's “Hot Potato Heuristic Routing Doctrine.” A correctness proof of this Topology Information Protocol is also presented.",
"@cite_2: This paper deals with a distributed adaptive routing strategy which is very simple and effective, and is free of a ping-pong-type looping in the presence of network failures. Using the number of time intervals required for a node to recover from a network failure as the measure of network's adaptability, performance of this strategy and the ARPANET's previous routing strategy (APRS) is comparatively analyzed without resorting to simulation. Formulas of the exact number of time intervals required for failure recovery under both strategies are also derived. We show that i)the performance of the strategy is always better than, or at least as good as, that of APRS, and ii) network topology has significant effects on the performance of both strategies.",
"@cite_3: A new distributed algorithm is presented for dynamically determining weighted shortest paths used for message routing in computer networks. The major features of the algorithm are that the paths defined do not form transient loops when weights change and the number of steps required to find new shortest paths when network links fail is less than for previous algorithms. Specifically, the worst case recovery time is proportional to the largest number of hops h in any of the weighted shortest paths. For previous loop-free distributed algorithms this recovery time is proportional to h2.",
"@cite_4: An algorithm for constructing and adaptively maintaining routing tables in communication networks is presented. The algorithm can be employed in message as well as circuit switching networks, uses distributed computation, provides routing tables that are loop-free for each destination at all times, adapts to changes in network flows, and is completely failsafe. The latter means that after arbitrary failures and additions, the network recovers in finite time in the sense of providing routing paths between all physically connected nodes. For each destination, the routes are independently updated by an update cycle triggered by the destination.",
"@cite_5: Mobile computing represents a major point of departure from the traditional distributed computing paradigm. The potentially very large number of independent computing units, a decoupled computing style, frequent disconnections, continuous position changes, and the location-dependent nature of the behavior and communication patterns of the individual components present designers with unprecedented challenges in the areas of modularity and dependability. The paper describes two ideas regarding a modular approach to specifying and reasoning about mobile computing. The novelty of our approach rests with the notion of allowing transient interactions among programs which move in space. We restrict our concern to pairwise interactions involving variable sharing and action synchronization. The motivation behind the transient nature of the interactions comes from the fact that components can communicate with each other only when they are within a certain range. The notation we propose is meant to simplify the writing of mobile applications and is a direct extension of that used in UNITY. Reasoning about mobile computations relies on the UNITY proof logic.",
"@cite_6: My goal is to propose a set of questions that I think are important. J. Misra and I are working on these questions.",
"@cite_7: Aho, Ullman, and Yannakakis have proposed a set of protocols that ensure reliable transmission of data across an error-prone channel. They have obtained lower bounds on the complexity required of the protocols to assure reliability for different classes of errors. They specify these protocols with finite-state machines. Although the protocol machines have only a small number of states, they are nontrivial to prove correct. In this paper we present proofs of one of these protocols using the finite-state-machine approach and the abstract-program approach. We also show that the abstract-program approach gives special insight into the operation of the protocol.",
"@cite_8: 0. Introduction.- 1. Experimenting on nondeterministic machines.- 2. Synchronization.- 3. A case study in synchronization and proof techniques.- 4. Case studies in value-communication.- 5. Syntax and semantics of CCS.- 6. Communication trees (CTs) as a model of CCS.- 7. Observation equivalence and its properties.- 8. Some proofs about data structures.- 9. Translation into CCS.- 10. Determinancy and confluence.- 11. Conclusion."
] | Several attempts to apply formal verification to network protocols have been made. Assertional proof techniques were used to prove distance vector routing @cite_1 , path vector routing @cite_2 , and route diffusion algorithms @cite_3 and @cite_4 using communicating finite state machines. An example point-to-point mobile application was proved with assertional reasoning in @cite_5 using UNITY @cite_6 . Axiomatic reasoning was used to prove a simple transmission protocol in @cite_7 . Algebraic systems based on the calculus of communicating systems (CCS) @cite_8 have been used to prove CSMA/CD . Formal verification has been applied to TCP and T/TCP in . |
[
"abstract: In this paper, we present a new methodology for developing systematic and automatic test generation algorithms for multipoint protocols. These algorithms attempt to synthesize network topologies and sequences of events that stress the protocol's correctness or performance. This problem can be viewed as a domain-specific search problem that suffers from the state space explosion problem. One goal of this work is to circumvent the state space explosion problem utilizing knowledge of network and fault modeling, and multipoint protocols. The two approaches investigated in this study are based on forward and backward search techniques. We use an extended finite state machine (FSM) model of the protocol. The first algorithm uses forward search to perform reduced reachability analysis. Using domain-specific information for multicast routing over LANs, the algorithm complexity is reduced from exponential to polynomial in the number of routers. This approach, however, does not fully automate topology synthesis. The second algorithm, the fault-oriented test generation, uses backward search for topology synthesis and uses backtracking to generate event sequences instead of searching forward from initial states. Using these algorithms, we have conducted studies for correctness of the multicast routing protocol PIM. We propose to extend these algorithms to study end-to-end multipoint protocols using a virtual LAN that represents delays of the underlying multicast distribution tree.",
"@cite_1: The temporal logic of actions (TLA) is a logic for specifying and reasoning about concurrent systems. Systems and their properties are represented in the same logic, so the assertion that a system meets its specification and the assertion that one system implements another are both expressed by logical implication. TLA is very simple; its syntax and complete formal semantics are summarized in about a page. Yet, TLA is not just a logician's toy; it is extremely powerful, both in principle and in practice. This report introduces TLA and describes how it is used to specify and verify concurrent algorithms. The use of TLA to specify and reason about open systems will be described elsewhere."
] | The combination of timed automata, invariants, simulation mappings, automaton composition, and temporal logic @cite_1 seems to be a very useful set of tools for proving (or disproving) and reasoning about safety or liveness properties of distributed algorithms. It may also be used to establish asymptotic bounds on the complexity of distributed algorithms. It is not clear, however, how theorem proving techniques can be used in test synthesis to construct event sequences and topologies that stress network protocols. Parts of our work draw from distributed algorithm verification principles. Yet we feel that our work complements such work, as we focus on test synthesis problems. |
[
"abstract: We address the problem of complementing higher-order patterns without repetitions of existential variables. Differently from the first-order case, the complement of a pattern cannot, in general, be described by a pattern, or even by a finite set of patterns. We therefore generalize the simply-typed λ-calculus to include an internal notion of strict function so that we can directly express that a term must depend on a given variable. We show that, in this more expressive calculus, finite sets of patterns without repeated variables are closed under complement and intersection. Our principal application is the transformational approach to negation in higher-order logic programs.",
"@cite_1: Abstract A transformation technique is introduced which, given the Horn-clause definition of a set of predicates p i , synthesizes the definitions of new predicate p i which can be used, under a suitable refutation procedure, to compute the finite failure set of p i . This technique exhibits some computational advantages, such as the possibility of computing nonground negative goals still preserving the capability of producing answers. The refutation procedure, named SLDN refutation, is proved sound and complete with respect to the completed program."
] | Lassez87 proposed the seminal algorithm for computing relative complements and introduced the now familiar restriction to linear terms. We quote the definition of the @math algorithm for the (singleton) complement problem given in @cite_1 , which we generalize in Definition . Given a finite signature @math and a linear term @math , they define: [ ] The relative complement problem is then solved by composing the above complement operation with term intersection, implemented via first-order unification. |
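The first-order complement operation described above can be sketched as follows. The signature and term encoding are assumptions for illustration: the complement of a linear term is covered by terms with a different head symbol, plus same-head terms in which exactly one argument is complemented and the rest are wildcards.

```python
# assumed example signature: constructor name -> arity
SIG = {"zero": 0, "succ": 1, "cons": 2, "nil": 0}

# term encoding: VAR is the anonymous variable, otherwise (name, [args])
VAR = ("var",)

def wild(f):
    # f applied to fresh anonymous variables
    return (f, [VAR] * SIG[f])

def complement(t):
    """Finite set of linear terms matching exactly the ground terms
    that do NOT match the linear term t (Lassez-style 'not' operation)."""
    if t == VAR:
        return []  # no ground term escapes a variable pattern
    f, args = t
    # case 1: any term with a different head constructor
    out = [wild(g) for g in SIG if g != f]
    # case 2: same head, one argument in the complement, others wildcards
    for i, a in enumerate(args):
        for c in complement(a):
            out.append((f, [VAR] * i + [c] + [VAR] * (len(args) - i - 1)))
    return out
```

For example, the complement of `succ(zero)` consists of `zero`, `cons(_,_)`, `nil`, and `succ(c)` for each `c` in the complement of `zero`, i.e. six terms in this signature.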
[
"abstract: We address the problem of complementing higher-order patterns without repetitions of existential variables. Differently from the first-order case, the complement of a pattern cannot, in general, be described by a pattern, or even by a finite set of patterns. We therefore generalize the simply-typed λ-calculus to include an internal notion of strict function so that we can directly express that a term must depend on a given variable. We show that, in this more expressive calculus, finite sets of patterns without repeated variables are closed under complement and intersection. Our principal application is the transformational approach to negation in higher-order logic programs.",
"@cite_1: Functional logic languages with a sound and complete operational semantics are mainly based on narrowing. Due to the huge search space of simple narrowing, steadily improved narrowing strategies have been developed in the past. Needed narrowing is currently the best narrowing strategy for first-order functional logic programs due to its optimality properties w.r.t. the length of derivations and the number of computed solutions. In this paper, we extend the needed narrowing strategy to higher-order functions and λ-terms as data structures. By the use of definitional trees, our strategy computes only incomparable solutions. Thus, it is the first calculus for higher-order functional logic programming which provides for such an optimality result. Since we allow higher-order logical variables denoting λ-terms, applications go beyond current functional and logic programming languages."
] | The class of higher-order patterns inherits many properties from first-order terms. However, as we will see, it is not closed under complement; only a special subclass is. We call a pattern @math fully applied if each occurrence of an existential variable @math under binders @math is applied to some permutation of the variables in @math and @math . Fully applied patterns play an important role in functional logic programming and rewriting @cite_1 , because any fully applied existential variable @math denotes all canonical terms of type @math with parameters from @math . It is this property which makes complementation particularly simple. |
[
"abstract: Nomadic applications create replicas of shared objects that evolve independently while they are disconnected. When reconnecting, the system has to reconcile the divergent replicas. In the log-based approach to reconciliation, such as in the IceCube system, the input is a common initial state and logs of actions that were performed on each replica. The output is a consistent global schedule that maximises the number of accepted actions. The reconciler merges the logs according to the schedule, and replays the operations in the merged log against the initial state, yielding to a reconciled common final state. In this paper, we show the NP-completeness of the log-based reconciliation problem and present two programs for solving it. Firstly, a constraint logic program (CLP) that uses integer constraints for expressing precedence constraints, boolean constraints for expressing dependencies between actions, and some heuristics for guiding the search. Secondly, a stochastic local search method with Tabu heuristic (LS), that computes solutions in an incremental fashion but does not prove optimality. One difficulty in the LS modeling lies in the handling of both boolean variables and integer variables, and in the handling of the objective function which differs from a max-CSP problem. Preliminary evaluation results indicate better performance for the CLP program which, on somewhat realistic benchmarks, finds nearly optimal solutions up to a thousands of actions and proves optimality up to a hundreds of actions.",
"@cite_1: We describe a novel approach to log-based reconciliation called IceCube. It is general and is parameterised by application and object semantics. IceCube considers more flexible orderings and is designed to ease the burden of reconciliation on the application programmers. IceCube captures the static and dynamic reconciliation constraints between all pairs of actions, proposes schedules that satisfy the static constraints, and validates them against the dynamic constraints. Preliminary experience indicates that strong static constraints successfully contain the potential combinatorial explosion of the simulation stage. With weaker static constraints, the system still finds good solutions in a reasonable time.",
"@cite_2: Abstract The paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic , that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n -queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective."
] | Log-based reconciliation is a new topic for which few algorithms have been developed. The only implementation we know of is the IceCube system reported in @cite_1 . It is worth noting that the objective function, maximizing the number of accepted actions, is different from maximizing the number of satisfied constraints. For that reason, modeling log-based reconciliation as a max-CSP problem is inadequate. This is also the main reason why, in our second program based on local search, the min-conflicts heuristic @cite_2 or the adaptive search method of do not perform well in our modeling, and we use instead a randomized Tabu heuristic. |
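The min-conflicts heuristic of @cite_2 can be sketched on its classic n-queens showcase (a minimal illustration under our own naming, not the reconciliation modeling itself): start from a complete random assignment, then repeatedly pick a conflicted variable and move it to the value that minimizes its constraint violations.

```python
import random

def queen_conflicts(rows, c, r):
    # violations of a queen at (column c, row r) against all other columns
    return sum(1 for c2, r2 in enumerate(rows)
               if c2 != c and (r2 == r or abs(r2 - r) == abs(c2 - c)))

def min_conflicts_queens(n, max_steps=20000, seed=0):
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]   # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if queen_conflicts(rows, c, rows[c])]
        if not conflicted:
            return rows                            # conflict-free assignment
        c = rng.choice(conflicted)
        # min-conflicts repair step, breaking ties at random
        rows[c] = min(range(n),
                      key=lambda r: (queen_conflicts(rows, c, r), rng.random()))
    return None

def solve_queens(n, restarts=50):
    # random restarts guard against the occasional local minimum
    for seed in range(restarts):
        sol = min_conflicts_queens(n, seed=seed)
        if sol is not None:
            return sol
    return None
```

The repair loop illustrates why the method is incremental but offers no optimality proof: it only ever observes local conflict counts, which is also why a plain max-CSP objective transfers poorly to the reconciliation setting discussed above.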
[
"abstract: The traffic behavior of University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against null-hypothesis of random interactions. The majority of the eigenvalues i of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions.",
"@cite_1: The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveals both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.",
"@cite_2: Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.",
"@cite_3: Hidden semi-Markov Model (HsMM) has been well studied and widely applied to many areas. The advantage of using an HsMM is its efficient forward-backward algorithm for estimating model parameters to best account for an observed sequence. In this paper, we propose an HsMM to model the distribution of network-wide traffic and use an observation window to distinguish DoS flooding attacks mixed within the normal background traffic. Several experiments are conducted to validate our method.",
"@cite_4: IP forwarding anomalies, triggered by equipment failures, implementation bugs, or configuration errors, can significantly disrupt and degrade network service. Robust and reliable detection of such anomalies is essential to rapid problem diagnosis, problem mitigation, and repair. We propose a simple, robust method that integrates routing and traffic data streams to reliably detect forwarding anomalies. The overall method is scalable, automated and self-training. We find this technique effectively identifies forwarding anomalies, while avoiding the high false alarms rate that would otherwise result if either stream were used unilaterally.",
"@cite_5: Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage."
] | The urgent need for a network-wide, scalable approach to the problem of building a healthy network traffic profile is expressed in the works of @cite_1 @cite_2 @cite_3 @cite_4 . Several studies with promising results demonstrate that anomalous traffic events cause temporal changes in the statistical properties of traffic features. Lakhina, Crovella and Diot presented a characterization of network-wide anomalies in traffic flows. The authors studied three different types of traffic flows and fused the information from flow measurements taken throughout the entire network. Using the subspace method, they obtained and classified a different set of anomalies for each traffic type @cite_2 . |
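The null-hypothesis test behind the RMT approach in the abstract can be illustrated numerically. The data here are synthetic (N, T, and the seed are arbitrary choices): for independent, uncongested traffic series, the eigenvalues of the cross-correlation matrix C should fall within the Marchenko-Pastur bounds, so stable eigenvalues outside those bounds signal non-random, system-specific interactions.

```python
import numpy as np

# toy stand-in for the traffic rate-change series: N links, T samples each
N, T = 20, 500
rng = np.random.default_rng(42)
X = rng.standard_normal((N, T))        # null model: independent link traffic

# normalize each series to zero mean / unit variance, then C = X X^T / T
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
C = X @ X.T / T
eig = np.linalg.eigvalsh(C)

# Marchenko-Pastur bounds for a random correlation matrix, with Q = T/N
Q = T / N
lam_lo = (1 - (1 / Q) ** 0.5) ** 2
lam_hi = (1 + (1 / Q) ** 0.5) ** 2
n_outside = int(np.sum((eig < lam_lo) | (eig > lam_hi)))
```

Eigenvalues (and the components of the corresponding eigenvectors) persistently outside [lam_lo, lam_hi] are the candidates for the stable non-random interactions and anomalies the method is after.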
[
"abstract: The traffic behavior of University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against null-hypothesis of random interactions. The majority of the eigenvalues i of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions.",
"@cite_1: The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveals both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework."
] | The same group of researchers extended their work in @cite_1 . Under the new assumption that any network anomaly induces changes in the distributional aspects of packet header fields, they detected and identified a large set of anomalies using entropy as a summarization tool. |
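The entropy summarization idea can be illustrated with a minimal sketch: compute the Shannon entropy of a packet-header feature (here, destination ports) per time bin and watch for sharp shifts. The port values and bins below are invented; a real detector would track entropy time series of several fields (source/destination addresses and ports) across many links.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

# Hypothetical destination ports sampled from flow records in two time bins.
normal_bin = [80, 443, 53, 22, 80, 443, 8080, 25, 110, 443]  # mixed services
attack_bin = [80] * 9 + [443]                                # focused flood

h_normal = entropy(normal_bin)
h_attack = entropy(attack_bin)
# A sharp drop in destination-port entropy hints at concentration (e.g. a
# flood on one service); a rise in source-IP entropy hints at dispersion.
```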
[
"abstract: The traffic behavior of University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against null-hypothesis of random interactions. The majority of the eigenvalues λi of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions.",
"@cite_1: Hidden semi-Markov Model (HsMM) has been well studied and widely applied to many areas. The advantage of using an HsMM is its efficient forward-backward algorithm for estimating model parameters to best account for an observed sequence. In this paper, we propose an HsMM to model the distribution of network-wide traffic and use an observation window to distinguish DoS flooding attacks mixed within the normal background traffic. Several experiments are conducted to validate our method."
] | A hidden semi-Markov model (HsMM) has been proposed to model the distribution of network-wide traffic in @cite_1 . An observation window is used to distinguish denial-of-service (DoS) flooding attacks mixed with the normal background traffic. |
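The windowed detection scheme can be sketched with a plain discrete HMM rather than the semi-Markov variant of the cited work: score each observation window by its forward-algorithm log-likelihood under a model of normal traffic, and flag windows that score far below the usual range. All model parameters and observation sequences below are invented for illustration.

```python
import math

# Toy 2-state discrete HMM standing in for a model of normal traffic:
# observations are quantized packet-rate levels 0 (low), 1 (medium), 2 (high).
# All parameters below are invented for illustration.
pi = [0.6, 0.4]                    # initial state distribution
A = [[0.9, 0.1],                   # state transition probabilities
     [0.2, 0.8]]
B = [[0.7, 0.25, 0.05],            # emission probs in state 0 ("quiet")
     [0.1, 0.6, 0.3]]              # emission probs in state 1 ("busy")

def window_loglik(obs):
    """Forward-algorithm log-likelihood of an observation window."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(2)) * B[s][o]
                 for s in range(2)]
    return math.log(sum(alpha))

normal_window = [0, 1, 0, 1, 1, 0, 0, 1]
flood_window = [2, 2, 2, 2, 2, 2, 2, 2]   # sustained high rate

ll_normal = window_loglik(normal_window)
ll_flood = window_loglik(flood_window)
# Windows whose log-likelihood falls far below the normal range are flagged.
```

An HsMM additionally models explicit state-duration distributions, which the forward recursion above does not capture; the windowing and likelihood-thresholding logic is the same.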
[
"abstract: Constructing effective representations is a critical but challenging problem in multimedia understanding. The traditional handcraft features often rely on domain knowledge, limiting the performances of exiting methods. This paper discusses a novel computational architecture for general image feature mining, which assembles the primitive filters (i.e. Gabor wavelets) into compositional features in a layer-wise manner. In each layer, we produce a number of base classifiers (i.e. regression stumps) associated with the generated features, and discover informative compositions by using the boosting algorithm. The output compositional features of each layer are treated as the base components to build up the next layer. Our framework is able to generate expressive image representations while inducing very discriminate functions for image classification. The experiments are conducted on several public datasets, and we demonstrate superior performances over state-of-the-art approaches.",
"@cite_1: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"@cite_2: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralbas \"gist\" and Lowes SIFT descriptors.",
"@cite_3: Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.",
"@cite_4: The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K2). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications.",
"@cite_5: Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.",
"@cite_6: The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K2). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications."
] | In the past few decades, many works have focused on designing different types of features to capture the characteristics of images, such as color, SIFT and HOG @cite_1 . Based on these feature descriptors, the Bag-of-Features (BoF) model has become the most classical image representation method in computer vision and related multimedia applications. Several promising studies @cite_2 @cite_3 @cite_4 have been published that improve this traditional approach in different respects. Among these extensions, a class of sparse coding based methods @cite_3 @cite_4 , which employ the spatial pyramid matching (SPM) kernel proposed by Lazebnik , has achieved great success in image classification. Although ever more effective representation methods are being developed, the lack of high-level image expression still prevents us from building an ideal vision system. |
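The BoF-plus-SPM pipeline discussed above can be sketched as follows: vector-quantize local descriptors against a codebook, then histogram the resulting visual words over the whole image and over a 2x2 grid of cells, concatenating the histograms. The codebook, descriptors and positions below are random placeholders (real systems use a k-means codebook over SIFT descriptors), and the cited extensions replace the hard vector-quantization step with sparse coding or locality-constrained linear coding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a tiny codebook of K visual words and some local
# descriptors with normalized (x, y) image positions, standing in for SIFT.
K, D = 4, 8
codebook = rng.normal(size=(K, D))
descriptors = rng.normal(size=(50, D))
positions = rng.uniform(0.0, 1.0, size=(50, 2))

# Bag-of-Features step: assign each descriptor to its nearest visual word.
dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
words = dists.argmin(axis=1)

# Spatial pyramid (levels 0 and 1): histogram the visual words over the
# whole image, then over each cell of a 2x2 grid, and concatenate.
def cell_hist(mask):
    return np.bincount(words[mask], minlength=K).astype(float)

hists = [cell_hist(np.ones(len(words), dtype=bool))]   # level 0: whole image
for gx in range(2):
    for gy in range(2):                                # level 1: 2x2 cells
        mask = ((positions[:, 0] // 0.5 == gx) &
                (positions[:, 1] // 0.5 == gy))
        hists.append(cell_hist(mask))

pyramid = np.concatenate(hists)        # (1 + 4) * K = 20 dimensions here
pyramid /= pyramid.sum()               # normalize for cross-image comparison
```

The SPM kernel additionally weights the levels (coarser cells get smaller weights) before computing histogram intersection; that weighting is omitted here for brevity.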
[
"abstract: Constructing effective representations is a critical but challenging problem in multimedia understanding. The traditional handcraft features often rely on domain knowledge, limiting the performances of exiting methods. This paper discusses a novel computational architecture for general image feature mining, which assembles the primitive filters (i.e. Gabor wavelets) into compositional features in a layer-wise manner. In each layer, we produce a number of base classifiers (i.e. regression stumps) associated with the generated features, and discover informative compositions by using the boosting algorithm. The output compositional features of each layer are treated as the base components to build up the next layer. Our framework is able to generate expressive image representations while inducing very discriminate functions for image classification. The experiments are conducted on several public datasets, and we demonstrate superior performances over state-of-the-art approaches.",
"@cite_1: This paper illustrates a hierarchical generative model for representing and recognizing compositional object categories with large intra-category variance. In this model, objects are broken into their constituent parts and the variability of configurations and relationships between these parts are modeled by stochastic attribute graph grammars, which are embedded in an And-Or graph for each compositional object category. It combines the power of a stochastic context free grammar (SCFG) to express the variability of part configurations, and a Markov random field (MRF) to represent the pictorial spatial relationships between these parts. As a generative model, different object instances of a category can be realized as a traversal through the And-Or graph to arrive at a valid configuration (like a valid sentence in language, by analogy). The inference recognition procedure is intimately tied to the structure of the model and follows a probabilistic formulation consisting of bottom-up detection steps for the parts, which in turn recursively activate the grammar rules for top-down verification and searches for missing parts. We present experiments comparing our results to state of art methods and demonstrate the potential of our proposed framework on compositional objects with cluttered backgrounds using training and testing data from the public Lotus Hill and Caltech datasets.",
"@cite_2: We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"@cite_3: Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach.",
"@cite_4: Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach."
] | On the other hand, learning hierarchical models that simultaneously construct multiple levels of visual representation has received much attention recently @cite_1 . Our deep boosting method is partially motivated by recently developed deep learning techniques @cite_2 @cite_3 . Unlike previous hand-crafted feature design methods, a deep model learns the feature representation from raw data and effectively generates high-level semantic representations. However, as shown in a recent study @cite_3 , these network-based hierarchical models often contain thousands of nodes in a single layer and are too complex to control in real multimedia applications. In contrast, a distinguishing characteristic of our study is that we build up the deep architecture to generate expressive image representations simply, obtaining a near-optimal classification rate in each layer. |
[
"abstract: In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz).",
"@cite_1: In many applications of computer vision, the following problem is encountered. Two point patterns (sets of points) {x_i} and {y_i}; i=1, 2, . . ., n are given in m-dimensional space, and the similarity transformation parameters (rotation, translation, and scaling) that give the least mean squared error between these point patterns are needed. Recently, K.S. Arun (1987) and B.K.P. Horn (1987) presented a solution of this problem. Their solution, however, sometimes fails to give a correct rotation matrix and gives a reflection instead when the data is severely corrupted. The proposed theorem is a strict solution of the problem, and it always gives the correct transformation parameters even when the data is corrupted.",
"@cite_2: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing"
] | Finding the transformation given by a set of point correspondences is a common problem in computer vision, e.g., for ego-motion estimation. A method to obtain a closed-form solution by means of Least-Squares Estimation (LSE) is given in @cite_1 . However, when dealing with sets of point correspondences containing wrong associations, the result given by an LSE is distorted by the outliers. This problem is commonly addressed by using a sample consensus method such as RANSAC @cite_2 . |
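The combination of a closed-form LSE with RANSAC can be sketched as below: an SVD-based (Kabsch-style) solution for the rigid transform, wrapped in a RANSAC loop that samples minimal three-point sets, counts inliers, and refits on the best consensus set. This is a simplified rigid (rotation plus translation) variant; the cited LSE also estimates scale, and the iteration count, threshold and synthetic data are illustrative choices.

```python
import numpy as np

def fit_rigid(P, Q):
    """Closed-form least-squares R, t with Q ~ R P + t (SVD solution;
    the sign correction guards against returning a reflection)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, thresh=0.05, seed=2):
    """Robust fit: sample minimal 3-point sets, score each hypothesis by
    its inlier count, then refit on the inliers of the best hypothesis."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(P[best_inliers], Q[best_inliers])

# Synthetic check: a known rotation/translation with 20% wrong associations.
rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, size=(50, 3))
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.1])
Q = P @ R_true.T + t_true
Q[:10] += rng.uniform(-1, 1, size=(10, 3))   # simulate outlier matches
R, t = ransac_rigid(P, Q)
```

A plain `fit_rigid(P, Q)` on all 50 pairs would be pulled off the true transform by the ten corrupted correspondences; the RANSAC wrapper recovers it despite them.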
[
"abstract: In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz).",
"@cite_1: Stereo camera is a very important sensor for mobile robot localization and mapping. Its consecutive images can be used to estimate the location of the robot with respect to its environment. This estimation will be fused with location estimates from other sensors for a globally optimal location estimate. In the data fusion context, it is important to compute the uncertainty of the stereo-based localization. In this paper, we propose an approach to obtain the uncertainty of localization when a correspondence-based method is used to estimate the robot pose. The computational complexity of this approach is O(n). Where n is the number of corresponding image points. Experimental results show that this approach is promising."
] | Zhang @cite_1 provides an uncertainty estimate for a 3D stereo-based localization approach that uses a correspondence-based method to estimate the robot pose. As in our work, visual features are extracted from the image and RANSAC is used to remove outliers from the initial matches between features in two consecutive images. In contrast, our approach establishes correspondences by image-to-map matching. Thus, additional sources of false correspondences arise, such as repeated objects in the world, the presence of several features in the map extracted from the same point in the world, or the much larger number of features in the map, which increases the chance of random matches. |
[
"abstract: In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz).",
"@cite_1: This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation \"where needed.\" The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement.",
"@cite_2: Mobile robot localization is the problem of determining a robot’s pose from sensor data. This article presents a family of probabilistic localization algorithms known as Monte Carlo Localization (MCL). MCL algorithms represent a robot’s belief by a set of weighted hypotheses (samples), which approximate the posterior under a common Bayesian formulation of the localization problem. Building on the basic MCL algorithm, this article develops a more robust algorithm called MixtureMCL, which integrates two complimentary ways of generating samples in the estimation. To apply this algorithm to mobile robots equipped with range finders, a kernel density tree is learned that permits fast sampling. Systematic empirical results illustrate the robustness and computational efficiency of the approach. 2001 Published by Elsevier Science B.V."
] | MCL overcomes the limitations of EKFs mentioned earlier. It was successfully used in @cite_1 for vision-based localization of a tour-guide robot in a museum, using a map of the ceiling and a camera pointing at it. In contrast to this approach, we do not rely on odometry measurements to predict the pose and are not restricted to planar motion. Additionally, MCL can report incorrect locations after unexpected robot motions or sensor outages. Sensor Resetting Localization partially substitutes particles with new ones generated directly from the sensor measurements when the position estimate is uncertain. Mixture-MCL @cite_2 combines standard MCL with dual-MCL to drastically reduce computational cost and localization error. Dual-MCL also generates particles from the current sensor measurements and was shown to deal well with the kidnapped robot problem when properly combined with standard MCL. We do not need a reset process, since our estimation is independent of the prior state. Our approach could be used in combination with Monte Carlo Localization (MCL) for efficient particle initialization and weighting. However, the results of our experiments show that our estimate is accurate enough to be used as the final result, without any filtering. |
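The MCL predict-weight-resample loop discussed above can be illustrated with a toy one-dimensional example: particles start spread over the whole corridor (global uncertainty) and collapse onto the true pose as measurements arrive. All quantities are invented; a real MCL uses a map-based sensor model, low-variance resampling, and odometry in the prediction step.

```python
import math
import random

random.seed(4)

# Toy 1-D corridor: the robot's pose is its distance from one end, and the
# "sensor" reads that distance with Gaussian noise. All numbers are invented.
N = 1000
true_pose, motion = 0.5, 0.4
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # global uncertainty

for _ in range(20):
    true_pose += motion
    z = true_pose + random.gauss(0.0, 0.1)        # noisy measurement

    # Predict: apply the motion model with noise to every particle.
    particles = [p + motion + random.gauss(0.0, 0.05) for p in particles]

    # Weight: likelihood of the measurement under a Gaussian sensor model.
    weights = [math.exp(-((z - p) ** 2) / (2 * 0.1 ** 2)) for p in particles]

    # Resample: draw a new particle set proportionally to the weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N   # posterior mean as the pose estimate
```

The kidnapped-robot weakness is visible in this sketch: once the particles have collapsed, a sudden jump of `true_pose` leaves no particles near the new measurement, which is exactly what sensor resetting and dual-MCL address by injecting measurement-driven particles.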
[
"abstract: In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features. For large maps, the performance of pure image matching techniques decays in terms of robustness and computational cost. Particularly, repeated occurrences of similar features due to repeating structure in the world (e.g., doorways, chairs, etc.) or missing associations between observations pose critical challenges to visual localization. We address these challenges using a two-step approach. We first estimate a candidate pose using few correspondences between features of the current camera frame and the feature map. The initial set of correspondences is established by proximity in feature space. The initial pose estimate is used in the second step to guide spatial matching of features in 3D, i.e., searching for associations where the image features are expected to be found in the map. A RANSAC algorithm is used to compute a fine estimation of the pose from the correspondences. Our approach clearly outperforms localization based on feature matching exclusively in feature space, both in terms of estimation accuracy and robustness to failure and allows for global localization in real time (30Hz).",
"@cite_1: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >",
"@cite_2: An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features are computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique.",
"@cite_3: The increasing number of ICP variants leads to an explosion of algorithms and parameters. This renders difficult the selection of the appropriate combination for a given application. In this paper, we propose a state-of-the-art, modular, and efficient implementation of an ICP library. We took advantage of the recent availability of fast depth cameras to demonstrate one application example: a 3D pose tracker running at 30 Hz. For this application, we show the modularity of our ICP library by optimizing the use of lean and simple descriptors in order to ease the matching of 3D point clouds. This tracker is then evaluated using datasets recorded along a ground truth of millimeter accuracy. We provide both source code and datasets to the community in order to accelerate further comparisons in this field.",
"@cite_4: We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. To this end we propose an energy function which aims at finding the best rigid body motion to map one RGB-D image into another one, assuming a static scene filmed by a moving camera. We then propose a linearization of the energy function which leads to a 6×6 normal equation for the twist coordinates representing the rigid body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude. While ICP is more robust to large camera motion, the proposed method gives better results in the regime of small displacements which are often the case in camera tracking applications.",
"@cite_5: We present a technique to estimate the egomotion of an RGB-D sensor based on rotations of functions defined on the unit sphere. In contrast to traditional approaches, our technique is not based on image features and does not require correspondences to be generated between frames of data. Instead, consecutive functions are correlated using spherical harmonic analysis. An Extended Gaussian Image (EGI), created from the local normal estimates of a point cloud, defines each function. Correlations are efficiently computed using Fourier transformations, resulting in a 3 Degree of Freedom (3-DoF) rotation estimate. An Iterative Closest Point (ICP) process then refines the initial rotation estimate and adds a translational component, yielding a full 6-DoF egomotion estimate. The focus of this work is to investigate the merits of using spherical harmonic analysis for egomotion estimation by comparison with alternative 6-DoF methods. We compare the performance of the proposed technique with that of stand-alone ICP and image feature based methods. As with other egomotion techniques, estimation errors accumulate and degrade results, necessitating correction mechanisms for robust localization. For this report, however, we use the raw estimates; no filtering or smoothing processes are applied. In-house and external benchmark data sets are analyzed for both runtime and accuracy. Results show that the algorithm is competitive in terms of both accuracy and runtime, and future work will aim to combine the various techniques into a more robust egomotion estimation framework."
] | To the best of our knowledge, there is at present no other dedicated global localization approach for the recently introduced RGB-D sensors. However, a number of novel approaches for visual odometry have been proposed, which exploit the available combination of color, dense depth data, and high frame rate to improve alignment performance compared, e.g., to the iterative closest point (ICP) algorithm @cite_1 . In @cite_2 and @cite_3 , adaptations of ICP are proposed to process the high amounts of data more efficiently. Steinbruecker @cite_4 present a transformation estimation based on the minimization of an energy function. For frames close to each other, they achieve enhanced runtime performance and accuracy compared to Generalized ICP. Using the distribution of normals, Osteen @cite_5 improve the initialization of ICP by efficiently computing the difference in orientation between two frames, which allows a substantial reduction of drift. These approaches work well for computing the transformation between consecutive frames with small incremental changes, but they are of limited applicability for global localization in a map. |