context
stringlengths
80
2.5k
A
stringlengths
80
2.59k
B
stringlengths
80
1.95k
C
stringlengths
80
3.07k
D
stringlengths
80
3.07k
label
stringclasses
4 values
If the ellipse is not circular, then ρℰsubscript𝜌ℰ\rho_{{\mathcal{E}}}italic_ρ start_POSTSUBSCRIPT caligraphic_E end_POSTSUBSCRIPT is smaller than the asymptotic convergence rate of Orthomin(1)1(1)( 1 ).
those of Orthomin(4). Note that, in general, the first k𝑘kitalic_k steps of Orthomin(k+1𝑘1k+1italic_k + 1) are identical to those of Orthomin(k𝑘kitalic_k).
similar to the ones for Orthomin(1), and in practice the two methods converge comparably fast for SPD systems. Analogously,
Two facts are notable about the behavior of Orthomin(k)𝑘(k)( italic_k ) on the systems in Conjecture 4.
The main goal of this article is to examine the behavior of Orthomin(k𝑘kitalic_k) on a family of examples, and to show
C
For each s∈S𝑠𝑆s\in Sitalic_s ∈ italic_S, (R⁢u⁢n⁢(s),ℱ,𝒫)𝑅𝑢𝑛𝑠ℱ𝒫(Run(s),\mathcal{F},\mathcal{P})( italic_R italic_u italic_n ( italic_s ) , caligraphic_F , caligraphic_P ) is a probability space, where ℱℱ\mathcal{F}caligraphic_F is the σ𝜎\sigmaitalic_σ-field generated by all basic cylinders C⁢y⁢l⁢(π)𝐶𝑦𝑙𝜋Cyl(\pi)italic_C italic_y italic_l ( italic_π ) and π𝜋\piitalic_π is a finite path initiating from s𝑠sitalic_s, C⁢y⁢l⁢(π)={π~∈R⁢u⁢n⁢(s):π∈p⁢r⁢e⁢f⁢i⁢x⁢(π~)}𝐶𝑦𝑙𝜋conditional-set~𝜋𝑅𝑢𝑛𝑠𝜋𝑝𝑟𝑒𝑓𝑖𝑥~𝜋Cyl(\pi)=\{\widetilde{\pi}\in Run(s):\pi\in prefix(\widetilde{\pi})\}italic_C italic_y italic_l ( italic_π ) = { over~ start_ARG italic_π end_ARG ∈ italic_R italic_u italic_n ( italic_s ) : italic_π ∈ italic_p italic_r italic_e italic_f italic_i italic_x ( over~ start_ARG italic_π end_ARG ) }, and 𝒫:ℱ→[0,1]:𝒫→ℱ01\mathcal{P}:\mathcal{F}\rightarrow[0,1]caligraphic_P : caligraphic_F → [ 0 , 1 ] is the unique probability measure such that 𝒫⁢(C⁢y⁢l⁢(π))=∏1≤i≤|π|−1𝒫⁢(si,si+1)𝒫𝐶𝑦𝑙𝜋subscriptproduct1𝑖𝜋1𝒫subscript𝑠𝑖subscript𝑠𝑖1\mathcal{P}(Cyl(\pi))=\prod_{1\leq i\leq|\pi|-1}\mathcal{P}(s_{i},s_{i+1})caligraphic_P ( italic_C italic_y italic_l ( italic_π ) ) = ∏ start_POSTSUBSCRIPT 1 ≤ italic_i ≤ | italic_π | - 1 end_POSTSUBSCRIPT caligraphic_P ( italic_s start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_s start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT ) where π=s1⁢s2⁢⋯⁢s|π|𝜋subscript𝑠1subscript𝑠2⋯subscript𝑠𝜋\pi=s_{1}s_{2}\cdots s_{|\pi|}italic_π = italic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_s start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ⋯ italic_s start_POSTSUBSCRIPT | italic_π | end_POSTSUBSCRIPT and s1=ssubscript𝑠1𝑠s_{1}=sitalic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_s.
Model checking [5] is an essential tool for formal verification, which is an interesting and important topic in the research field of logic in computer science and particularly plays an important role in verification of digital circuit (chips), in which one describes the system to be verified as a model of some logic, expresses the property to be verified as a formula in that logic, and then checks by using automated algorithms that the formula holds or not in that model, see e.g. the standard textbook [1] by Baier et al. In particular, the famous work [16] investigated extensions of temporal logic by connectives defined by finite automata on infinite words, which are important directions in model-checking. Traditionally, model checking has been applied to finite-state systems and non-probabilistic programs. During the last two decades, researchers have paid much attention to model-checking of probabilistic infinite-state systems, see e.g. [6]. Apart from the above mentioned works, there are many other excellent works model-checking on infinite-state systems, such as [2] where the Well-structured transition systems (WSTS) were investigated, [13] in which context-bounded model checking of concurrent software was studied, and [14] in which the algorithms for model-checking CSL (continuous stochastic logic) against infinite-state continuous-time Markov chains are developed.
The logic PCTL was originally introduced in [9], where the corresponding model-checking question has been focused mainly on finite-state Markov chains.
Among the probabilistic infinite-state systems, one is the probabilistic pushdown systems, which were dubbed “probabilistic pushdown automata” in [4, 3, 6], the input alphabet of which contains only one symbol. Throughout the paper, such a limited version of probabilistic pushdown automata will be dubbed “probabilistic pushdown system”. Their model-checking question, initiated in [6], has attracted a lot of attention, see e.g. [3, 4], in which the model-checking of stateless probabilistic pushdown systems (pBPA) against PCTL∗ was resloved. However, the question of model-checking of stateless probabilistic pushdown systems (pBPA) against PCTL still left open in [3, 4], which was first proposed in [6].
The logic PCTL∗ extends PCTL by deleting the requirement that any temporal operator must be proceeded by a state formula (Thus, the logic PCTL can be regarded as a sublogic of PCTL∗), and its path formulas are generated by the following syntax:
B
The convergence results on Gaussian process regression presented in section 3 are mainly known results from the theory of scattered data interpolation [43, 37, 28]. The error bounds are given in terms of the fill distance of the design points used to construct the Gaussian process emulator, and depend in several ways on the number K𝐾Kitalic_K of input parameters we want to infer. Firstly, when looking at the error in terms of the number of design points used, rather than the fill distance of these points, the rate of convergence typically deteriorates with the number of parameters K𝐾Kitalic_K. Secondly, the proof of these error estimates requires assumptions on the smoothness of the function being emulated, where the precise smoothness requirements depend on the Gaussian process emulator employed. For emulators based on Matèrn kernels [24], we require these maps to be in a Sobolev space Hssuperscript𝐻𝑠H^{s}italic_H start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT, where s>K/2𝑠𝐾2s>K/2italic_s > italic_K / 2. We would like to point out here that it is not necessary for the function being emulated to be in the reproducing kernel Hilbert space (or native space) of the Matèrn kernel used in order to prove convergence (cf Proposition 3.4), but that is suffices to be in a larger Sobolev space in which point evaluations are bounded linear functionals.
Gaussian process emulators are frequently used as surrogate models. In this work, we analysed the error that is introduced in the Bayesian posterior distribution when a Gaussian process emulator is used to approximate the forward model, either in terms of the parameter-to-observation map or the negative log-likelihood. We showed that the error in the posterior distribution, measured in the Hellinger distance, can be bounded in terms of the error in the emulator, measured in a norm dependent on the approximation considered.
The remainder of this paper is organised as follows. In section 2, we set up the Bayesian inverse problem of interest. We then recall some results on Gaussian process regression in section 3. The heart of the paper is section 4, where we introduce the different approximations to the posterior and perform an error analysis. Our theoretical results are confirmed on a simple model problem in section 5, and some conclusions are finally given in section 6.
The main focus of this work is to analyse the error introduced in the posterior distribution by using a Gaussian process emulator as a surrogate model. The error is measured in the Hellinger distance, which is shown in [41, 15] to be a suitable metric for evaluation of perturbations to the posterior measure in Bayesian inverse problems, including problems with infinite dimensional input parameter spaces. We consider emulating either the parameter-to-observation map or the negative log-likelihood. The convergence results presented in this paper are of two types. In section 3, we present convergence results for simple Gaussian process emulators applied to a general function f𝑓fitalic_f satisfying suitable regularity assumptions. In section 4, we prove bounds on the error in the posterior distribution in terms of the error in the Gaussian process emulator. The novel contributions of this work are mainly in section 4. The results in the two sections can be combined to give a final error estimate for the simple Gaussian process emulators presented in section 3. However, the error bounds derived in section 4 are much more general in the sense that they apply to any Gaussian process emulator satisfying the required assumptions. A short discussion on extensions of this work related to Gaussian process emulators used in practice is included in the conclusions in section 6.
The convergence results on Gaussian process regression presented in section 3 are mainly known results from the theory of scattered data interpolation [43, 37, 28]. The error bounds are given in terms of the fill distance of the design points used to construct the Gaussian process emulator, and depend in several ways on the number K𝐾Kitalic_K of input parameters we want to infer. Firstly, when looking at the error in terms of the number of design points used, rather than the fill distance of these points, the rate of convergence typically deteriorates with the number of parameters K𝐾Kitalic_K. Secondly, the proof of these error estimates requires assumptions on the smoothness of the function being emulated, where the precise smoothness requirements depend on the Gaussian process emulator employed. For emulators based on Matèrn kernels [24], we require these maps to be in a Sobolev space Hssuperscript𝐻𝑠H^{s}italic_H start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT, where s>K/2𝑠𝐾2s>K/2italic_s > italic_K / 2. We would like to point out here that it is not necessary for the function being emulated to be in the reproducing kernel Hilbert space (or native space) of the Matèrn kernel used in order to prove convergence (cf Proposition 3.4), but that is suffices to be in a larger Sobolev space in which point evaluations are bounded linear functionals.
B
After initialization, log⁡π≥−2𝜋2\log\pi\geq-2roman_log italic_π ≥ - 2 by Lemma 1.
τ𝜏\tauitalic_τ. For each k∉G𝑘𝐺k\notin Gitalic_k ∉ italic_G, we have, by definition of G𝐺Gitalic_G,
By definition of G𝐺Gitalic_G, whenever we add an item to W𝑊Witalic_W, we decrease
and irrevocably) whether to add i𝑖iitalic_i to the set W𝑊Witalic_W, subject to the
see an item with value at least 2⁢O⁢P⁢T2𝑂𝑃𝑇2OPT2 italic_O italic_P italic_T, we select it. For item i𝑖iitalic_i,
B
Recall that Wk⁢(G,x)subscript𝑊𝑘𝐺𝑥W_{k}(G,x)italic_W start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_G , italic_x ) is the set of closed walks in G𝐺Gitalic_G of length k𝑘kitalic_k starting from x𝑥xitalic_x.
A random rooted graph (G,∘)𝐺(G,\circ)( italic_G , ∘ ) is a unimodular network if
The spectral radius of (G,∘)𝐺(G,\circ)( italic_G , ∘ ) can also be formulated in terms of the spectral
on (G,x)𝐺𝑥(G,x)( italic_G , italic_x ) started from vertex x𝑥xitalic_x. The spectral radius of the SRW on a unimodular network (G,∘)𝐺(G,\circ)( italic_G , ∘ ) is
The spectral radius of a unimodular network (G,∘)𝐺(G,\circ)( italic_G , ∘ ) is defined to be
D
Decide the corresponding k𝑘kitalic_k random variables as the anomalous random variables.
Algorithm 1 Maximum likelihood estimation with fixed time-invariant measurements
Consider fixed time-invariant measurements Yj=𝐚jT⁢𝐗j=𝐚T⁢𝐗jsuperscript𝑌𝑗superscriptsuperscript𝐚𝑗𝑇superscript𝐗𝑗superscript𝐚𝑇superscript𝐗𝑗Y^{j}={{\boldsymbol{a}}^{j}}^{T}{\boldsymbol{X}}^{j}={\boldsymbol{a}}^{T}{%
Algorithm 5 Maximum likelihood estimation with deterministic time-varying measurements
We propose the maximum likelihood estimate method with random time-varying measurements over (nk)binomial𝑛𝑘\binom{n}{k}( FRACOP start_ARG italic_n end_ARG start_ARG italic_k end_ARG ) hypotheses in Algorithm 3. For the purpose of analyzing the error probability of the maximum likelihood estimation, we further propose a hypothesis testing algorithm based on pairwise comparison in Algorithm 4. The number of samples required to find the abnormal random variables is stated in Theorem III.3. Before we introduce our theorem for hypothesis testing with random time-varying measurements, we newly introduce the Chernoff information between two conditional probability density functions, named it as the inner conditional Chernoff information, in Definition III.2.
A
There are tools to estimate posture by analyzing acceleration acquired from these sensors (e.g., NTT Docomo provides hitoe SDK [18] for hitoe data analysis). However, acceleration of vehicles is added to acceleration of wearable sensor in vehicles, it is vague to estimate drivers’ posture accurately by tools of wearable sensors. We need to verify posture estimation in vehicles.
In Japan, a fatal accident of long distance bus occurred in which 15 people were dead in January 2016 [14]. Thus, needs for safety management with wearable sensors for drivers are increasing because dangerous driving posture like picking up things or fatigue accumulation of bus or taxi (hereafter, vehicle) drivers may result in accidents. However, acceleration of vehicles is added to acceleration of wearable sensor in vehicles, and there is no guarantee to estimate drivers’ posture accurately. Therefore, in this paper, we study methods to estimate driving posture using acceleration data acquired from T-shirt type wearable sensor hitoe[15] and conduct field tests.
This paper studied methods to estimate postures of drivers on vehicles using wearable acceleration sensor hitoe and conducted field tests. The method to subtract vehicle acceleration using hitoe and smart phone has problems of accuracy differences between them. On the other hand, posture changes such as picking things can be detected by threshold judgement of acceleration. Based on the results, we implemented a sample application which shows posture on vehicles and confirmed feasibility of posture estimation.
For drivers’ posture estimation in vehicles, we need to consider two things. The first is that acceleration data of wearable sensor includes acceleration of vehicle. The second is that considering safety management of vehicles, specific dangerous posture such as picking up things during driving needs to be detected.
There are tools to estimate posture by analyzing acceleration acquired from these sensors (e.g., NTT Docomo provides hitoe SDK [18] for hitoe data analysis). However, acceleration of vehicles is added to acceleration of wearable sensor in vehicles, it is vague to estimate drivers’ posture accurately by tools of wearable sensors. We need to verify posture estimation in vehicles.
C
Step 1: In parallel with shoplifting detection using camera movie, a sales management terminal sends sales information to a product management application on a cloud via a network. A product management application is SaaS which provides business application of ERP, and information of sales and product item stock is stored in item DB. Product item stock information is reflected to item DB in accordance with sales.
Step 2: Stream data of security camera movie is sent to a small computer in a shop. A small computer is a computer which has a certain degree of computation power, memory size and communication capability. For example, Rasbpberry Pi can be used for this to analyze images.
Based on these backgrounds, this paper targets a low cost shoplifting prevention SaaS service for small shops using cloud technology and data analysis technology. In our proposal, machine learning framework Jubatus[6] on a small computer deployed in a shop analyzes security cameras movie, detects anomaly behavior and notifies to a cloud. Then, a shoplifting prevention application on a cloud checks product stock using item DB of ERP and notifies smart phones of shop staffs by mails when a possibility of shoplifting is high.
Saburo-kun Ace[7] is a shoplifting prevention system using security camera movie. Saburo-kun Ace detects a shoplifting from security camera movie when customers’ actions match pre-defined 50 patterns of suspicious behaviors, and notifies it to staffs of shops. Shop staffs question or say something to the suspicious customer. This can reduce or prevent shopliftings. However, Saburo-kun Ace has some problems such as initial cost is high because shops need to deploy PC and movie analysis software, new shoplifting behaviors cannot be detected except for pre-defined suspicious behavior rules, actual operation may be hard because precision ratio of detection is not 100% and shop staffs often need to question customers.
Step 3: A small computer cuts off each image from movie and extracts feature values from the image data. To extract feature values, libraries of dlib, OpenCV can be used.
A
Recently, a scheme for faithful quantum communication in quantum wireless multihop networks, by performing quantum teleportation between two distant nodes which do not initially share entanglement with each other, was proposed by Wang et al. MA2 . Xiong et al. MA3 proposed a quantum communication network model where a mesh backbone network structure was introduced. Some other pertinent reports can be found in refs. (MA4 ; MA5 and references therein). Although several relevant results have been obtained along this direction, but most contributions have been based on 2- and 3-qubit Greenberger-Horne-Zeilinger (GHZ) state as quantum channel. In this paper, we propose quantum wireless multihop network with a mesh backbone structure, based on 4-qubit cluster state via the method of quantum teleportation.
\rangle=1/(4\varrho\gamma)⟨ caligraphic_G start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT | caligraphic_P start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT | caligraphic_G start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ⟩ = 1 / ( 4 italic_ϱ italic_γ ), then the node finds that the state of qubits J1⁢J3subscript𝐽1subscript𝐽3J_{1}J_{3}italic_J start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_J start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT is a0⁢|00⟩+d0⁢|11⟩subscript𝑎0ket00subscript𝑑0ket11\left.a_{0}|00\right\rangle+d_{0}\left|11\right\rangleitalic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT | 00 ⟩ + italic_d start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT | 11 ⟩ which is the original state of the particle. Suppose the result is 𝒫3subscript𝒫3\mathcal{P}_{3}caligraphic_P start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT, the teleportation fails because of the node’s ineptitude to infer anything about the identity of the particles state. Thus, we calculate the total probability of success as
In this section, we establish the quantum channel linking the nodes. As it can be seen in Figs. 2 (a) and (b), A denotes the source node while the destination node is denoted by J. In the source node A, there exists two-qubit of unknown state |χ⟩A1⁢A2=a0⁢|00⟩+d0⁢|11⟩subscriptket𝜒subscript𝐴1subscript𝐴2subscript𝑎0ket00subscript𝑑0ket11\left|\chi\right\rangle_{A_{1}A_{2}}=a_{0}\left|00\right\rangle+d_{0}\left|11\right\rangle| italic_χ ⟩ start_POSTSUBSCRIPT italic_A start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_A start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT = italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT | 00 ⟩ + italic_d start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT | 11 ⟩. Let N𝑁Nitalic_N denotes number of nodes such that between A and J, we have N−1𝑁1N-1italic_N - 1 nodes. The entangle state of the neighboring nodes is 4-qubit cluster state of the form |𝒞⁢𝒮⟩=1/2⁢(|0000⟩+|0011⟩+|1100⟩−|1111⟩)ket𝒞𝒮12ket0000ket0011ket1100ket1111\left|\mathcal{CS}\right\rangle=1/2\left(\left|0000\right\rangle+\left|0011%
The cluster state MA6 , which is a type of highly entangled state of multiple qubits, is generated in lattices of qubits with Ising type interactions. On the basis of single qubit operation, the cluster state serves as initial resource for a universal computation scheme MA7 . Cluster state has been realized experimentally in photonic experiments MA7 and in optical lattices of cold atoms MA8 . In this paper we select four-qubit cluster state as the entangled resource. When the state of one particle in the entangled pair changes, the state of other particle situated at a distant node changes non-contemporaneously. Thus, entanglement swapping can be applied. Using a classical communication channel, the results of the local measurement can be transmitted from node-to-node in a secure manner.
To sum it up, in this paper, we propose a quantum routing protocol with multihop teleportation for wireless mesh backbone networks. The quantum channel that linked the intermediate nodes has been realized through entanglement swapping based on four-qubit cluster state. After quantum entanglement swapping, quantum link was established between the source node and the destination node and quantum states are transferred via quantum teleportation. We have shown that the quantum teleportation would be successful if the sender performs a Bell state measurement, and the receiver introduces auxiliary particles, applies positive operative value measure and then utilizes corresponding unitary transformation to recover the transmitted state. We have numerically scrutinized the success probability of transferring the quantum state. This study is another supportive evidence that justifies quantum entanglement as key resource in quantum information science and furtherance of recent studies MA2 ; MA3 ; MA10 . Particularly, in Ref. MA3 , where partially entangled GHZ state was employed as quantum channel and 35353535 nodes are required to achieve success probability of about 0.50.50.50.5, given that transmission range is 200200200200m. In the current study, with 35353535 nodes, success probability of 1111 is obtainable. This result agrees with the one obtained in Ref. MA3 by taking the transmission range as 300300300300m and consider maximally entangled GHZ states.
C
For example of Tacit Computing, let us consider tracking cameras. Tracking cameras are usage that movies of small children in schools or roads are taken by security cameras near the children and parents can see the movies by their mobile terminals. Tracking cameras can satisfy parents’ needs to confirm children’s safety at that time, but we think it is difficult to accept if it is charged as a certain price as once movie checking. We think it can be acceptable that it is charged as a monthly fee of continuous monitoring service in which parents can see movies of their children when they want to see on mobile terminals, but children are watched by machine learning or so on even when parents do not see and alerts are sent to parents in case of anomaly such as when a suspicious person approaches to children on roads.
In device layer, it is necessary to switch a device that satisfies the needs of user at first. In the case of tracking cameras, it means to select the camera in which the image of the child appears based on location of the child. At this time, if we analyze images with all cameras connected to the network and use only the camera with the child appears, the cost will be very high. Thus, it is necessary to narrow down cameras near the child by identifying children with some identifiers and using cameras’ metadata of location. Since many IoT devices are non-IP devices, we are considering to analyze communication patterns of IoT devices including non-IP devices and to assign metadata automatically or semi-automatically from the communication pattern.
For example of Tacit Computing, let us consider tracking cameras. Tracking cameras are usage that movies of small children in schools or roads are taken by security cameras near the children and parents can see the movies by their mobile terminals. Tracking cameras can satisfy parents’ needs to confirm children’s safety at that time, but we think it is difficult to accept if it is charged as a certain price as once movie checking. We think it can be acceptable that it is charged as a monthly fee of continuous monitoring service in which parents can see movies of their children when they want to see on mobile terminals, but children are watched by machine learning or so on even when parents do not see and alerts are sent to parents in case of anomaly such as when a suspicious person approaches to children on roads.
As a processing of tracking cameras, Tacit Computing discovers the camera in which the child appears, and delivers movies of the camera to the parents’ mobile terminals when the parents request movies. And for watching, image analyzing functions such as OpenCV library are arranged on gateways or network edge SSE (Subscriber Service Edge) which accommodate cameras and image analyzing functions analyze images from the camera movie in which the child appears. The result of image analysis is summarized to the feature vector and it is aggregated to the cloud, and using cloud technologies (e.g., [28]-[47]), anomaly such as suspicious person approaching is analyzed by machine learning techniques (e.g., Local Outlier Factor). When anomaly is detected, alerts are sent to parents.
In cloud layer, where to process in the cloud greatly affects cost and performance. Firstly, we deploy the processing function to the cloud of the DC (Data Center) that has a small delay from the network edge that accommodates devices frequently used. Furthermore, since the size of the cloud resource also affects the operation cost, its resource size is also subject to optimization. For example, in the case of tracking cameras, the cloud analyzes summarized feature vectors by machine learning techniques and alerts suspicious person and so on. To detect anomaly in real time, stream processing functions such as Storm or Spark are used, so the cloud keeps appropriate resource sizes for these functions.
C
While the problem has been formulated for all dimensions n≥2𝑛2n\geq 2italic_n ≥ 2, the study in this paper has been limited to n=2𝑛2n=2italic_n = 2. This is for the following reasons. (i) n=2𝑛2n=2italic_n = 2 this is the smallest dimension in which the shape of a Voronoi cell plays a role in the communication cost, (ii) the partition can be easily visualized, which helps develop useful intuition, (iii) a complete parameterization of lattices exists in two dimensions and is not available in higher dimensions; this will preclude as thorough a study in higher dimensions, (iv) the higher dimensional case can be reduced to the study of the two-dimensional case (work in progress), since for the single round results, a transmission from node i𝑖iitalic_i essentially reduces the dimension of the problem for the remaining nodes by one. Eventually, through the use of a bounding argument, the problem is reduced to a series of two-dimensional problems. Fully developing this approach requires a significant amount of added machinery which will be considered in a later submission, (v) for the infinite round case, new machinery may be needed, see, e.g., [18], (vi) quantization techniques in networks with two nodes are of interest and have been considered recently in classification problems [7].
We study the dependence of the communication cost of our protocols on the lattice structure. In particular, we show that the lattice which is best in two dimensions for both coding and quantization, namely, the hexagonal lattice, requires the largest amount of communication in a two-node distributed setting.
For interactive protocols with an unbounded number of rounds, we exhibit a construction which results in zero error probability with finite average bit cost. This is a surprising result, when compared to the single-round protocol, which can only achieve a strictly positive error probability at a finite rate.
While the problem has been formulated for all dimensions n≥2𝑛2n\geq 2italic_n ≥ 2, the study in this paper has been limited to n=2𝑛2n=2italic_n = 2. This is for the following reasons. (i) n=2𝑛2n=2italic_n = 2 this is the smallest dimension in which the shape of a Voronoi cell plays a role in the communication cost, (ii) the partition can be easily visualized, which helps develop useful intuition, (iii) a complete parameterization of lattices exists in two dimensions and is not available in higher dimensions; this will preclude as thorough a study in higher dimensions, (iv) the higher dimensional case can be reduced to the study of the two-dimensional case (work in progress), since for the single round results, a transmission from node i𝑖iitalic_i essentially reduces the dimension of the problem for the remaining nodes by one. Eventually, through the use of a bounding argument, the problem is reduced to a series of two-dimensional problems. Fully developing this approach requires a significant amount of added machinery which will be considered in a later submission, (v) for the infinite round case, new machinery may be needed, see, e.g., [18], (vi) quantization techniques in networks with two nodes are of interest and have been considered recently in classification problems [7].
We have considered the problem of interactively computing the nearest lattice point for a lattice in two dimensions. A two-party model of communication is assumed and expressions for the error probability have been obtained for a single round of communication (i.e. two messages). We have also considered an unbounded number of rounds of communication and shown that it is possible to achieve zero probability of error with a finite number of bits exchanged on average. Our results indicate that lattices which are better for quantization or for communication have a higher communication cost.
D
The analysis of different studies about motor control revealed some parallels with current theories about NS anatomy physiology, and this paragraph is dedicated to describe such findings. However, the authors would like to remark that the proposed organisation is not the structure of the NS but an architecture that can capture some of its characteristics.
We proposed a structure of a semi-autonomous controller organised in a hierarchical architecture that has a parallelism in the biological motor control Figure 6. At the apex of the hierarchy is the TS planner, which acts on the input of walking towards a target. The TS planning appears to be based on stereotyped optimised strategies which depend on the environmental conditions Figure 6. Hence, this structure should perform the following tasks: receive the input of the voluntary decision, perform the assessment of both environmental and body conditions, select an adequate optimal strategy based on a priori models, and generate the desired task-space trajectories and expected external dynamics. Based on these three characteristics, the brain regions involved in the process are the frontal and prefrontal cortices, which are tasked to produce the desired behaviour in the task-space. They are supported by the basal ganglia, which regulate the reward circuits of the brain and the cerebellum. In turn, it is considered that the cerebellum provides the internal models for planning and supervision, connected through the thalamus and the parietal cortex [42, 16, 47, 13]. This information is then forwarded to the task-space planner, which provides a bridge between the pre-motor and motor cortices. Afterwards, the latter is tasked with joint-space planning with the support of the parietal cortex and cerebellum [42, 16]. Its output is then passed to the spinal cord through the brain stem which implements the CPG and the spinal cord reflex, which is the final stage of the architecture allocated within the CNS [42, 16]. This stage of the CNS controller also provides the first centralised response to external perturbation, and it has a reaction time for the ankles of about 80ms [43, 14]. The last stage of the architecture is the intrinsic mechanical impedance of the musculoskeletal system that filters external perturbations and which controlled by the CNS to execute the planned movements.
Figure 6: The parallel between the proposed architecture and the human nervous system based on literature information [17, 42, 8, 47, 16, 11, 13, 5, 10]. The full lines in the schematic represent information involved in the control, while the dotted lines indicate a supervision signal. Both types of connection are often mediated by other parts of the brain which are not included in this simplified architecture. Furthermore, the pre-motor cortex is drawn across the border of the task-space and joint-space planner because we hypothesise that is acting as a bridge between the frontal cortex and the motor cortex.
The dynamic motion primitives theory was developed as an extension of the motor primitives and, until now, it is unclear how they are connected[5]. Our theory is based on the integration of our task-space planner with the joint-space planner λ0−P⁢M⁢Psubscript𝜆0𝑃𝑀𝑃\lambda_{0}-PMPitalic_λ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT - italic_P italic_M italic_P mentioned in the introduction. The current architecture of this planner considers that the task-space dynamics is determined only by a spring-based mechanism that pushes the user towards the desired posture[10]. However, if the forces generated by the attractor substitute the spring, the planning optimises an energy function which considers desired behaviour, body mechanical impedance and environmental dynamics [44]. Hence, the λ0−P⁢M⁢Psubscript𝜆0𝑃𝑀𝑃\lambda_{0}-PMPitalic_λ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT - italic_P italic_M italic_P architecture will then produce a sequence of optimised postures that also account for the environmental dynamics. The CPG will then use this sequence of reference posture to generate the neural signals that control muscular activities. This latest part of the theory is also supported by animal experiments that show how it is possible to reproduce the gait-like behaviour in rats with an induced spinal injury, through an electrochemical stimulation of the spinal cord[45, 46]. Lastly, this architecture implies that when the environmental dynamics is negligible, then the task planning is based only on the ”reward” or ”discomfort” function of the λ0−P⁢M⁢Psubscript𝜆0𝑃𝑀𝑃\lambda_{0}-PMPitalic_λ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT - italic_P italic_M italic_P. Therefore, the motor primitives are dynamic primitives when the body’s intrinsic mechanical impedance dominates the task.
Our results describe how the proposed deterministic model, based on a harmonic oscillator centred in the saddle point, produces human-like gait trajectories for both the CoM and the foot swing at different gait speeds. Furthermore, it identifies the ankle strategies as the mechanism that allows to control of the vertical movement of the CoM. Therefore, it simultaneously regulates walking speed and stability. Moreover, the slower gait speed also showed an increase in the double stance phase in agreement with well-established experimental results [11, 32]. The results presented here confirm and extend previous results obtained with different data sets [21, 15, 33]. The data also indicate that humans control step length, step frequency and mediolateral amplitudes according to an a priori optimised behaviour, which is similar to what has been observed by Collins et al [34]. In contrast, they seem to optimize the step width (dS⁢Wsubscript𝑑𝑆𝑊d_{SW}italic_d start_POSTSUBSCRIPT italic_S italic_W end_POSTSUBSCRIPT) and the vertical amplitude of the CoM (via the ankle strategies) depending on local conditions. This result is also supported by multiple studies that observed how these two parameters are modulated based on the environmental conditions [4, 14, 28]. Moreover, the role of ankle strategies in the energetic efficiency of gait is widely reported in the literature [14, 35, 34, 36, 37, 38, 39, 9]. Furthermore, the proposed model intrinsically captures the regulation of the CoM vertical trajectory and the double support observed in human data [31, 32, 11]. As reported by Orendurff et al human trajectories are not symmetric at low speeds due to the anticipation of the CoM minimum height in the gait cycle. The model proposed here is based on the Saddle Space proposed by Tiseo et al [21, 15, 23, 29, 40], it also provides a complete characterisation of the potential energy which can be calculated from the derivative of the CoM trajectory. Hence, it models comprehensively the attractor produced by the interaction between CoM and gravity.
A
E𝐸Eitalic_E is partitioned into p𝑝pitalic_p disjoint sets E1,…,Epsubscript𝐸1…subscript𝐸𝑝E_{1},\ldots,E_{p}italic_E start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_E start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT, where |Ei|=risubscript𝐸𝑖subscript𝑟𝑖|E_{i}|=r_{i}| italic_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | = italic_r start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and m=∑i∈[p]ri𝑚subscript𝑖delimited-[]𝑝subscript𝑟𝑖m=\sum_{i\in[p]}r_{i}italic_m = ∑ start_POSTSUBSCRIPT italic_i ∈ [ italic_p ] end_POSTSUBSCRIPT italic_r start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.
Hence, and from |X|=LP𝑋subscript𝐿𝑃|X|=L_{P}| italic_X | = italic_L start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT, we get that the number of connected components is reduced by LPsubscript𝐿𝑃L_{P}italic_L start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT. Then,
(with p=LP𝑝subscript𝐿𝑃p=L_{P}italic_p = italic_L start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT),
We will denote by k^^𝑘\hat{k}over^ start_ARG italic_k end_ARG the number of rounds performed by the algorithm (Steps 1-1).
We will denote by [p]delimited-[]𝑝[p][ italic_p ] the set {1,2,…,p}12…𝑝\{1,2,\ldots,p\}{ 1 , 2 , … , italic_p }.
D
Compatibility with R and Bioconductor. The results returned by EBIC could be easily saved into a format loadable by Bioconductor R package biclust in order to perform biological validation. In Supplementary Material we provide detailed workflow presenting how to use EBIC, all within R environment.
Workflow for analysis of methylation data. EBIC was capable to capture bio-meaningful signals in methylation data. A tutorial is presented in a Supplementary Material.
In this paper we introduce the open source package built on top of the upgraded version of the method. First and foremost, a full support for multi-GPUs is added, which allows to analyze datasets with almost unlimited numbers of rows (available memory is a constraint). Secondly, the method has been integrated with Bioconductor, which enables the user to run all the analysis from the R level. Thirdly, a different method for performing analysis was added, which depends on the presence or absence of missing values within the data. Last, but not least, some bugs have been fixed and optimizations were made for more efficient memory management. All above combined make an this open source software ready out-of-the-box for big data biclustering analysis.
Compatibility with R and Bioconductor. The results returned by EBIC could be easily saved into a format loadable by Bioconductor R package biclust in order to perform biological validation. In Supplementary Material we provide detailed workflow presenting how to use EBIC, all within R environment.
In this paper we present the recent advancements in one of the leading biclustering methods. The algorithm was wrapped into a framework, which is conveniently integrated with R and allows multiple input file formats. In Supplementary Material we also demonstrate that even for such a large genomic dataset, the results provided by EBIC are bio-meaningful. We conclude that EBIC, released as open source package, is a very convenient tool for getting insight from large genomic datasets.
A
Similar reasoning leads to the result that the above non-generic game possesses no strictly perfect equilibria; see van Damme (1991, p. 16).
it follows that this outcome can be destabilized by the “secret handshake” entrants.
Then it is obvious that σ∗superscript𝜎\sigma^{*}italic_σ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT is a strictly perfect equilibrium of G¯¯𝐺\bar{G}over¯ start_ARG italic_G end_ARG.
Motivated by this, it can be shown that a strictly perfect equilibrium actually ensures that
Suppose that σ∗superscript𝜎\sigma^{*}italic_σ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT is a strictly perfect equilibrium of G𝐺Gitalic_G,
C
\Delta}_{r},\tilde{\Lambda}_{1},\ldots,\tilde{\Lambda}_{s}caligraphic_S start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT = ! start_POSTSUPERSCRIPT italic_β start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUPERSCRIPT over~ start_ARG roman_Δ end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , ! start_POSTSUPERSCRIPT italic_β start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT end_POSTSUPERSCRIPT over~ start_ARG roman_Δ end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT , over~ start_ARG roman_Λ end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , over~ start_ARG roman_Λ end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT. The intuition behind a single-conclusion meta-sequent is that after the substitution, the succedent has at most one formula. Especially in the latter case, we are allowed to substitute at most one formula in one of the contexts and all the others must be substituted with the empty multiset.
The formulas in the conclusion are called the main formulas and the formulas in the premises are called the active formulas of the rule. Usually, we work with the rules with exactly one main formula. A rule is called single-conclusion if all of its meta-sequents are single-conclusion. Define the single-conclusion version of a rule ℛℛ\mathcal{R}caligraphic_R of the form (2)2(2)( 2 ) as follows:
Such a setting is required to define the multiplicative rules, like the left implication or right fusion rules, where the contexts of the premises are merged in the conclusion.
The left and middle premises are in the same family with the context Π~~Π\tilde{\Pi}over~ start_ARG roman_Π end_ARG in the antecedent. Thus, one copy of Π~~Π\tilde{\Pi}over~ start_ARG roman_Π end_ARG appears in the antecedent of the conclusion. A generic example of a right multi-conclusion semi-analytic rule is:
condition of occurrence-preservation is a weaker version of the usual analyticity property. Furthermore, recall that the usual sequent calculi for substructural logics have two basic ways to handle the contexts. For the additive connectives, namely the conjunction and disjunction, the contexts in the premises of the rule must be the same and the conclusion inherits these common contexts. For the multiplicative connectives, i.e., the implication and fusion, the rule combines the contexts of its premises and inserts the combination in its conclusion. Semi-analytic rules allow almost all possible combinations of these two approaches. For instance, in the premises of a right multi-conclusion semi-analytic rule, fixing i𝑖iitalic_i, the family {Γ~i,μ¯i⁢r⇒ν¯i⁢r,Δ~i∣1≤r≤ki}formulae-sequence⇒subscript~Γ𝑖subscript¯𝜇𝑖𝑟subscript¯𝜈𝑖𝑟conditionalsubscript~Δ𝑖1𝑟subscript𝑘𝑖\{\tilde{\Gamma}_{i},\bar{\mu}_{ir}\Rightarrow\bar{\nu}_{ir},\tilde{\Delta}_{i%
B
\widetilde{q},r)-\left(\frac{\sqrt{3}}{2}+\epsilon_{1}+\epsilon_{2}\right)\deltaoverOVERACCENT start_ARG < end_ARG roman_dist start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT ( italic_p , over~ start_ARG italic_q end_ARG ) + italic_ϵ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_δ + italic_ϵ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT italic_δ + roman_dist start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT ( over~ start_ARG italic_q end_ARG , italic_r ) - ( divide start_ARG square-root start_ARG 3 end_ARG end_ARG start_ARG 2 end_ARG + italic_ϵ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_ϵ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) italic_δ
<distx⁡(c,q).absentsubscriptdist𝑥𝑐𝑞\displaystyle<\operatorname{dist}_{x}(c,q).< roman_dist start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT ( italic_c , italic_q ) .
<distx⁡(p,c)absentsubscriptdist𝑥𝑝𝑐\displaystyle<\operatorname{dist}_{x}(p,c)< roman_dist start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT ( italic_p , italic_c )
=distx⁡(p,r)−distx⁡(r,c)+distx⁡(r,c)−32⁢δ⁢≤4⁢distx⁡(p,c).absentsubscriptdist𝑥𝑝𝑟subscriptdist𝑥𝑟𝑐subscriptdist𝑥𝑟𝑐32𝛿4subscriptdist𝑥𝑝𝑐\displaystyle=\operatorname{dist}_{x}(p,r)-\operatorname{dist}_{x}(r,c)+%
dist⁡(p,rp)+dist⁡(sq,q)+dist⁡(r,s)<distx⁡(p,c)+distx⁡(c,q)+dist⁡(r,s)=dist⁡(p,q)+dist⁡(r,s)dist𝑝superscript𝑟𝑝distsuperscript𝑠𝑞𝑞dist𝑟𝑠subscriptdist𝑥𝑝𝑐subscriptdist𝑥𝑐𝑞dist𝑟𝑠dist𝑝𝑞dist𝑟𝑠\displaystyle\operatorname{dist}(p,r^{p})+\operatorname{dist}(s^{q},q)+%
C
If the shape functions are in the pre-image basis, then F𝐹Fitalic_F won’t in general
H𝐻Hitalic_H and V𝑉Vitalic_V, the univariate functions H⁢(0,y)𝐻0𝑦H(0,y)italic_H ( 0 , italic_y ) and V⁢(x,0)𝑉𝑥0V(x,0)italic_V ( italic_x , 0 )
compute for ζ⁢(x,y)𝜁𝑥𝑦\zeta(x,y)italic_ζ ( italic_x , italic_y ) that are straightforward to evaluate.
polynomial on 𝐑2superscript𝐑2\mathbf{R}^{2}bold_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT and so will H𝐻Hitalic_H and V𝑉Vitalic_V and since each curve
To evaluate G𝐺Gitalic_G, we must also evaluate H𝐻Hitalic_H and V𝑉Vitalic_V numerically.
D
In this section, we will provide the details of the contextual hourglass module and our Contextual Hourglass Network (CxtHGNet).
In this work, we choose to utilize the encoding layer [13, 21] as our channel-wise attention mechanism to test the network performance, which has the ability to selectively highlight class-dependent featuremaps. It may be worthwhile to mention that it is convenient to replace the channel-wise attention used here with other attention mechanisms [22, 23] which means our contextual hourglass module is generalizable.
1) We design a novel contextual hourglass module which incorporates attention mechanism on processed low-resolution featuremaps to exploit the contextual semantics and therefore improve the robustness of the prediction.
Our contextual hourglass module is a symmetric structure inspired by hourglass module [12]. It firstly processes features down to a low resolution by a set of convolutional and pooling layers, then applies channel-wise, point-wise or other attention mechanisms on the processed low-resolution featuremap, and finally bi-linearly upsamples and combines features until reaching the final output resolution. The location we choose to apply the attention mechanism should make it retain more details without greatly increasing the computational cost. The contextual hourglass module is then boosted by its inner attention mechanism which utilizes contextual information to improve the labeling robustness.
We develop a novel Contextual Hourglass Network (CxtHGNet) for semantic segmentation of high-resolution aerial imagery. Our CxtHGNet can extract rich multi-scale features of the image and learn the contextual semantics in scenes, due to the incorporation of bottom-up, top-down inference across various scales, attention mechanism, and intermediate supervision. Also, It may be worthwhile to mention that with our CxtHGNet it can be convenient to replace the channel-wise attention used in this paper with other attention mechanisms and it makes our network structure more generalizable for other applications. The experimental results on Potsdam and Vaihingen datasets have demonstrated the superiority of our CxtHGNet.
C
On the applications side, the results herein can be used to solve infinite horizon versions of online network revenue management where the retailer must price several unique products, each of which may consume common resources (e.g., inventories of different products) that have limited availability and are replenished at a constant rate.
O⁢(l⁢o⁢g⁢n)𝑂𝑙𝑜𝑔𝑛O(logn)italic_O ( italic_l italic_o italic_g italic_n )) for cases in which activation costs are bandit-dependent iid random variables. Applications of MAB models include problems of online revenue management: Ferreira et al. (2018), Wang et al. (2014), Johnson et al. (2015) of dynamic procurement: Singla and Krause (2013), auctions: Tran-Thanh et al. (2014).
search-based and targeted advertising online learning, cf. Rusmevichientong and Williamson (2006), Agarwal et al. (2014) and references therein.
For versions of such problems with no resource (inventory) replenishment we refer to Ferreira et al. (2018) and references therein. Additional applications include
refer to: Guha and Munagala (2007); Tran-Thanh et al. (2012); Thomaidou et al. (2012); Lattimore et al. (2014); Sen et al. (2015); Pike-Burke and Grunewalder (2017); Zhou et al. (2018); Spencer and Kevan de Lopez (2018) and
C
5:     S∗←S∗∪{v}←superscript𝑆superscript𝑆𝑣S^{*}\leftarrow S^{*}\cup\{v\}italic_S start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ← italic_S start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ∪ { italic_v };
In this section, we present the algorithm designed based on the hybrid sampling method for solving the MP problem and provide theoretical analysis.
In this paper, we present a hybrid sampling method which is designed particularly for the misinformation prevention problem. We show that the new sampling method can be used to design an approximation algorithm which outperforms the state-of-the-art solutions.
In this sampling method, the frequency that a node can be collected in the first step is proportional to the probability that it will be affected by the misinformation. Thus, the samples produced by the second step are more likely to be the protectors of the nodes which are prone to be misinformation-influenced. Note that the pattern of the sample areas is determined by the nodes collected in the first step. As shown in Fig. 2, under the uniform reverse sampling the samples are uniformly distributed to the whole graph, while under the hybrid sampling the samples tend to be centered around the seed node of the misinformation. As shown later, the sample obtained by our hybrid sampling method can be used to directly estimate the prevention effect of the positive cascade. Based on the hybrid sampling method, we propose a new randomized approximation algorithm for the MP problem. In order to evaluate the proposed algorithm, we design experiments which compare the algorithms by evaluating their performance under the same time constraint. The effectiveness of our solution is supported by encouraging experimental results.
There are three most related existing works, [2, 4, 15], which aim at solving Problems 1 and 2 by designing approximation algorithms. C. Budak et al. [2] first proposed the problem of limiting the spread of information and they considered Problem 2. In [2], it is shown that Problem 2 is monotone and submodular, and the authors use Monte Carlo simulation straightforwardly (i.e., the forward sampling method) to overcome the difficulty in computing the objective function. G. Tong et al. [4] applied the reverse sampling technique invented by [11, 12, 13] to solve Problem 1. Very recently, M. Simpson et al. [15] adopted the same technique to solve Problem 2. In the work [4] and [15], the authors utilized the same sampling method designed based on the uniform reverse sampling framework. As mentioned earlier, selecting a node uniformly is not efficient for the MN or MP problem, because it cannot attach extra importance to the nodes that are prone to be Crsubscript𝐶𝑟C_{r}italic_C start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT-active. Furthermore, as discussed in [15], for the MP problem, the sample obtained by the uniform reverse sampling method can be an empty set. When an empty set is sampled, it corresponds to the case that the selected node will not be affected by the misinformation and we do not need to select any positive seed node to protect that node. As shown in Alg. 3, the empty set cannot provide any information for selecting effective nodes. On the other hand, in the hybrid sampling method, every sample we obtain by Alg. 2 is guaranteed to be non-empty, and therefore our sampling process is more effective in generating the samples that can help us find high-quality positive seed nodes. We have observed the same in our experiments.
A
Detection Trigger Module: If the confidence score surpasses a predefined threshold and the frame-skipping counter hasn’t reached its limit, the system skips detection and relies solely on the Kalman filter’s prediction for the current frame (detection is skipped). This approach improves processing speed. Conversely, if the confidence score falls below the threshold or the frame-skipping counter reaches its limit, a new detection is triggered in the current frame to potentially correct the discrepancies between the predicted and actual object location.
Data Association: The Hungarian algorithm is employed for the data association task, effectively matching the detection results from the current frame with the predicted object locations provided by the Kalman filter. This process ascertains the continuity of object identity between consecutive frames, ensuring that each detected object is accurately aligned with its corresponding track.
Figure 1: illustrates the tracking outcomes for frames 55 to 61 utilizing the CTD approach. The analysis is illustrated with cropped image frames, where white bounding boxes denote predictions made by the Kalman filter, and blue bounding boxes indicate detections from the object detector. Between frames 56 and 60, no new detections were recorded, because the confidence score, derived from the comparison of the white and blue bounding boxes, exceeded the predetermined threshold. However, at frame 61, a new detection was prompted due to the confidence score falling below the threshold. This indicates a notable deviation between the prediction from the Kalman filter and the object detection results from the previous frame.
Hungarian Assignment: This algorithm plays a crucial role in object association and ID attribution. It essentially determines whether an object detected in the current frame corresponds to the same object tracked in the previous frame.
Kalman Filter: When a new frame arrives, the Kalman Filter predicts the object’s location in the current frame based on its detection information from the previous frame. This prediction helps maintain object tracking during periods when detection is skipped.
A
KF4subscript𝐾subscript𝐹4K_{F_{4}}italic_K start_POSTSUBSCRIPT italic_F start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT end_POSTSUBSCRIPT
1.5×10−51.5superscript1051.5\times 10^{-5}1.5 × 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT
1.2×10−51.2superscript1051.2\times 10^{-5}1.2 × 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT
1.7×10−51.7superscript1051.7\times 10^{-5}1.7 × 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT
1.6×10−51.6superscript1051.6\times 10^{-5}1.6 × 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT
A
In this section our goal will be to leverage sublevel set persistence for the selection of τ𝜏\tauitalic_τ for both state space reconstruction and permutation entropy.
Specifically, our goal is to automate the frequency analysis method [27] for selecting τ𝜏\tauitalic_τ for state space reconstruction by analyzing both the time and frequency domain of the signal using sublevel set persistence.
The goal in this section is to relate the distribution of permutations formed from a given delay τ𝜏\tauitalic_τ to the state space reconstruction with the same delay τ𝜏\tauitalic_τ. This connection will show the time delay for both permutations and state space reconstruction are related.
In this section our goal will be to leverage sublevel set persistence for the selection of τ𝜏\tauitalic_τ for both state space reconstruction and permutation entropy.
The first approach we implement for estimating the maximum significant frequency of a signal is based on a time domain analysis of the sublevel set persistence.
A
Specifically, they require that r≥2m+ς+(2ℓ−1)⁢d𝑟superscript2𝑚𝜍superscript2ℓ1𝑑r\geq 2^{m+\varsigma}+(2^{\ell}-1)ditalic_r ≥ 2 start_POSTSUPERSCRIPT italic_m + italic_ς end_POSTSUPERSCRIPT + ( 2 start_POSTSUPERSCRIPT roman_ℓ end_POSTSUPERSCRIPT - 1 ) italic_d, which implies log2⁡(d)∼log2⁡(r)/(1+1/s)similar-tosubscript2𝑑subscript2𝑟11𝑠\log_{2}(d)\sim\log_{2}(r)/(1+1/s)roman_log start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_d ) ∼ roman_log start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_r ) / ( 1 + 1 / italic_s ) or smaller.
We have verified numerically that it agrees with the analysis in [8] when d∼rsimilar-to𝑑𝑟d\sim\sqrt{r}italic_d ∼ square-root start_ARG italic_r end_ARG, see App. B.1.2.
The analyses in [7, 8] are exact, but they rely on d∼rsimilar-to𝑑𝑟d\sim\sqrt{r}italic_d ∼ square-root start_ARG italic_r end_ARG or smaller when s=1𝑠1s=1italic_s = 1.
By comparison, the heuristic analysis in this paper covers the case where d∼rsimilar-to𝑑𝑟d\sim\sqrt{r}italic_d ∼ square-root start_ARG italic_r end_ARG or greater.
To see why we need d∼rsimilar-to𝑑𝑟d\sim\sqrt{r}italic_d ∼ square-root start_ARG italic_r end_ARG or greater, note that the heuristic assumes δb=(e+b⁢d)/r⁢ mod ⁢1subscript𝛿𝑏𝑒𝑏𝑑𝑟 mod 1\delta_{b}=(e+bd)/r\text{ mod }1italic_δ start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT = ( italic_e + italic_b italic_d ) / italic_r mod 1 to be approximately uniformly distributed on [0,1)01[0,1)[ 0 , 1 ).
C
   Collect lists of srsuperscript𝑠𝑟s^{r}italic_s start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT for every Pisubscript𝑃𝑖P_{i}italic_P start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT
with a similar list of transactions (i.e., xij=yjsuperscriptsubscript𝑥𝑖𝑗superscript𝑦𝑗x_{i}^{j}=y^{j}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT = italic_y start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT), and
   yjsuperscript𝑦𝑗y^{j}italic_y start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT=set of verified transactions by remaining cooperative
   yjsuperscript𝑦𝑗y^{j}italic_y start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT=set of verified transactions by remaining cooperative
transactions yjsuperscript𝑦𝑗y^{j}italic_y start_POSTSUPERSCRIPT italic_j end_POSTSUPERSCRIPT that is different from his own vector of transactions
B
|f⁢(x)∧f⁢(z)|≥i𝑓𝑥𝑓𝑧𝑖|f(x)\wedge f(z)|\geq i| italic_f ( italic_x ) ∧ italic_f ( italic_z ) | ≥ italic_i implying continuity of f𝑓fitalic_f.
Moreover, if f𝑓fitalic_f is uniformly computable, then the modulus of continuity of M𝑀Mitalic_M is in particular a modulus of continuity for f𝑓fitalic_f and is thus uniformly continuous.
We only have left to show that if f𝑓fitalic_f is moreover uniformly continuous, then M𝑀Mitalic_M has a computable modulus of continuity.
Moreover if there exists a computable function m:ℕ→ℕ:𝑚→ℕℕm\colon\mathbb{N}\rightarrow\mathbb{N}italic_m : blackboard_N → blackboard_N (called a modulus of continuity for M𝑀Mitalic_M)
Hence in each of the next i𝑖iitalic_i steps, we are guaranteed to output a letter. Hence m′superscript𝑚′m^{\prime}italic_m start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT is a modulus of continuity for M𝑀Mitalic_M and f𝑓fitalic_f is uniformly computable.
A
Note that for all r∈[1,2]𝑟12r\in[1,2]italic_r ∈ [ 1 , 2 ], if |δ|=Θ⁢(|Q|r)𝛿Θsuperscript𝑄𝑟|\delta|=\Theta(|Q|^{r})| italic_δ | = roman_Θ ( | italic_Q | start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT ) then the second approach is faster by at least a factor of |Q|𝑄|Q|| italic_Q |.
In the rest of the paper we analyse the general case of model checking a given Markov chain against a given unambiguous Büchi automaton.
Perhaps more significantly, model checking Markov chains against Büchi automata, i.e., computing the probability that the random word generated by the Markov chain is accepted by the automaton, is a key problem in the verification of probabilistic systems.
In this paper we obtain a faster algorithm (recall that E𝐸Eitalic_E is the set of transitions in the Markov chain):
Although it is not the main focus of this paper, we have analysed also the model-checking problem, where a non-trivial Markov chain is part of the input.
D
2.9×1072.9superscript1072.9\times 10^{7}2.9 × 10 start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT
2×1062superscript1062\times 10^{6}2 × 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT
2×1062superscript1062\times 10^{6}2 × 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT
2×1062superscript1062\times 10^{6}2 × 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT
3.5×1063.5superscript1063.5\times 10^{6}3.5 × 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT
D
Multivariate analysis of variance (Manova) [38, 24] and Hotelling T2superscript𝑇2T^{2}italic_T start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (Hotelling) [39].
Mean Embedding Test [40]: A test based on the analytical mean embeddings between two distributions.
The evaluation uses a spiral simulation with 1000 samples and 2 dimensions for each test and compares test statistics over 20 repetitions. Figure 1b shows the difference between the hyppo implementation of the independence test and the respective R package implementation of the independence test.
Smooth CF Test [40]: A test using analytic analogues of characteristic functions.
tools contains common helper functions, simulations, and finite-sample statistical power functions to test differences between each test in hyppo.
C
A pixelated “Game Over” was the message that flashed across our CRT televisions when we ran out of lives. We would start again, play until the next “Game Over,” and repeat until our mothers told us our eyeballs would turn into squares if we played any longer. The same “Game Over” also signaled the conclusion of a game. It marked completion of every level. Sometimes this unlocked new content, and sometimes we loaded a new cartridge into the console. In any case, as long as we kept playing, the game was never really over.
To test our hypothesis, we consider how to gamify the vehicle routing problem with stochastic requests (VRPSR). The VRPSR is an important problem in modern logistics. It is the problem of dynamically routing a vehicle to service customer requests that occur at random times across an operating horizon and in random places within a service area. The objective is to identify a routing policy that maximizes the expected number of serviced requests. The VRPSR models a range of operational scenarios, including those faced by service technicians, meal delivery services, and couriers. Identifying an optimal VRPSR policy is challenging, especially for instances of any practical size. If the VRPSR can be represented as a game, and if DRL can provide a useful solution methodology, then this would be a notable achievement. Rather than relying on tailored optimization routines to aid online decision making, academics and practitioners alike could instead turn to a more general DRL method.
Game worlds, playable areas, zoom views, minimaps, deep-Q𝑄Qitalic_Q networks. These things are not the end. This is only level one. New quests await us. Games, like optimization problems, have always been about discovery of a winning policy. Whether that happens through theorems and proofs, or by sitting an agent in front of a screen through millions of “Game Overs,” the goal is the same. This paper shows that it is possible to represent the VRPSR as a game and to train a neural network to play it better than some optimization algorithms. To keep going down this road, we must find new connections between games and optimization problems. Any number of vehicle routing problem variants could be gamified. Visual representations of scheduling, production, and assortment problems could lead to new designs. And a host of problems outside the operational domain might also be amenable to gamification.
DRL methods have been applied to a variety of tasks involving sequential decisions and uncertainty. These tasks span the domains of healthcare (Liu et al., 2017), image recognition (Choi et al., 2018), and autonomous driving (Sallab et al., 2017), to name a few. In the operational realm, members of our team have applied DRL to dynamic taxi dispatching (Kullman et al., 2022), Amazon has used DRL for online bin packing, newsvendor, and vehicle routing problems (Balaji et al., 2019), and others have applied it to production scheduling (Palombarini and Martínez, 2022; Xinquan and Xuefeng, 2023). Perhaps the most well-known application is to games, where DRL has achieved superhuman performance in chess (Silver et al., 2018), Go (Silver et al., 2018), Doom (Lample and Chaplot, 2017), Texas Hold’em Poker (Heinrich and Silver, 2016), and StarCraft II (Vinyals et al., 2019). Notably, the architecture of Mnih et al. (2015) outperforms humans on the majority of 49 Atari games. This is achieved despite differences across the suite of games, such as appearance, goals, rewards, and actions. Indeed, it was the success of Mnih et al. (2015) that led to our reflection on whether similar DRL methods might perform comparably on any game with a related format. Games, like dynamic and stochastic optimization problems, are challenging because they involve long sequences of decisions in the face of uncertain outcomes. Thus, one also wonders if game-based representations of such problems, paired with DRL, can lead to viable solution methodologies. The possibility of bridging these two worlds motivates this paper.
Though some of our game designs allow agents to outperform benchmark policies, our aim is not to develop a state-of-the-art procedure. Rather, our contribution is a connection between the seemingly disparate worlds of video games and logistics. More generally, our work points to the representation of dynamic and stochastic optimization problems via games as a promising research direction.
B
We next discuss the role of the symmetric generations structure in the bound of two additional signals per generation. We have implicitly imposed three restrictions on the network, beyond the basic generations structure: the observation structures are the same across generations, all generations are the same size, and observations are symmetric within generations. We assume the same observation structure across generations primarily for simplicity of exposition, and could bound aggregative efficiency with the same techniques while allowing different symmetric observation structures in different generations. The key step in extending the proof is the Markov chain mixing result, which must be replaced by a mixing result for non-homogeneous Markov chains (for examples of such results, see Blackwell (1945) and Tahbaz-Salehi and
is worse with larger generation sizes, as illustrated in Figure 1. We also show that even early generations learn slowly in maximal generations networks. Social learning accumulates no more than three signals per generation starting with the third generation. If everyone in the first generation observes a single additional common ancestor, then the same bound also holds for all generations.
We next discuss the role of the symmetric generations structure in the bound of two additional signals per generation. We have implicitly imposed three restrictions on the network, beyond the basic generations structure: the observation structures are the same across generations, all generations are the same size, and observations are symmetric within generations. We assume the same observation structure across generations primarily for simplicity of exposition, and could bound aggregative efficiency with the same techniques while allowing different symmetric observation structures in different generations. The key step in extending the proof is the Markov chain mixing result, which must be replaced by a mixing result for non-homogeneous Markov chains (for examples of such results, see Blackwell (1945) and Tahbaz-Salehi and
The arbitrarily large information loss we have highlighted in Section 4 can have large welfare consequences. To illustrate this, we give an example comparing complete networks to maximal generations networks with large generations.
Jadbabaie (2006)). We could also allow different sized generations and obtain bounds on aggregative efficiency. For example, the logic of Example 2 would extend to maximal generation networks with generations of varying sizes.
D
We therefore impose an ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-norm constraint to the first layer of 𝐖hsuperscript𝐖ℎ\mathbf{W}^{h}bold_W start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT which includes the fully connected weights.
We expect that this constraint will set many elements in the first layer of 𝐖hsuperscript𝐖ℎ\mathbf{W}^{h}bold_W start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT to zeros and identify the edges that
We therefore impose an ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-norm constraint to the first layer of 𝐖hsuperscript𝐖ℎ\mathbf{W}^{h}bold_W start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT which includes the fully connected weights.
and 𝐖hsuperscript𝐖ℎ\mathbf{W}^{h}bold_W start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT in the FC layer (embedded in olsubscript𝑜𝑙o_{l}italic_o start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT)
and 2) weight 𝐖hsuperscript𝐖ℎ\mathbf{W}^{h}bold_W start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT within the FC for prediction.
A
A retired region is a region in the old reference genome that does not map to any region in the new reference genome (colored in pink).
An updated region is a region in the old reference genome that maps to at least one region in the new reference genome within a reasonable error rate, i.e., differences from the old reference (colored in orange with some differences marked with black bars).
A new region is a region in the new reference genome that does not map to any region in the old reference genome (colored in green).
A constant region is a region of the genome that is exactly the same in both old and new reference genomes (colored in blue). The start and end positions of a constant region are not necessarily the same in the old and new reference genomes.
A retired region is a region in the old reference genome that does not map to any region in the new reference genome (colored in pink).
B
Khaligh-Razavi, S.-M., and Kriegeskorte, N.: Deep supervised, but not unsupervised, models may explain IT cortical representation. PLOS Computational Biol. 10 (2014).
Khaligh-Razavi, S.-M., and Kriegeskorte, N.: Deep supervised, but not unsupervised, models may explain IT cortical representation. PLOS Computational Biol. 10 (2014).
Kriegeskorte, N. and Douglas, P. K.: Interpreting Encoding and Decoding Models. arXiv:1812.00278 [q-bio] (2018).
Evtimov, I et al. “Robust physical-world attacks on deep learning models,” arXiv:1707.08945, 2017.
Kriegeskorte, N. and Douglas, P. K.: Cognitive Computational Neuroscience. Nature Neuroscience. 21. 1148-1160 (2018).
B
Nusubscript𝑁𝑢N_{u}italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT bits by applying Theorem 3.3 (with parameters N=Nu𝑁subscript𝑁𝑢N=N_{u}italic_N = italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT, k=ku𝑘subscript𝑘𝑢k=k_{u}italic_k = italic_k start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT and 1⁢log⁡n1𝑛1\log n1 roman_log italic_n) and store it by calling
D.𝚠𝚛𝚒𝚝𝚎⁢(u,X)formulae-sequence𝐷𝚠𝚛𝚒𝚝𝚎𝑢𝑋D.\mathtt{write}(u,X)italic_D . typewriter_write ( italic_u , italic_X ).
dense rank to replace the self-delimiting number in D.𝚟𝚎𝚌𝚝𝚘𝚛⁢(u)formulae-sequence𝐷𝚟𝚎𝚌𝚝𝚘𝚛𝑢D.\mathtt{vector}(u)italic_D . typewriter_vector ( italic_u ) by the
operations 𝚛𝚎𝚊𝚍𝚛𝚎𝚊𝚍\mathtt{read}typewriter_read(u𝑢uitalic_u) and 𝚠𝚛𝚒𝚝𝚎𝚠𝚛𝚒𝚝𝚎\mathtt{write}typewriter_write(u𝑢uitalic_u) in constant time as
vector q𝑞qitalic_q by calling D.𝚠𝚛𝚒𝚝𝚎⁢(u,(h+1,rq))formulae-sequence𝐷𝚠𝚛𝚒𝚝𝚎𝑢ℎ1subscript𝑟𝑞D.\mathtt{write}(u,(h+1,r_{q}))italic_D . typewriter_write ( italic_u , ( italic_h + 1 , italic_r start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT ) ).
A
As an alternative approach, relative localization based on wireless communication between drones has the advantage of being lightweight and suitable for resource-constrained aerial vehicles.
To support relative localization, the ranging message must carry additional velocity, yaw rate, and height information of the sender, so that the neighbors can utilize such information for relative localization estimations.
Figure 1: The scheme of the multi-robot system and all onboard sensors. Specifically, each robot has an inertial measurement unit (IMU), an optical flow sensor, and a downward-pointing laser sensor for obtaining acceleration, rotation rates, velocities, and height. This information is fused by an onboard filter to get the body-frame velocity, yaw rate, and height, which is further rotated to get the horizontal-frame velocity, yaw rate and height. By UWB wireless ranging and communication, other robots’ state information is received and combined, the relative positions and yaw are estimated.
In this simulation, the relative state between two robots is estimated and compared to the ground-truth relative position and yaw to verify the localization accuracy.
The aerial robots use wireless antennas to exchange state information (e.g., velocity, yaw rate, height) and combine these with relative range measurements obtained from the antennas, which attracts recent attensions [11, 12, 13].
D
The paper is organized as follows. We start by discussing the related background and our previous work (Section 2). We introduce the grammar of IEMA that bases on a new taxonomy of explanations (Section 3) and present its applicability on two real-world predictive tasks (Section 4). We then report the results from a user study aiming to evaluate IEMA in a third practical setting (Section 5). Finally, we conclude with a discussion on the challenges in human-centered XIML (Section 6).
2.1 A theory-practice mismatch in explainable and interpretable machine learning
The topic of explainable machine learning brings much attention recently. However, related work is dominated by contributions with a technical approach to XIML or works focused on providing a list of requirements for its better adoption.
Complex machine learning predictive models, often referred to as black-boxes, demonstrate high efficiency in a rapidly increasing number of applications. Simultaneously, there is a growing awareness among machine learning practitioners that we require more comprehensive tools for model interpretability and explainability. There are many technical discoveries in the field of explainable and interpretable machine learning (XIML) praised for their mathematical brilliance and software ingenuity (Baehrens et al., 2010; Ribeiro et al., 2016; Lundberg and Lee, 2017; Biecek, 2018; Alber et al., 2019; Apley and Zhu, 2020).
Contrastively, Yan et al. (2020) create an interpretable decision tree that supports decision-making in a hospital, concerning COVID-19 mortality. We applied the methodology of IEMA to showcase the potential human-model dialogue with a machine learning black-box in this use-case (Baniecki and Biecek, 2021). It results in a list of potential questions that appear in explanatory model analysis and a practical tool that could be used in place of a standard decision tree.333The modelStudio dashboard for the COVID-19 use-case: https://rai-covid.drwhy.ai. The grammar of IEMA becomes useful in the external audit of machine learning models.
A
The pump and dumps on currencies with higher market capitalization are typically carried out on Binance, the ones with lower market capitalization on Cryptopia.
Once a pump is detected we pause our classifier for 30303030 minutes to avoid multiple alerts for the same event.
YoBit is the exchange where most of pump and dump operations happen, while Binance is the most popular exchange among all groups.
The pump and dumps on currencies with higher market capitalization are typically carried out on Binance, the ones with lower market capitalization on Cryptopia.
In particular, the median market capitalization of the cryptocurrencies for exchange is $25,574,192currency-dollar25574192\$25,574,192$ 25 , 574 , 192 for Binance, $2,619,703currency-dollar2619703\$2,619,703$ 2 , 619 , 703 for YoBit, $2,512,627currency-dollar2512627\$2,512,627$ 2 , 512 , 627 for BitTrex, and $144,373currency-dollar144373\$144,373$ 144 , 373 for Cryptopia.
D
Note that there are different possible ways of constructing computational models
In particular, we can then apply the construction of cohomological information as in
higher mutual information functionals of cohomological information, in the same way as the
One interprets then the rest of the cohomological integrated information as measures
with respect to which one can compute the cohomological integrated information as
A
In this paper, we argue that applying neurotechnology for human augmentation to augment physicians and surgeons, and can cause personal identity, discrimination and financial issues for physicians and surgeons, and lead to patients being harmed. The way that the paper is structured is as followed: we first describe the augmentations and neurotechnologies that can be used to achieve the relevant augmentations for physicians and surgeons, then review selected ethical concerns discussed within literature particularly focusing on human rights, human-computer interaction, data, brain-computer interface, global bioethics and drug development, and discuss the neuroengineering behind using neurotechnology for augmentation purposes. We then conclude with an analysis on outcomes and ethical issues of implementing human augmentation via neurotechnology in medical and surgical practice. The ethical analysis specifically focuses on social issues, global health inequality and health migration, and patient harm, and includes an assessment on personhood with respect to the neurotechnology users (i.e. the physicians and surgeons). In this paper, we assume that all neurotechnologies mentioned always succeeds in providing a person with augmentations, but the type of augmentation provided by a neurotechnology is based on what its capable of. The motivation behind this assumption is to allow the paper to address a possible best-case scenario with neurotechnology and human augmentation in physicians and surgeons, and focus on ethical issues that arise with this scenario.
For our case, we will only mainly focus on (1) and (4) as (2) and (3) are more distant to the scope of the paper. However, those are areas that can and should be further investigated. With there being four different groups, there is the change that those that physicians and surgeons that are fully augmented (i.e. have successfully undergone all augmentation via neurotechnology) mistreat those that do not have the augmentations that they have. This possibility is not farfetched and is very likely to happen. In every industry, including healthcare and medicine, we see prejudice and workers being mistreated due to their title, lack of experience or the lack of knowledge that (at least according to the administration or management within their organization). With those physicians and surgeons that do not have a particular augmentation, such as and especially augmented vision, cognition and touch, fully and semi augmented physicians and surgeons can take advantage of that to enforce their agendas and ideologies and bypassing arguments and statements made by them on the grounds that given their inferiors cognitive abilities or senses, what they say and their opinions should not be weighted equally to those that are fully augmented, or things along those lines. As such, augmentations increase the chance of prejudice and mistreatment occurring among physicians and surgeons, reinforcing existing beneficence, non-maleficence and justice issues within healthcare and medicine. Overtime, this can lead to alteration of personality among physicians and surgeons. Furthermore, this can lead to moral distress, trust issues, and other psychological issues experienced by those doctors that are mistreated, resulting in a loss of personal identity. This is dangerous as this shift in personality and personal identity will not only impact the performance of those physicians and surgeons and the lives of patients, but can create or further contribute to suicidal thoughts within them. Depending on the profile, involvements, commitments and types of patients that the doctor was treating, the impact that their suicide can have can be one that is negatively impacts individuals on a one-to-one, domestic, and/or international level.
Knowing how neurotechnology can realize augmentations within humans enables us to address how such technology and augmentations will impact medical practice. For this section, we will specifically focus on the practice of surgery. In this case, the users of the neurotechnology are surgeons. Within surgical settings, augmented vision, augmented touch and augmented gesture will enable surgeons to perform surgeries with a higher level of precision and accuracy by allowing them to see details more clearly, utilize surgical tools and methods better, and perform surgical procedures and operations better. As a result, the success rate of surgeries performed by surgeons utilizing neurotechnology will increase, which is good for multiple reasons; the main reason being that successful surgery will enable the patient to live longer and healthier life, which is helpful for the individual, their family and society. Augmented cognition for surgeons enables surgeons to be able to learn new surgical procedures that are less invasive, shorten operative times, lessens costs and reduce the likelihood of complications. Furthermore, it will allow surgeons to be more knowledgeable in surgical techniques and competent in the ability to recognize the limits of their professional competence [36]. The benefits of these can best be understood through a bioethics perspective, with specifically using the principle of beneficence, non-maleficence and justice, In terms of beneficence, in neurotechnology and the augmentations empowering surgeons, they enable surgeons to take actions, make suitable surgical judgements to assess the risks, burdens and benefits, and approach surgeries in a way that that respects and is in the best the interest of their patients. In addition, their improved cognitive abilities will allow them to stay up to date with activities and update their knowledgebase as required.
For this section, we will focus on medical practice that do not involve surgery. In this case, the users of the neurotechnology would be physicians. Within non-surgical settings, augmented vision and augmented cognition will allow physicians to better perform diagnoses. For instance, given that the physician will have enhanced vision and cognitive abilities, analysis of scans from diagnostic radiology exams such as computed tomography (CT) and fluoroscopy would be better performed as physician will be able to see details more clearly and better spot indicators on the scans. As such, this will allow physicians to make more accurate diagnoses, which can result in better understanding assessment, planning and execution of treatment, communication between the physician and the patient, maintain of the physicians’ fidelity and responsibilities, and care for the patient. These help with achieving non-maleficence as effective diagnosis and communication significantly reduces the amount of harm within patients on both a physical and mental level. Good diagnosis helps with increasing the chances that a correct treatment will be used, and prevents moral distress in patients and their family. Additionally, and most importantly, with neurotechnology, augmented vision and augmented cognition enhancing physicians’ diagnoses performance, this leads to a major step towards justice. Specifically, improvement in diagnosis will lead to reduction in discrimination, misclassification and prejudice experienced by patients, and better provide freedom from misdiagnosis to patients.
Human augmentation can be formally defined as an interdisciplinary field that addresses methods, technologies and their application for enhancing cognitive abilities, senses and actions of humans [32]. In enhancing these leads us to augmented cognition, augmented senses, and augmented actions. Augmented senses focus on enhancing one’s ability to see, hear, smell, taste and touch. Augmented cognition focuses on enhancing one’s memory (short-term and long-term), attention, learning capabilities, awareness, and knowledge. Augmented action can be broken down to two parts: motion augmentation and gesture augmentation. Motion augmentation would simply be improving one’s ability to move, enabling them to take actions that they may not normally be able to do such as carry very heavy objects or run at a faster speed. Gesture augmentation is similar to motion augmentation, except that it focuses more movement and positioning of one’s hand and head. For instance, being able to keep your hand in a specific position for a long amount of time without fatigue and shaking would be considered augmented gesture and not motion augmentation. The end goal of both augmentations is that the augmentation should allow the person to perform an action optimally. From all the augmentations mentioned, we will only focus on augmented senses (specifically augmented vision and augmented touch), augmented cognition, and augmented action (specifically augmented gesture). The motivation behind these augmentations in particular is because these augmentations best allow physicians and surgeons to better diagnose and treat diseases and disorders.
D
In Section 5, we prove that the first fall degree of a multi-graded polynomial system is bounded by a certain value determined from its multi-degree if the order of the coefficient field is sufficiently large, and provide the theoretical assumption for applying the XL algorithm with a kernel search.
In Section 5, we prove that the first fall degree of a multi-graded polynomial system is bounded by a certain value determined from its multi-degree if the order of the coefficient field is sufficiently large, and provide the theoretical assumption for applying the XL algorithm with a kernel search.
In this article, we mainly investigate an upper bound of the first fall degree for a polynomial system over a sufficiently large field. Upper bounds given in this article are actually that of the minimal degree d𝐾𝑆𝑦𝑧subscript𝑑𝐾𝑆𝑦𝑧d_{{\it KSyz}}italic_d start_POSTSUBSCRIPT italic_KSyz end_POSTSUBSCRIPT at the first homology of the Koszul complex to a polynomial system (Definition 4.1). The assumption for the order of a field gives that the minimal degree coincides with the first fall degree (Proposition 4.4).
In Section 6, we provide actual examples that satisfy the condition for the order of the coefficient field.
In Section 4, we prove that the first fall degree is smaller than the degree of regularity in the semi-regular case if the order of the coefficient field is sufficiently large.
C
Antic, J.: A deep learning based project for colorizing and restoring old images (2018)
Anwar, S., Khan, S., Barnes, N.: A deep journey into super-resolution: A survey.
Anwar, S., Khan, S., Barnes, N.: A deep journey into super-resolution: A survey.
Anwar, S., Khan, S., Barnes, N.: A deep journey into super-resolution: A survey.
Therefore, inspired by surveys in deep image super-resolution [6], VQA [1], etc., we provide a comprehensive survey of deep image colorization.
A
In Section 4.1 we show that the factor 2 is essentially optimal in the worst case, unless P = NP.
We study the computational complexity of finding φ∗superscript𝜑\varphi^{*}italic_φ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT and polynomial-time approximation algorithms. In general, approximating φ∗superscript𝜑\varphi^{*}italic_φ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT can be an extremely hard problem, even in the constrained persuasion problem. Our first insight in Section 5.1 is that the main source of hardness in the problem is deciding on the optimal set of accept signals. We then provide a simple 2⁢n2𝑛2n2 italic_n-approximation algorithm and an n1−εsuperscript𝑛1𝜀n^{1-\varepsilon}italic_n start_POSTSUPERSCRIPT 1 - italic_ε end_POSTSUPERSCRIPT-hardness in Section 5.2. The PTAS and the matching strong NP-hardness for instances with unique accepts as well as the efficient algorithm for unique rejects are discussed in Section 5.3. The section concludes with the discussion of instances with global signal or laminarity properties in Section 5.4.
For the constrained delegation problem, we show two interesting non-trivial approximation results in Section 4.2. For instances with degree-d𝑑ditalic_d states we give a (2−1d2)21superscript𝑑2(2-\frac{1}{d^{2}})( 2 - divide start_ARG 1 end_ARG start_ARG italic_d start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG )-approximation algorithm via LP rounding. For degree-2 states, we propose a SDP-based algorithm to compute a 1.1-approximation. To the best our knowledge, this is the first application of advanced results from the SDP toolbox in the context of information design, as well as mechanism design.
We discuss tractable special cases in Section 4.3. Sher [32, Theorem 7] shows that in instances with foresight the optimal decision scheme can be found in polynomial time by solving a network flow problem. Unique rejects and degree-1 accepts are special cases, so the same result holds. For proof of membership, the optimal decision scheme is very simple. In case of laminar states or laminar signals, the instance does not necessarily fulfill the conditions of foresight. In both cases, we provide new polynomial-time algorithms based on dynamic programming.
In Section 4.2 we present our results on approximation algorithms. The results on special cases with optimal schemes are discussed in Section 4.3.
D
In contrast with the setting in this paper where memory involves the dependence of the output time series on the input, the Hurst exponent measures temporal variations and dependence within the input time series itself.
This may serve to justify or improve current heuristic methods (Tseng et al., 2016; Dieng et al., 2017; Trinh et al., 2018) developed in applications to deal with the difficulty in training with long-term memory.
Recurrent neural networks (RNNs) (Rumelhart et al., 1986) are among the most frequently employed methods to build machine learning models on temporal data. Despite its ubiquitous application (Baldi et al., 1999; Graves & Schmidhuber, 2009; Graves, 2013; Graves et al., 2013; Graves & Jaitly, 2014; Gregor et al., 2015), some fundamental theoretical questions remain to be answered. These come in several flavors. First, one may pose the approximation problem, which asks what kind of temporal input-output relationships can RNNs model to an arbitrary precision. Second, one may also consider the optimization problem, which concerns the dynamics of training (say, by gradient descent) the RNN. While such questions can be posed for any machine learning model, the crux of the problem for RNNs is how the recurrent structure of the model and the dynamical nature of the data shape the answers to these problems. For example, it is often observed that when there are long-term dependencies in the data (Bengio et al., 1994; Hochreiter et al., 2001), RNNs may encounter problems in learning, but such statements have rarely been put on precise footing.
Much of the time series literature investigates statistical properties and estimation methods of data with long range dependence (Samorodnitsky, 2006; Taqqu et al., 1995; Beran, 1992; Doukhan et al., 2003). One can also combine these classic statistical methodologies with the RNN-like architectures to design hybrid models with various applications (Loukas & Oke, 2007; Diaconescu, 2008; Mohan & Gaitonde, 2018; Bukhari et al., 2020).
In the literature, a number of results have been obtained pertaining to the analysis of training dynamics of RNNs. A positive result for training by GD is established in Hardt et al. (2018), but this is in the setting of identifying hidden systems, i.e. the target functional comes from a linear dynamical system, hence it must possess good decay properties provided stablity. On the other hand, convergence can also be ensured if the RNN is sufficiently over-parameterized (large m𝑚mitalic_m; Allen-Zhu et al. (2019)). However, both of these settings may not be sufficient in reality. Here we provide an alternative analysis of a setting that is representative of the difficulties one may encounter in practice. In particular, the curse of memory that we established here is consistent with the difficulty in RNN training often observed in applications, where heuristic attributions to memory are often alluded to Hu et al. (2018); Campos et al. (2017); Talathi & Vartak (2015); Li et al. (2018). The analysis here makes the connection between memories and optimization difficulties precise, and may form a basis for future developments to overcome such difficulties in applications.
C
The uniform distribution on {(1,1,−1),(1,−1,1),(−1,1,1)}111111111\{(1,1,-1),(1,-1,1),(-1,1,1)\}{ ( 1 , 1 , - 1 ) , ( 1 , - 1 , 1 ) , ( - 1 , 1 , 1 ) } has the same pairwise biases.
It suffices to show that for every clause, there exists a distribution of satisfying assignments that agrees with the global (pairwise) biases.
The uniform distribution on {(1,1,−1),(1,−1,1),(−1,1,1)}111111111\{(1,1,-1),(1,-1,1),(-1,1,1)\}{ ( 1 , 1 , - 1 ) , ( 1 , - 1 , 1 ) , ( - 1 , 1 , 1 ) } has the same pairwise biases.
Our intuition for this conjecture is that this configuration comes from taking the uniform distribution over satisfying assignments where all but one of the Xisubscript𝑋𝑖X_{i}italic_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are the same, which we expect are the hardest satisfying assignments to distinguish from the unsatisfying assignments.
The following distribution on satisfying assignments has the same pairwise biases.
D
Even in the case of the smaller Coppersmith-Winograd tensor C⁢W1𝐶subscript𝑊1CW_{1}italic_C italic_W start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, the known limitations are weaker, and it is not ruled out that one could achieve ω<2.239𝜔2.239\omega<2.239italic_ω < 2.239 using it (see [ambainis, Table 1]).
The main contribution of this paper is a new refined version of the laser method which we then use to obtain the new bound on ω𝜔\omegaitalic_ω. The laser method (as coined by Strassen [laser]) is a powerful mathematical technique for analyzing tensors. In our context, it is used to lower bound the “value” of a tensor in designing matrix multiplication algorithms. The laser method also has applications beyond bounding ω𝜔\omegaitalic_ω itself, including to other problems in arithmetic complexity like computing the “asymptotic subrank” of tensors [Alman19], and to problems in extremal combinatorics like constructing tri-colored sum-free sets [kleinberg]. We believe our improved laser method may have other diverse applications.
After the preliminary version of this paper, subsequent work [duan2023faster, williams2024new] designed a further improved matrix multiplication algorithm, achieving ω<2.371552𝜔2.371552\omega<2.371552italic_ω < 2.371552. The key idea behind these improvements is a new asymmetric way to apply the laser method to C⁢Wq𝐶subscript𝑊𝑞CW_{q}italic_C italic_W start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT and its subtensors. Interestingly, their analysis, and particularly a new observation called “combination loss” [duan2023faster], appears quite specific to the tensor C⁢Wq𝐶subscript𝑊𝑞CW_{q}italic_C italic_W start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT, and it is unclear if the same approach would yield improvements for other tensors to which the laser method applies. It also appears difficult to use this asymmetric approach in conjunction with our refined laser method. Roughly, the new zeroing outs of the refined laser method may interfere with other subtensors which are intended to be kept in the asymmetric approach; we refer the reader to [williams2024new, End of Section 2.2] for a more technical discussion.
We present our new probabilistic argument for dealing with distributions β∈Dα𝛽subscript𝐷𝛼\beta\in D_{\alpha}italic_β ∈ italic_D start_POSTSUBSCRIPT italic_α end_POSTSUBSCRIPT other than α𝛼\alphaitalic_α in Section 3, and then we show how to incorporate it into the laser method in Section 4. We then get into the details of actually applying the laser method to C⁢Wq𝐶subscript𝑊𝑞CW_{q}italic_C italic_W start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT to achieve our new bound on ω𝜔\omegaitalic_ω: In Section 5 we discuss the computational problem of applying the laser method to a given tensor, and some algorithms and heuristics for solving it, and in Section 6 we detail how to apply the laser method to C⁢Wq𝐶subscript𝑊𝑞CW_{q}italic_C italic_W start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT specifically. Our new bound of ω<2.3728596𝜔2.3728596\omega<2.3728596italic_ω < 2.3728596 is achieved by applying our refined laser method to C⁢W5⊗32𝐶superscriptsubscript𝑊5tensor-productabsent32CW_{5}^{\otimes 32}italic_C italic_W start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ⊗ 32 end_POSTSUPERSCRIPT, the same tensor used by [legall].
In this subsection, we give an overview of our improvement to the laser method. We assume familiarity with basic notions related to tensors and matrix multiplication; unfamiliar readers may want to read Section 2 first.
D
=ℒsup,absentsubscriptℒsup\displaystyle=\mathcal{L}_{\mathrm{sup}},= caligraphic_L start_POSTSUBSCRIPT roman_sup end_POSTSUBSCRIPT ,
Π1subscriptΠ1\displaystyle\Pi_{1}roman_Π start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT
For the sets Σ1subscriptΣ1\Sigma_{1}roman_Σ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, ℒsupsubscriptℒsup\mathcal{L}_{\mathrm{sup}}caligraphic_L start_POSTSUBSCRIPT roman_sup end_POSTSUBSCRIPT, Π1subscriptΠ1\Pi_{1}roman_Π start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, ℒsubsubscriptℒsub\mathcal{L}_{\mathrm{sub}}caligraphic_L start_POSTSUBSCRIPT roman_sub end_POSTSUBSCRIPT and ℝcsubscriptℝ𝑐\mathbb{R}_{c}blackboard_R start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT, the following equalities hold true:
\underline{r}_{n}=x_{*}roman_inf start_POSTSUBSCRIPT italic_n ∈ blackboard_N end_POSTSUBSCRIPT start_ARG start_FLOATSUPERSCRIPT ′ end_FLOATSUPERSCRIPT end_ARG under¯ start_ARG italic_r end_ARG start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT = italic_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT. Hence, x∗∈Π1subscript𝑥subscriptΠ1x_{*}\in\Pi_{1}italic_x start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ∈ roman_Π start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT follows by Lemma 16.
The following lemma is a basic result by Zheng and Weihrauch, characterizing the sets Σ1subscriptΣ1\Sigma_{1}roman_Σ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and Π1subscriptΠ1\Pi_{1}roman_Π start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT through the suprema and infima of computable sequences of rational numbers.
A
\mathcal{R}\}.italic_L start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ( italic_D start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_D start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT ) ≜ roman_inf start_POSTSUBSCRIPT ( italic_R , italic_L ) end_POSTSUBSCRIPT { italic_L | ( italic_R , italic_L , italic_D start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , … , italic_D start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT ) ∈ caligraphic_R } .
In addition to the privacy and utility, the released rate, which represents the necessary number of bits per letter to transmit the privatized data, is also an important metric of a privatization mechanism as mentioned in Section II-A. We have studied the optimal tradeoff between privacy and utility. The next interesting problem is to know what is the minimum released rate to achieve the optimal privacy-utility tradeoff.
Due to the lack of information of which task to be carried out, a robust privatization based on a given set of possible tasks is considered. We first derive the single-letter characterization of the optimal privacy-utility tradeoff. By applying log-loss distortion as the utility metric, the minimum privacy leakage problem is formulated as a compound version of the privacy funnel problem. Under the assumption that the raw data comprises multiple independent components and the private feature is a component-wise deterministic function of the raw data, we show that the minimum privacy leakage problem can be decomposed into multiple parallel privacy funnel problems with corresponding weightings, each of which represents the amount of released information of each component of the raw data. We further solve the privacy funnel problem and show that the minimum leakage problem is equivalent to a linear program. The sufficient released rate to achieve the minimum privacy leakage is also studied. Numerical results are provided to illustrate the robustness of our approach as well as the impact of the selection of the set of possible tasks on the privacy-utility tradeoff.
that can achieve the minimum released rate in problem (44), and the minimum rate is
Note that in the above definition, L∗superscript𝐿L^{*}italic_L start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT is not a function of the released rate R𝑅Ritalic_R. Instead, it is the minimum leakage with arbitrarily large released rate. The impact of the released rate and the sufficient condition to achieve the optimal privacy-utility tradeoff will be studied in Section VI.
D
Additionally, we need a way to increase and decrease the approximation level (the n𝑛\mathit{{\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}%
\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}{n}}}italic_n index), for reasons we explain below.
\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}{n-2}}}italic_n - italic_2 steps.
\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}{n=3}}}italic_n = italic_3) for this type is:
\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}{t^{\prime}}}}italic_t start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT.
A
While the fact that ERM relies on spurious correlations has become empirical folk wisdom, only a few studies have made efforts to carefully model this. Broadly, there are two kinds of existing models that explain this phenomenon. One existing model is to imagine that both the invariant and the spurious features are only partially predictive of the label (Tsipras et al., 2019; Sagawa et al., 2020b; Arjovsky et al., 2019; Ilyas et al., 2019; Khani & Liang, 2020), as a result of which the classifier that maximizes accuracy cannot ignore the spurious feature (see Fig 1).
Spurious correlations. Empirical work has shown that deep networks find superficial ways to predict the label, such as by relying on the background (Beery et al., 2018; Ribeiro et al., 2016) or other kinds of shortcuts (McCoy et al., 2019; Geirhos et al., 2020). Such behavior is of practical concern because accuracy can deteriorate under shifts in those features (Rosenfeld et al., 2018; Hendrycks & Dietterich, 2019). It can also lead to unfair biases and poor performance on minority groups
While the fact that ERM relies on spurious correlations has become empirical folk wisdom, only a few studies have made efforts to carefully model this. Broadly, there are two kinds of existing models that explain this phenomenon. One existing model is to imagine that both the invariant and the spurious features are only partially predictive of the label (Tsipras et al., 2019; Sagawa et al., 2020b; Arjovsky et al., 2019; Ilyas et al., 2019; Khani & Liang, 2020), as a result of which the classifier that maximizes accuracy cannot ignore the spurious feature (see Fig 1).
The other existing model is based on the “simplicity bias” of gradient-descent based deep network training (Rahaman et al., 2018; Neyshabur et al., 2015; Kalimeris et al., 2019; Arpit et al., 2017; Xu et al., 2019; des Combes et al., 2018). In particular, this model typically assumes that both the invariant and spurious features are fully predictive of the label, but crucially posits that the spurious features are simpler to learn (e.g., more linear) than the invariant features, and therefore gradient descent prefers to use them (Shah et al., 2020; Nar et al., 2019; Hermann & Lampinen, 2020).
The most popular strategy is to learn useful features while constraining them to have similar distributions across domains (Ganin et al., 2016; Li et al., 2018b; Albuquerque et al., 2020). Other works constrain these features in a way that one can learn a classifier that is simultaneously optimal across all domains (Peters et al., 2016; Arjovsky et al., 2019; Krueger et al., 2020).
C
There is a constant c𝑐citalic_c such that for any d⩾3𝑑3d\geqslant 3italic_d ⩾ 3 it holds that any graph
Given any graph Gdsubscript𝐺𝑑G_{d}italic_G start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT that is drawn in a d𝑑ditalic_d-dimensional grid, we construct a
n𝑛nitalic_n variables, we can generate a graph Gdsubscript𝐺𝑑G_{d}italic_G start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT of degree at most 4444 drawn in a
G𝐺Gitalic_G of maximum degree 2⁢d2𝑑2d2 italic_d on n𝑛nitalic_n vertices can be drawn in a d𝑑ditalic_d-dimensional
cover on graphs drawn in a d𝑑ditalic_d-dimensional grid. In the second stage, given
C
In order to make the download rate decision of each user, the physical layer subproblem (PHY-LS) is solved before the download of every chunk. Note that a beam alignment period is considered before the download of each chunk, and that the beam coherence time is assumed to be longer than the chunk download time. Therefore, the PHY-LS is a time scheduling problem per beam coherence time, i.e, its goal is to find which user is scheduled to have the resources for each time slot for the upcoming beam coherence time.
Our objective twofold: (i) minimize Max⁢(𝒲)Max𝒲\text{Max}(\cal{W})Max ( caligraphic_W ) and, (ii) use the remaining resources to maximize the available rate for each user, so the application layer subproblem (APP-LS) can use the rate to maximize the QoE of each user. Since a user’s QoE is a concave function with respect to the achievable rate [13], and fairness among users also needs to be ensured, we consider the following optimization problem:
The problem defined in (5a)-(5k) is a non-convex optimization problem that has integer (non-convex) constraints. Integer-constrained problems are known to be NP hard in general [27]. Very limited problems in this class of discrete optimization are known to be solvable in polynomial time. Moreover, the FoV of each user is not known non-casually. In addition, the available rate can be calculated only for a short time ahead (the beam coherence time). Therefore, we propose to solve the problem per chunk and approximate its solution by decoupling it into two subproblems. The objective of the first subproblem is to find the rate of each user Ru⁢(t)subscript𝑅𝑢𝑡R_{u}(t)italic_R start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT ( italic_t ) for the next beam coherence period such that Max⁢(𝒲)Max𝒲\text{Max}(\cal{W})Max ( caligraphic_W ) is minimized, and the second subproblem is to maximize the application layer’s QoE metrics given Ru⁢(t)subscript𝑅𝑢𝑡R_{u}(t)italic_R start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT ( italic_t ) of each user. Next, we discuss the two subproblems and then we propose our framework to solve the resulting optimization problem.
The PHY-LS described in the previous subsection finds which user is scheduled to use the resources during the time slots comprised in the next beam coherence interval. Hence, the achievable rate for each user during the next beam coherence period is exposed to the application layer algorithm which leverages this knowledge to download video tiles such that the application layer’s QoE metric described in section (3) is maximized. Therefore, the application layer optimization problem (APP-LS) can be solved per user since the users’ problems are decoupled after finding the achievable rate for each one of them. Thus, for each user we have the following:
In this paper, we have proposed a cross layer optimization approach for multi-user 360 degree video streaming. The proposed framework maximizes the available rate to each user and allows each user to efficiently utilize the available rate to maximize its QoE. A QoE metric that maintains a tradeoff between maximizing each user’s QoE, and ensuring fairness among users has been considered. We have shown that the main problem can be decoupled into two subproblems: (i) the physical layer subproblem whose objective is to find the download rate of each user, and (ii) the application layer subproblem whose objective is to use that rate to find a quality decision per tile such that the user’s QoE is maximized. We have shown that the physical layer subproblem can be solved optimally with low complexity. Moreover, we have proposed a reinforcement learning based approach to solve the application layer subproblem. Our conducted experiments have revealed the robustness of our scheme and demonstrated its significant performance improvement compared to several 360 degree streaming algorithms.
A
Table 10: Prediction results on the training set by the generalised beta-CoRM models with vague gamma, objective Lomax, Lomax and half-Cauchy type priors respectively.
To analyse the effects of the prior on the beta-CoRM models proposed in the previous sections we consider a synthetic data set composed of 5 imbalanced groups with 250 total observations and 300 binary features as graphically represented in Figure 2. We are interested in looking at the computational cost of posterior inference and the impact of feature selection on the predictive performance of the models. It is important to notice that for the predictive performance, the test set has been generated using the same parameters as the synthetic data. Furthermore, and for illustrative purposes on the feature selection process we obtain the optimal number of features by using the true labels of the test set. Of course, we acknowledge that on real-world applications this is not possible and hence, to find the “best” features we could rely on a cross validation procedure.
With respect the feature selection it is further interesting to notice that the four generalised models coincide in 98 features, and this number increases to 119 common features if we just consider the generalised beta-CoRM models with gamma-gamma priors. To fully understand the process of feature selection on this noisy data set we present in Figure 7 the data restricted to the optimal figures for the four generalised beta-CoRM models. Then we are able to see better defined groups with the exception of the first, sixth and last family. Nevertheless the models are still able to identify some influencing features in the groups, which we can exploit to improve the predictive performance of the original model on the test set as seen in Table 11.
Now that we have compared the posterior inference of the beta-CoRM models and the impact of the prior we can turn our attention to the feature selection procedure of the generalised beta-CoRM models. To this end we are mainly interested on the optimal number of features, the threshold at which they are found and of course, the impact on the predictive performance. Firstly, in Table 5 we present the results on the first two characteristics mentioned above for the generalised beta-CoRM models. One can appreciate that the thresholds found get larger as we consider scale-mixture gamma priors with heavier tails. Clearly, with the half-Cauchy type prior having the largest threshold. A behaviour we have already seen in the global score parameters as explained in Section 6.2.
Of course, it is important to remember that the main advantage of the generalised models is that they allow us to find an optimal number of features to reduce the uncertainty in the data and which can be further used to improve the predictive performance of the model. To this end and contrary to the synthetic data we obtain the optimum threshold and the optimal features by using the training set as our validation set as well. With this in mind, we present in Table 10 the results on the training set including the maximum accuracy achieved, the threshold used for the generalised beta-CoRM models and the number of features required to achieve the maximum accuracy. Clearly, with the generalised versions we are able to better explain the data with the objective Lomax prior being the best one. Furthermore, we can again appreciate the similarities between the models with vague gamma priors and the ones having the gamma-gamma prior structure.
B
To further improve modelling, in-depth extraction of indicator features and using high-quality data are suggested, which may involve a broader range of risk potentials and driving scenarios, such as lane changing. Methodologically, new algorithms on clustering and AutoML can be integrated to refine the solution.
The occurrence of risk conditions (i.e., the minority class) is usually much lower than the number of safety instances (i.e., the mass majority class). Incorrect assignment of risk instances into a safety class entails a great misclassification cost [17]. Machine learning on imbalanced data is challenging, since algorithms are generally driven by global optimisation, which is likely biased towards the majority, and the minority class might be wrongly ignored as noise or outliers [18]. Moreover, there are observed intrinsic challenges in class imbalance, such as: (1) presence of small disjuncts, (2) lack of density and information, (3) problem of overlapping between classes, (4) non-obvious borderline instances for the distinction between positive and negative classes, and (5) the identification of noisy data, among others [19]. In supervised learning, there are several tactics to reduce the impacts of class imbalance, such as data under-sampling and/or oversampling, cost-sensitive loss functions [20], [18]. However, the techniques to handle unsupervised learning on imbalanced data is not well investigated. Yet risk clustering is to find convincing and useful partitions to retrieve the imbalanced classes.
In view of a large number of real-world applications suffering from the challenges of class imbalance and lack of ground truth (e.g., expensive or difficult to obtain beforehand), we demonstrate a reliable solution with a case on road safety, which holds great potentials.
Early risk diagnosis and effective anomaly detection play a key role in a range of advanced solutions towards Smart Road, especially with the development of autonomous and connected vehicles (CAV) and roadside sensing [1]. Smart road will add huge benefits and synergistic effects to standalone smart vehicles, which can enable safety capacity beyond ego vehicle sensing [2]. Roadside sensing and CAV provide the capability of data acquisition, however, data analysis for risk diagnosis and anomaly detection remain weak, due to the inherent challenges of lack of risk definition, or no ground truth [3, 4]. There is a perennial quest to develop measures to early-identify anomaly and risk potentials from generalised traffic conditions, especially based on easy collectable data.
The quality of clustering can be generally measured by two ways, namely, external validation and internal validation. External validation is by comparing the clustering result to the ground truth or well-defined reference. However, in most real applications, one can hardly claim that a complete knowledge of ground truth is available or valid. Thus, external evaluation is mostly used for synthetic data and tuning models.
B
The experimental conditions of the GA for GPU and many core CPU loop statement offload are as follows.
Number of generations T: No more than the gene length. 16 for 3mm, 20 for NAS.BT and 6 for tdFIR.
First, as a preparation, I proposed an automatic offload method for loop statements for a many core CPU as one of various offloading destination environments, with reference to the evolutionary computation method for a GPU. Next, I studied the order of offload trials for each offloading device and the speedup method when there were multiple offload candidates with one node. Specifically, the function block offload for many core CPU, the function block offload for GPU, the function block offload for FPGA, the loop statement offload for many core CPU, the loop statement offload for GPU, the loop statement offload for FPGA are verified in this order. This is because functional blocks offloading can be faster than loop statements offloading, and FPGA verifications take longer time to measure performance.
Gene length: Number of GPU and many core CPU processable loop statements. 18 for 3mm, 120 for NAS.BT and 6 for tdFIR.
Number of individuals M: No more than the gene length. 16 for 3mm, 20 for NAS.BT and 6 for tdFIR.
C
γ⁢λ3𝛾superscript𝜆3\gamma\lambda^{3}italic_γ italic_λ start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT
γ⁢λ3𝛾superscript𝜆3\gamma\lambda^{3}italic_γ italic_λ start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT
γ⁢λ5𝛾superscript𝜆5\gamma\lambda^{5}italic_γ italic_λ start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT
γ⁢λ3/2𝛾superscript𝜆32\gamma\lambda^{3/2}italic_γ italic_λ start_POSTSUPERSCRIPT 3 / 2 end_POSTSUPERSCRIPT
\varsigma\lambda^{3}}.italic_λ start_POSTSUBSCRIPT italic_μ end_POSTSUBSCRIPT = 1 , italic_λ start_POSTSUBSCRIPT italic_J end_POSTSUBSCRIPT = italic_γ italic_λ start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT , italic_λ start_POSTSUBSCRIPT caligraphic_P end_POSTSUBSCRIPT = italic_γ italic_ς italic_λ , italic_λ start_POSTSUBSCRIPT caligraphic_I end_POSTSUBSCRIPT = italic_γ square-root start_ARG italic_ς italic_λ start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT end_ARG .
C
Ecsuperscript𝐸𝑐E^{c}italic_E start_POSTSUPERSCRIPT italic_c end_POSTSUPERSCRIPT, one character bigram ci⁢ci+1subscript𝑐𝑖subscript𝑐𝑖1{c_{i}}{c_{i+1}}italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_c start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT, its representation Eb⁢csuperscript𝐸𝑏𝑐{E^{bc}}italic_E start_POSTSUPERSCRIPT italic_b italic_c end_POSTSUPERSCRIPT, and one trigram ci,ci+1,ci+2subscript𝑐𝑖subscript𝑐𝑖1subscript𝑐𝑖2c_{i},c_{i+1},c_{i+2}italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_c start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT , italic_c start_POSTSUBSCRIPT italic_i + 2 end_POSTSUBSCRIPT its representation Et⁢b⁢csuperscript𝐸𝑡𝑏𝑐{E^{tbc}}italic_E start_POSTSUPERSCRIPT italic_t italic_b italic_c end_POSTSUPERSCRIPT is obtained by indexing from Et⁢b⁢csuperscript𝐸𝑡𝑏𝑐{E^{tbc}}italic_E start_POSTSUPERSCRIPT italic_t italic_b italic_c end_POSTSUPERSCRIPT accordingly. The three looking-up tables are model parameters which would be
Table 4: An example of a Sindhi sentence, all words end with the non-joiner letters. (i) denote the words with white space (the tokens are separated with ‘-’ symbol), (ii) without white space (iii) Roman transliteration of Sindhi sentence (iv) is the English translation of a Sindhi sentence.
Table 5: Sindhi word types with an example of space insertion, along with English translation. (i) represent the words with white space (‘-’ symbol represents space), and (ii) without space. The Roman transliteration is given for ease of reading.
Table 6: An example of employed character-level sequence tagging scheme for SWS task. The [X] label represents the white spaces. The given Sindhi sentence can be read from right to left, and the Roman transliteration of each Sindhi token can be read from left to right.
Table 7: An example of Sindhi subword decomposition for subword representation learning
D
Since the SDP relaxation is exact, it holds that rank⁡(P∗)=1ranksuperscript𝑃1\operatorname{rank}(P^{*})=1roman_rank ( italic_P start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ) = 1. Therefore, P∗superscript𝑃P^{*}italic_P start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT can be expressed as
P=[1xz]⁢[1x⊤z⊤]∈𝕊nx+nz+1.𝑃matrix1𝑥𝑧matrix1superscript𝑥topsuperscript𝑧topsuperscript𝕊subscript𝑛𝑥subscript𝑛𝑧1P=\begin{bmatrix}1\\
P∗=[1x∗z∗]⁢[1x∗⊤z∗⊤]superscript𝑃matrix1superscript𝑥superscript𝑧matrix1superscript𝑥absenttopsuperscript𝑧absenttopP^{*}=\begin{bmatrix}1\\
P∗=[1vw]⁢[1v⊤w⊤]=[1v⊤w⊤vv⁢v⊤v⁢w⊤ww⁢v⊤w⁢w⊤]superscript𝑃matrix1𝑣𝑤matrix1superscript𝑣topsuperscript𝑤topmatrix1superscript𝑣topsuperscript𝑤top𝑣𝑣superscript𝑣top𝑣superscript𝑤top𝑤𝑤superscript𝑣top𝑤superscript𝑤topP^{*}=\begin{bmatrix}1\\
V=[e⊤XZ],e∈ℝr,X∈ℝnx×r,Z∈ℝnz×r,formulae-sequence𝑉matrixsuperscript𝑒top𝑋𝑍formulae-sequence𝑒superscriptℝ𝑟formulae-sequence𝑋superscriptℝsubscript𝑛𝑥𝑟𝑍superscriptℝsubscript𝑛𝑧𝑟V=\begin{bmatrix}e^{\top}\\
C
The paper is organized as follows: Section II covers key concepts, Section III introduces the BE-RAN framework, Section IV details core security mechanisms, Section V provides performance analysis, Section VI discusses challenges, and Section VII concludes the paper.
Current 5G RAN typically comprises Centralized Units (CU), Distributed Units (DU), and Radio Units (RU) operating at different OSI layers [7], with the RAN Intelligent Controller (RIC) operating at upper OSI layers. This disaggregated architecture is expected to evolve further in 6G, potentially introducing new functional splits and increased intelligence at the edge. BE-RAN’s flexible approach to identity management and authentication is designed to accommodate these future developments, aligning with the vision of decentralized radio access networks for 5.5G and beyond [14].
The functional split of RAN at lower layers opens up possibilities for distributed features that could enhance privacy and security in decoupled RAN architectures. In this context, the adoption of distributed ledger technology in RAN presents an opportunity for implementing blockchain-native infrastructure [10, 11, 12]. By integrating blockchain-enabled identity management and authentication functions within RAN, we can potentially provide more secure and user-centric services at a lower cost [8], while also facilitating the seamless integration of communication, sensing, and computing capabilities required for next-generation networks.
To address the growing demand for distributed communication in both industrial and consumer applications, we propose Blockchain-enabled Radio Access Network (BE-RAN). This system incorporates blockchain-enabled identity management and mutual authentication (BeMutual) as core functions, evolving RAN with a decentralization focus to provide enhanced privacy and connectivity for distributed scenarios.
BE-RAN incorporates key RAN features and blockchain concepts to implement core network functions at the RAN level, laying a foundation for future 6G networks. While current 5G architectures form the basis of our research, the principles and innovations proposed in BE-RAN are designed with forward compatibility in mind, anticipating the evolving needs of 6G and beyond [13].
D
Function sorted_adjacency_list_rt(Tr)superscript𝑇𝑟(T^{r})( italic_T start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT ) is
In: Trsuperscript𝑇𝑟T^{r}italic_T start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT rooted tree.
In: Trsuperscript𝑇𝑟T^{r}italic_T start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT rooted tree.
In: Trsuperscript𝑇𝑟T^{r}italic_T start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT rooted tree at r𝑟ritalic_r.
In: Trsuperscript𝑇𝑟T^{r}italic_T start_POSTSUPERSCRIPT italic_r end_POSTSUPERSCRIPT rooted tree at r𝑟ritalic_r.
A
In what follows, we aim to quantify the tradeoffs between excess adversarial risk, privacy, and runtime for output perturbation. We begin by considering the conceptual output perturbation algorithm, in which we assume that we can compute an exact (α=0𝛼0\alpha=0italic_α = 0) saddle point of HD⁢(w,𝐯).subscript𝐻𝐷𝑤𝐯H_{D}(w,\mathbf{v}).italic_H start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT ( italic_w , bold_v ) .
5.2.2 Conceptual output perturbation for differentially private adversarial training
Algorithm 6 Black Box Output Perturbation Algorithm for Implementing DP Adversarial Training
5.2.3 Efficiently Implementing Output Perturbation for Private Adversarial Training
In the next subsection, we present a practical, efficient algorithm for implementing differentially private adversarial training with runtime bounds for attaining the above robustness guarantees.
A
Fig.3 demonstrates the benefit of using our combined localization loss compared to regression only on Carvana dataset (Carvana, 2010). Higher accuracy has been obtained using the combined loss. Fig.4 shows our instance segmentation results on Cityscapes dataset. It is worth noting that our results are reported on "vehicles" class only.
Insta-YOLO does not have this limitation. Since the four box points are independent of each other, the resulting polygon will be oriented without additional parameter to learn and free of the angle encoding problem. Table 3 shows that our algorithm supersedes (Ali et al., 2018) in accuracy by 5% and runs at 2.7 times the speed, while it provides competitive results compared to Yolact
The standard method in literature for instance segmentation is to build a two-stage pipeline (He et al., 2017). First, objects are detected and highlighted using bounding boxes. Then, a semantic segmentation model processes the areas of the detected boxes to produce objects masks. This approach suffers from various drawbacks: (1) Computational complexity: due to costly upsampling (2) Slow processing: since the two-stages have to run sequentially (3) Does not fit for tasks that involve oriented boxes, such as remote sensing applications (Tang et al., 2017), or Bird-eye-View LiDAR (Ali et al., 2018).
Various applications such as (Tang et al., 2017) and (Ali et al., 2018) require prediction of oriented bounding boxes. Such method requires an additional regression operation to be performed to predict the angle of the box. Conventional oriented boxes methods suffer from angle encoding problem. For practical implementation constraints, box angle is predicted in the range of 0-180 degrees. This means that a slight change of a box oriented with 2 degrees can cause the prediction jump from 2 to 178 which might mislead the network and cause aggressive jumps in prediction.
Two-stage instance segmentation methods usually deal with the problem as a detection task followed by segmentation, they detect the bounding boxes of the objects in the first stage, and a binary segmentation is performed for each bounding box in the second stage. Mask R-CNN (He et al., 2017), which is an extension for the Faster R-CNN, adds an additional branch that computes the mask to segment the objects and replaces RoI-Pooling with RoI-Align to improve the accuracy. Hence, the Mask R-CNN loss function combines the losses of the three branches; bounding box, recognized class, and the segmented mask. Mask Scoring R-CNN (Huang et al., 2019) adds a mask-IoU branch to learn the quality of the predicted masks in order to improve the performance of the instance segmentation by producing more precise mask predictions. The mAP is improved from 37.1% to 38.3% compared to the Mask R-CNN on the COCO dataset. PANet (Liu et al., 2018) is built upon the Mask R-CNN and FPN networks. It enhanced the extracted features in the lower layers by adding a bottom-up pathway augmentation to the FPN network in addition to proposing adaptive feature pooling to link the features grids and all feature levels. PANet achieved mAP of 40.0% on the COCO dataset, which is higher than the Mask R-CNN using the same backbone by 2.9% mAP. Although the two-stage methods can achieve state-of-the-art performance, they are usually quite slow and can not be used for real-time applications. Using one TITAN GPU, Mask R-CNN runs at 8.6 fps, and PANet runs at 4.7 fps. Real-time instance segmentation usually requires running above 30 fps.
C
We study the classical problem of nonparametric dependence detection through a novel perspective of binary expansion. The novel insights from the extension of the Euler formula and the binary expansion approximation of uniformity (BEAUTY) shed lights on the unification of important tests into the novel framework of the binary expansion adaptive symmetry test (BEAST), which considers a data-adaptively weighted sum of symmetry statistics from the binary expansion. The one-dimensional oracle on the weights leads to a benchmark of optimal power for nonparametric tests while being agnostic of the alternative. By approximating the oracle weights with resampling and regularization, the proposed BEAST demonstrates consistent performance and is effectively powerful against a variety of complex dependency forms, showcasing its potential across diverse scenarios.
To facilitate the analysis of large datasets, some desirable attributes of distribution-free tests of independence include (a) a robust high power against a wide range of alternatives, (b) a clear interpretation of the form of dependency upon rejection, and (c) a computationally efficient algorithm. An example of recent development towards these goals is the binary expansion testing (BET) framework and the Max BET procedure in Zhang (2019). It was shown that the Max BET is minimax optimal in power under mild conditions, has clear interpretability of statistical significance and is implemented through computationally efficient bitwise operations Zhao et al. (2023b). Potential improvements of the Max BET include the followings: (a) The procedure is only univariate and needs to be generalized to higher dimensions. (b) The multiplicity correction is through the conservative Bonferrnoni procedure, which leaves room for further enhancement of power. In Lee et al. (2023), random projections are applied to the multivariate observations to reduce the dimension to one, so that the univariate methods of Zhang (2019) are applicable. An ensembled approach involving distance correlation is further used to improve the power towards monotone relationships.
Two important special cases of goodness-of-fit tests are the test of uniformity and the test of independence. The test of uniformity can be formulated as
When U1subscript𝑈1U_{1}italic_U start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and U2subscript𝑈2U_{2}italic_U start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT are Unif⁢[−1,1]Unif11\text{Unif}[-1,1]Unif [ - 1 , 1 ] distributed, the binary expansion up to depth D𝐷Ditalic_D effectively leads to a discretization of [−1,1]2superscript112[-1,1]^{2}[ - 1 , 1 ] start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT into a 2D×2Dsuperscript2𝐷superscript2𝐷2^{D}\times 2^{D}2 start_POSTSUPERSCRIPT italic_D end_POSTSUPERSCRIPT × 2 start_POSTSUPERSCRIPT italic_D end_POSTSUPERSCRIPT contingency table. Classical tests for contingency tables such as χ2superscript𝜒2\chi^{2}italic_χ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT-test can thus be applied. Similar tests include Fisher’s exact test and its extensions (Ma and Mao, 2019). Multivariate extensions of these methods include Gorsky and Ma (2018); Lee et al. (2023).
Our study on powerful nonparametric tests of uniformity can be further extended and generalized to many directions. For example, extensions to general goodness-of-fit tests and two-sample tests can be investigated through the BEAST approach. Recent papers in these directions include Brown and Zhang (2023); Zhao et al. (2023a). Tests of other distributional properties related to uniformity, such as tests of Gaussianity and tests of multivariate symmetry can also be studied through the BEAST approach. Insights from these tests can be used in constructing distribution-free models of dependence Brown et al. (2022).
D
Large Model Approaches often have deeper architectures and more parameters, allowing them to capture complex features and patterns. Popular approaches can include segmenting point clouds with large image models (such as SAM [79], [188]) and natural language models like ChatGPT. The advanced capabilities of large models enable them to capture intricate patterns and semantic relationships, leading to improved performance and accuracy in segmentation tasks.
Generalization and Robustness: Models trained on specific datasets may not generalize well to different types of 3D data, such as those acquired from different sensors or environments. Developing robust models that perform well across various domains remains a challenge. Besides, Variations in object shapes, sizes, and orientations in real-world can affect model performance. Ensuring robustness to such variations is critical for reliable 3D segmentation. Below
Transfer Learning involves leveraging pre-trained models to improve performance on specific tasks, where the models are generally trained on large, often unrelated datasets [192], [234]. In the context of 3D segmentation, transfer learning can significantly enhance accuracy performance and address challenges related to data scarcity and quality.
Meta-Learning trains models to adapt rapidly to new, unseen data with minimal additional training [231]. By optimizing the learning process and leveraging prior knowledge from various tasks, meta-learning enhances a model’s ability to generalize, perform well with few examples, and transfer knowledge across different tasks.
Given the differences in domain knowledge required for semantic, instance, and part segmentation tasks in 3D segmentation, this paper reviews the deep learning techniques for each of these three segmentation tasks separately.
C
Let us remark how bizarre this theorem appears from a cryptographer’s point of view. If 𝖡𝖰𝖯=𝖰𝖬𝖠𝖡𝖰𝖯𝖰𝖬𝖠\mathsf{BQP}=\mathsf{QMA}sansserif_BQP = sansserif_QMA, then no computationally-secure classical cryptographic primitives exist, because such primitives can be broken in 𝖭𝖯𝖭𝖯\mathsf{NP}sansserif_NP, which is contained in 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA. So, our construction is a black-box separation between PRUs and all nontrivial quantum-secure classical cryptography—a relativized world in which any computationally-secure cryptography must use quantum communication.
We use the following definitions of pseudorandom quantum states (PRSs) and pseudorandom unitaries (PRUs), which were introduced by Ji, Liu, and Song [JLS18].
Ji, Liu, and Song [JLS18] define a pseudorandom state (PRS) ensemble as a keyed family of quantum states {|φk⟩}k∈{0,1}κsubscriptketsubscript𝜑𝑘𝑘superscript01𝜅\{|\varphi_{k}\rangle\}_{k\in\{0,1\}^{\kappa}}{ | italic_φ start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ⟩ } start_POSTSUBSCRIPT italic_k ∈ { 0 , 1 } start_POSTSUPERSCRIPT italic_κ end_POSTSUPERSCRIPT end_POSTSUBSCRIPT such that states from the ensemble can be generated in time polynomial in κ𝜅\kappaitalic_κ, and such that no polynomial-time quantum adversary can distinguish polynomially many copies of a random |φk⟩ketsubscript𝜑𝑘|\varphi_{k}\rangle| italic_φ start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ⟩ from polynomially many copies of a Haar-random state. They also define an ensemble of pseudorandom unitary transformations (PRUs) analogously as a set of efficiently implementable unitary transformations that are computationally indistinguishable from the Haar measure. These definitions can be viewed as quantum analogues of pseudorandom generators (PRGs) and pseudorandom functions (PRFs), respectively. The authors then present a construction of PRSs assuming the existence of quantum-secure one-way functions, and also give a candidate construction of PRUs that they conjecture is secure.
Several applications of pseudorandom states and unitaries are known. PRSs and PRUs are useful in quantum algorithms: in computational applications that require approximations to the Haar measure, PRSs and PRUs can be much more efficient than t𝑡titalic_t-designs, which are information-theoretic approximations to the Haar measure that are analogous to t𝑡titalic_t-wise independent functions.111t𝑡titalic_t-designs are also sometimes called “pseudorandom” in the literature, e.g. [WBV08, BHH16a]. We emphasize that t𝑡titalic_t-designs and PRSs/PRUs are fundamentally different notions and that they are generally incomparable: a t𝑡titalic_t-design need not be a PRS/PRU ensemble, or vice-versa. Additionally, a variety of cryptographic primitives can be instantiated using PRSs and PRUs, including quantum money schemes, quantum commitments, secure multiparty communication, one-time digital signatures, some forms of symmetric-key encryption, and more [JLS18, AQY22, MY22b, BCQ23, MY22a, HMY23]. Finally, Bouland, Fefferman, and Vazirani [BFV20] have established a fundamental connection between PRSs and any possible resolution to the so-called “wormhole growth paradox” in the AdS/CFT correspondence.
Theorem 2 thus provides a negative answer (in the quantum black box setting) to a question of Ji, Liu, and Song [JLS18] that asks if quantum-secure one-way functions are necessary for pseudorandom states.
D
Numerical experiments on G-set are performed in Section 4. Concluding remarks are given in Section 5.
It should be pointed out that both MOH and CirCut are originally designed for the maxcut problem (1.3) and this work modifies them to generate approximate reference solutions for the anti-Cheeger cut problem (1.2). The descriptions for MOH and CirCut are presented in Appendix A and Appendix B, respectively.
That is, the anti-Cheeger cut problem (1.5) and the maxcut problem (1.6) are fully treated on equal terms by CIA2.
It should be pointed out that both MOH and CirCut are originally designed for the maxcut problem (1.3) and we have to modify them to produce approximate reference solutions for the anti-Cheeger cut problem (1.2) in this work. The interested readers may find more details on MOH and CirCut for the anti-Cheeger cut in Appendix A and Appendix B, respectively.
The similarity between the anti-Cheeger cut problem (1.5) and the maxcut problem (1.6), which shares a common numerator I⁢(𝒙)𝐼𝒙I(\mbox{\boldmath$x$})italic_I ( bold_italic_x ) for the cut values,
A
−μ⋅Mt⋅y⊺<0⋅𝜇superscript𝑀𝑡superscript𝑦⊺0-\mu\cdot M^{t}\cdot y^{\intercal}<0- italic_μ ⋅ italic_M start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT ⋅ italic_y start_POSTSUPERSCRIPT ⊺ end_POSTSUPERSCRIPT < 0 if and only if μ⋅Mt⋅z⊺>1⋅𝜇superscript𝑀𝑡superscript𝑧⊺1\mu\cdot M^{t}\cdot z^{\intercal}>1italic_μ ⋅ italic_M start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT ⋅ italic_z start_POSTSUPERSCRIPT ⊺ end_POSTSUPERSCRIPT > 1.
This is where the value of w𝑤witalic_w is important. The result holds if y=z−e𝑦𝑧𝑒y=z-eitalic_y = italic_z - italic_e,
where w⊺superscript𝑤⊺w^{\intercal}italic_w start_POSTSUPERSCRIPT ⊺ end_POSTSUPERSCRIPT is the transpose of w𝑤witalic_w, and
and we just need to show that y=z−e𝑦𝑧𝑒y=z-eitalic_y = italic_z - italic_e satisfies Equations (3.2.2), namely that
In this way, we obtain the main result of this section: an approximation of the value with
A
The models are implemented with PyTorch and trained/tested on one NVIDIA GTX 1080 Ti graphic card. The experimental settings are kept the same across all experiments for fair comparisons.
Following the mainstream evaluation of SR models [18], we present our model performance with self-ensemble [26], which forms the model ‘KASR+’
Interestingly, from these tables, we can verify that models with self-ensemble (‘KASR+’ models) are able to boost performance for both PSNR and SSIM, but not for LPIPS. This can be because the self-ensemble operation performs a pixel-level ensemble that may not effectively increase the perceptual quality of the images.
In the experiments, comparisons between the proposed model with existing SR models are performed. In the ablation study, the effectiveness of each component is validated. Also, we examine the effectiveness of the proposed KASR framework combined with multiple mainstream super-resolution models on two real-world datasets.
By default, the model labelled as ‘KASR’ in the experiments below consists of our proposed KASR paired with the EDSR model [18].
A
5:        if mod⁢(k,τ1)=0mod𝑘subscript𝜏10\mathrm{mod}\left(k,\tau_{1}\right)=0roman_mod ( italic_k , italic_τ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) = 0 then
3:     for each client node i∈𝒞𝑖𝒞i\in\mathcal{C}italic_i ∈ caligraphic_C in parallel do
We consider an edge-assisted federated learning system as shown in Fig. 1, which consists of C𝐶Citalic_C client nodes (denoted as set 𝒞𝒞\mathcal{C}caligraphic_C) and D𝐷Ditalic_D edge servers (denoted as set 𝒟𝒟\mathcal{D}caligraphic_D). Each client node is associated with an edge server according to some pre-defined criteria, e.g., physical proximity and network coverage. We denote the set of client nodes associated with the d𝑑ditalic_d-th edge server as 𝒞dsubscript𝒞𝑑\mathcal{C}_{d}caligraphic_C start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT, which form one edge cluster. Some of the edge servers are inter-connected via high-speed cables, which facilitates information exchange in SD-FEEL. We use a connected graph 𝒢𝒢\mathcal{G}caligraphic_G to model the connectivity among the edge servers, and denote the one-hop neighbors of the d𝑑ditalic_d-th edge server as 𝒩dsubscript𝒩𝑑\mathcal{N}_{d}caligraphic_N start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT.
FEEL [7]: This is a conventional edge-assisted FL scheme, where an edge server randomly schedules five client nodes at each iteration due to the limited number of wireless channels. Note that this is consistent with the number of accessible client nodes for each edge server in SD-FEEL.
6:           for each edge server d∈𝒟𝑑𝒟d\in\mathcal{D}italic_d ∈ caligraphic_D in parallel do
D
We keep the logic minimal. In [19], more conceivable principles of agency and ability are discussed, and many are rejected. However, any sensible principle (e.g., exploiting the set-theoretical relationships between the acting entities) can find its way into a formalization of a more precise particular domain.
to acknowledge the special character of a group without members, any ability of the empty group is rejected. This is stipulated by the formula ¬can∅⁢ϕsubscriptcanitalic-ϕ\lnot\textsc{can}_{\emptyset}\phi¬ can start_POSTSUBSCRIPT ∅ end_POSTSUBSCRIPT italic_ϕ which is adopted as an axiom.
That is, in what follows, for any formula φ𝜑\varphiitalic_φ, we adopt φ𝜑\varphiitalic_φ as an axiom of an extension of the logic lbda by stating ⊢φprovesabsent𝜑\vdash\varphi⊢ italic_φ.
infer an ability to ϕitalic-ϕ\phiitalic_ϕ from an occurrence of ϕitalic-ϕ\phiitalic_ϕ-ing, this is of little significance since we cannot reason about what will become of the ability in the
We now present the temporalisation of the static core logic of being deemed able with the until-since logic.
B
Also, analyzing the market capitalization of 264 coins, we find out that 140 (71%) coins are below the $20 million of market capitalization, with 44 (22%) below $1 million.
Instead, Bread is a low market cap cryptocurrency with higher volatility. This means that this asset is more prone to quick market oscillations as well as market manipulations.
Also, analyzing the market capitalization of 264 coins, we find out that 140 (71%) coins are below the $20 million of market capitalization, with 44 (22%) below $1 million.
In particular, the median market capitalization of the cryptocurrencies for exchange is $25,574,192 for Binance, $2,619,703 for YoBit, $2,512,627 for BitTrex, and $144,373 for Cryptopia.
The market capitalization of targeted coins is low, considering that the first asset with less than $20 million is at the 616⁢t⁢h616𝑡ℎ616th616 italic_t italic_h position of the cryptocurrency ranking by market capitalization333According to CoinMarketCap data retrieved on February 18, 2021. Typically, Binance is the market of choice for the pump and dump operations on currencies with higher market capitalization, Cryptopia for those with lower market capitalization.
D
M≥D,t+1⊆M≥D,tsuperscriptsubscript𝑀𝐷𝑡1superscriptsubscript𝑀𝐷𝑡M_{\geq}^{\mathit{D},t+1}\subseteq M_{\geq}^{\mathit{D},t}italic_M start_POSTSUBSCRIPT ≥ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_D , italic_t + 1 end_POSTSUPERSCRIPT ⊆ italic_M start_POSTSUBSCRIPT ≥ end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_D , italic_t end_POSTSUPERSCRIPT.
Since problem 𝒫𝒫\mathcal{P}caligraphic_P meets the respectful decisions criterion
Problem 𝒫𝒫\mathcal{P}caligraphic_P meets the respectful decisions criterion (with
Problem 𝒫𝒫\mathcal{P}caligraphic_P meets the respectful decisions criterion (with
Since problem 𝒫𝒫\mathcal{P}caligraphic_P meets the respectful decisions criterion
A
We propose Regularised Ising MIMO (RI-MIMO), which is based on the proposed RI formulation, and show that it is asymptotically optimal and can achieve near-optimal performance, in the relevant BER regime (an uncoded BER of 10−3superscript10310^{-3}10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT-10−6superscript10610^{-6}10 start_POSTSUPERSCRIPT - 6 end_POSTSUPERSCRIPT), within practical complexities.
In this section, we propose the RI-MIMO detector, based on our novel regularised Ising formulation of maximum-likelihood MIMO receiver, which mitigates the error floor problem. We use a single auxiliary spin variable to transform the Ising problem into a form compatible with CIMs. We finally propose TRIM, a novel tree search algorithm based on RI-MIMO, which enhances the performance for higher-order modulations.
We see from Fig 11(left) that RI-MIMO and TRIM allows us to operate using aggressive modulations and coding schemes and hence achieve much better performance. In particular RI-MIMO achieves around 2.5×\times× more throughput in low-SNR regime (≈7.5⁢d⁢Babsent7.5𝑑𝐵\approx 7.5dB≈ 7.5 italic_d italic_B) and 2×\times× more throughput in mid-SNR regime( >>> 15dB). In the high-SNR ( >>> 20dB) both MMSE and RI-MIMO-64 seem to provide similar throughput, because RI-MIMO-64 is not sufficient for 16 QAM modulation (as noted before). However, TRIM allows us to get good performance with 16 QAM and provides a 2×\times× throughput gain in the high-SNR regime. With Increasing SNR, the channel capacity also increases; hence we would expect the AMC module to use more aggressive modulations in order to achieve the best possible capacity.
We further evolve RI-MIMO into Tree search with RI-MIMO (TRIM) that allows us to achieve better performance when the complexity of the underlying MIMO detection problem increases with higher-order modulations.
We start comparing the optimal decoder (the Sphere Decoder) and the linear MMSE decoder against RI-MIMO and the unregularized ML-MIMO using as a test case BPSK 16×\times×16. This case will represent a baseline for our benchmarks and their sophistication. Note that a trivial way to remove the error floor is to take the better solution out of those generated by MMSE and CIM-ML-Masubscript𝑀𝑎M_{a}italic_M start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT. We see from Fig 2 that RI-MIMO provides much better BER than CIM-ML, mitigating the error floor problem associated with it. We note that if we run concurrently MMSE and ML-MIMO for each instance and we select the best of both results (CIM-ML+MMSE), we are still less performant than RI-MIMO. TRIM performs similar to RI-MIMO when both algorithms execute the same number of total anneals. (TRIM1-32 vs RI-MIMO-64). However, as we will see later, this is not the case when higher modulations are used. The best performing algorithm is TRIM1-64, which achieves near-optimal performance for 16×16161616\times 1616 × 16 and 20×20202020\times 2020 × 20 MIMO with BPSK modulation. For 24×24242424\times 2424 × 24 MIMO, we see that the performance gap between TRIM1-64 and Sphere Decoder increases, and a higher number of anneals are required to bridge the gap. Note that, for MIMO systems with a larger number of antennas, along with an increase in the number of anneals, increasing the depth of the tree search might be required to maintain the performance gains of TRIM.
C
The actual channel, the BS measurements, and the precoding are time-varying in general. Thus, we have access to the input data (features) and training outputs (labels) sequentially. Older data samples tend to become irrelevant.
The proposed online deep learning model performs the mapping from the SINR measurements to the optimal MCS. The most significant advantage is achieved on the rises and falls of the SINR quality because ODL is more adaptive to the instant SINR than OLLA and instantly converges to the optimal MCS. The following Fig. 8 shows the uniform advantage of the ODL algorithm over OLLA.
Compared with Q-learning, the main difference in our ODL approach is the use of a binary logarithmic loss function (log-loss) instead of Q-learning Temporal-Difference (TD)-Loss [15]. This way, we move to the binary classification problem instead of maximizing the delayed rewards (a) and modeling the influence on the system of our actions (b).
The novelty of our work is in the proposed scheme of online deep learning with a new optimization target. On the one hand, it is simpler and more effective than the existing Q-learning approach ([10, 11]) to the AMC problem. On the one hand, it outperforms the basic OLLA approach because of the better utilization of the available channel//SINR information.
Observations (a) and (b) motivate the use of the traditional deep learning approach rather than Q-learning. We consider acknowledgment prediction as a binary classification problem and use the scheme (2) to select the optimal MCS. Observation c) motivates the use of the online approach.
D
Instead, hard bounding functions (so to speak, 100% confidence intervals) are sufficiently competitive with those 99% confidence intervals already at the second (m=2𝑚2m=2italic_m = 2) or third (m=3𝑚3m=3italic_m = 3) iteration.
Before closing this section, let us stress again that the proposed framework has been constructed for quite a large class of multidimensional stochastic differential equations with jumps, based on the previous developments (Sections 3.1, 3.2, 3.3 and 3.4) alone, without coming into the present subsection (Section 3.5), that is, without imposing uniform boundedness on jump rate or a variety of regularity conditions of Theorem 3.9, to say nothing of differentiability on the drift and diffusion coefficients as in the exact simulation of one-dimensional sample paths [1, 7, 10, 16].
The aim of the present work is to establish recursive representations for a large class of stochastic differential equations with jumps.
The underlying class is large enough to accommodate the multivariate time-inhomogeneous dynamics, even in the absence of uniform ellipticity.
Let us, last but not least, remind that the proposed framework is general enough to accommodate quite a large class of multidimensional inhomogeneous stochastic differential equations with jumps and, particularly, does not require the drift and diffusion coefficients to be as smooth as in existing univariate exact simulation methods.
D
For k⩾1𝑘1k\geqslant 1italic_k ⩾ 1 and r⩾0𝑟0r\geqslant 0italic_r ⩾ 0, let f1⁢(k,r):=f⁢(k,h)assignsubscript𝑓1𝑘𝑟𝑓𝑘ℎf_{1}(k,r):=f(k,h)italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( italic_k , italic_r ) := italic_f ( italic_k , italic_h ), where f𝑓fitalic_f is the function from Theorem 10.
Next, let us show that every vortex V∈𝒲𝑉𝒲V\in\mathcal{W}italic_V ∈ caligraphic_W must be bipartite.
It suffices to show that c+⁢(F)⩾cW⁢(S{1,2})superscript𝑐𝐹subscript𝑐𝑊subscript𝑆12c^{+}(F)\geqslant c_{W}(S_{\{1,2\}})italic_c start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT ( italic_F ) ⩾ italic_c start_POSTSUBSCRIPT italic_W end_POSTSUBSCRIPT ( italic_S start_POSTSUBSCRIPT { 1 , 2 } end_POSTSUBSCRIPT ) holds since then we obtain
Let us show that the theorem holds with this choice of f1subscript𝑓1f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT.
It suffices to show that c+⁢(F)⩾cW⁢(S{3})superscript𝑐𝐹subscript𝑐𝑊subscript𝑆3c^{+}(F)\geqslant c_{W}(S_{\{3\}})italic_c start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT ( italic_F ) ⩾ italic_c start_POSTSUBSCRIPT italic_W end_POSTSUBSCRIPT ( italic_S start_POSTSUBSCRIPT { 3 } end_POSTSUBSCRIPT ) holds since then we obtain
C
{2}^{2}\leq\frac{R_{0}^{2}}{2}.italic_f ( over^ start_ARG italic_x end_ARG start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ) - italic_f ( italic_x start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ) ≤ divide start_ARG italic_μ italic_R start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG 4 end_ARG , ∥ over^ start_ARG italic_x end_ARG start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT - italic_x start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ∥ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ≤ divide start_ARG italic_R start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG 2 end_ARG .
Upper bound for ③. We derive the upper bound for ③ using the same technique as for ①. First of all, we notice that the summands in ③ are conditionally unbiased:
Upper bound for ③. We derive the upper bound for ③ using the same technique as for ①. First of all, we notice that the summands in ③ are conditionally unbiased:
From mathematical induction and the union bound for probability events, it follows that the inequalities
From mathematical induction and the union bound for probability events, it follows that inequalities
D
\draw(10,10) node[lab] (b1) 1; \draw(30,10) node[lab] (b3) 0; \draw(40,10) node[lab] (b4) 1;
\draw(10,20) node[lab] (v2) v2subscript𝑣2v_{2}italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT;
\draw(10,10) node[lab] (b1) 1; \draw(30,10) node[lab] (b3) 0; \draw(40,10) node[lab] (b4) 1;
\draw(10,50) node[lab] (f1) 1; \draw(30,50) node[lab] (f3) 0; \draw(40,50) node[lab] (f4) 1;
\draw(20,30) node[lab] (d2) 1; \draw(30,30) node[lab] (d3) 1; \draw(50,30) node[lab] (d5) 0;
D
Embedded methods select features during parameter optimisation by manipulating the objective function, such as Lasso [18] or the structure of a model, such as CART [19].
Unlike wrapper and embedded methods, filter methods do not involve model training, which makes them faster than the other two methods.
The classification results show that the filter methods generally have worse classification performance but higher computational speed than the embedded methods and the wrapper methods.
These eight methods cover three types of feature selection methods, namely filter, wrapper, and embedded methods.
Feature selection methods can generally be categorised into filter, wrapper, and embedded methods [1].
A
The authors declare that they have no known competing financial interests or personal relationships that could have
The authors declare that they have no known competing financial interests or personal relationships that could have
Figure 4: (a), (b) and (c), (d) are the mean correlations between weight and two properties (length and diversity) of paths on Cora and Citeseer respectively. In (b) and (d), to remove the influence of length, the path length is fixed on 10. Y-axis denotes the average weight of the corresponding paths.
This work was supported in part by National Key Research and Development Program of China (No. 2020YFB2103402), Shenzhen Science and Technology Program (No. JCYJ20230807115959041), and the open project of Sichuan Provincial Key Laboratory of Philosophy and Social Science for Language Intelligence in Special Education (No. YYZN-2023-3).
To analyze how the properties of paths influence weight scores, we draw the correlations between weight scores and two properties of paths, i.e., length and diversity, on Figure 4. Length is defined as the number of nodes in a path; Diversity is defined as the number of different categories of labeled node in a path. For example, we have a path (x1,x2,x3subscript𝑥1subscript𝑥2subscript𝑥3x_{1},x_{2},x_{3}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT) and their corresponding categories (AI, Agents, AI), the length and diversity of this path is three and two respectively. As demonstrated in Figures 4(a) and 4(c), correlations between two nodes decay as the distance between them becomes larger, this corresponds with the homophily assumption [28] that the correlation is strong if two nodes are adjacent; In Figures 4(b) and 4(d), the path weight increases with the reduction of node diversity. Intuitively, if a path contains fewer categories, the path contains more information and less noise about the corresponding class. This nature is helpful for the task of node classification.
C
Hessian at the optimum is ρ1−ρdρ1−ρ2subscript𝜌1subscript𝜌𝑑subscript𝜌1subscript𝜌2\frac{\rho_{1}-\rho_{d}}{\rho_{1}-\rho_{2}}divide start_ARG italic_ρ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_ρ start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT end_ARG start_ARG italic_ρ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_ρ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_ARG.
The bound on the condition number in Theorem 18 decomposes two components: the first, κ𝐅𝐃𝐀⋆superscriptsubscript𝜅𝐅𝐃𝐀⋆\kappa_{{\bf FDA}}^{\star}italic_κ start_POSTSUBSCRIPT bold_FDA end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ⋆ end_POSTSUPERSCRIPT,
The condition number bound from Corollary 19 decomposes to two components: the first
The following theorem provides a bound on the condition number of the Riemannian
The condition number bound from Corollary 10 decomposes into two components: the first
B
2DPASS [72] enhances the representation learning of 3D semantic segmentation network by distilling multi-modal knowledge to single point cloud modality. In this way, 2DPASS can use LiDAR-only input in test-time. However, the model performance is unsatisfactory under the scene with sparser point clouds (e.g., A2D2 with 16-beam LiDARs). In contrast, our EPMF can achieve promising performance by fusing 2D images and 3D point clouds during inference.
To better understand the benefits of PMF, we visualize the predictions of PMF on the benchmark data sets. From Figure 8, compared with Cylinder3D, PMF achieves better performance at the boundary of objects. For example, as shown in Figure 8 (d), the truck segmented by PMF has a more complete shape. More critically, PMF is robust in different lighting conditions. Specifically, as illustrated in Figure 9, PMF outperforms the baselines on more challenging scenes (e.g., night). In addition, as demonstrated in Figure 8 (e) and Figure 9 (c), PMF generates dense segmentation results that combine the benefits of both the camera and LiDAR, which are significantly different from existing LiDAR-only and fusion-based methods.
In this work, we have proposed a perception-aware multi-sensor fusion scheme for 3D LiDAR semantic segmentation. Unlike existing methods that conduct feature fusion in the LiDAR coordinate system, we project the point clouds to the camera coordinate system to enable a collaborative fusion of the perceptual features from the two modalities. By fusing complementary information from both cameras and LiDAR, PMF is robust in complex outdoor scenes with extremely sparse point clouds or poor lighting conditions. Moreover, we propose EPMF which improves the efficiency and effectiveness of PMF. Specifically, we introduce cross-modal alignment and cropping to reduce unnecessary computation. Besides, we also adjust the architecture of the fusion network by fusing high-level features into the camera stream and exploring the design of contextual modules under perspective projection. The experimental results on three benchmarks show the superiority of our method. In the future, we will extend EPMF to other challenging tasks, e.g., object detection and semantic scene completion.
Besides, unlike existing methods that perform feature fusion in the LiDAR domain, PMF [82] exploits a collaborative fusion of multimodal data in camera coordinates. In this work, we further extend PMF to improve its efficiency and performance.
In multi-sensor fusion methods, fusing multimodal data from different sensors is an important problem. Existing fusion-based methods [47, 64] mainly lift the dense 2D image features to the 3D LiDAR coordinates using spherical projection [50] and conduct feature fusion in the sparse LiDAR domain. However, these methods suffer from a critical limitation: as the point clouds are very sparse, most of the appearance information from the RGB images is missing after un-projecting it to the LiDAR coordinates. For example, as shown in Figure 1 (c), the car and motorcycle in the image become distorted with spherical projection. As a result, existing fusion-based methods have difficulty capturing the appearance information from the projected RGB images.
C
The embeddability problem has been largely considered in the special case where the input 2-complex is (homeomorphic to) a surface. This problem is already NP-hard [39], and the existing algorithms that are fixed-parameter tractable in the genus are notoriously complicated; we review them now.
Mohar [28] has given an algorithm for embedding graphs on a fixed surface that takes linear time in the input graph, for every fixed surface. This algorithm is very technical and relies on several other articles. The dependence on the genus is not made explicit, but seems to be doubly exponential [20].
General graph minor theory provides an algorithm for the same purpose. The graph minor theorem by Robertson and Seymour [35] implies that, for every fixed surface 𝒮𝒮\mathscr{S}script_S, there is a finite list of graphs 𝒪𝒮subscript𝒪𝒮\cal O_{\mathscr{S}}caligraphic_O start_POSTSUBSCRIPT script_S end_POSTSUBSCRIPT such that a graph G𝐺Gitalic_G can be embedded on 𝒮𝒮\mathscr{S}script_S if and only if G𝐺Gitalic_G does not contain any graph in 𝒪𝒮subscript𝒪𝒮\cal O_{\mathscr{S}}caligraphic_O start_POSTSUBSCRIPT script_S end_POSTSUBSCRIPT as a minor. Moreover, there is an algorithm that given any surface 𝒮𝒮\mathscr{S}script_S (specified by its genus and orientability) outputs the list 𝒪𝒮subscript𝒪𝒮\cal O_{\mathscr{S}}caligraphic_O start_POSTSUBSCRIPT script_S end_POSTSUBSCRIPT [2], and there is an algorithm to decide whether a graph M𝑀Mitalic_M is a minor of another graph G𝐺Gitalic_G running, for fixed M𝑀Mitalic_M, in time cubic in the size of G𝐺Gitalic_G [34][14, Theorem 6.12]. These considerations thus lead to an algorithm to decide embeddability of a graph on a surface that runs, if the input surface is fixed, in cubic time in the size of the input graph.
Kawarabayashi, Mohar, and Reed, in an extended abstract [20], have given a simpler linear-time algorithm for this problem, with a singly-exponential dependence in the genus, but not all details are presented, which makes the approach hard to check [22, p. 3657, footnote].
In this paper, we describe an algorithm for deciding the embeddability of graphs into topological spaces that are, in a sense, as general as possible: two-dimensional simplicial complexes (or 2-complexes for brevity), which are made from vertices, edges, and triangles glued together. (We remark that every graph is embeddable in ℝ3superscriptℝ3\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT and thus in a 3-simplex, so considering higher-dimensional simplicial complexes is irrelevant.) In a previous article, jointly written with Mohar [13], we proved that, given a graph G𝐺Gitalic_G and a 2-complex 𝒞𝒞\mathscr{C}script_C, one can decide whether G𝐺Gitalic_G embeds into 𝒞𝒞\mathscr{C}script_C in polynomial time for fixed 𝒞𝒞\mathscr{C}script_C; but the algorithm has running time f⁢(c)⋅nO⁢(c)⋅𝑓𝑐superscript𝑛𝑂𝑐f(c)\cdot n^{O(c)}italic_f ( italic_c ) ⋅ italic_n start_POSTSUPERSCRIPT italic_O ( italic_c ) end_POSTSUPERSCRIPT, where n𝑛nitalic_n and c𝑐citalic_c are the respective sizes of G𝐺Gitalic_G and 𝒞𝒞\mathscr{C}script_C. Using a very different strategy, we prove in this paper that it is actually fixed-parameter tractable (FPT) in the complexity of the input complex, by providing an algorithm that is quadratic in n𝑛nitalic_n and exponential in a polynomial in c𝑐citalic_c.
A
Given that S⁢(μ)𝑆𝜇S(\mu)italic_S ( italic_μ ) is a Laurent polynomial, the proof about
∥S⁢(μ)∥−1superscriptdelimited-∥∥𝑆𝜇1\lVert S(\mu)\rVert^{-1}∥ italic_S ( italic_μ ) ∥ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT is identical to [nosh:15, Lemma
∥S⁢(μ)−1⁢Q⁢(μ)∥delimited-∥∥𝑆superscript𝜇1𝑄𝜇\displaystyle\lVert S(\mu)^{-1}Q(\mu)\rVert∥ italic_S ( italic_μ ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_Q ( italic_μ ) ∥
upper bound for ∥S⁢(μ)−1⁢Q⁢(μ)∥delimited-∥∥𝑆superscript𝜇1𝑄𝜇\lVert S(\mu)^{-1}Q(\mu)\rVert∥ italic_S ( italic_μ ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT italic_Q ( italic_μ ) ∥ is equal to 1111.
≤∥S⁢(μ)−1∥⁢∥Q⁢(μ)∥absentdelimited-∥∥𝑆superscript𝜇1delimited-∥∥𝑄𝜇\displaystyle\leq\lVert S(\mu)^{-1}\rVert\lVert Q(\mu)\rVert≤ ∥ italic_S ( italic_μ ) start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ∥ ∥ italic_Q ( italic_μ ) ∥
A
Streufert 2023 specifies an arbitrary pentaform game and shows [a] that the piece-form collection partitions the pentaform and [b] that this piece-form partition is coarser than the pentaform’s slice partition. Thus it can use Corollary 4.2(b) to show that each piece form is a pentaform. On this foundation the paper is able to build a notion of “piecewise Nashness” which generalizes dynamic-programming’s Bellman equation to arbitrary games.
In the opposite direction, Section 4.2 defines a “block” to be a quintuple set which satisfies all but one of the axioms. Then it essentially shows that the union of a “separated” collection of blocks is itself a block (Proposition 4.4), and that the union of an expanding sequence of pentaforms is itself a pentaform (Proposition 4.5). These are convenient tools for building new pentaforms from known components, as illustrated by the finite-horizon examples of Section 4.2 and the infinite-horizon example of Streufert 2023, Section 2.2. These techniques are complementary to those of Capucci, Ghani, Ledent, and Nordvall Forsberg 2022 in the computer-science literature (footnote 21).
For example, equation (10) shows that the pentaform Q2superscript𝑄2Q^{2}italic_Q start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (Figure 2.2 or 3.2) has information-set situations, while equation (9) shows that the pentaform Q3superscript𝑄3\smash{Q^{3}}italic_Q start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT (Figure 2.3 or 3.0) does not. Thus the set of pentaform games with information-set situations is a proper subset of the set of pentaform games. Theorem 5.2 shows that the operator 𝖯𝖯\mathsf{P}sansserif_P maps traditional games into this proper subset of pentaform games. This reflects the fact that the information sets of traditional games are less general than the situations of pentaform games.353535The lesser generality of traditional games is an artifact of Definition 5.1 rather than a disadvantage of the diverse literature that Definition 5.1 represents. For example, Myerson 1991 specifies a game with information states like the situations here (footnote 8), and such a construction has been left out of Definition 5.1 because it is unusual in the literature (other constructions have been left out as well, as explained in the first paragraph of Section 5.1).
This Section 4.2 shows how to construct pentaforms as unions of “blocks”.212121This Section 4.2 is related to the ongoing work of Ghani, Kupke, Lambert, and Nordvall Forsberg 2018; Bolt, Hedges, and Zahn 2023; and Capucci, Ghani, Ledent, and Nordvall Forsberg 2022. Both that literature and this Section 4.2 seek to systematically construct games out of game fragments. A precise comparison is elusive because the mathematical foundations are very different. More is said there about utility. The relative advantages here include constructing general infinite-horizon games, using the relatively simple operation of union, and using relatively finely-grained axioms.
Finally, a “pentaform game” is constructed by combining a pentaform with utility functions.333This paper does not formally consider probability. If a game has a finite number of nodes, mixed strategies and expected utilities can be derived by standard means (for example, Myerson 1991, Chapter 4). Meanwhile, if there are an infinite number of nodes, the very concepts of mixed strategy and expected utility can lead to subtle measurability issues. To ground this new specification in the literature, the paper defines the concept of a “traditional game” to represent the literature’s many specifications of finite-horizon and infinite-horizon games. Such a traditional game is defined in the usual way as a tree adorned with information sets, actions, players, and utility functions. The paper’s main result is Theorem 5.4, which shows that there is a constructive and intuitive bijection from the collection of traditional games to the collection of pentaform games with information-set situations. This suggests that pentaform games can equivalently formulate all discrete444“Discreteness” means that each decision node has a finite number of predecessors. Non-discrete games include those in continuous time, as in Dockner, Jørgenson, Long, and Sorger 2000; and those yet more general, as in Alós-Ferrer and Ritzberger 2016, Chapters 1–5. [Discreteness is defined in terms of decision nodes because Alós-Ferrer and Ritzberger 2016, Section 6.2, admits a terminal node at the end of each infinite run (that is, each infinite play). Such terminal nodes do not appear in the present paper.]
C
2Ω⁢(n)superscript2Ω𝑛2^{\Omega(n)}2 start_POSTSUPERSCRIPT roman_Ω ( italic_n ) end_POSTSUPERSCRIPT
2Ω⁢(n)superscript2Ω𝑛2^{\Omega(n)}2 start_POSTSUPERSCRIPT roman_Ω ( italic_n ) end_POSTSUPERSCRIPT
2Ω⁢(n)superscript2Ω𝑛2^{\Omega(n)}2 start_POSTSUPERSCRIPT roman_Ω ( italic_n ) end_POSTSUPERSCRIPT
2Ω⁢(n)superscript2Ω𝑛2^{\Omega(n)}2 start_POSTSUPERSCRIPT roman_Ω ( italic_n ) end_POSTSUPERSCRIPT
2Ω⁢(n)superscript2Ω𝑛2^{\Omega(n)}2 start_POSTSUPERSCRIPT roman_Ω ( italic_n ) end_POSTSUPERSCRIPT
A
The Winograd Schema Challenge (WSC) [51] is proposed as an alternative to the Turing Test [63], focusing on a machine’s ability to resolve referential ambiguities that are trivial for humans but challenging for AI. A Winograd schema consists of a pair of sentences that differ by only one or two words, leading to a referential ambiguity that requires common sense reasoning and world knowledge to resolve. The WSC avoids pitfalls associated with statistical methods and simple linguistic tricks, making it a more robust measure of machine understanding. The challenge emphasizes the need for NLP systems to engage in deep reasoning, distinguishing it from other NLP tasks that rely heavily on statistical approaches. As such, the WSC represents a significant advancement in evaluating AI’s capacity for genuine comprehension and intelligent behavior.
While naive physics commonsense knowledge is generally universal across human societies, intuitive psychology commonsense can vary depending on linguistic or cultural backgrounds, particularly regarding daily activities, social behaviors, and norms. Some existing studies have incorporated multilingual settings when developing commonsense knowledge resources and benchmarks. For instance, XCOPA [65] is a multilingual benchmark that translates and re-annotates the English COPA [52] into 11 languages. Similarly, X-CSQA and X-CODAH [27] are translations of the English CSQA [47] and CODAH [61] benchmarks, with questions that might involve cultural differences intentionally removed. However, simply translating existing resources is insufficient for capturing the nuances of multilingual commonsense knowledge, as noted by Sakai et al. [152]. Instead of relying on translation, they built on a multilingual commonsense knowledge base (ConceptNet) and used LLMs to generate and validate questions, options, and additional distractors. Despite these efforts, the differences in social behaviors and norms across multicultural contexts remain underexplored. Datasets like SocialDial [28], which focuses on Chinese social culture, and NormDial [29], which studies social norms in both Chinese and American cultures in conversational settings, are important steps forward. Expanding commonsense knowledge resources and benchmarks to encompass more diverse cultural backgrounds is crucial, and this task will likely require collaboration with experts in sociology and linguistics.
COPA (Choice Of Plausible Alternatives) [52] is a benchmark that involves causal inference between events. The dataset comprises 1,000 examples, each presenting an event followed by a question asking the model to select the correct cause or effect from two options. Triangle-COPA [64] is a variation of COPA, containing 100 examples in the same format, but supplemented with videos depicting interactions between a circle and a triangle. The questions in Triangle-COPA focus more on emotions and intentions. However, scalability remains a challenge for both datasets when evaluating modern language models. To advance NLP tools in languages other than English and address the Anglocentric bias in commonsense reasoning models, XCOPA [65] was introduced. This multilingual dataset supports causal commonsense reasoning across 11 languages and is typologically diverse.
HeadlineCause [80] is a dataset designed to detect implicit causal relations between pairs of news headlines, addressing the challenges in existing datasets that focus predominantly on either commonsense causal reasoning or explicit causal relations. Comprising over 5,000 headline pairs in English and 9,000 in Russian, this dataset was annotated via crowdsourcing and includes a variety of relationships, from unrelated headlines to those involving causation and refutation. The dataset is particularly notable for its emphasis on implicit, inter-sentence causal relations, which require models to leverage both commonsense and world knowledge.
Social IQa [50] is a large-scale benchmark for evaluating and improving NLP models’ social and emotional intelligence through commonsense reasoning about social interactions. It includes 38,000 multiple-choice questions designed to challenge models in understanding motivations, emotional reactions, and outcomes of everyday scenarios. By using a crowdsourcing method that reduces biases in incorrect answers, Social IQa offers a robust dataset for training NLP models. Despite advancements, models like BERT still fall short of human performance on these tasks, but Social IQa has proven effective for enhancing performance on related commonsense reasoning challenges such as the Winograd Schema Challenge [51] and COPA [52].
B
We quantify how many projections are needed to achieve a certain accuracy in a general case (Theorem 6).
Considering the simplicity of the thresholding after random projection classification method, Occam’s razor principle suggests that such a classifier should be used for any training dataset that can be well classified after a random projection, as one expects the resulting classifier to generalize well. The first part of this paper quantifies the reason why, by showing how the simplicity of this classification method is related to a low probability of classification error. Specifically, Theorem 1 provides an upper bound on the likely difference between the training error and the population error, in terms of the size of the training set N𝑁Nitalic_N and the number of projections n𝑛nitalic_n. This bound is expressed independently of the space dimension, independently of k𝑘kitalic_k, and is lower than that for a non-random linear classifier in the dimension of the extended feature space. In Corollary 1 we provide a similar bound for the expectation of the magnitude of the generalization gap and in Theorem 2 we strengthen our results by applying chaining technique. For an extremely large number of samples N𝑁Nitalic_N this bound compares very favorably to the bound for any family of classifiers with a VC dimension larger than O⁢(ln⁡(n))𝑂𝑛O(\ln(n))italic_O ( roman_ln ( italic_n ) ). For reasonably large N𝑁Nitalic_N, the classifier we consider has better generalization properties than ones with VC dimension of order O⁢(ln⁡(n)/ln⁡(N))𝑂𝑛𝑁O(\ln(n)/\ln(N))italic_O ( roman_ln ( italic_n ) / roman_ln ( italic_N ) ).
where dV⁢Csubscript𝑑𝑉𝐶d_{VC}italic_d start_POSTSUBSCRIPT italic_V italic_C end_POSTSUBSCRIPT is the VC dimension of the class ℱℱ\mathcal{F}caligraphic_F. In this section we prove a similar result for the generalization gap of the method of thresholding after random projection where we roughly speaking replace the VC dimension by ln⁡(n)𝑛\ln(n)roman_ln ( italic_n ) term. So far we used the exact number of different dichotomies that the method of random projections applied n𝑛nitalic_n times can produce on a fixed number of datapoints. There is another property of the set of dichotomies given by the method of random projections 𝒜ℱn,k⁢(𝒙N)subscript𝒜subscriptℱ𝑛𝑘superscript𝒙𝑁\mathcal{A}_{\mathcal{F}_{n,k}}(\boldsymbol{x}^{N})caligraphic_A start_POSTSUBSCRIPT caligraphic_F start_POSTSUBSCRIPT italic_n , italic_k end_POSTSUBSCRIPT end_POSTSUBSCRIPT ( bold_italic_x start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) which we have not used yet. This property allows for a different bound on the generalization gap, where we also eliminate the logarithmic term in the number of samples from the numerator. Our class of functions is not only small in magnitude, it also has a simple geometric structure that results in low covering numbers. Classification after each projection results in dichotomies that lie in a chain on a hypercube - which means that there exists an ordering of dichotomies, such that the Hamming distance between consecutive dichotomies is equal to one.
In Section 5 we also include an example of a classification problem where the number of random projections needed for obtaining a classifier with high accuracy is small.
In general, according to Formula 21, the number of projections n𝑛nitalic_n needed for achieving a high classification accuracy could be very large,
C
The results obtained by the Lagrange multiplier method and our new method are shown in Tables 1 and 2, respectively.
Numerical solutions obtained by the new method achieve optimal convergence rate of 2222nd order under the L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT norm inside the domain.
The testing results for the Lagrange multiplier method and the new method are shown in Tables 3 and 4, respectively. The numerical solutions obtained by the new method again achieve optimal convergence rate of 2222nd order under the L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT norm inside the domain. For solutions obtained by the Lagrange multiplier method, the L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT error convergence rate is still not optimal. Furthermore, the numerical results obtained by the new method is much more accurate in this test.
When the Lagrange multiplier method is used, the L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT error convergence rate is not optimal, although the numerical solutions still converge.
Theoretical analysis and numerical experiments show that our proposed method achieves optimal convergence rate under both L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT norm and H1superscript𝐻1H^{1}italic_H start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT semi-norm.
A
An enumeration problem is the task of listing a set of elements without redundancies. It is an important and old class of problems: the Baguenaudier game [32] from the 19191919th century can be seen as the problem of enumerating integers in Gray code order. Ruskey even reports [38] on thousand-year-old methods to list simple combinatorial structures such as the subsets or the partitions of a finite set. Modern enumeration algorithms date back to the 1970197019701970s with algorithms computing circuits or spanning trees of a graph [41, 37], while fundamental complexity notions for enumeration have been formalized 30303030 years ago by Johnson, Yannakakis and Papadimitriou [27].
In this paper, we complemented our algorithms with lower bounds in a model of computation where the underlying enumeration algorithm is accessed in a blackbox fashion. We have shown that no regularization scheme can achieve a worst case delay that is linear in the real amortized delay. Moreover, we show that if one wants to preserve the order during the enumeration process, then either the space used by the algorithm or the worst case delay has to be exponential in the size of the solutions. In most enumeration problems, it means that either the space or the delay have to be exponential in the size of the input.
The main specificity of enumeration problems is that the size of the enumerated set is typically exponential in the size of the input.
On particular consequence of Theorem 13 is that one cannot use regularization schemes to prove that classes DelayP\polysuperscriptDelayP\poly\mathrm{DelayP}^{\poly}roman_DelayP start_POSTSUPERSCRIPT end_POSTSUPERSCRIPT and AmDelayP\polysuperscriptAmDelayP\poly\mathrm{AmDelayP}^{\poly}roman_AmDelayP start_POSTSUPERSCRIPT end_POSTSUPERSCRIPT are the same for all possible orders. Indeed, the problems considered in these classes being in EnumPEnumP\mathrm{EnumP}roman_EnumP, the size of solution is polynomial in the size of the input. Hence Theorem 13 states that either the delay or the space used by a regularization scheme on such problems will have a behavior exponential in the input size.
For most problems, the set to enumerate is too large, or may not be needed in its entirety. It is then desirable to efficiently generate a part of the set for statistical analysis or on the fly processing. In this case, a more relevant measure of the complexity and hence of the quality of the enumeration algorithm is its delay, that is, the time spent between two consecutive outputs. One prominent focus has been to design algorithms whose delay is bounded by a polynomial in the size of the input. Problems admitting such algorithms constitute the class DelayPDelayP\mathrm{DelayP}roman_DelayP and many problems are in this class, for example the enumeration of the maximal independent sets of a graph [27], or answer tuples of restricted database queries [21] (see [46] for many more examples).
B
We may assume that ξ𝜉\xiitalic_ξ is the smallest leaf in the subtree of T𝑇Titalic_T rooted at ξ|2⁢jevaluated-at𝜉2𝑗\xi|_{2j}italic_ξ | start_POSTSUBSCRIPT 2 italic_j end_POSTSUBSCRIPT.
If ξ′superscript𝜉′\xi^{\prime}italic_ξ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT does not exist, then return ⊤top\top⊤.
Otherwise, we can set it to the smallest leaf of the next subtree rooted at that depth using Tighten (we return ⊤top\top⊤ if this next subtree does not exist).
Otherwise, we can set it to the smallest leaf of the next subtree rooted at that depth using Tighten (we return ⊤top\top⊤ if this next subtree does not exist).
Otherwise, it is the smallest leaf of the next subtree rooted at that depth, which can be obtained via Tighten.
B
_{1}\leq\mathcal{O}(\varepsilon).∥ italic_ρ start_POSTSUBSCRIPT italic_L italic_L start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_Y italic_Y start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_M end_POSTSUBSCRIPT - italic_U start_POSTSUBSCRIPT italic_k / 4 end_POSTSUBSCRIPT ⊗ italic_ρ start_POSTSUBSCRIPT italic_L start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_Y italic_Y start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT italic_M end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ≤ caligraphic_O ( italic_ε ) .
As stated before, the main conceptual hurdle in extending the analysis from [CGL20] to the quantum case, lies in finding the proper framework in which we can express the correlations that arise from quantum side information.
Very little is known about the security of non-malleable extractors against quantum side information. The initial challenge lies in defining a non-malleable extractor with quantum side information, as we need to provide security with updated quantum side information when the adversary modifies (E,S)→(E′,S′)→𝐸𝑆superscript𝐸′superscript𝑆′(E,S)\to(E^{\prime},S^{\prime})( italic_E , italic_S ) → ( italic_E start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT , italic_S start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ).
In this scenario, we require that the output remains nearly independent given the quantum side information and any of the tampered outputs. For example, in the seeded case (see Definition 21):
In this section, we define and prove the quantum security of 2222-source non-malleable extractor. As specified before, the parameters in our construction are set similarly in line with the construction of [CGL20] considering the use of quantum secure seeded extractors in the alternating extraction. The following parameters hold throughout this section.
A
\operatorname{dist}(\cdot,P\cup\partial B)\right\|_{L_{\gamma}(B)}^{\alpha}.∥ italic_f - ∑ start_POSTSUBSCRIPT italic_x ∈ italic_P end_POSTSUBSCRIPT italic_f ( italic_x ) italic_u start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT ( italic_B ) end_POSTSUBSCRIPT ≤ italic_C ∥ roman_dist ( ⋅ , italic_P ∪ ∂ italic_B ) ∥ start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT italic_γ end_POSTSUBSCRIPT ( italic_B ) end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_α end_POSTSUPERSCRIPT .
if this quantity exists and is finite. It is independent of the choice of the coordinates x𝑥xitalic_x.
and the desired convergence n⁢an→a→𝑛subscript𝑎𝑛𝑎na_{n}\to aitalic_n italic_a start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT → italic_a follows. This completes the proof of (14) and thus of Theorem 3.
The first property is proven precisely this way in [18] and thus we turn to the second property.
This definition of the integral is independent of the atlas and the partition of unity.
C
Given unlabeled target data 𝒟targetunlabeledsuperscriptsubscript𝒟targetunlabeled\mathcal{D}_{\text{target}}^{\text{unlabeled}}caligraphic_D start_POSTSUBSCRIPT target end_POSTSUBSCRIPT start_POSTSUPERSCRIPT unlabeled end_POSTSUPERSCRIPT, we can leverage it to further enhance our IntentBERT, by simultaneously optimizing a language modeling loss on 𝒟targetunlabeledsuperscriptsubscript𝒟targetunlabeled\mathcal{D}_{\text{target}}^{\text{unlabeled}}caligraphic_D start_POSTSUBSCRIPT target end_POSTSUBSCRIPT start_POSTSUPERSCRIPT unlabeled end_POSTSUPERSCRIPT and the supervised loss in Eq. (2). The language modeling loss can help to learn semantic representations of the target domain while preventing overfitting to the source data.
Specifically, we use MLM as the language modeling loss, in which a proportion of input tokens are masked with the special token [M⁢A⁢S⁢K]delimited-[]𝑀𝐴𝑆𝐾[MASK][ italic_M italic_A italic_S italic_K ] and the model is trained to retrieve the masked tokens. The joint training loss is formulated as:
d=768𝑑768d=768italic_d = 768) as the encoder, Adam Kingma and Ba (2015) as the optimizer, and PyTorch library for implementation. The model is trained with Nvidia GeForce RTX 2080 Ti GPUs. For supervised pre-training, we use validation to control early-stop to prevent overfitting. Specifically, we use HWU64 for validation when pre-training with OOS and vice versa. The training is stopped if no improvement in accuracy is observed in 3333 epochs. For joint pre-training, λ𝜆\lambdaitalic_λ is set to 1111. The number of training epochs is fixed to 10101010, since it is not prone to overfitting.
where λ𝜆\lambdaitalic_λ is a hyperparameter that balances the supervised loss and the unsupervised loss.
Further, to leverage unlabeled data in the target domain, we design a joint pre-training scheme, which simultaneously optimizes the classification error on the source labeled data and the language modeling loss on the target unlabeled data. This joint-training scheme can learn better semantic representations and significantly outperforms existing two-stage pre-training methods Gururangan et al. (2020). A visualization of the embedding spaces produced by strong baselines and our methods is provided in Fig. 1, which clearly demonstrates the superiority of our pre-trained models.
A
A Hierarchical Deterministic Wallet can be used when a large organization wants to allow its various departments to use cryptocurrencies to receive payments. The master private key is kept secure with the administrator and each branch is given a private key. The branches would then generate the public key corresponding to their private keys. The extended or master public key can be shared with all the departments. This model is suitable as all the departments can operate without having to interfere with each other as all the receipts are collected separately. However, there is a security flaw in this model. The vulnerability, which is also publicly specified in the documentation of BIP 32, allows any department having a private key to generate the master private key, given the master public key. This does not provide the separation that we wanted and the organization is not protected against insider attacks. In this situation, usage of 2-out-of-3 multi-sig transactions may be better. However, no research has been done to formally prove it.
Monero uses ring signatures and enforces all the members in the set to hold the same amount of coins which is equal to the number of coins to be consumed in the transaction. The members other than the real sender pull up this amount arbitrarily from the blockchain, making the inputs of the transaction untraceable. Monero makes use of key images that are associated with every ring signature to avoid double-spending. Since the signature does not reveal the sender’s identity and the balance of all the members looks the same, the identity of the sender of a transaction is hidden. A diagrammatic representation of ring signatures can be shown in fig. 3
A user who wants to generate a ring signature is assigned a set. The set consists of multiple users whom all have public-private key pairs. The user generates the ring signature on a message by knowing the public keys belonging to the rest of the members of the set. Coordination among the members of the set is not required, nor is there any need for a centralized authority, unlike CoinJoin. The other members do not need to be aware of their involvement in the signature. None of the members can forge each other’s signature. The ring signature is irrevocable. The verification of the ring signature can be done with the knowledge of the message, the ring signature, and all the public keys owned by the members of the set. The signer’s identity remains hidden behind this set[13]. The goal of using ring signatures in Cryptocurrencies is to protect the privacy of a sender of a transaction by making it computationally infeasible to determine the sender’s address given the signature[5].
Ring signatures have a requirement that all the members on the ring must have the same outputs. The value of this output is not concealed. Even though ring signatures are successful in hiding the identity of the sender or the signer of the transaction, they fail in hiding the amount transferred in the transaction. This piece of information can be used as a clue to deduct inferences like transaction patterns. For instance, if a target, who is the sender whose identity an attacker wishes to uncover, always pays a recipient who does not make use of stealth addresses and whose public address is known to the attacker, he/she would be able to find this transaction by searching for transactions that transfer the specified amount to the recipient. The time when the transaction is generated can also aid in narrowing down the target transaction from several potential transactions. Even though the sender’s address stays concealed, due to the use of ring signatures, the transactions that the sender makes for the specified amount can still be found. The attacker can make many inferences based upon this information, such as finding the frequency of such transactions and can potentially determine the address of the sender. More importantly, a sender may want to keep the transaction amount private. Ring Confidential Transactions help achieve this goal.
In a ring containing n𝑛nitalic_n members, a signer can create a valid signature only if the m𝑚mitalic_m private keys belonging to the members’ key vectors are known to the signer. MLSAGs are unforgeable under the discrete logarithm assumption.
B
In Section 7, we illustrate with some computational estimates for large problem instances which are at the limit of practicality. Unfortunately, as we will see, the asymptotic behavior does not seem to fully kick in even at such large scales. At the current time, it appears that the new algorithm is unlikely to beat alternative algorithms; further optimizations and improvements may be needed.
Since our goal is to get a new practical algorithm, the asymptotic estimates have limited value on their own. In this section, we calculate specific costing for problems which are at the upper limits of practical size. We want to compare our algorithm to Strassen’s algorithm and the KK algorithm.
Recursion base case. In practice, with recursive matrix decompositions such as the KK algorithm or Strassen’s algorithm, it is not optimal to follow the recursion all the way down to the underlying ring. Instead, one should switch over to a direct “base case” implementation of matrix multiplication for the innermost calculations. For the KK algorithm, unlike for ordinary matrix multiplication, this is not simply a matter of computational convenience: when we switch to the base case, we compute the matrix multiplication exactly, and so we will need to take this into account both for analyzing the algorithm accuracy as well as its runtime.
It seems difficult to make progress on better practical algorithms for matrix multiplication. In [12], Karppa & Kaski proposed an innovative and novel approach to break this impasse: they described a “broken” or “opportunistic” form of matrix multiplication, which uses a variant of Strassen’s algorithm to compute a tensor which includes a subset of the terms in the full matrix multiplication tensor. For brevity, we refer to their algorithm as the KK algorithm. By iterating for sufficiently many repetitions, this can be used to solve BMM with high probability. With appropriate choice of parameters, the overall runtime is O⁢(nlog2⁡(6+6/7)⁢log⁡n)≈n2.776𝑂superscript𝑛subscript2667𝑛superscript𝑛2.776O(n^{\log_{2}(6+6/7)}\log n)\approx n^{2.776}italic_O ( italic_n start_POSTSUPERSCRIPT roman_log start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( 6 + 6 / 7 ) end_POSTSUPERSCRIPT roman_log italic_n ) ≈ italic_n start_POSTSUPERSCRIPT 2.776 end_POSTSUPERSCRIPT, a notable improvement over Strassen. See also [13] for further details.
We note that [12] also discussed generalizations to other types of broken matrix multiplication tensors. We will restrict our attention to Strassen-based pseudo-multiplication, for two main reasons. First, unlike in [12], our analysis heavily depends on specific structural properties of the Strassen tensor as opposed to just counting the gross number of terms. Second, it is not clear if any other tensors are practical, especially in light of the fact that Strassen’s algorithm appears to be the only truly practical fast multiplication algorithm.
D
Since the right-hand side of (215) represents E˙˙𝐸\dot{E}over˙ start_ARG italic_E end_ARG of (213) if dc=0subscript𝑑𝑐0d_{c}=0italic_d start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT = 0 in (176), the comparison lemma of Lemma 2.1 gives (207) as in Theorem 6.4. The rest follows from the fact that u𝑢uitalic_u of Theorem 4.6 is a feasible solution of (214) if p=0𝑝0p=0italic_p = 0 [79, 80].
As discussed in Remark 4.2, Theorem 6.5 can also be formulated in a stochastic setting as in Theorem 6.4 using the technique discussed in [119].
For the NCM framework with stochastic perturbation, we can utilize the spectrally-normalized DNN of Definition 6.3 to guarantee the Lipschitz condition on ∂M/∂xi𝑀subscript𝑥𝑖\partial M/\partial x_{i}∂ italic_M / ∂ italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, appeared in the stochastic contraction condition of Theorem 2.5 (see Proposition 1 of [15]). We could also utilize the technique proposed in [62, 63, 64], which designs Lipschitz-bounded equilibrium neural networks using contraction theory, for obtaining a result analogous to Lemma 6.2. The pseudocode for the NCM construction in a stochastic setting can be found in [15].
We could consider stochastic perturbation in Theorem 4.6 using Theorem 2.5, even with the differential control law of the form (144) or (146) as demonstrated in [119]. Also, although the relation (142) or (148) is not included as a constraint in Theorem 4.2 for simplicity of discussion, the dependence of W¯˙˙¯𝑊\dot{\bar{W}}over˙ start_ARG over¯ start_ARG italic_W end_ARG end_ARG on u𝑢uitalic_u in Theorem 4.2 can be removed by using it in a similar way to [119].
Using the technique of Remark 8.1, we can construct adaptive control for the Lagrangian system in Example 2.6 as follows:
A
Appendix D describes low-rank kernel logistic regression. For readability, all proofs are contained in Appendix A.
Let X,Y𝑋𝑌X,Yitalic_X , italic_Y be random variables taking values in some separable, complete metric
Let hℎhitalic_h from (10) and h~~ℎ\tilde{h}over~ start_ARG italic_h end_ARG from
constants and distributions of independent random variables, that are both problematic for the
Let z1=(x1,y1),…,zn=(xn,yn)formulae-sequencesubscript𝑧1subscript𝑥1subscript𝑦1…subscript𝑧𝑛subscript𝑥𝑛subscript𝑦𝑛z_{1}=(x_{1},y_{1}),\dots,z_{n}=(x_{n},y_{n})italic_z start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_y start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) , … , italic_z start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT = ( italic_x start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT , italic_y start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) be a sample of points in 𝒵𝒵{\mathcal{Z}}caligraphic_Z.
A
For any constant α>3⁢68𝛼368\alpha>\frac{3\sqrt{6}}{8}italic_α > divide start_ARG 3 square-root start_ARG 6 end_ARG end_ARG start_ARG 8 end_ARG, there exists a constant C𝐶Citalic_C such that for any n𝑛nitalic_n-vertex graph G𝐺Gitalic_G and edge coloring of G𝐺Gitalic_G with α⁢n𝛼𝑛\alpha nitalic_α italic_n colors, if each color class is a matching of size 2, then the rainbow girth of G𝐺Gitalic_G is at most C⁢log⁡n𝐶𝑛C\log nitalic_C roman_log italic_n.
This suggests that in the antipodal case to that of stars, when the sets of edges are matchings, the conjecture can be strengthened. (“Antipodal” is with respect to the covering number, which is 1111 in a star, and the number of edges in a matching.)
To prove (8), we shall apply Chebyshev’s inequality. For this purpose we have to estimate Var⁡XVar𝑋\operatorname{Var}Xroman_Var italic_X. With a look at (3), we have
For the argument in the proof of Theorem 2.4 to work with α⁢n𝛼𝑛\alpha nitalic_α italic_n colors, we need to have
For any constant α>3⁢68𝛼368\alpha>\frac{3\sqrt{6}}{8}italic_α > divide start_ARG 3 square-root start_ARG 6 end_ARG end_ARG start_ARG 8 end_ARG, there exists a constant C𝐶Citalic_C such that for any n𝑛nitalic_n-vertex graph G𝐺Gitalic_G and edge coloring of G𝐺Gitalic_G with α⁢n𝛼𝑛\alpha nitalic_α italic_n colors, if each color class is a matching of size 2, then the rainbow girth of G𝐺Gitalic_G is at most C⁢log⁡n𝐶𝑛C\log nitalic_C roman_log italic_n.
C