These techniques are limited by the object representations used by the target detectors, and tend to be constrained to simple motion models that do not adequately represent the motion of an object.
The techniques used to estimate the egomotion (Section 4.1.2) must be adapted to estimate third-party motions in a geocentric frame.
Those regions are then used to estimate the egomotion trajectory relative to those static points, while the rest of the scene is usually ignored as noise.
The linearized cost in (16) is then used to estimate the geocentric trajectory of every third-party motion in the scene.
Fully addressing the MEP requires applying more expressive motion estimation techniques, such as those used to estimate egomotion, to the other third-party motions in a scene.
D
Note that in the model reduction experiments we used approximate estimation of the ELPD and did not test the performance on actual held-out data. In practice it is important to interpret the projected posterior as explained in McLatchie et al. (2024), to assess whether it can actually be used for out-of-sample prediction in addition to just model ranking during the model space search. A safe alternative is to refit the final found submodel using its own posterior instead of the projected posterior for final evaluation or prediction. In addition, it can be useful to diagnose the model search path using cross-validation over the entire model search if signs of over-optimism are detected (McLatchie et al., 2024). One sign is the ELPD of the submodel exceeding the ELPD of the reference model, which we did not see in our experiments; we therefore did not cross-validate the search path, given the increased computational demands. The final model size determination is always inherently dependent on some threshold rule, and setting the optimal threshold depends on factors such as the amount of noise in the data, as seen in our experiments (see also Sec. S2). For the models discussed in this paper, it seems beneficial to look at both the relative difference in ELPD and the cumulative relevance, and whether these metrics plateau rather than whether they exceed a given threshold.
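As a concrete illustration of this plateau heuristic, the following sketch (our own, with an illustrative tolerance; not code from the paper) picks the smallest submodel size at which the relative ELPD difference to the reference model stops improving:

```python
import numpy as np

def select_submodel_size(rel_elpd_diff, tol=0.01):
    """Pick the smallest submodel size at which the relative ELPD difference
    to the reference model has plateaued.

    rel_elpd_diff : relative ELPD differences indexed by submodel size
                    (values approach 0 as the submodel nears the reference model).
    tol           : illustrative plateau tolerance; once adding a covariate
                    changes the relative difference by less than tol, we stop.
    """
    rel_elpd_diff = np.asarray(rel_elpd_diff, dtype=float)
    for k in range(1, len(rel_elpd_diff)):
        if abs(rel_elpd_diff[k] - rel_elpd_diff[k - 1]) < tol:
            return k - 1  # improvement has flattened out
    return len(rel_elpd_diff) - 1  # no plateau found; keep the full model

# Example: improvements flatten after two covariates.
print(select_submodel_size([-0.30, -0.12, -0.03, -0.025, -0.024]))  # -> 2
```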
class of GP models to be used in large scale applications of various fields of science as the computational complexity is linear with respect to data size. We have presented a scalable approximation scheme for mixed-domain covariance functions,
We thank Aki Vehtari and Gleb Tikhonov for useful comments on early versions of this manuscript, and acknowledge the computational resources provided by Aalto Science-IT, Finland.
In this work, we present a scalable approximation and model reduction scheme for additive mixed-domain GPs, where the covariance structure depends on both continuous and categorical variables. We extend the Hilbert space reduced-rank approximation (Solin and Särkkä, 2020) to said additive mixed-domain GPs, making it applicable to, e.g., the analysis of large longitudinal data sets. The approach scales linearly with respect to data set size and allows a wide variety of categorical kernels that specify possible correlation over groups. It allows an arbitrary observation model and full Bayesian inference for the model hyperparameters, and is suitable for longitudinal data as it allows product kernels of continuous and categorical kernels for modeling, for example, group-specific effects that sum to zero. Furthermore, we demonstrate how to use the projection predictive technique (see e.g. (Pavone et al., 2020)) for said models and compare it with a variance decomposition based covariate relevance assessment technique (Timonen et al., 2021) to obtain recommendations on how to efficiently produce small and interpretable additive GP models for longitudinal data. We offer an open source R package (R Core Team, 2023) called lgpr2 (https://github.com/jtimonen/lgpr2) that implements the described longitudinal approximate GP models and model reduction techniques.
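For readers unfamiliar with the underlying reduced-rank idea, here is a rough sketch of the Hilbert space approximation of Solin and Särkkä (2020) for a one-dimensional squared-exponential kernel. It is illustrative only (the domain half-length L, the number of basis functions m, and all names are ours) and is not the lgpr2 implementation:

```python
import numpy as np

def hs_approx_se_kernel(x1, x2, lengthscale=1.0, sigma=1.0, L=5.0, m=32):
    """Reduced-rank approximation of the squared-exponential kernel on [-L, L]:
    k(x1, x2) ~ sum_j S(sqrt(lambda_j)) phi_j(x1) phi_j(x2)."""
    j = np.arange(1, m + 1)
    sqrt_lam = np.pi * j / (2.0 * L)                      # sqrt of Laplacian eigenvalues
    # Spectral density of the squared-exponential kernel
    S = sigma**2 * np.sqrt(2.0 * np.pi) * lengthscale * np.exp(-0.5 * (sqrt_lam * lengthscale)**2)
    def phi(x):                                           # Dirichlet eigenfunctions on [-L, L]
        return np.sqrt(1.0 / L) * np.sin(sqrt_lam * (np.atleast_1d(x)[:, None] + L))
    return phi(x1) @ np.diag(S) @ phi(x2).T

# Well inside the domain the approximation is close to the exact kernel:
x = np.linspace(-2.0, 2.0, 5)
exact = np.exp(-0.5 * (x[:, None] - x[None, :])**2)
print(np.max(np.abs(hs_approx_se_kernel(x, x) - exact)))
```

The linear scaling in the data size comes from working with the m basis functions instead of the full kernel matrix.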
Note that in the model reduction experiments we used approximate estimation of the ELPD and did not test the performance on actual held-out data. In practice it is important to interpret the projected posterior as explained in McLatchie et al. (2024), to assess whether it can actually be used for out-of-sample prediction in addition to just model ranking during the model space search. A safe alternative is to refit the final found submodel using its own posterior instead of the projected posterior for final evaluation or prediction. In addition, it can be useful to diagnose the model search path using cross-validation over the entire model search if signs of over-optimism are detected (McLatchie et al., 2024). One sign is the ELPD of the submodel exceeding the ELPD of the reference model, which we did not see in our experiments; we therefore did not cross-validate the search path, given the increased computational demands. The final model size determination is always inherently dependent on some threshold rule, and setting the optimal threshold depends on factors such as the amount of noise in the data, as seen in our experiments (see also Sec. S2). For the models discussed in this paper, it seems beneficial to look at both the relative difference in ELPD and the cumulative relevance, and whether these metrics plateau rather than whether they exceed a given threshold.
B
Structure preservation is ensured by a proper choice of the weighting matrices of the Riccati equations, which yields, as a by-product, a passive LQG-like controller ensuring that the closed-loop system is regular, impulse-free, and asymptotically stable.
As in [27], the balanced system never needs to be computed explicitly, and the balancing and truncation steps can be combined.
In Figure 1, we show the results for reduced models obtained by Algorithm 1 and the classical LQG-BT method from [27]. For our approach, we distinguish between the canonical port-Hamiltonian representation associated with a finite element discretization of the model equations and an improved representation constructed along the findings from subsection 4.3. We make the following two observations, which confirm our theoretical discussion: on the one hand, the error as well as the error bound for an optimal choice of the Hamiltonian decays significantly faster than in the canonical case. On the other hand, in terms of the $\mathcal{H}_{\infty}$-error of the coprime factors, our approach yields reduced systems that are comparable to those obtained by classical (unstructured) LQG balanced truncation.
As shown in Figure 2, we obtain similar results as in the case of the transport network. In particular, changing the Hamiltonian in the system representation drastically reduces the error as well as the error bound of our approach by several orders of magnitude. Moreover, the error corresponding to an optimal choice is on a level that is almost identical to the one of the unstructured approach from [27].
Similarly as in classical LQG balanced truncation, the approximation error of the reduced-order model obtained by the new method can be estimated a priori by an error bound in the gap metric.
D
$0 \leq b(\mathbf{w}, A) \leq 1$
For SAME we now show that $b_{min}=0$ and $b_{max}=1$, and that both can be reached independent of $A$. Since $A$ defines the bias space $B$ used by SAME, we need to show that the extreme values can be reached independent of $B$. First, we can state that
Hence we can show that the extrema depend on the attribute sets $A$ and $B$:
Hence we can show that the extrema depend on the attribute sets $A$ and $B$:
To show that both extreme cases can be reached independent of $A$, we consider the following extreme cases:
D
Now, we compare the PMF of the optimal noise distribution with those of the Gaussian and geometric distributions in Fig. 8, using the same MSE parameter for all distributions. From the plot, we can observe that the probability mass at $\eta=0$ is largest for the proposed mechanism, which supports our claim of the lowest error rate among these mechanisms.
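For context, the two baseline noise families entering this comparison can be sketched as follows (the optimal PMF itself is defined by the optimization above and is not reproduced here; the support size, σ, and α are illustrative, with α chosen so that the variances roughly match):

```python
import numpy as np

def discrete_gaussian_pmf(support, sigma):
    """Discretized Gaussian noise PMF on the given integer support."""
    w = np.exp(-support.astype(float) ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()

def two_sided_geometric_pmf(support, alpha):
    """Two-sided geometric (discrete Laplace) noise PMF, f(eta) proportional to alpha^|eta|."""
    w = alpha ** np.abs(support).astype(float)
    return w / w.sum()

def mse(support, pmf):
    return float(np.sum(pmf * support.astype(float) ** 2))

support = np.arange(-20, 21)
gauss = discrete_gaussian_pmf(support, sigma=2.0)       # variance ~ 4
geom = two_sided_geometric_pmf(support, alpha=0.5)      # variance 2*alpha/(1-alpha)^2 = 4
print(mse(support, gauss), mse(support, geom))          # roughly matched MSE
print(gauss[support == 0], geom[support == 0])          # probability mass at eta = 0
```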
For the SD neighborhood and $\delta=0$, the optimal noise PMF for the modulo addition mechanism is:
Next, we find an explicit solution for the optimum noise PMF $f^{\star}(\eta)$, $\eta \in [n]$, for the SD and BD neighborhood cases. In Section 3.2.4, we discuss the case of discrete vector queries whose entries are independently subjected to the mechanism vs. the optimal solution.
For a given $\epsilon>0$, the privacy loss for the SD neighborhood case with the optimal noise mechanism is a discontinuous function of $\epsilon$, where:
In the following figures, we show the structure of the PMF associated with the optimal noise mechanism. First, we consider the SD neighborhood case.
D
As there is no natural translation of such approaches to the model of population protocols,
In this section we show how to perform sequential composition, using the outputs of one
Note also that after each step of the composition protocol $(Q, Step)$,
In the section after that we define and construct sequential composition of protocols.
If we define the set of input states $I_{s}=\{q_{1}\}$,
C
\[
\begin{array}{rcl}
\displaystyle\sum_{i=1}^{K} \bar{b}_{i}^{\mathrm{T}} \Delta_{i} V(t,x) + 2\,\bar{x}^{\mathrm{T}} \bar{C}(t)^{\mathrm{T}} D(t) + 2\,u^{\mathrm{T}}\big(R(t) + D(t)^{\mathrm{T}} D(t)\big) &=& 0,\\[1ex]
\displaystyle\sum_{i=1}^{K} \bar{e}_{i}^{\mathrm{T}} \Delta_{i} V(t,x) + 2\,\bar{x}^{\mathrm{T}} \bar{C}(t)^{\mathrm{T}} F(t) + 2\,w^{\mathrm{T}}\big(F(t)^{\mathrm{T}} F(t) - \gamma^{2} I\big) &=& 0.
\end{array}
\]
Solving those equations for $u$ and $w$ yields
Since $q(t,x,u)$ is strictly convex in $u$ and $\lambda(t,x,u)$ is affine in $u$, there exists a unique global minimum of (16). Computing the derivative of the expression in brackets in (21) with respect to $u$ yields the optimal control input given in (19), and substituting this expression into (16) yields (20).
where the control input $u$ is adapted and independent of $w$. In order to solve this problem, the following cost is introduced:
and where we have used the fact that $D(t)^{\mathrm{T}} F(t) = 0$. Then, computing the derivatives of the min-max expression in (108) with respect to $u$ and $w$, and equating those expressions to zero, yields
A
The limit as $n\to\infty$ of the right-hand side is $1-y-x'$, which is positive by assumption.
On the other hand, for sufficiently small $\varepsilon$, agent $n$'s MMS is $\frac{1-(n-1)\varepsilon}{n}$, so her NMMS is $(1-(n-1)\varepsilon)^{2}$.
The limit as $n\to\infty$ of the right-hand side is $1-y-x'$, which is positive by assumption.
The difference on the left-hand side, when non-negative, corresponds to the weighted envy of $i$ towards $j$.
Then, choose $\varepsilon$ small enough so that the left-hand side is smaller than the right-hand side, and therefore the inequality holds.
D
Mironov [6] was first to discuss the implications of the fact that one cannot represent—and thus cannot sample from—all real numbers on a finite-precision computer.
Focusing on the Opacus DP library implementation [4] by Facebook, we also show that DP-SGD is vulnerable to information leakage
For one of our two floating-point attacks, in addition to observing a single DP output that the adversary wishes to attack,
is protected by the DP mechanism based on a theoretical normal distribution using real values.
Focusing on the Laplace mechanism, Mironov’s attack proceeds by observing that certain floating-point values cannot be generated by a DP computation
D
Concerns were raised about provision of data to third parties without explicit agreement:
More restricted access control should be applied to sensitive data (e.g., genomics data),
One interviewee suggested that access to sensitive data can be partial and conditional:
One interviewee pointed out that there might be multiple AI algorithms suitable for a task,
The access restrictions attached to sensitive data may prevent projects from proceeding:
B
$\frac{\sum_{j=1}^{N}\nabla_{\bm{q}_{i}} K_{h}(\bm{q}_{i},\bm{q}_{j})}{\sum_{j=1}^{N} K_{h}(\bm{q}_{i},\bm{q}_{j})} + \sum_{k=1}^{N}\frac{\nabla_{\bm{q}_{i}} K_{h}(\bm{q}_{k},\bm{q}_{i})}{\sum_{j=1}^{N} K_{h}(\bm{q}_{k},\bm{q}_{j})}$ will be almost zero, which also fails to approximate $\nabla\ln f_{N}^{h}$.
However, the median trick is not suitable for the FENE potential, as the equilibrium distribution is no longer of Gaussian type and the median of the pairwise distances can become very large. Numerical experiments show that taking kernel bandwidth $h=0.01$ produces a good result for $N=200$ for the FENE model in simple extension flows. We also fix the kernel bandwidth $h=0.01$ and $N=200$ for the following numerical experiments on the FENE models. We will explore the effect of different kernel bandwidths $h$ in future work. The temporal step-size is taken as $\Delta t = 10^{-3}$.
where $K_{h}(\bm{q},\bm{q}_{j})$ is a smooth kernel function and $h$ is the kernel bandwidth [34]. A typical choice of $K_{h}(\bm{q},\bm{q}_{j})$ is the Gaussian kernel, given by
A key step in the above deterministic particle scheme is to replace the empirical measure $f_{N}$ by $f_{N}^{h}$ using kernel smoothing (49) when computing $\ln f$ in the free energy. The choice of kernel bandwidth $h$ is important for the accuracy and robustness of the numerical scheme below.
The optimal kernel bandwidth depends on the potential $\Psi(\bm{q})$ and the macroscopic flow. In the current study, we choose the kernel bandwidth $h$ through multiple numerical experiments (see the numerical sections for details).
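A minimal numerical sketch of the kernel-smoothed score term discussed above, assuming a Gaussian kernel (the exact bandwidth convention, i.e., whether $h$ enters as $h$ or $h^2$, is our assumption, as are the test values):

```python
import numpy as np

def grad_log_fNh(q, h=0.01):
    """Approximate grad ln f_N^h at every particle via Gaussian kernel smoothing.

    q : (N, d) array of particle configurations.
    Uses K_h(a, b) = exp(-||a - b||^2 / (2 h)).
    """
    diff = q[:, None, :] - q[None, :, :]                  # diff[i, j] = q_i - q_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * h))   # K[i, j] = K_h(q_i, q_j)
    gradK = -diff / h * K[:, :, None]                     # grad w.r.t. q_i of K_h(q_i, q_j)
    denom = K.sum(axis=1)                                 # sum_j K_h(q_i, q_j)
    term1 = gradK.sum(axis=1) / denom[:, None]
    # grad w.r.t. q_i of K_h(q_k, q_i) equals gradK[i, k] by symmetry of the kernel
    term2 = (gradK / denom[None, :, None]).sum(axis=1)
    return term1 + term2

q = np.random.randn(200, 3)        # e.g. N = 200 connector vectors in 3D
print(grad_log_fNh(q).shape)       # (200, 3)
```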
D
$(\bm{t}_{\ell,k} \otimes \nabla\lambda_{\ell}) : (\nabla\lambda_{i} \otimes \bm{t}_{i+1,j}) = \nabla\lambda_{\ell} \cdot \bm{t}_{i+1,j} = \delta_{j\ell}.$
If we identify entries of the matrix proxy as nodes of a graph, a constraint sequence will define a path of nodes; see Fig. 7. Indices in different constraint sequences are different. Namely, for $\tau\neq\tau'$, $(i+\tau,i)\neq(j+\tau',j)$, as either $i\neq j$ or $i+\tau\neq j+\tau'$. On the graph, different constraint sequences will correspond to disjoint paths.
When $i=k$, since $\ell\neq i, i+1$, it follows that
$\bm{n}_{f\cup\{i\}}^{f} \cdot \bm{n}_{F_{j}} = 0$ for $i,j \in f^{*}$, $i \neq j$,
When $i=\ell$, since $i\neq j$, it follows that
D
Morris (2017) and Brooks and Du (2021). Private private signals also appear as counterexamples of information aggregation in financial markets: see the discussion in Ostrovsky (2012) and similar observations in the computer science literature (Feigenbaum, Fortnow, Pennock, and
generalizes our Theorem 2: it can be used to show that the recommender has a dominant privacy-preserving recommendation in our sense even when only observing a noisy signal about $\omega$. See §6.1 in Strack and Yang (2023) for a detailed comparison of the papers.
Generalizing this example, we view the attribute and the recommendation as two signals about the state. We study private private information structures, in which signals about the state are statistically independent of each other. Requiring independence between these signals imposes a joint restriction on their informativeness. This paper studies the maximal informativeness of signals independent of each other and applies the results to settings such as the recommender’s problem above.
If the state and the attribute are independent, then the recommender can simply report the state. However, if the state and the attribute are correlated, reporting the state also inadvertently reveals some information about the attribute. The recommender faces a privacy-constrained information-design problem: how to optimally give information about the state, while keeping the recommendation independent of the sensitive attribute? Intuitively, as the correlation between the sensitive attribute and the state increases, the privacy constraint becomes more restrictive and further limits how much information the recommender can provide.
In a follow-up paper, Strack and Yang (2023) consider our problem of optimal privacy-preserving recommendation and generalize the analysis in a number of directions. Most importantly, they show that the result on the existence of a dominant recommendation—obtained in our paper for the case of a binary state—extends to a real-valued state if the decision-maker’s objective is a function of the posterior mean. The realistic scenario where a recommender gets a noisy signal about a binary state reduces to this setting by treating the recommender’s belief as a new state variable. We further discuss the relation to their paper after presenting our results; see Footnote 11. A detailed discussion can also be found in §6.1 in their paper.
D
The crank and slider mechanism is shown in Figure 10 (a). This mechanism has 4 links and 4 joints: 3 revolute joints and 1 prismatic joint. The zebra crossing diagram for this mechanism is shown in Figure 10 (b). The steps involved in drawing the zebra crossing diagram are similar to those for the four-bar mechanism.
There are 4 black patches and 4 white patches in the zebra crossing diagram. Applying Equation 1, the number of loops (L) in the mechanism is
There are 4 black patches and 4 white patches in the zebra crossing diagram. The number of loops in the mechanism is calculated using Equation 1. The number of loops is,
There are 6 black patches and 5 white patches in the zebra crossing diagram. The number of loops (L) in the mechanism is calculated using Equation 1.
There are 6 black patches and 5 white patches in the zebra crossing diagram. The number of loops in the mechanism is calculated using Equation 1. The number of loops is,
B
Fig. 4 shows the ACC metric for BERT-base encoder layers on the IMDB [77] dataset and several General Language Understanding Evaluation (GLUE) [78] benchmark tasks. The curve fitted to the ACC metric shows that the ACC metric decreases gradually in later layers. This indicates that a smaller fraction of the word-vectors contributes meaningfully in the later layers; hence more word-vectors can be eliminated there. By eliminating less important word-vectors in each layer, the number of processed word-vectors decreases gradually, and the inference time of the model is reduced.
Table I presents an analysis of the latency and TTFT share across the embedding, attention, and feed-forward layers in several Transformer-based models. As shown in the table, regardless of model size, the latency of the embedding layer is minimal, with the majority of latency attributed to the attention and feed-forward layers. Attention share ranges from around 20% to 47% of the total. In neural networks, latency and computational effort are closely related to the number of FLOPs. Consequently, reducing the number of FLOPs in the encoder layers, which account for most of the latency, significantly decreases both total latency and TTFT.
$\alpha_{EP}^{l}$ is obtained from the curve fitted to the ACC metric (red lines in Fig. 4) at layer $l$. In some tasks, the ACC values of the layers are not smooth. To increase stability and avoid stressing the model in the fine-tuning phase, the proposed formulation uses the curve fitted to the ACC metric ($P_{ACC}$) instead of the raw ACC metric values ($E_{ACC}$), which yields a smoother elimination trend. It is important to note that when an increasing ACC value occurs in a layer, the elimination process is halted, and the remaining layers proceed with a fixed number of retained word-vectors. In (2), the lower bound of $\alpha_{EP}$ is set to one to ensure a monotonically descending shape of this hyper-parameter curve. This adaptive approach guarantees the preservation of crucial information and mitigates the impact of unexpected ACC behavior on subsequent layers.
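One plausible reading of this schedule, as a sketch (the fitting degree, sequence length, and function names are ours, and eq. (2) itself is not reproduced):

```python
import numpy as np

def retained_wordvectors(acc_per_layer, seq_len=512, degree=3):
    """Turn per-layer ACC values into a smoothed, monotone elimination schedule
    (number of word-vectors kept at each encoder layer)."""
    layers = np.arange(len(acc_per_layer))
    # Fitted curve P_ACC instead of the raw (possibly noisy) ACC values E_ACC.
    p_acc = np.clip(np.polyval(np.polyfit(layers, acc_per_layer, degree), layers), 0.0, 1.0)
    kept, current, halted = [], seq_len, False
    for l in range(len(p_acc)):
        if l > 0 and p_acc[l] > p_acc[l - 1]:
            halted = True                       # ACC curve rises: stop eliminating from here on
        if not halted:
            current = min(current, int(round(seq_len * p_acc[l])))
        kept.append(current)
    return kept

# Illustrative, decreasing ACC profile for 12 encoder layers:
acc = [1.0, 0.95, 0.9, 0.85, 0.8, 0.74, 0.7, 0.66, 0.6, 0.55, 0.5, 0.45]
print(retained_wordvectors(acc))
```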
As shown in Fig. 4, the behavior of the ACC metric strongly correlates with the intricate specifications inherent in each task. The ACC results consistently exhibit a monotonic decrease with the layer number, except in the case of the SST-2 [79] dataset, where unexpected behavior is observed. Notably, the SST-2 dataset, centered around sentiment analysis of movie reviews, shares a strong resemblance to the IMDB dataset. A comparative analysis of the ACC curves for SST-2 and IMDB reveals a notable disparity, primarily associated with the length of input sequences. Specifically, in the IMDB dataset, the majority of samples consist of around 512 tokens; conversely, in the SST-2 dataset, the majority of samples feature fewer than 50 tokens. It can be inferred that the limited number of input tokens leads to an undesired sensitivity to word-vectors, and in such cases, some word-vectors are revived through residual branches.
Fig. 4 shows the ACC metric for BERT-base encoder layers on the IMDB [77] dataset and several General Language Understanding Evaluation (GLUE) [78] benchmark tasks. The curve fitted to the ACC metric shows that the ACC metric decreases gradually in later layers. This indicates that a smaller fraction of the word-vectors contributes meaningfully in the later layers; hence more word-vectors can be eliminated there. By eliminating less important word-vectors in each layer, the number of processed word-vectors decreases gradually, and the inference time of the model is reduced.
C
For the above low-rank covariance estimation model with $p\geq d+1$, an $(n, n+1, 0.1)$ sample amplification is possible if and only if $n\geq d$.
In [AGSV20], a subset of the authors introduced the sample amplification problem, and studied two classes of distributions: the Gaussian location model and discrete distribution model. For these examples, they characterized the statistical complexity of sample amplification and showed that it is strictly smaller than that of learning. In this paper, we work towards a general understanding of the statistical complexity of the sample amplification problem, and its relationship with learning. The main contributions of this paper are as follows:
Theorem 7.2 shows that as opposed to learning, sample amplification fails to exploit the low-rank structure in the covariance estimation problem. As a result, the complexity of sample amplification coincides with that of learning in this example. Note that sample amplification is always no harder than learning: the learner could always estimate the distribution, generate one observation from the distribution and append it to the original samples. Therefore, Theorem 7.2 provides an example where the relationship between sample amplification and learning is the worst possible.
where it is the same as the class of all discrete distributions over $d+1$ points, except that the learner has perfect knowledge of $p_{0}=t$ for some known $t\in[1/(2\sqrt{d}),1/2]$. It is a classical result (see, e.g., [HJW15]) that the sample complexity of learning the distribution over $\mathcal{P}_{d,t}$ within a small TV distance is still $n=\Theta(d)$, regardless of $t$. However, the next theorem shows that the complexity of sample amplification is much smaller.
In all the examples we have seen in the previous sections, there is always a square-root relationship between the statistical complexities of sample amplification and learning. Specifically, when the dimensionality of the problem is $d$, the complexity of learning the distribution (under a small TV distance) is typically $n=\Theta(d)$, whereas that of sample amplification is typically $n=\Theta(\sqrt{d})$. In this section, we give examples where this relationship can break down in either direction, thus showing that there is no universal scaling between the sample complexities of amplification and learning.
B
The vector field $g^{0}$ is often called the drift since, when all the inputs vanish, the state evolution is still non-vanishing in the presence of $g^{0}$.
A second category of more general systems would account for a general nonlinear dependence of the dynamics on the inputs (both known and unknown). In accordance with Equation (2.1), this dependence is affine.
By construction, the unknown input degree of reconstructability from any set of functions cannot exceed $m_{w}$. In addition, it depends in general on $x$ and, for TV systems, also on $t$.
In many cases the system is time-invariant (from now on TI), namely it has no explicit time dependence and none of the functions that appear in (2.1) depend explicitly on time. Nevertheless, we also account for an explicit time dependence to be as general as possible. From now on, we use the acronym TV (Time-Variant) to indicate a system with an explicit time dependence. We also use the acronym UI for unknown input and UIO for unknown input observability.
The first operation executed by $\mathcal{A}^{-}$ is to express $\theta$ only in terms of the unknown inputs and the original state. In other words, all the $v_{\alpha}$ that appear in $\theta$ are expressed in terms of the unknown inputs (this is obtained by using (5.11) and its time derivatives when $\theta$ also depends on the time derivatives of $v_{\alpha}$).
C
We are now in a position to upper bound $R_{1}^{c}$, $R_{2}^{c}$, $R_{3}^{c}$, and $R_{4}^{c}$; the proofs are similar to those for the lemmas in Section 4.1 and are included in Appendix C.
Under the event $\mathcal{E}'$, it holds that
Under the event $\mathcal{E}'$, the following holds with probability at least $1-\delta/2$:
Under the event $\mathcal{E}'$, it holds that
Under the event $\mathcal{E}'$, it holds that
A
The difficulty in applying the quadratic gradient is to invert the diagonal matrix $\tilde{B}$ in order to obtain $\bar{B}$. We leave the computation of the matrix $\bar{B}$ to the data owner and let the data owner upload the ciphertext encrypting $\bar{B}$ to the cloud. Since the data owner already has to prepare and normalize the dataset, it is also practicable for the data owner to calculate $\bar{B}$, and doing so leaks no sensitive information.
For a fair comparison with the baseline (Kim et al., 2018a), we utilized the same 10-fold cross-validation (CV) technique on the same iDASH dataset consisting of 1579 samples with 18 features, and the same 5-fold CV technique on the other five datasets. Like (Kim et al., 2018a), we consider the average accuracy and the Area Under the Curve (AUC) as the main indicators. Tables 1 and 2 display the results of the two experiments, respectively. The two tables also report the average evaluation running time per iteration and the storage (the encrypted dataset for the baseline, and the encrypted dataset plus $\bar{B}$ for our method). We adopt the same packing method that Kim et al. (2018a) proposed, and hence our solution has a similar ciphertext storage to (Kim et al., 2018a), with some extra ciphertexts to encrypt $\bar{B}$. We chose $1+0.9^{t}$ as our learning-rate schedule.
where $n$ is the number of examples in the training dataset. LR does not have a closed form for maximizing $l(\bm{\beta})$, and two main methods are adopted to estimate the parameters of an LR model: (a) gradient descent via the gradient; and (b) Newton's method via the Hessian matrix. The gradient and Hessian of the log-likelihood function $l(\bm{\beta})$ are given by, respectively:
Kim et al. (2018b) discussed the problem of performing LR training in an encrypted environment. They employed full-batch gradient descent during the training process and utilized the least-squares method to approximate the sigmoid function.
Privacy-preserving logistic regression training based on HE techniques faces a difficult dilemma: no homomorphic scheme can directly evaluate the sigmoid function in the LR model. A common solution is to replace the sigmoid function with a polynomial approximation obtained by the widely adopted least-squares method; for instance, the function polyfit(·) in the Python package NumPy fits a polynomial in the least-squares sense. We adopt the degree-5 polynomial approximation $g(x)$ developed by Kim et al. (2018a), which uses the least-squares approach to approximate the sigmoid function over the interval $[-8,8]$:
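For instance, such a least-squares fit can be reproduced along the following lines (a sketch; the resulting coefficients need not coincide with those reported by Kim et al. (2018a), since they depend on the sampling grid):

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 1601)           # fitting interval [-8, 8]
sigmoid = 1.0 / (1.0 + np.exp(-x))

coeffs = np.polyfit(x, sigmoid, deg=5)     # least-squares polynomial fit, degree 5
g = np.poly1d(coeffs)

print(coeffs)                              # coefficients of the approximation g(x)
print(np.max(np.abs(g(x) - sigmoid)))      # maximum absolute error on [-8, 8]
```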
D
$\max_{\bm{P} \in \mathcal{P}_{k}}\ \min_{\bm{\theta} \in \Theta}\ \sum_{n=1}^{N} \ell\big(y_{n}, \bm{\theta}^{\top} \bm{P} \bm{x}_{n}\big).$
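To make the game concrete, the following toy sketch solves the inner minimization for the special case of squared loss, where the second player's best response to a given $\bm{P}$ is ordinary least squares on the projected features (illustrative only, not the paper's algorithm):

```python
import numpy as np

def best_response_theta(X, y, P):
    """Inner minimization for squared loss: min_theta sum_n (y_n - theta^T P x_n)^2.
    Since an orthogonal projection P is symmetric, the design matrix is X @ P."""
    theta, *_ = np.linalg.lstsq(X @ P, y, rcond=None)
    return theta

# Toy usage: the first player projects out the last coordinate, the second player fits.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
P = np.diag([1.0, 1.0, 0.0])    # orthogonal projection onto the first two coordinates
print(best_response_theta(X, y, P))
```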
The matrix $\bm{K}$
We say that the matrix $\bm{P}$
The first player chooses an orthogonal projection matrix $\bm{P}$
In response, the second player chooses the parameters of a linear model $\bm{\theta}$, with knowledge of $\bm{P}$
C
Note that DVC and Ballé et al. [100] are unable to match the performance of the H.265 codec, whereas our method goes further and surpasses the performance of the recent VVC codec.
As shown in Fig. 11 (a), our framework with the semantic stream outperforms the traditional codec + post-restoration method BasicVSR++ by a large margin. For example, our method outperforms the BasicVSR++ model by 8% at the 0.06 bpp bitrate level.
As for the decoder side, the LFN in our method is about 16× more efficient than the state-of-the-art video restoration method BasicVSR++.
(1) BasicVSR++, where a recent state-of-the-art (SOTA) video restoration method, BasicVSR++ [150], is adopted to enhance the lossy video produced by the codec before it is fed into the downstream task.
As for the encoder side, the computational cost of our method is about 30× lower than that of DVC, as we do not use an optical flow estimation network.
B
The most widely used AVA dataset [17] only demands the association of audio and visual data, whereas the more recent ASW dataset [2] requires the synchronization of the two modalities, which excludes instances such as dubbed movies.
However, adopting an entirely data-driven strategy would make lip sync data-hungry and hard to optimize, as a significant amount of data covering a wide range of combinations would be needed to address compound distracting factors.
As described in Sec. IV-A, it is expected that the effects of compound distracting factors will be reduced in the synthesized images while the expression information from the raw input will be preserved.
Further, the DSP is built using the parametric image formation model described in Sec. III and thus has two components: a network for estimating the required coefficients from the input, and a renderer that uses the coefficients to synthesize images.
As it would be impossible to handle dubbing using the lip motion cue alone, advanced relational modeling [18] is required.
D
The modern passenger vehicle has undergone major advancements in recent years due to the demand for high-tech functionality, which is accompanied by a growing number of interconnected electronic components and a respective stream of sensor data. This data availability coupled with industry competition creates the need for intelligent information systems and design. Vehicle sensor data has applications for the advancement of autonomous driving security [1, 2], driver behavior analysis [3, 4], fuel consumption and efficiency [5, 6], road object detection [7], and sensor fusion for autonomous vehicle perception [8]. For vehicle manufacturers, computational intelligence is also applied to improve quality in areas such as production, logistics, and vehicle fleet management [9].
Traditional machine learning classification approaches to pattern recognition and fault detection, specifically of vehicle systems, have been employed. Prytz et al. [15] investigate the automated data analysis of connected vehicle sensors for fault detection and emphasize that interrelations of multiple connected signals are more likely to be indicative of abnormal conditions than individual signals. They perform supervised classification using linear regression, support vector machines, and random forests to predict faults using air-intake system data, but achieved only a low degree of accuracy. Theissler et al. [16] also take a supervised approach to anomaly detection for vehicle fault prediction based on a data stream of eight vehicle sensor features related to engine temperature and control. They use an ensemble of nine traditional classifiers to detect anomalous or normal drive cycles with high classification performance. In another variation in intelligent vehicle fault detection [17], the authors explore using a model-based technique simulating dynamic engine patterns for fault detection in the air-path of automotive engines. A supervised neural network is used to classify four groups of emissions failures. This type of approach, in contrast to ours, requires domain knowledge of the specific automotive systems and is less generalized.
The automatic detection of faults using the powertrain sensor data in this study has practical applications for automotive manufacturers, such as the development of new on-board diagnostics and predictive maintenance capabilities and improved durability testing prior to deployment. This method for correlating abnormalities in powertrain sensor data presents a promising approach to vehicle fault detection, showcasing one of the many emerging applications where computational intelligence can be utilized to tackle the increased complexity of modern engineered systems.
The drive cycles data set is a multi-variate time-series record of 57 electronic sensor signals of powertrain components connected via the electronic control modules in hybrid-electric vehicles. Data are recorded by test engineers who capture a wide variety of driving conditions in the drive cycles. The electronic sensor signals are broadly categorized in Table I, and are related to the power and torque forces present in the powertrain of hybrid electric vehicles that demand a dynamic, electronically controlled torque interplay between the engine and battery-powered electric motors. The data are composed mostly of continuous signal variables such as mechanical component torques, rotational speeds (RPM), and vehicle speed. Other sensor signals that represent the powertrain components are ordinal, such as the transmission state (park, reverse, neutral, drive), the brake status, and the battery state of charge. Raw data are composed of several drive-cycles grouped into two categories: a) drive-cycle data collected from vehicles with new batteries and normally operating powertrain components (data used for training and validation), and b) drive-cycles containing approximately 3-year-old batteries with a common type of fault borne from a battery connectivity issue, and no other notable faults in the powertrain (data unseen during training and used for testing). Both categories comprise diverse types and combinations of drive cycles that simulate different driving behaviors, decreasing any bias in the trained autoencoder related to different driving patterns. There are 271 healthy drive-cycles and 150 drive cycles with a documented fault. The median drive-cycle length is 14,000 time-steps sampled at a frequency of 10 hertz, i.e., 1,400 seconds of observations. To generate unbiased sampling of the data set for training the network, the drive-cycles are randomly sub-sampled with a 128 time-step sliding window with random cropping of 64 samples from each drive cycle. The batch size used is 256. The data are normalized by dividing each respective feature by a known maximum value, which is provided by domain experts.
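A sketch of this sub-sampling and normalization step (the window length, crop count, and normalization follow the description above; the function itself and the synthetic data are ours):

```python
import numpy as np

def sample_windows(drive_cycle, max_values, window=128, crops=64, rng=None):
    """Randomly crop fixed-length windows from one drive cycle.

    drive_cycle : (T, 57) array of powertrain sensor signals.
    max_values  : (57,) array of known per-feature maxima (from domain experts).
    Returns an array of shape (crops, window, 57), max-normalized per feature.
    """
    rng = rng or np.random.default_rng()
    T = drive_cycle.shape[0]
    starts = rng.integers(0, T - window, size=crops)      # random 128-step windows
    windows = np.stack([drive_cycle[s:s + window] for s in starts])
    return windows / max_values

# Example: one synthetic 14,000-step drive cycle with 57 sensor channels.
cycle = np.random.rand(14_000, 57)
print(sample_windows(cycle, max_values=np.ones(57)).shape)   # (64, 128, 57)
```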
The focus of this work is to explore methods of automatic anomaly detection to distinguish rare and abnormal temporal patterns in embedded vehicle sensor data which in turn can be used for fault detection. The data are from several powertrain components interconnected in the vehicle’s electronic control modules and collected by engineers at Ford Motor company during drive cycle testing. The drive cycles refer to a standardized test methodology used by vehicle manufacturers to gauge various performance metrics of new vehicles by simulating a variety of typical driving conditions [10]. This work was commissioned as a key component in the development of early fault detection systems in hybrid-electric vehicle fleets. The research looks beyond the traditional diagnostics of on-board systems which monitor a limited sensor domain related to emissions control. On-board diagnostics have been an important component of passenger vehicle functionality for decades and have led to standardized monitoring and communications protocols such as ISO 9141 and 15031 [11, 12]. The data we analyze, which is composed of electronic signals from several powertrain components during drive-cycles, have been less utilized for diagnostics. By analyzing the stream of data from 56 sensors related to the hybrid-electric vehicles’ torque and power distribution, we explore a less prescriptive but potentially more holistic vehicle health assessment that could extend sensor diagnostics in smart vehicles.
D
The notion of the dual quaternion, and its use to represent poses and rigid motions, seems to go back to McAulay [25], inspired by the earlier work by Clifford [8]. The notion of using dual quaternions to represent twists may be found in [1, 31]. A basic introduction to dual quaternions may be found in [22, 29], the latter also covering twists. Many authors have used dual quaternions to represent hierarchies of poses, that is, chains of manipulators [22, 31, 34, 35, 38]. Papers on representing kinematics or dynamics via dual quaternions include [1, 2, 10, 16, 23, 39, 40]. Dual quaternions have also found great use in computer graphics [19, 20].
This paper has given a comprehensive and consistent description of how to use dual quaternions to represent poses, rigid motions, twists, and wrenches. We have introduced the notion of the Lie derivative for dual quaternions. We have shown how these formulas are helpful, first for producing Newton-Raphson methods for solving the forward kinematics problem for parallel robots, and second for a self-contained derivation of the dynamic equations of motion of the end effector that includes the inertia of the actuators.
Finally, in equation (93), we give an approximation of the normalization of a vector dual quaternion perturbation of the identity, showing that it is equal, up to second order, to the exponential of the vector dual quaternion. This equation was essential for calculating the Hessian in the forward kinematics algorithms. We feel that this formula will be of independent interest to other researchers in the field of dual quaternions.
(The reader should be aware that [1, 16] have incorrect formulas for the logarithm and exponential of dual quaternions — the correct formulas may be found in [28], and [37] for the exponential.)
The notion of the dual quaternion, and its use to represent poses and rigid motions, seems to go back to McAulay [25], inspired by the earlier work by Clifford [8]. The notion of using dual quaternions to represent twists may be found in [1, 31]. A basic introduction to dual quaternions may be found in [22, 29], the latter also covering twists. Many authors have used dual quaternions to represent hierarchies of poses, that is, chains of manipulators [22, 31, 34, 35, 38]. Papers on representing kinematics or dynamics via dual quaternions include [1, 2, 10, 16, 23, 39, 40]. Dual quaternions have also found great use in computer graphics [19, 20].
C
Consequently, Corollary 4.16 can be applied, meaning that the per-bit difference between an expected interaction complexity term and the corresponding interaction information goes to zero.
Our main references are Chaitin (1987); Li and Vitányi (1997); Grünwald and Vitányi (2008).
We also combine Hu’s theorems for Shannon entropy and Kolmogorov complexity to generalize the well-known result that “expected Kolmogorov complexity is close to entropy” (Grünwald and Vitányi, 2008):
This generalizes the observation after Grünwald and Vitányi (2008), Theorem 10, to $n>1$ and more complicated interaction terms.
For $n=1$ and $I=\{1\}=1$ (for simplicity, we write sets as a sequence of their elements), we obtain:
C
In fact, the decompositions of the (0,0,12) and (2,0,6) both already required decomposing singular nodes with valence 6: (0,4,4,1), (0,2,8,1) and (1,3,3,1). We will refer to previously known decomposable singular nodes and their associated sphere triangulations as base cases.
Applying the splitting in Prop. 1 could result directly in base cases, where the rest of the decomposition is already known. If the splitting does not result in base cases, then it produces triangulations with fewer vertices. This can be repeated until there are not enough vertices to have a degree 6 vertex. Since sheet inflation at a singular node corresponds to splitting of a sphere triangulation, Prop. 1 allows us to find a sequence of sheets whose inflation results in singular nodes that
Since splitting a sphere triangulation replaces all vertices on the interior of either side with just one new vertex each, both resulting triangulations will have fewer vertices than $\mathcal{T}$.
To construct a splitting of $\mathcal{T}$ into triangulations with fewer vertices, we need a pair of vertices $a$ and $b$ adjacent to $u$ that are at least 3 edges apart from each other in $\mathcal{C}$, such that there is a path $p$ from $a$ to $b$ through the interior of $\mathcal{T}-\mathcal{U}$. This construction is illustrated in Figure 10.
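Our reading of this search for the pair $a$, $b$, as a sketch on top of networkx (the availability of the link in cyclic order and the exact meaning of "through the interior" are assumptions):

```python
import networkx as nx
from itertools import combinations

def find_splitting_pair(T, u, C):
    """Search for the pair a, b described above.

    T : sphere triangulation as an undirected networkx.Graph.
    u : the vertex of degree larger than 5.
    C : neighbors of u listed in cyclic order (the link cycle of u) -- assumed given.
    """
    k = len(C)
    interior_nodes = set(T.nodes) - set(C) - {u}           # 'interior' of T - U (our reading)
    for i, j in combinations(range(k), 2):
        if min((j - i) % k, (i - j) % k) < 3:              # require >= 3 edges apart in C
            continue
        a, b = C[i], C[j]
        H = T.subgraph(interior_nodes | {a, b}).copy()
        if H.has_edge(a, b):
            H.remove_edge(a, b)                            # force the path through the interior
        if nx.has_path(H, a, b):
            return a, b
    return None
```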
Given a sphere triangulation $\mathcal{T}$ with some vertex $u$ of degree larger than 5, there exists a splitting such that either the number of vertices in both resulting triangulations decreases or the resulting triangulations are base cases.
D
Let $\mathcal{E}_{2}=\{q_{\theta_{2}}\}$ be an exponential family with support $\mathcal{X}_{2}$, and $\mathcal{E}_{1}=\{p_{\theta_{1}}\}$ a truncated exponential family of $\mathcal{E}_{2}$ with support $\mathcal{X}_{1}\subset\mathcal{X}_{2}$. Let $F_{1}$ and $F_{2}$ denote the log-normalizers of $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$, and $\eta_{1}$ and $\eta_{2}$ the moment parameters corresponding to the natural parameters $\theta_{1}$ and $\theta_{2}$.
Then the Kullback-Leibler divergence between a truncated density of $\mathcal{E}_{1}$ and a
4 Kullback-Leibler divergence between a truncated density and a density of an exponential family
In §4, we show that the Kullback-Leibler divergence between a truncated density and a density of the same parametric exponential family amounts to a duo Fenchel-Young divergence, or equivalently to a Bregman divergence on swapped parameters (Theorem 1). As an example, we report a formula for the Kullback-Leibler divergence between truncated normal distributions (Example 6).
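As a purely numerical illustration of the quantity studied in §4 (parameter values are ours; the closed form of Theorem 1 is not asserted here), the KL divergence between a truncated normal and a normal can be evaluated by quadrature:

```python
import numpy as np
from scipy.stats import norm, truncnorm
from scipy.integrate import quad

a, b = 0.0, 2.0                    # truncation interval
mu1, s1 = 0.5, 1.0                 # parameters of the truncated normal
mu2, s2 = 0.0, 1.5                 # parameters of the untruncated normal

p = truncnorm((a - mu1) / s1, (b - mu1) / s1, loc=mu1, scale=s1).pdf
q = norm(loc=mu2, scale=s2).pdf

kl, _ = quad(lambda x: p(x) * np.log(p(x) / q(x)), a, b)
print(kl)   # can be checked against the closed form of Theorem 1 / Example 6
```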
The $\alpha$-skewed Bhattacharyya divergence for $\alpha\in(0,1)$ between a truncated density of $\mathcal{E}_{1}$ with log-normalizer $F_{1}(\theta)$ and another density of an exponential family $\mathcal{E}_{2}$ with log-normalizer $F_{2}(\theta)$ amounts to a duo Jensen divergence:
A
However, our experiment can also help reveal whether different incentivisation schemes could improve practitioners’ motivation.
Due to the small sample size, significance tests for differences in the samples are not meaningful.
Still, our results do not give a clear picture of whether a specific component dominates all others.
We find indications that different forms of financial incentives impact participants’ performance in software-engineering experiments. Due to the small sample sizes, our results are not statistically significant, but we still observe clear tendencies.
We investigated to what extent financial incentives impact the performance of (student) participants in software-engineering experiments.
C
Although our TransKDs seem to be complex, they are not that heavy compared to SKDs [4] and achieve plug-and-play knowledge distillation.
The original student model with SegFormer-B0 achieves a relatively low performance of 18.19% in mIoU on NYUv2.
Specifically, our TransKD-Base method achieves +5.18%, +2.18%, and +2.71% improvements over the feature-map-only KR method, while using the SegFormer-B0, the PVTv2-B0 [25], and the Lite Vision Transformer (LVT) [34] model as the student, respectively.
Benchmarked against the feature-map-only method Knowledge Review [2], TransKD-Base enhances the distillation performance by 5.18% in mean Intersection over Union (mIoU) while adding a negligible 0.21M parameters during the training phase, as shown in Fig. 2.
Still, the lightweight variant of our framework, TransKD-Base, is alone sufficient to conduct the distillation, yielding a surprising +5.18% gain compared to KR [2] while adding just 0.21M parameters for patch embedding distillation.
D
\[
\begin{array}{rccl}
\varphi : & [0,\infty[ & \longrightarrow & [0,\infty[\\
 & \lambda & \longmapsto & \left\lVert P^{T} B P\,\big(P^{T} B P - \lambda I_{n}\big)^{-1} v \right\rVert_{2}^{2} - \varepsilon^{2}
\end{array}
\]
It can be solved numerically quite easily with a finite difference method (see, for example, [31]). Figure 5 provides illustrations of the computed neural network kernel foliation with such a method for the Xor function (5(a)) and for the Or function (5(b)).
However, finding the vanishing points of such a function is not an easy task. Several methods may be used; a numerical method such as Newton's method [27] could be applied.
Many authors consider neural network attacks and robustness properties in a Euclidean input space. Yet, it is commonly admitted that to learn from high-dimensional data, the data must lie in a low-dimensional manifold ([12]). Such a manifold has, in general, non-zero curvature, and Riemannian geometry should therefore be a more appropriate setting to analyze distances and sensitivity from an attack point of view. Furthermore, to analyze a neural network model's separation capabilities and its robustness, it is critical to understand not only the topology of the decision boundaries in the input space but also the topology of the iso-information regions induced by the neural network. Again, there is no reason to believe that these sub-manifolds have zero curvature in general. The Fisher information metric (FIM) is a valid metric for such purposes. Indeed, the network output is seen as a discrete probability distribution that lies on a statistical manifold. The FIM may then be used as a Riemannian metric at the output, and the pullback metric of the Fisher information as a metric for the input manifold ([1]). The importance of the FIM in the context of deep neural networks has already been pointed out by several authors.
This first method is local and does not take into account the curvature of the data. Hence, we propose a new method to improve the performance, especially in regions of $\mathcal{X}$ where this curvature is high.
B
In this paper, we consider High-Multiplicity Scheduling Problems On Uniform Machines, where “high-multiplicity” refers to the following compact encoding:
machine multiplicity vector $\mu\in\mathds{N}_{0}^{\tau}$. A job of size $p_{j}$ takes time
We are given $d\in\mathds{N}$ job sizes in the form of a vector
$\tau\in\mathds{N}$ machine speeds in the form of a vector $s\in\mathds{N}^{\tau}$ and a corresponding
$p\in\mathds{N}^{d}$ and a corresponding job multiplicity vector $\nu\in\mathds{N}_{0}^{d}$; and
B
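To make the compact high-multiplicity encoding concrete, a minimal sketch (names and example values are illustrative, not from the paper):

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import List

@dataclass
class HighMultiplicityInstance:
    """Compact encoding: d job sizes/multiplicities, tau machine speeds/multiplicities."""
    p: List[int]    # job sizes, length d
    nu: List[int]   # job multiplicities, length d
    s: List[int]    # machine speeds, length tau
    mu: List[int]   # machine multiplicities, length tau

    def processing_time(self, j: int, i: int) -> Fraction:
        # A job of size p_j takes time p_j / s_i on a machine of speed s_i.
        return Fraction(self.p[j], self.s[i])

    def total_load(self) -> int:
        # Total amount of work, counting each job size with its multiplicity.
        return sum(pj * nj for pj, nj in zip(self.p, self.nu))

# Example: two job sizes, three machines of two speed types.
inst = HighMultiplicityInstance(p=[3, 7], nu=[10, 4], s=[1, 2], mu=[2, 1])
print(inst.processing_time(1, 1))  # 7/2
print(inst.total_load())           # 58
```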
Denote by $f:S\to\mathbb{E}^{3}$ the contracting $C^{2}$ map in Theorem 2. Let $U$ be a union of small polygonal disks centered at each singular vertex of $S$. The strategy for the proof of Burago and Zalgaller is the following.
We can thus apply the basic construction of Lemma 3 and its tilted version as in Note 4 to perform Step (e). This eventually leads to a PL isometric embedding of $S\setminus U$. It remains to embed the neighborhood of the singular vertices appropriately, as required by Step (f), to complete the PL isometric embedding of $S$.
Compute an approximation $f_{1}$ of $f$ that is almost conformal on $S\setminus U$ and contracting over $S$. Here, almost conformal means that $f$ almost preserves angles, or more formally that its coefficient of quasi-conformality, or dilatation [FM12, Section 11.1.2], is close to one.
Refine the acute triangulation of $S\setminus U$ uniformly to obtain an acute triangulation $\mathcal{T}$ with small triangles. The meaning of small depends on the geometric properties of $f_{1}$ and on the flexibility in Note 4.
Compute an acute triangulation of $S\setminus U$, that is, a triangulation in which every triangle is acute.
D
Table 5: Comparative results on the advanced benchmark set over 30 different runs.
We now come to the choice of the novelty threshold value, which depends entirely on the application. As Table 1 shows, setting a very low novelty threshold, as in case 4 (1-49%), means that we allow two PSO teams to search close to each other; if the application demands such closeness, a low novelty threshold is appropriate. If two regions are highly novel, as in case 1 (100%) of Table 1, the regions do not overlap and both leader particles can initiate searching.
Most PSO variants include a convergence analysis of their approach, but for our method such an analysis would be redundant because we barely change the analytical structure of PSO. It is more important to analyze how the divergence of novelty search benefits our algorithm, and how the convergence of PSO and the divergence of novelty search complement each other to provide much superior performance on various benchmark functions.
As we observe from Table 3, NsPSO fails to provide the best results on the Group A unimodal functions, but it comes second and third, respectively, in comparison with the best multimodal PSO algorithms so far, such as CLPSO and ECLPSO. NsPSO outperforms the other existing hybrids of novelty search and PSO. Our algorithm is designed to be more robust and to have more explorative power. Although Rosenbrock's function is multimodal, our algorithm performs more exploration due to novelty search. Later, we can tune the parameters of novelty search so that it performs much better on other unimodal functions.
If we observe the table carefully, we find that our proposed method also works well on these difficult benchmark functions. Although it does not come first on a few occasions, NsPSO's performance improves considerably compared with the basic PSO method. Logically, NsPSO's performance should not deviate drastically, because we change little in PSO itself; rather, we give BBPSO the benefit of the high exploration capability of novelty search, as observed in Table 5.
D
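For reference, a minimal sketch of the canonical PSO velocity/position update that such hybrids build on (the parameter values w, c1, c2 and the sphere objective are common illustrative defaults, not the paper's settings):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update for a swarm of shape (n_particles, dim)."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Minimise the sphere function f(x) = sum(x_i^2) as a toy example.
rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, size=(30, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), (pos ** 2).sum(axis=1)
for _ in range(100):
    gbest = pbest[np.argmin(pbest_val)]
    pos, vel = pso_step(pos, vel, pbest, gbest, rng=rng)
    val = (pos ** 2).sum(axis=1)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
print(pbest_val.min())
```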
The 1st, 2nd and 3rd rows correspond to Cora, CiteSeer, and PubMed datasets, respectively.
Within the evasion attack context, where the focus is on learned representations, we demonstrate the following property: given that the GSO error is bounded as in Theorem 1 and Proposition 1, the linear bound on each layer of the GCNN (illustrated in Subsection VI-C1) ensures the network's stability against perturbation as long as the graph error remains within the bound.
For instance, under evasion attacks, [27] demonstrates the reduction in GCNN accuracy under small perturbations while maintaining the degree distributions after the attack, and [30] demonstrates a significant drop in GCN accuracy when 5% of edges are altered.
After affirming the linear sensitivity in Theorem 3, we also examine the stability of GCNN under significant graph perturbations by observing the accuracy changes of same GCNN candidates as in Section VI-C1.
Consistent with the experimental settings in Section VI-C1, the same GCNN candidates are utilized.
C
In Figure 5, we visualize the control and disturbance policies extracted from the Q function of the neural network corresponding to $\lambda=0.0$ in Figure 4. The two policies are considered reasonable because, at most states, the control policy either drives the agent towards the target set or prevents violation of the constraint. Notice that there are a few states, e.g., some points at the lower right corner of the disturbance law in Figure 5, where the two policies do not complement each other. One possible reason is that we learn a locally optimal neural network Q function by minimizing the non-convex loss function (7).
In this experiment, we compare the reach-avoid set learned by Algorithm 1 with the one learned by tabular Q-learning, where we first grid the continuous state space and then run value iteration (4) over the grid. We treat the reach-avoid set learned by tabular Q-learning as the ground truth solution. We apply Algorithm 1 to learn neural network Q functions with 4 hidden layers, where each hidden layer has 128 neurons with ReLU activation functions. This neural network architecture is chosen because empirically it provides sufficient model capacity for approximating the true value function.
We first consider learning the viability kernel where the constraint set is the same as the one in Subsection VI-B. The reward function is set to be $r(x)=-1$ for all $x\in\mathbb{R}^{n}$. In the first subplot of Figure 6, we visualize the learned value function. The value function is non-positive, which is predicted by Proposition 2. We visualize the learned viability kernel in the second subplot of Figure 6. We sampled 1000 initial positions in the learned viability kernel and simulated a trajectory for each of them with 600 steps. The portion of those sampled points that can be maintained inside the constraint set is 79.2%.
In this subsection, we apply Algorithm 1 to learn the viability kernel and backward reachable set for the 6-dimensional dynamical system in Subsection VI-B. The results empirically confirm Propositions 1 and 2. In the following two experiments, the same neural network architecture as in Section VI-B does not yield satisfactory results, and we conjecture that this is due to the limited model capacity. As such, in this subsection, we increase the model capacity by adopting 4-layer neural networks with 256 neurons in each layer. We set the CQL penalty parameter to be $\lambda=0.0$.
Due to the curse of dimensionality, the tabular Q-learning explained in the previous experiment suffers numerical difficulties in this 6-dimensional experiment. In this subsection, we apply Algorithm 1 with the same neural network architecture as in Section VI-A. We plot the learned reach-avoid set by projecting it onto a 2-dimensional plane. In Figure 4, we visualize the reach-avoid set as well as the effect of the CQL penalty $\lambda$ on the learned reach-avoid set. As the penalty $\lambda$ increases, the volume of the reach-avoid set shrinks while the empirical success rate improves. This suggests that a larger penalty $\lambda$ induces a more conservative estimation of the reach-avoid set and the policy.
C
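The empirical success rates quoted above can be estimated with a simple Monte Carlo check; the sketch below is generic, with placeholder dynamics, constraint set, and learned-set membership test rather than the 6-dimensional system of the paper:

```python
import numpy as np

def empirical_success_rate(sample_state, in_learned_set, in_constraint_set, step,
                           n_samples=1000, horizon=600, rng=None):
    """Fraction of sampled states inside the learned set whose rollouts stay feasible."""
    rng = np.random.default_rng() if rng is None else rng
    successes, tried = 0, 0
    while tried < n_samples:
        x = sample_state(rng)
        if not in_learned_set(x):          # only verify points the method claims are safe
            continue
        tried += 1
        ok = True
        for _ in range(horizon):
            x = step(x)                    # closed-loop dynamics with the learned policy
            if not in_constraint_set(x):
                ok = False
                break
        successes += ok
    return successes / n_samples

# Toy example: stable linear dynamics, unit-ball constraint, learned set = ball of radius 0.5.
rate = empirical_success_rate(
    sample_state=lambda rng: rng.uniform(-1, 1, size=2),
    in_learned_set=lambda x: np.linalg.norm(x) <= 0.5,
    in_constraint_set=lambda x: np.linalg.norm(x) <= 1.0,
    step=lambda x: 0.95 * x,
)
print(rate)  # 1.0 for this toy system
```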
$\mathsf{S}(\bm{w})\bm{r}=\bm{w}\times\bm{r},$
A twist is the pair of vectors $(\bm{w},\bm{v})$ that describe the change of pose in the moving reference frame, that is:
If $\bm{r}_{0}$ is the center of mass of the end effector in the moving frame, then the twist about the center of mass is given by
The reason for introducing the factor 2 in definition (27) is so that the rate of change of work done to the end effector is given by
Let the pose $\eta$ represent the reference frame that moves with the end effector. It is not necessary (although it can simplify things) that the center of mass of the end effector coincides with the origin of the moving frame.
A
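The identity $\mathsf{S}(\bm{w})\bm{r}=\bm{w}\times\bm{r}$ determines the skew-symmetric matrix $\mathsf{S}(\bm{w})$; a minimal numeric check (illustrative only):

```python
import numpy as np

def skew(w: np.ndarray) -> np.ndarray:
    """Skew-symmetric matrix S(w) such that skew(w) @ r == np.cross(w, r)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w = np.array([1.0, 2.0, 3.0])
r = np.array([-0.5, 0.25, 4.0])
assert np.allclose(skew(w) @ r, np.cross(w, r))
```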
Due to the impressive success of deep neural networks in feature extraction and classification of text, images, and many other modalities, they have been widely exploited by research scientists over the past few years for a variety of multi-modal tasks, including misinformation detection. We may categorize deep learning-based multi-modal misinformation detection into five categories: concatenation-based, attention-based, generative-based, graph neural network-based, and cross-modality discordance-aware architectures as demonstrated in Fig. 5. In what follows, we summarize and categorize the existing works into the aforementioned categories.
The majority of the existing work on multi-modal misinformation detection embeds each modality, e.g., text or image, into a vector representation and then concatenates them to generate a multi-modal representation that can be utilized for classification tasks. For instance, Singhal et al. propose using pretrained XLNet and VGG-19 models to embed text and image, respectively, and then classify the concatenation of the resulting feature vectors to detect misinformation (Singhal et al., 2019).
In another work (Segura-Bedmar and Alonso-Bartolome, 2022), Bartolome et al. exploit a Convolutional Neural Network (CNN) that takes as inputs both text and image corresponding to an article, and the outputs are concatenated into a single vector. Qi et al. extract text, Optical Character Recognition (OCR) content, news-related high-level semantics of images (e.g., celebrities and landmarks), and visual CNN features of the image. Then, in the stage of multi-modal feature fusion, text-image correlations, mutual enhancement, and entity inconsistency are merged by concatenation operation (Qi et al., 2021).
Xue et al. (Xue et al., 2021) propose a Multi-modal Consistency Neural Network (MCNN) which utilizes a similarity measurement module that measures the similarity of multi-modal data to detect the possible mismatches between the image and text. Lastly, Biamby et al. (Biamby et al., 2022) leverage the CLIP model (Radford et al., 2021) to jointly learn image/text representation to detect image-text inconsistencies in tweets. Instead of concatenating vector representations, CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples.
Over the past decade, several detection models (Shu et al., 2017, 2020c; Islam et al., 2020; Cai et al., 2020) have been developed to detect misinformation. However, the majority of them leverage only a single modality for misinformation detection, e.g., text (Horne and Adali, 2017; Wu et al., 2017; Guacho et al., 2018; Shu et al., 2020a) or image (Huh et al., 2018; Qi et al., 2019; Abdali et al., 2021a; Choudhary and Arora, 2021), which miss the important information conveyed by other modalities. There are existing works (K. Shu and Liu, 2019; Shu et al., 2019a; Abdali et al., 2021b, 2020; Hakak et al., 2021) that leverage ensemble methods which create multiple models for each modality and then combine them to produce improved results. However, in many cases of multi-modal misinformation, loosely combining individual modalities is inadequate for detecting fake news, leading to the failure of the joint model.
A
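A minimal sketch of the concatenation-based fusion pattern described above, with generic encoders and made-up dimensions rather than the architecture of any cited work:

```python
import torch
import torch.nn as nn

class ConcatFusionClassifier(nn.Module):
    """Late fusion: embed each modality, concatenate, classify."""

    def __init__(self, text_dim: int, image_dim: int, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_emb, image_emb], dim=-1)   # multi-modal representation
        return self.classifier(fused)

# Usage with precomputed embeddings (e.g., from separate text and image encoders).
model = ConcatFusionClassifier(text_dim=768, image_dim=512)
logits = model(torch.randn(4, 768), torch.randn(4, 512))
```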
To enhance model/bias variety, we apply the Hard Debiasing Algorithm [bolukbasi] to the pretrained embeddings and train a classification head for the unmodified and debiased embeddings with $k\in\{1,3\}$ (removing the first $k$ principal components during debiasing [bolukbasi]), resulting in 66 different classifiers. Additionally, we conduct the experiments via cross-validation, i.e., training and evaluating biases on each test fold and reporting the mean biases over all folds.
In our experiments, we use the dataset for binary classification (toxic / not toxic) and, to limit computational costs, select subsets of the dataset where only identities of one specific bias type (race-color / religion / gender) are mentioned. We further limit our subset to samples, where exactly one identity is mentioned and where the majority of annotators agreed (2/3 majority) regarding both the toxicity and the identity label to avoid ambiguities.
For the Jigsaw dataset, we handle each protected attribute separately, i.e., we create a subset for each protected attribute and train models on these subsets, to limit the effects of intersectional bias.
We use three downstream datasets developed to investigate biases in LMs: In terms of classification we consider the Jigsaw Unintended Bias (Jigsaw) dataset [jigsaw] and the BIOS dataset [biosbias]. For MLM we use the CrowS-Pairs dataset [crowspairs].
Our experiments cover four protected attributes: race-color, religion, gender and age, which were included in sufficient numbers in at least one of the datasets. Table 1 shows the protected groups and the number of defining sets. These defining sets were used as attributes and removed from the target samples before computing cosine scores. These attributes were chosen based on the modified tokens in the CrowS-Pairs dataset, so we were able to perfectly remove protected attributes there. In the BIOS dataset a gender-scrubbed version of each biography was available (removing names and pronouns). In addition we removed the additional gender attributes from our defining sets (e.g. mother/father). In the Jigsaw dataset we removed terms from the defining sets referring to the labeled group. We assume that our approach doesn’t remove all mentions of protected attributes, which would be obvious to a human reader.
B
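A minimal sketch of the neutralizing step behind this kind of debiasing: estimate a bias subspace from the first k principal components of centered defining-set differences and project it out of the embeddings (illustrative; the full algorithm in [bolukbasi] also equalizes pairs):

```python
import numpy as np

def bias_subspace(defining_pairs: np.ndarray, k: int = 1) -> np.ndarray:
    """PCA on centered differences of defining pairs, shape (n_pairs, 2, dim)."""
    diffs = defining_pairs[:, 0, :] - defining_pairs[:, 1, :]
    diffs = diffs - diffs.mean(axis=0, keepdims=True)
    # Rows of vt are principal directions; keep the first k as the bias subspace.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]

def neutralize(embeddings: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Remove the component of each embedding lying in the bias subspace."""
    coords = embeddings @ basis.T            # (n, k) projections onto the subspace
    return embeddings - coords @ basis

# Toy usage with random vectors standing in for word embeddings.
rng = np.random.default_rng(0)
pairs = rng.normal(size=(4, 2, 50))          # e.g., (he, she), (man, woman), ...
basis = bias_subspace(pairs, k=1)
debiased = neutralize(rng.normal(size=(10, 50)), basis)
assert np.allclose(debiased @ basis.T, 0.0, atol=1e-10)
```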
A vertex $v$ of an interval graph $G$ is an extreme vertex of a toll convex set $S\subseteq V(G)$ if and only if $v$ is an end simplicial vertex of $G[S]$.
In order to characterize the graphs with toll convexities that are convex geometries, we need to resort to a well-known characterization of interval graphs. Three vertices of a graph form an asteroidal triple if between any pair of them there exists a path that avoids the neighborhood of the third vertex.
A graph $G$ is an interval graph if and only if $G$ is chordal and contains no asteroidal triple.
The above concepts can be transferred to the combinatorial field in a natural way. We refer the reader to [23]. Let $G$ be a graph and let $\mathscr{C}$ be a convexity of $G$. Given a set $S\subseteq V(G)$, the smallest set $H\in\mathscr{C}$ containing $S$ is called the convex hull of $S$. A vertex $x$ of a convex set $S$ is an extreme vertex of $S$ if $S\setminus\{x\}$ is also convex. The convexity $\mathscr{C}$ is a convex geometry if it satisfies the Krein-Milman (or Minkowski-Krein-Milman) property [34, 37]: every convex set is the convex hull of its extreme vertices. The main question dealt with in this survey is: by fixing a rule $r$ to define the convex sets (e.g., a rule based on some path system), determine the class of graphs whose $r$-convexities are convex geometries. For instance, by fixing induced paths, we can obtain the following characterization: a graph $G$ is chordal if and only if the monophonic convexity of $G$ is a convex geometry [23]. Ptolemaic graphs, interval graphs, proper interval graphs, weak bipolarizable graphs, and 3-fan-free chordal graphs can also be characterized in this way by considering, respectively, the geodesic convexity [23], the toll convexity [2], the weakly toll convexity [28], the $m^{3}$-convexity [19], and the Steiner convexity [9]. In [27], a characterization of graphs with $l^{k}$-convexities that are convex geometries is studied. Section 3 discusses in detail most of these characterizations.
Using arguments similar to those used in the previous section, one can prove that if the weakly toll convexity of $G$ is a convex geometry, then $G$ is chordal and cannot contain asteroidal triples or induced subgraphs isomorphic to $K_{1,3}$. Hence, using the characterization of proper interval graphs in [40], $G$ is a proper interval graph. Conversely, assume that $G$ is a proper interval graph. Then, it is an interval graph. This means that, by Lemma 15, every vertex of $G$ that is not an end simplicial vertex lies in a tolled walk between two end simplicial vertices. Since every tolled walk is a weakly toll walk and, by Lemma 18, end simplicial vertices of a proper interval graph are extreme vertices, we have:
A
FlexFringe [VH17], which originated from the DFASAT [HV10] algorithm, is a framework for learning different kinds of automata using the red-blue state merging framework [LPP98]. Learning automata from traces can be seen as a grammatical inference [DlH10] problem where traces are modeled as the words of a language, and the goal is to find a model for this language, e.g., a (probabilistic) deterministic finite state automaton (P)DFA [HMU01]. Although the problem of learning a (P)DFA is NP-hard [Gol78] and hard to approximate [PW93], state merging is a well-known and effective heuristic method for solving this problem [LPP98].
State-merging starts with a large tree-shaped model called the prefix tree, which directly encodes the input traces. It then iteratively combines states by testing the similarity of their future behaviors using a Markov property [NN98] or a Myhill-Nerode congruence [HMU01]. This process continues until no similar states can be found. The result is a small model displaying the system states and transition structure hidden in the data. Figure 1 shows the prefix tree for a small example data set consisting of 20 sequences starting with an “a” event and ending in a “b” event. Running FlexFringe results in the PDFA shown in Figure 2.
One of the most successful (P)DFA learning algorithms and an efficient method for performing such tests is evidence-driven state-merging (EDSM) in the red-blue framework [LPP98]. FlexFringe implements this framework, using union/find structures to keep track of performed merges and to efficiently undo them, see Figure 3. Like most state merging methods, FlexFringe first constructs a tree-shaped PDFA $A$ known as a prefix tree from the input sample $D$, see Figure 1. Afterward, it iteratively merges the states of $A$. Initially, since every prefix leads to a unique state, $A$ is consistent with $D$. A merge (see Algorithm 1, and Figures 3 and 4) of two states $q$ and $q'$ combines the states into one by setting the representative variable from the union/find structure of $q'$ to $q$. After this merge, whenever a sequence computation returns $q'$, it returns the representative $q$ instead of $q'$. A merge is only allowed if the states are consistent, determined using a statistical test or distance computation based on their future sequences. A powerful feature of FlexFringe is that algorithm users can implement their own test by adding a single file called the evaluation function to the source code. For example, using only 100 lines of code, it is possible to add additional attributes such as continuous sensor readings to input symbols and use these in a new statistical test to learn regression automata [LHPV16].
Figure 2. An automaton model printed after running FlexFringe (top). It contains the same type of counts as the prefix tree. To obtain a PDFA from these counts, one needs to normalize them to obtain transition and final probabilities (bottom). Traces only end in the third state, making it the only possible ending state. The learned PDFA therefore correctly represents the set of traces starting with “a” and ending in “b”, i.e., $a(a|b)^{*}b$. A learned PDFA can assign probabilities to new sequences by following transitions and multiplying their probabilities. It can also be used for anomaly detection, for instance, by checking whether a new trace ends in a state with a final probability of 0.
Given a finite data set of example sequences $D$, called the input sample, the goal of PDFA learning (or identification) is to find a (non-unique) small PDFA $\mathcal{A}$ that is consistent with $D$. We call such sequences positive or unlabeled. In contrast, DFAs are commonly learned from labeled data containing both positive and negative sequences. PDFA size is typically measured by the number of states ($|Q|$) or transitions ($|\{(q,a)\in Q\times\Sigma:\delta(q,a)\neq 0\}|$). Finding a small and consistent (P)DFA is a well-known hard problem and an active research topic in the grammatical inference community; see, e.g., [DlH10]. One of the main methods, originating from the famous RPNI [OG92] (for DFAs) and Alergia [CO94] (for PDFAs) algorithms, is state merging. This method starts from a large tree-shaped model that captures the input data exactly. It then iteratively combines (merges) states, resulting in an increasingly smaller model. The method ends when no further merge is possible, i.e., when all possible merges result in an inconsistent model.
A
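A minimal sketch of the first step of state merging, building a counted prefix tree from traces (illustrative; FlexFringe's own data structures and consistency tests are richer):

```python
from collections import defaultdict

class PrefixTreeState:
    """One state of the prefix tree with outgoing symbol counts and a final count."""

    def __init__(self):
        self.children = {}                     # symbol -> PrefixTreeState
        self.symbol_counts = defaultdict(int)  # how often each symbol leaves this state
        self.final_count = 0                   # number of traces ending in this state

def build_prefix_tree(traces):
    root = PrefixTreeState()
    for trace in traces:
        state = root
        for symbol in trace:
            state.symbol_counts[symbol] += 1
            state = state.children.setdefault(symbol, PrefixTreeState())
        state.final_count += 1
    return root

# Traces starting with "a" and ending with "b", as in the running example.
tree = build_prefix_tree(["ab", "aab", "abb", "ab"])
print(dict(tree.symbol_counts))                 # first symbols: {'a': 4}
print(dict(tree.children["a"].symbol_counts))   # second symbols: {'b': 3, 'a': 1}
```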
\[
\nabla_{\theta}\bigl(\widetilde{\psi}_{n}\circ\mathfrak{S}_{n}(z_{n-1},\theta)\bigr)
=\mathbf{Proj}\Bigl(\mathrm{d}_{[M_{\theta}(\Delta x_{n})]^{\top}}\exp\bigl\{\bigl[\nabla_{z_{n}}\widetilde{\psi}_{n}\bigr]\bigl(M_{\theta}(\Delta x_{n})\bigr)\cdot z_{n-1}\bigr\}\Bigr).
\]
the duality between gradient and differential, we are able to determine the gradient of $\widetilde{\psi}_{n}$, which is the main ingredient of the Riemannian gradient descent algorithm of the development layer (i.e., Algorithm 2 below). Before further developments, let us first comment on the Riemannian gradient on matrix Lie groups.
With the gradient computation at hand (in particular, Theorem 3.1 and Proposition 3.1), we are now ready to describe the backpropagation of development layer in Algorithm 2.
One crucial remark is in order here. In view of Theorem 3.1 and Proposition 3.1, the development layer proposed in this paper possesses a recurrence structure analogous to that of the RNNs. This is the key structural feature of Algorithm 2. However, it is well known that the RNNs are, in general, prone to problems of vanishing and/or exploding gradients (see Bengio et al. (1994)). We emphasise that when $G$ is the orthogonal or unitary group, the gradient issues are naturally alleviated for the development layer.
To optimise the model parameters of the development layer, we exploit the recurrence structure in Eq. (1) and the Lie group-valued output to design an efficient gradient-based optimisation method. We combine backpropagation through time of RNNs and “trivialisation”, an optimisation method on manifolds (Lezcano-Casado (2019)). In particular, when $\mathfrak{g}$ is the Lie algebra of the orthogonal group, we can establish boundedness of the gradient. This alleviates the gradient vanishing/exploding problems of backpropagation through time, thus leading to a more stable training process.
B
Since general $C^{\alpha}$ domains can be covered by $C^{\alpha}$ domains of special type, it suffices to consider domains of special type.
the differentiable mapping $\Phi:E\to\mathbb{R}^{2}$ by
Let $g:[-2d,2d]^{d-1}\to\mathbb{R}$ be a continuously differentiable function on $[-2d,2d]^{d-1}$ satisfying
Let $g:[-2,2]\to\mathbb{R}$ be a continuously differentiable function on $[-2,2]$ satisfying
The next step of our construction is to proceed from the $C^{2}$ to the $C^{\alpha}$ graph domains in $\mathbb{R}^{2}$. This will be done using approximation of $C^{\alpha}$ functions by their Steklov transform. Let $g:[-4,4]\to\mathbb{R}$ be a $C^{\alpha}$-function with constant $L>1$ for some $1\leq\alpha\leq 2$; that is, $g\in C^{1}[-4,4]$ and
C
It is not hard to see that max and min are not strongly admissible and that noisy-or is not even admissible.
The above given definition of (strong) admissibility is fairly straightforward and natural, as well as useful for proving that some functions
as stated by the following lemma, the straightforward proof of which is left to the reader.
Section 4 defines the general notion of a logic that we will use, as well as the particular logics that will be considered later.
the only difference between strong admissibility (sensu novo) and admissibility (sensu novo) is that in the
A
In Theorem 5.1 we prove an analogous result where the bound is on the total number of splittings.
In Theorem 5.1 we prove an analogous result where the bound is on the total number of splittings.
Note that the problem definitions do not determine in advance which items will be split, but only bound their number, or bound the number of splittings. The solver may decide which items to split after receiving the input.
In all the works we surveyed, there is no global bound on the number of splitting jobs. As far as we know, bounding the number of splittings or split jobs was not studied before.
The number of splittings is at least the number of split items but might be larger. For example, a single item split into 10 different bins counts as 9 splittings.
C
As a consequence, all interconnected devices and users stand at risk. Even though research on AI to protect against cyber threats has been ongoing for many years [12, 13], it is still unclear how to ensure the security of networks with AI integrated into their core operations. A significant drawback in AI security derives from the black-box nature of those systems in one way or another. Therefore, maintaining accountable and trustworthy AI in this regard is highly important.
Explainable Artificial Intelligence (XAI) represents an advancement over the opaque AI systems in networking. Starting with the 5G era, artificial intelligence (AI) is anticipated to assume various roles across all levels of mobile networks. Furthermore, explainable AI (XAI) would be the subsequent phase in attaining accountability and transparency in AI systems. The architecture of 5G and future networks has to be reconfigured to fully accept this new paradigm of wireless AI architecture and its data life cycle.
The standardization of application development using XAI for RIC or core and backhaul networks is necessary. With standardizations, organizations would implement strict access control mechanisms, advanced encryption and data masking techniques, and regular security audits to keep the unethical usage of XAI outputs in check. It is of immense significance for 6G and subsequent networks since the novel technological framework is being used in the real world for the first time. Such precautions guarantee that only authorized individuals may receive sensitive explanations derived from proprietary models and that the system is safeguarded from malevolent entities. In addition, it is essential to monitor XAI systems for the discovery of anomalies and to use differential privacy approaches to safeguard the secrecy of the underlying models. Regular security audits and penetration testing should also be conducted to identify and mitigate vulnerabilities. Ultimately, the critical aspect is the establishment of ethical principles and a framework for governance. It is essential to create a framework based on the above steps rather than a set of laws as it is more flexible, and its application can be molded to the rapidly changing landscape of AI techniques. Furthermore, it is essential to organize user education and awareness initiatives to enlighten users and developers about the ethical ramifications of XAI and the need to adhere to optimal security protocols.
More meticulous standards on the elements of XAI security, and on its provision of transparent AI/ML techniques for B5G security, are required. The European Partnership on Smart Networks and Services (SNS) established Europe's strategic research and innovation roadmap. The initiative is based on an EU contribution of €900 million over the next seven years. The objective is to enable European players to develop R&I capabilities for 6G systems and lead markets for 5G and 6G infrastructure, which will serve as the foundation for digital and green transformation. The SNS work program will be the basis for calls for proposals aimed to launch in early 2022. Concerning standards, we believe that projects under calls such as ICT-52-2020 are expected to provide valuable inputs to standardization bodies, fostering the development of advanced 6G solutions. From the perspective of 3GPP, there are features and capabilities from existing 5G solutions that require full specification and are expected to be released at the end of 2023. The migration from legacy and existing proprietary radio protocols toward 3GPP protocols will take 5-10 years. AI/ML-assisted security still needs further development to respond to new security threats introduced by the dynamicity of 6G services and networks.
The Defense Advanced Research Projects Agency (DARPA) started the Explainable Artificial Intelligence (XAI) initiative in May 2017 to develop a set of new AI methodologies that would allow end-users to comprehend, adequately trust, and successfully manage the next generation of AI systems [14]. To elaborate, it is a collective initiative of computer science and the social sciences, including the human psychology of explanations. The overall success of 5G and beyond will ultimately rest on how resilient and trustworthy the AI used in its implementation is for the general public [10]. Extending research on possible techniques such as XAI in this regard is a crucial step that needs to be taken promptly.
D
Initially proposed for synchronous networks, an MA may suppress point-to-point network messages according to rules that define its power.
For instance, a tree MA in a synchronous network might suppress any message except those transiting on an (unknown) spanning tree of the network, with this spanning tree possibly changing in each round.
This work takes a drastic turn away from this usual assumption and explores how BRB might be provided when processes execute on an unreliable network that might lose point-to-point messages.
This is because signatures allow for MA-tolerant BRB algorithms that are more efficient in terms of round and message complexity than those that can be constructed using $k2\ell$-cast [4].
Initially proposed for synchronous networks, an MA may suppress point-to-point network messages according to rules that define its power.
A
\[
\theta=\operatorname*{arg\,min}_{\theta}\ \mathcal{L}_{\text{ce}}\left(\mathcal{D}_{\text{source}};\theta\right).
\]
A PLM-based fine-tuning method Zhang et al. (a), called IntentBERT, utilizes a small amount of labeled utterances from public intent datasets to fine-tune PLMs with a standard classification task, which is referred to as supervised pre-training. Despite its simplicity, supervised pre-training has been shown extremely useful for few-shot intent detection even when the target data and the data used for fine-tuning are very different in semantics. However, as will be shown in Section 3.2, IntentBERT suffers from severe anisotropy, an undesirable property of PLMs Gao et al. (a); Ethayarajh (2019); Li et al. (2020).
Specifically, the pre-training is conducted by attaching a linear layer (as the classifier) on top of the utterance representation generated by the PLM:
Can we improve supervised pre-training via isotropization for few-shot intent detection?
After supervised pre-training, the linear layer is removed, and the PLM can be immediately used as a feature extractor for few-shot intent classification on target data. As shown in Zhang et al. (a), a parametric classifier such as logistic regression can be trained with only a few labeled samples to achieve good performance.
D
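A minimal sketch of the recipe described above, using a placeholder encoder in place of the fine-tuned PLM and a logistic-regression head for the few-shot episode:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(utterances, rng):
    """Placeholder for the frozen PLM feature extractor; returns one vector per utterance."""
    return rng.normal(size=(len(utterances), 768))

rng = np.random.default_rng(0)

# Few-shot episode: 2 intents, 5 labelled utterances each (toy strings).
support_texts = [f"intent{c}-example{i}" for c in range(2) for i in range(5)]
support_labels = [c for c in range(2) for _ in range(5)]
query_texts = ["new utterance 1", "new utterance 2"]

# Train a parametric classifier on the frozen features of the labelled samples only.
clf = LogisticRegression(max_iter=1000)
clf.fit(encode(support_texts, rng), support_labels)
predictions = clf.predict(encode(query_texts, rng))
print(predictions)
```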
In this paper, we highlight why polynomial regression, while offering more interpretability than neural networks and being able to approximate the same function classes, is rarely used in practice. By deriving new finite sample and asymptotic $L^{2}$ rates for series regression estimators, we show that the convergence rate for polynomial regression can be very slow. However, the rate can be improved when the polynomial embeddings are generated group-wise for partitions of the feature space rather than for all features. The improvement is particularly salient when the function class we are trying to estimate is smooth. Motivated by these results, we propose the use of BPR instead of standard polynomial regression as a potential substitute for neural networks. BPR draws from the theoretical insight of building polynomial embeddings for subsets of the feature space and from ensembling models by averaging multiple estimators to improve the out-of-sample performance. We show that it can perform similarly to more complex models, using fewer parameters, in an application to the MNIST handwritten digit data set. Finally, we show the favorable performance of our estimator in an application to crop classification.
Limitations and Future Work. This paper provides a formal reason why polynomial regression is ill-suited for prediction tasks in high-dimensional settings. However, a limitation of the paper is that, while the main theorem (Theorem 1) applies to a large class of series regression models, the result for polynomial regression (Corollary 1.1) focuses on specific function classes (Hölder smoothness of degree $s$) and a subset of models (partitioned polynomial regression). Future work should expand the application of the main result to include ensembles of polynomial regression models and formally link our theory with our proposed estimator, BPR. Furthermore, a more extensive benchmarking exercise should be carried out to compare BPR with existing state-of-the-art methods. Given that in this paper we did not fully tune BPR, we expect future results to validate that BPR can perform as well as neural networks.
In this paper, we highlight why polynomial regression, while offering more interpretability than neural networks and being able to approximate the same function classes, is rarely used in practice. By deriving new finite sample and asymptotic $L^{2}$ rates for series regression estimators, we show that the convergence rate for polynomial regression can be very slow. However, the rate can be improved when the polynomial embeddings are generated group-wise for partitions of the feature space rather than for all features. The improvement is particularly salient when the function class we are trying to estimate is smooth. Motivated by these results, we propose the use of BPR instead of standard polynomial regression as a potential substitute for neural networks. BPR draws from the theoretical insight of building polynomial embeddings for subsets of the feature space and from ensembling models by averaging multiple estimators to improve the out-of-sample performance. We show that it can perform similarly to more complex models, using fewer parameters, in an application to the MNIST handwritten digit data set. Finally, we show the favorable performance of our estimator in an application to crop classification.
Drawing from the theoretical insights, we propose the use of BPR as an alternative to neural networks that is computationally attractive and readily implementable in most machine learning software packages. By only building the polynomial embeddings for subsets of the feature space and averaging across multiple models, BPR can reduce the number of parameters needed to estimate models while maintaining low prediction and generalization errors. We analyze the performance of BPR in a standard prediction task to the MNIST hand-written digit database [lecun1998mnist]. Our application shows that training BPR using only a fraction of the available features in each estimator performs comparably well to larger models. Furthermore, BPR performs better than standard polynomial regressions and achieves a test accuracy on MNIST close to convolutional neural networks and state-of-the-art methods. We believe that a fully tuned BPR model could perform like state-of-the-art in standard prediction tasks.
Our main result, Theorem 1, extends the $L^{2}$ convergence result in [belloni2015] by deriving new finite sample rates as well as asymptotic rates using the results from [rudelson2007sampling]. This result is valid for a large class of series regression models that satisfy Assumption 1. Therefore, it is of interest beyond the case of polynomial regression further discussed in this paper.
A
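A minimal sketch of the blocked/partitioned idea behind BPR: expand polynomial features only within feature blocks, fit one model per block, and average the predictions (the grouping, degree, and ridge penalty are illustrative choices, not the paper's configuration):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def fit_blocked_polynomial_regression(X, y, blocks, degree=2, alpha=1.0):
    """Fit one polynomial ridge regression per feature block; predict by averaging."""
    models = []
    for block in blocks:
        model = make_pipeline(PolynomialFeatures(degree=degree), Ridge(alpha=alpha))
        model.fit(X[:, block], y)
        models.append((block, model))

    def predict(X_new):
        preds = [model.predict(X_new[:, block]) for block, model in models]
        return np.mean(preds, axis=0)

    return predict

# Toy regression: 6 features split into two blocks of 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X[:, 0] ** 2 + X[:, 3] * X[:, 4] + 0.1 * rng.normal(size=500)
predict = fit_blocked_polynomial_regression(X, y, blocks=[[0, 1, 2], [3, 4, 5]])
print(predict(X[:5]))
```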
$\texttt{EConn}_{\leq}(c)$: the set of graphs with edge-connectivity at most $c$. A graph is $c$-edge-connected
the graph property of $\mathsf{D}[k]$ is the set
if it has at least $c$ vertices, and if it remains connected whenever fewer than $c$ vertices are deleted.
in the case of the DP-core C-Hamiltonian the multiplicity $2^{O(k)}$ is smaller than the trivial
Hamiltonian: the set of Hamiltonian graphs. A graph is Hamiltonian if it contains a cycle that spans all its vertices.
D
We further observe that the detection delay depends on the post-change distribution. The delay is comparably large when changing from the multivariate standard normal to the mixed distribution. This matches our intuition: the mixed distribution is relatively similar to the pre-change distribution, rendering it difficult to detect a change between them.
Overall, the results on these synthetic streams indicate that MMDEW is (i) robust to the choice of $\alpha$ and (ii) that $\alpha$ has the expected influence on the behavior of the algorithm.
We introduced a novel change detection algorithm, MMDEW, that builds upon two-sample testing with MMD, which is known to yield powerful tests on many domains. To facilitate the efficient computation of MMD, we presented a new data structure, which allows estimating MMD with polylogarithmic runtime and logarithmic memory complexity. Our experiments on standard benchmark data show that MMDEW obtains the best $F_{1}$-score on most data sets. At the same time, MMDEW has only two parameters: the level of the statistical test and the choice of kernel. This simplifies the proposed algorithm's application in real-world use cases.
threshold. The level $\alpha$ is a bound for the probability that the tests
The MTD plot on the right mirrors this observation: The MTD decreases with increasing α𝛼\alphaitalic_α.
A
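For reference, a minimal quadratic-time unbiased estimator of the squared MMD with an RBF kernel; MMDEW's contribution is precisely avoiding this quadratic cost, so the sketch only illustrates the underlying statistic:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of the squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))   # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(loc=1.0, size=(200, 2)))
print(same, shifted)   # the shifted pair yields a clearly larger statistic
```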
One reason why current GNNs perform poorly on heterophilic graphs, could be the mismatch between the labeling rules of nodes and their linking mechanism. The former is the target that GNNs are expected to learn for classification tasks, while the latter specifies how messages pass among nodes for attaining this goal. In homophilic scenarios, both of them are similar in the sense that most nodes are linked because of their commonality which therefore leads to identical labels. In heterophilic scenarios, however, the motivation underlying why two nodes get connected may be ambiguous to the classification task. Let us take the social network within a university as an example, where students from different clubs can be linked usually due to taking the same classes and/or being roommates but not sharing the same hobbies. Namely, the task-relevant and irrelevant (or even harmful) information is typically mixed into node neighborhood under heterophily. However, current methods usually fail to recognize and differentiate these two types of information within nodes’ proximity, as illustrated in Fig. 1. As a consequence, the learned representations are prone to be entangled with false information, leading to non-robustness and sub-optimal performance.
One reason why current GNNs perform poorly on heterophilic graphs, could be the mismatch between the labeling rules of nodes and their linking mechanism. The former is the target that GNNs are expected to learn for classification tasks, while the latter specifies how messages pass among nodes for attaining this goal. In homophilic scenarios, both of them are similar in the sense that most nodes are linked because of their commonality which therefore leads to identical labels. In heterophilic scenarios, however, the motivation underlying why two nodes get connected may be ambiguous to the classification task. Let us take the social network within a university as an example, where students from different clubs can be linked usually due to taking the same classes and/or being roommates but not sharing the same hobbies. Namely, the task-relevant and irrelevant (or even harmful) information is typically mixed into node neighborhood under heterophily. However, current methods usually fail to recognize and differentiate these two types of information within nodes’ proximity, as illustrated in Fig. 1. As a consequence, the learned representations are prone to be entangled with false information, leading to non-robustness and sub-optimal performance.
However, existing techniques [26, 27, 18] mainly parameterize graph edges with node similarity or dissimilarity, while failing to explicitly correlate them with the prediction target. Even worse, as the assortativity of real-world networks is usually agnostic and node features are typically full of noises, the captured similarity/dissimilarity may not truly reflect the label-agreement/disagreement between nearby nodes. Consequently, the harmful-similarity between pairwise nodes from different classes could be mistakenly preserved for prediction.
This hypothesis is assumed, without loss of generality, for both homophilic and heterophilic graphs. In a homophilic scenario, e.g., citation networks, scientific papers tend to cite or be cited by others from the same area, and both usually share common keywords uniquely appearing in their topics. In a heterophilic scenario, students having different interests are likely to be connected because of the classes they take and/or the dormitory they live in, but neither has a direct relation to the clubs they have joined. This inspires us to classify graph edges by measuring the similarity between adjacent nodes in two different aspects, i.e., a graph edge is more relevant to a classification task if the connected nodes are more similar in their task-relevant features, or otherwise. Our experimental analysis in Section 6.6 further provides evidence that, even when our Hypothesis 1 may not hold, most adversarial edges (considered as the task-irrelevant ones) can still be recognized even though neither type of node similarity exists.
Once the issue of GNNs' learning beyond homophily is identified, a natural question arises: can we design a new type of GNN that is adaptive to both homophilic and heterophilic scenarios? Well-formed designs should be able to identify the node connections irrelevant to learning tasks, and substantially extract the most correlated information for prediction. However, the assortativity of real-world networks is usually agnostic. Even worse, the features of nodes are typically full of noise, where similarity or dissimilarity between connected nodes may not actually reflect their class relations. Existing techniques including [26, 27, 18] usually parameterize graph edges with node similarity or dissimilarity, and cannot well assess the correlation between node connections and the downstream target.
D
Suppose that $E$ is a 2-dimensional slope in $\mathbb{R}^{4}$ characterized by subperiods $p_{0},\dots,p_{3}$, $\pi$ is a fine projection, and $\mathcal{T}$ is a tileset obtained from $E$ and $\pi$ using the FP-method. Then for any tiling composed of tiles of $\mathcal{T}$ (respecting Ammann bar rules), its $i$-shadow is $\omega_{i}(p_{i})$-periodic for all $i\in\{0,1,2,3\}$.
For more clarity, the diagram in Figure 10 summarizes the notations and the “traveling” between spaces that we use.
We can then use the lines to show that a shadow is periodic (in one direction) and determine its prime period: starting from a vertex of the shadow, we follow the line in the chosen direction until we hit another vertex, for each valid configuration of the tiles.
Figure 7 illustrates the difference between two valid projections, one being fine but not the other, on the slope of Cyrenaic tilings which we present in the next subsection. With the fine projection, projected subperiods have the same directions as the sides of the tiles.
Additionally, the lengths of the “integer versions” of subperiods are closely related to the distances between two consecutive Ammann bars in a given direction, as can be seen in Figure 1 (more details are given in Appendix A).
A
The motivation above suggests a learning problem that assumes the quantum state prepared at time step $t=1,2,\dots,T$ is $\rho_{t}$. Due to imperfect calibration, $\rho_{t}$ differs across time steps $t$, and our goal is to learn $\rho_{t}$ in an online fashion. The problem of online learning of a changing concept has received significant attention in the machine learning literature, and we adapt several of these techniques to the quantum setting, which involves high-dimensional spaces over the complex numbers.
Next, we consider a more sophisticated metric, the "adaptive regret," as introduced in [18]. This metric measures the maximum of the regret over all intervals, essentially taking into account a changing comparator. Many extensions and generalizations of the original technique have been presented in works such as [19, 20, 21]. Minimizing adaptive regret requires a combination of expert algorithms and continuous online learning methods, which we also adopt for the quantum setting.
In this section, we consider the case that $\rho_{t}$ may change over time. In particular, we do not assume a bound on the number of times that $\rho_{t}$ changes, but instead consider the total change over time measured by the path length. To this end, we study dynamic regret, where the path length $\mathcal{P}$ of the comparator in nuclear norm is restricted:
Dynamic regret: We consider minimizing regret under the assumption that the comparator $\varphi$ changes slowly:
The first metric we consider is the "dynamic regret" introduced in [17], which measures the difference between the learner's loss and that of a changing comparator. The dynamic regret bounds are usually characterized by how much the optimal comparator changes over time, known as the "path length."
D
Reset control systems are effective in improving the performance of motion systems. To facilitate the practical design of reset systems, this study develops frequency response analysis methods for open-loop and closed-loop reset control systems, by assessing their steady-state responses to sinusoidal inputs. Results show the efficacy of the methods in predicting the precision of two-reset systems on precision motion stages. Moreover, the methods establish connections between open-loop and closed-loop analysis of reset systems. However, the paper primarily develops these analysis tools. Future research can explore practical applications of these frequency response analysis methods in designing reset control systems.
The frequency response analysis for closed-loop reset systems under sinusoidal disturbance and noise follows a similar derivation process as the theories presented in this paper. However, to emphasize and clarify the contribution of this paper, we have chosen not to include analysis for systems with disturbance or noise inputs here. Instead, we plan to address these aspects in our future research discussing disturbances.
The frequency response analysis is currently limited to two-reset systems. In our future research, we aim to develop techniques to identify two-reset systems and analyze multiple-reset systems, thereby expanding the scope of our analysis methods. Furthermore, the newly introduced Two-Reset Control System (T-RCS) in this research is designed to enforce two reset instants per steady-state cycle when the system is subjected to sinusoidal inputs. The T-RCS has shown improved steady-state tracking precision at low frequencies compared to traditional reset systems under sinusoidal reference inputs. The enhanced performance is attributed to the elimination of multiple-reset occurrences achieved by the T-RCS. The practical application of the T-RCS under various types of inputs is worth exploring in future research endeavors.
Reset control systems are effective in improving the performance of motion systems. To facilitate the practical design of reset systems, this study develops frequency response analysis methods for open-loop and closed-loop reset control systems, by assessing their steady-state responses to sinusoidal inputs. Results show the efficacy of the methods in predicting the precision of two-reset systems on precision motion stages. Moreover, the methods establish connections between open-loop and closed-loop analysis of reset systems. However, the paper primarily develops these analysis tools. Future research can explore practical applications of these frequency response analysis methods in designing reset control systems.
The lack of precise frequency response analysis methods for closed-loop reset systems and the disconnect between open-loop and closed-loop analysis in reset systems motivates this research. The objective of this research is to develop new frequency response analysis methods for both open-loop and closed-loop reset control systems with sinusoidal inputs. These methods aim to (1) predict the steady-state performance of the closed-loop reset control system by rectifying inaccuracies present in previous methods, and (2) establish a reliable connection between the frequency-domain analysis of the open-loop and closed-loop reset control systems.
A
We start with recalling the setting of Theorem 5.5. The graph $G$ is a connected $\mathcal{O}_k$-free graph of girth at least 11, and $C$ is a shortest cycle in $G$. The neighborhood of $C$ is denoted by $N$, and the vertex set $V(G)\setminus(C\cup N)$ is denoted by $R$. The subset of $R$ consisting of the vertices adjacent to $N$ is denoted by $S$. Since $C$ is a shortest cycle, of size at least 11, each vertex of $S$ has a unique neighbor in $N$, and a unique vertex at distance 2 in $C$. Moreover $N$ and $S$ are independent sets. In the setting of Theorem 5.5, $R$ is a forest.
We start with proving that the cardinality of $S$ is at least the cycle rank $r(G)$.
The proof of our main structural result, Theorem 1.1, spans from Section 4 to Section 8. After some preliminary results (Section 4), we show in Section 5 that it suffices to prove Theorem 1.1 when the graph $G$ has a simple structure: a cycle $C$, its neighborhood $N$ (an independent set), and the remaining vertices $R$ (inducing a forest). Instead of directly exhibiting a logarithmic-size feedback vertex set, we rather prove that every such graph contains a vertex of degree linear in the so-called "cycle rank" (or first Betti number) of the graph. For sparse $\mathcal{O}_k$-free graphs, the cycle rank is at most linear in the number of vertices and decreases by a constant fraction when deleting a vertex of linear degree. We then derive the desired theorem by induction, using as a base case that if the cycle rank is small, we only need to remove a small number of vertices to obtain a tree. To obtain the existence of a linear-degree vertex in this simplified setting, we argue in Section 6 that we may focus on the case where the forest $G[R]$ contains only paths or only large "well-behaving"
Our goal is to prove that there is a vertex whose degree is linear in the cycle rank $r(G)$.
in which case there is a cycle $C_y$ which is a connected component of $G[R^{\prime}]$,
C
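Since the argument above hinges on the cycle rank (first Betti number), a minimal sketch may help fix ideas; it uses the standard identity $r(G)=|E|-|V|+c$ for a graph with $c$ connected components, and is a generic illustration rather than the paper's code.

```python
# Cycle rank r(G) = |E| - |V| + (number of connected components).
def cycle_rank(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), 0
    for s in vertices:
        if s in seen:
            continue
        components += 1
        stack = [s]
        while stack:            # iterative DFS over one component
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(adj[x])
    return len(edges) - len(vertices) + components

# A 4-cycle plus a pendant vertex: r = 5 - 5 + 1 = 1 (one independent cycle).
print(cycle_rank([1, 2, 3, 4, 5], [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]))
```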
$(I+\gamma\partial g)^{-1}(x)=\operatorname{prox}_{\gamma g}(x),\quad\forall x\in\mathbb{R}^{n},\ \gamma>0.$
The next theorem presents the iteration and operation complexity of Algorithm 2 for finding an $\varepsilon$-residual solution of problem (1) with $\mu=0$, whose proof is deferred to Section 6.
The above discussion leads to the following result regarding Algorithm 2 for finding a pair of $\varepsilon$-KKT solutions of problems (22) and (24).
The above discussion leads to the following result regarding Algorithm 2 for finding an $\varepsilon$-residual solution of problem (37).
The above discussion leads to the following result regarding Algorithm 2 for finding an $\varepsilon$-KKT solution of problem (29).
C
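The resolvent identity above has a simple concrete instance: for $g(x)=\|x\|_1$ the proximal operator is soft-thresholding. The sketch below is a generic illustration of that identity, not the algorithm analyzed in the paper.

```python
# prox_{γ||.||_1}(x) = sign(x) * max(|x| - γ, 0), applied elementwise.
import numpy as np

def prox_l1(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

x = np.array([1.5, -0.2, 0.7])
print(prox_l1(x, gamma=0.5))   # -> [ 1.  -0.   0.2]
```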
$D(\ell(x),\ell(y))$ can be computed given only the sum $\ell^{\prime}(x)\oplus\ell^{\prime}(y)$, as desired.
Let $\mathcal{F}$ be any class of graphs with an adjacency labeling scheme of size $s(n)$. Then
Let $\mathcal{F}$ be a hereditary class with an adjacency labeling scheme of size $s(n)$. Then:
Let $\mathcal{F}$ be any class of graphs with an adjacency labeling scheme of size $s(n)$. Then
Let $\mathcal{F}$ be a hereditary class of graphs that admits an adjacency labeling scheme of size $s(n)$.
D
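As a toy illustration of what an adjacency labeling scheme of size $s(n)$ is (not a construction from the paper): for forests, labeling each vertex with its own identifier and its parent's identifier gives labels of roughly $2\lceil\log_2 n\rceil$ bits, and adjacency can be decoded from two labels alone. All names below are hypothetical.

```python
# Adjacency labeling scheme for forests: label(v) = (id(v), id(parent(v))),
# and two vertices are adjacent iff one is the parent of the other.
def label(v, parent):
    # parent[v] is v's parent id, or v itself for a root.
    return (v, parent[v])

def adjacent(label_u, label_v):
    # The decoder sees only the two labels, never the graph.
    (u, pu), (v, pv) = label_u, label_v
    return pu == v or pv == u

parent = {0: 0, 1: 0, 2: 0, 3: 1}                    # a small rooted tree
print(adjacent(label(1, parent), label(3, parent)))  # True  (1 is 3's parent)
print(adjacent(label(2, parent), label(3, parent)))  # False
```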
This paper proposes an integrated constellation design and transfer model for solving the RCRP. Given a set of target points each associated with a time-varying coverage reward and a time-varying coverage threshold, the problem aims to maximize the total reward obtained during a specified time horizon and to minimize the total cost of satellite transfers. The bi-objective formulation results in a trade-off analysis, potentially a Pareto front analysis if all $\varepsilon$ instances are solved to optimality, in the objective space spanned by the aggregated cost and the total coverage reward. Furthermore, as demonstrated in the illustrative example, the formulation can accommodate different types of orbits, not necessarily restricting orbital slots to be RGT orbits. The use of non-RGT orbits requires a user to specify the time horizon $T$ for which the formulation is valid.
The contributions of this paper are as follows. We present an integer linear program (ILP) formulation of the design-transfer problem, referred to as the Regional Constellation Reconfiguration Problem (RCRP). This formulation incorporates both constellation design and constellation transfer aspects, which are typically considered independently and serially in current state-of-the-art techniques. The RCRP utilizes the maximal covering location problem formulation found in facility location problems for constellation design, and the assignment problem for constellation transfer, both of which are ILPs. By integrating these two problems, a larger design space is explored and operators are provided with a trade-off analysis between transportation cost and coverage performance. The proposed model supports various mission concepts of operations that arise in regional coverage missions. The presented RCRP formulation enables the use of mixed-integer linear programming (MILP) methods, such as the branch-and-bound algorithm, to obtain globally-optimal reconfiguration solutions. However, this approach becomes intractable for moderately-sized instances. To address this challenge, a Lagrangian relaxation-based solution method is proposed for large-scale optimization. This method relaxes a set of constraints to reveal and exploit the special substructure of the problem, making it easier to solve. The results of the computational experiments demonstrate the near-optimality of the Lagrangian heuristic solutions, compared to solutions obtained by a commercial solver, with significantly faster runtime.
The ILP formulation of RCRP-ARC enables users to utilize commercial software packages for convenient handling and obtaining tolerance-optimal solutions. However, for large-scale real-world instances, the problem suffers from the explosion of a combinatorial solution space. To overcome this challenge and to produce high-quality feasible primal solutions, we developed a Lagrangian relaxation-based heuristic method that combines the subgradient method with the 1-exchange neighborhood local search, exploiting the special substructure of the problem. The computational experiments in Section 5 demonstrate the effectiveness of the proposed method, particularly for large-scale instances, producing near-optimal solutions with significantly reduced computational runtime compared to the reference solver.
We conduct computational experiments to evaluate the performance of the proposed Lagrangian relaxation-based solution method. In particular, we focus on analyzing the solution quality and the computational efficiency of the Lagrangian heuristic in comparison to the results obtained by a mixed-integer programming (MIP) solver. We first perform the design of experiments in Section 5.1 and then compare the results obtained by the Lagrangian heuristic and a commercial software package in Section 5.2. The primary computational experiments are performed using RCRP-ARC for RGT orbits. In Section 5.3, we provide an illustrative example to demonstrate the versatility of the proposed framework by extending it to a more general case of non-RGT orbits and RCRP-IRC.
The RCRP formulation combines the constellation transfer problem with the AP formulation and the constellation design problem with the MCP formulation. The former exhibits a special structure—the integrality property—that enables an efficient solution approach. The latter, however, is a combinatorial optimization problem, making the use of exact methods, such as the branch-and-bound algorithm, computationally expensive. In light of this observation, we develop a solution method in Section 4 that capitalizes on the characteristics of the RCRP formulation.
B
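The Lagrangian relaxation idea described above can be sketched generically. The loop below shows the standard subgradient update of the multipliers together with a primal repair step; the callables relaxed_problem and primal_heuristic are hypothetical placeholders, and the sign of the multiplier update depends on how the relaxed constraints are oriented (the actual RCRP relaxation is defined in the paper's Section 4).

```python
# Generic Lagrangian relaxation with a subgradient multiplier update.
import numpy as np

def lagrangian_subgradient(relaxed_problem, primal_heuristic, num_constraints,
                           iters=100, step0=2.0):
    lam = np.zeros(num_constraints)          # Lagrange multipliers
    best_dual, best_primal = -np.inf, np.inf
    for k in range(iters):
        # Solve the relaxed problem for fixed multipliers: returns a dual
        # bound, a subgradient of the dual function, and the relaxed solution.
        dual_value, subgrad, x_relaxed = relaxed_problem(lam)
        best_dual = max(best_dual, dual_value)
        # Repair the relaxed solution into a feasible one (e.g., a 1-exchange
        # local search) to obtain a primal bound.
        best_primal = min(best_primal, primal_heuristic(x_relaxed))
        if np.dot(subgrad, subgrad) == 0:
            break
        step = step0 / (k + 1)               # diminishing step size
        lam = np.maximum(0.0, lam + step * subgrad)
    return best_dual, best_primal
```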
Let $\rho:G_{S,R}\to\mathrm{GL}_{n}(\mathbb{C})$ be the representation where each $\rho(g)$ is the permutation matrix corresponding to the action of $\varphi(g)$ by left-multiplication on $G$.
representation $\rho:G\rightarrow\mathrm{GL}_{n}(\mathbb{C})$ for some $n\in\mathbb{N}$ with $\rho(s)\neq\rho(e)$.
there is a matrix representation $\rho$ of $G$ such that $\rho(s)\neq\rho(e)$.
be a representation such that $\rho(s)\neq\rho(e)$.
By assumption we have $\rho(s)\neq\rho(e)$.
D
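The permutation representation mentioned above (each group element acting by left-multiplication) can be made concrete with a small sketch; the example group Z/4Z and the helper names are illustrative only.

```python
# Left-regular permutation representation: g -> permutation matrix of h |-> g*h.
import numpy as np

def left_regular_matrix(g, elements, mult):
    # elements: list of group elements; mult: the group operation.
    index = {h: i for i, h in enumerate(elements)}
    n = len(elements)
    M = np.zeros((n, n), dtype=int)
    for j, h in enumerate(elements):
        M[index[mult(g, h)], j] = 1       # column j has a 1 in row g*h
    return M

# Example: Z/4Z with addition mod 4; rho(1) is a 4-cycle permutation matrix,
# so rho(1) != rho(0) = I and the representation separates 1 from the identity.
Z4 = [0, 1, 2, 3]
add = lambda a, b: (a + b) % 4
print(left_regular_matrix(1, Z4, add))
```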
CWP represents a novel framework related to Lyapunov function learning, that can be used to develop model-free controllers for general dynamical systems.
Subsequently, we propose D-learning for performing CWP in the absence of knowledge regarding the system dynamics.
• We propose D-learning, which parallels Q-learning [11] in RL, to obtain both the Lyapunov function and its derivative (see Fig. 6). Unlike existing Lyapunov function learning methods that rely on controlled models or their approximation with neural networks [12], [13], the system dynamics are encoded in the so-called D-function depending on the actions. This allows CWP to be performed without any knowledge of the system dynamics.
(c) Principal Component Analysis (PCA) projection of the Lyapunov function (17) learned for the system (15), overlaid with the trajectories of the system controlled by the D-learning controller and the DDPG controller, which shows that the D-learning controller has better stability guarantees than the DDPG controller.
Moreover, the feature function, Lyapunov function, and controller, all in the form of neural networks, can be learned jointly to achieve superior performance. These will be explored in the future work.
B
We evaluated the performance of the trained and fine-tuned RCN-Hull model on the test partition of the UCV dataset. Since the ground truth convex hull matrices contain many zeroes, classification accuracy is not a suitable performance indicator. Hence, we report the prediction performance using the precision, recall and F1 score, which are standard classification metrics that provide a more meaningful assessment of a model’s performance in the presence of an imbalanced class distribution. The average precision, recall and F1 score using the proposed two-step transfer learning approach were 91.6%, 77.8% and 84.1%, respectively as given in Table II. We also report the corresponding metrics obtained without transfer learning in Table II. It may be seen that both the precision and recall metrics, and consequently the F1 scores, were lower when the model was individually trained using either the I-CV or the UCV dataset, which validates the efficacy of the transfer learning strategy in improving the model’s performance. Also, the metrics for the feature based ML method [11] given in the first row of Table II, show that it is outperformed by RCN-Hull in terms of all three metrics, not only using the transfer learning approach, but also when trained exclusively on either dataset. Table II also reports the 95% confidence intervals (CIs) of each metric, computed using 1000 bootstrapped samples of the data.
Table III lists BD-rates of the RQ curves generated using the RCN-Hull model predictions, with the optimal ground truth convex hulls of the test sequences used as the reference. The PCHIP [56] interpolation method, which has been widely employed to compute BD-rates in codec standardization efforts due to its relative stability [58], was used as the interpolation method to compute the BD-rates reported here. Even though the lengths of the video shots used during training were fixed at 300 frames, the test sequences were of different lengths, as specified by the second column of Table III. The attained performance on shorter video chunks having fewer than 100 frames is presented in Appendix B. Since the convex hull is the optimal RQ curve, the negative BD-rates in Table III might seem counterintuitive. However, in those cases where the number of predicted points in $\hat{\mathcal{C}}$ is fewer than the number of points on the ground truth convex hull $\mathcal{C}$, the computed BD-rate can be negative, due to the logarithmic transformation applied to the bitrates for BD-rate calculation, as demonstrated in Appendix C. Thus, it is desirable to have the BD-rate magnitudes close to 0, signifying small deviations from the optimal convex hull. Since the prediction errors produced both positive and negative BD-rates, in this instance the average of the signed BD-rates is not a reliable performance indicator. For this reason, we instead use the more indicative dispersion of BD-rates around 0. Table III shows that the BD-rate magnitudes obtained via RCN-Hull are small for most of the sequences, with only 3 test sequences having a BD-rate magnitude greater than 1%. The average BD-rate obtained by using RCN-Hull to predict the convex hulls was 0.26%, and the mean absolute deviation (MAD) was 0.57%, when VMAF was used to estimate the video qualities for convex hull computation. The VMAF values were restricted to the range [21, 99] when computing the BD-rates reported here. VMAF values below 21 correspond to encodes of low visual quality which are typically not suitable for streaming applications, and thus we excluded them from the performance evaluation. Also, since VMAF values saturate above 99, streaming multiple encodes having VMAF values greater than 99 adds little practical benefit from a perceptual quality standpoint, which justifies our choice of the upper boundary of the VMAF range. The same range of VMAF values was also used to compute the complexity reduction values reported in the next section.
$\sum_{p\in\mathcal{P}}\sum_{q\in\mathcal{Q}}\hat{\mathcal{C}}_{pq}$, which implies a significant reduction of the complexity of bitrate ladder construction. Columns 5-8 of Table III report the reduction in the time required to construct the convex hulls of the test sequences obtained using RCN-Hull, and the reduction in the number of encodes required for this purpose, respectively. These values were computed taking into account even the points removed in the postprocessing step of Section IV-D, since every predicted point in $\hat{\mathcal{C}}$ was encoded before the convexity correction step, regardless of its inclusion in the final bitrate ladder. The average reduction in the number of encodes was 62.0%, while the average time savings was 53.8%, as reported in the bottom row of the table for the prediction of convex hulls derived using VMAF. Of the 50 candidate encodes for the VMAF-based convex hulls, only 18.15 belonged to the ground truth convex hulls on average, while the average number of encodes required per video shot was reduced from 50 to 18.60 using RCN-Hull.
The overall performances of the four compared methods in terms of average BD-rates, average BD-rate magnitudes and time savings, along with their 95% bootstrap CIs, are summarized in Table IV. Table IV also reports the mean absolute deviations (MAD) and the standard deviations (SD) of BD-rates obtained for the four compared methods. The predictions produced by the RCN-Hull model approximate the optimal convex hull more closely and reliably than those delivered by the proxy based and feature based approaches, as implied by the lower average BD-rate magnitudes and lower BD-rate dispersion values achieved in terms of both MAD and SD. The time savings achieved by RCN-Hull, P-Hull and F-Hull were similar, significantly outperforming the interpolation based approach, as shown in the first row of Table IV. This suggests that our approach is able to obtain both accurate and fast approximations of the optimal convex hulls.
We compared the performance of RCN-Hull against that of I-Hull, P-Hull [21] and F-Hull [11]. The distribution of the BD-rates of each compared model on the UCV test set is plotted in Fig. 7a. The box plots (the box boundaries represent the lower and upper quartiles of the corresponding data, while the whiskers extend to their minimum and maximum values) in Fig. 7a show that the BD-rate distributions obtained by the interpolation based method and RCN-Hull are close to the zero BD-rate mark, and also have low dispersions. The BD-rate distributions corresponding to the feature based method described in [11] and the proxy based method of [21] deviate significantly from zero and have higher dispersions.
A
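Because the discussion above revolves around the convex hull of rate-quality points, a minimal sketch of how such a hull can be extracted from candidate encodes may be useful; it is a generic upper-envelope computation, not the RCN-Hull pipeline, and the sample (bitrate, quality) values are made up.

```python
# Upper convex envelope of (bitrate, quality) points; encodes below the
# envelope are discarded when building a bitrate ladder.
def rq_convex_hull(points):
    pts = sorted(set(points))                    # ascending bitrate
    hull = []
    for p in pts:
        # Pop points that would make the envelope non-concave (i.e., that
        # lie below the segment formed by their neighbours).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            x3, y3 = p
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

encodes = [(500, 60), (1000, 75), (1500, 78), (2000, 88), (3000, 92)]
print(rq_convex_hull(encodes))   # (1500, 78) lies below the envelope
```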
In this section, we first introduce experimental settings and implementation details for evaluation.
Below we elaborate on each module used in GraphMLP and provide its detailed implementations.
We start our ablation studies by exploring the GraphMLP on different hyper-parameters.
The large-scale ablation studies with 2D detected inputs on the Human3.6M dataset are conducted to investigate the effectiveness of our model (using the single-frame model).
We also conduct detailed ablation studies on the importance of designs in our proposed approach.
D
9:     $S\leftarrow S\cup T_{1}$, $R\leftarrow R\cup T_{2}$
of Chen et al. (2021), which has adaptivity of $O(\log(n/k))$.
a linear-time algorithm of Kuhnle (2021); Chen et al. (2021), and showing that our adaptation
The highly adaptive linear-time algorithm (Alg. 3) outlined in Chen et al. (2021)
This algorithm is an instantiation of the ParallelGreedyBoost framework of Chen et al. (2021), and it relies heavily on
D
$\tilde{O}\left(n^{\frac{6}{7}}d^{\frac{4}{7}}\right)$
In this section, we consider the case that $f(x)$ is convex and $L$-smooth. We provide convergence results for AClipped-dpSGD with high-probability guarantees. We will show that AClipped-dpSGD is faster than DP-GD and DP-SGD in terms of running time to achieve the excess population risk bounds.
In this section, we introduce our gradient estimator based on the AClip strategy, which privately estimates the mean of a heavy-tailed distribution with a high probability guarantee. Before presenting our result, we first discuss the bias of some simple clipped methods.
In this section, we provide the necessary background for our analyses, including differential privacy and
In this section, we provide our main Algorithm 1, AClipped-dpSGD, and establish its convergence results (i.e., excess population risk bounds) under (strongly) convex and (non)-smooth objectives.
C
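The clipping-based private gradient estimation discussed above can be illustrated with a generic clipped-and-noised mini-batch gradient in the spirit of DP-SGD; the exact AClip estimator and its bias control are defined in the paper and not reproduced here. All parameter names below are illustrative.

```python
# Generic clipped, noised mini-batch gradient step (DP-SGD style).
import numpy as np

def clipped_private_gradient(per_sample_grads, clip_c, noise_sigma, rng):
    # per_sample_grads: array of shape (batch, dim) of individual gradients.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_c / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale            # clip each sample to norm <= C
    noise = rng.normal(0.0, noise_sigma * clip_c, size=per_sample_grads.shape[1])
    # Average of clipped gradients plus calibrated Gaussian noise.
    return clipped.mean(axis=0) + noise / per_sample_grads.shape[0]

rng = np.random.default_rng(0)
g = rng.normal(size=(32, 10))                     # toy batch of gradients
print(clipped_private_gradient(g, clip_c=1.0, noise_sigma=1.0, rng=rng))
```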
SC user association: Optimal UA under SC for mmWave networks is widely studied in the literature [13].
Existing studies design new algorithms for UA, for example by using a Markov decision process [14] to minimize the number of handover decisions and to maximize throughput. However, a key shortcoming of these works is that they neglect the directionality gain of beams, even though this simplification might significantly influence the performance of a UA scheme.
Directional beams in mmWave channels help overcome low signal quality due to high path loss. Several studies investigate how to steer, manage and align these beams to achieve the highest throughput or best efficiency [29, 30, 31]. All of these works show that operation with smaller beams leads to higher throughput. However, smaller and more beams may also result in higher overhead and complexity, as all these beams have to be managed. Furthermore, smaller beams may not perform well when there is some uncertainty in the users’ positions. As directional beams cause different gain patterns around a BS, UA under beamforming becomes a difficult problem that can be solved in different ways. In [32], the mmWave UA problem is formulated as a matching and auction game, respectively, and in [18, 27, 25, 33] the authors use machine learning to solve the UA problem. Due to the complexity of the problem, most of these models focus on a small setting with only a few users, while mmWave is used for scenarios with high user density. Moreover, most of these works are based on SC, as MC brings even more complexity and overhead. Our work differs from earlier works as we consider a more complete mmWave setting with both beamforming and MC.
Based on the aforementioned challenges, our goal is to design a computationally-efficient UA scheme that maximizes network throughput while meeting the users’ minimum rate requirements by exploiting a dynamic form of MC. To the best of our knowledge, our study is the first dynamic MC scheme that takes advantage of both beamforming and MC and only uses local information for the user’s decision to connect to a BS or not.
Three heuristics for UA are studied in [14]: (1) connect to the least-loaded BS, (2) connect to the highest instantaneous rate, and (3) connect to the highest SNR. In this work, the authors showed that out of these three heuristics, the SNR-based approach performed best in terms of spectral efficiency. However, some of these UA schemes require knowledge of the entire network. Contrary to most of these works that only verify their results on small instances, we aim to design a UA scheme that is scalable and performs well also in larger settings. Lastly, a framework for UA is proposed in [16] based on minimizing the transmission time and the power budget of a BS, by using a clustering algorithm to cluster users in the same beam. However, this research does not take into account the possibility for users to have MC. Several heuristics considering MC are proposed that optimize energy efficiency [10, 11] or throughput [35]. However, unlike our study, these devised heuristics overlook beam directionality.
A
At time t3, they decide whether to buy one of the two phones, or to leave the website empty-handed. This decision is the output of a rational process of utility maximization that takes into account both the features and the prices of the two phones. The novelty of our framework is that we make explicit the difference between what the user knows (the awareness set) and the entire catalog.
Performance-optimized Recommendation has a long history, with many initial works related to the problems of click-through rate (CTR) optimization for online advertising and search ranking. There are two categories of methods. The first relies on the label or reward given by the user, the second directly learns an order on the items.
Moving to specialized approaches for conversion modelling and the use of price in recommendation,
Most reward-optimized recommendation systems measure an abstract form of user utility and not an actual monetary value. This situation likely stems from the preponderance of clicks as immediate reward feedback in real-world systems. But as the field and the industry mature, we need consistent and rigorous approaches for conversion-optimized recommendation systems. The apparent similarity of conversions and clicks from the point of the merchant/advertiser is misleading, and the current click-optimization approaches are conceptually lacking when it comes to conversion modelling.
Our claim is that in the presence of sales data, recommendation algorithms can use the price information to directly optimize the welfare of the whole system (which is made of the advertisers and the users), instead of maximizing the probability of an action (click or conversion for instance).
C
In the present work, we investigate the interference degree of structured hypergraphs. The problem of computing the interference degree of a hypergraph is shown to be NP-hard, and the interference degree of certain structured hypergraphs is determined. We also investigate which hypergraphs are realizable, i.e. which hypergraphs arise as the interference model of a wireless network. For various values of the path loss exponent, we determine the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable.
In the rest of this section, we prove some results that are used in the proofs in subsequent sections. The reader may skip the rest of this section now and return to it later.
The rest of this paper is organized as follows. Section 2 describes the system model; formal definitions of the unit disk graph model and hypergraph model are given, a distributed maximal scheduling algorithm from the literature is recalled, and it is explained that its worst-case performance is characterized by the interference degree of the hypergraph. Section 3 discusses related work. In Section 4, we show that the problem of computing the interference degree of a hypergraph is NP-hard and determine the interference degree of some structured hypergraphs. In Section 5 we prove some preliminary results, which are used in Section 6, Section 7, Section 8, and Section 9 to prove that certain hypergraphs are realizable or nonrealizable when the path loss exponent $\gamma=4,3,2$ and $1$, respectively. In Section 10, these results are extended to nonintegral values of the path loss exponent $\gamma$. In Section 11 some results concerning the hypergraphs of line networks are obtained. Section 12 contains concluding remarks. To facilitate understanding, we provide in Table 1 a list of all symbols used along with their corresponding meanings and the section where they are first defined.
In the special case the hypergraph $(V,\mathcal{E})$ is $2$-uniform, its interference degree is equal to the interference degree of the graph $(V,\mathcal{E})$ [18, p. 2955]. Thus, the interference degree of the $2$-uniform hypergraph $K_{1,r}$ is $r$. In light of these results, when investigating performance guarantees of the distributed maximal scheduling algorithm for hypergraph interference models, a question that arises naturally is: what is the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable? This question is investigated in subsequent sections for various values of the path loss exponent $\gamma$. In this section, we first show that hypergraph models can be viewed as a generalization of the unit disk graph model (cf. Lemma 10). We will then prove some preliminary results that hold for any value of the path loss exponent $\gamma$ and that will be used in the proofs in subsequent sections. In the rest of this section, the path loss exponent $\gamma$ can be assumed to be any value greater than or equal to $1$.
In the present section we investigate certain properties of a hypergraph invariant - the interference degree. We show that the problem of computing the interference degree of a hypergraph is NP-hard, and we prove some basic properties of this hypergraph invariant and compute the interference degree of certain structured hypergraphs. These results extend or generalize the previous results in the literature on graph models to hypergraph models.
B
$V_{i+1}(q,\nu)=\begin{cases}\min_{\delta}\inf_{(q,\nu)\xrightarrow{t,\delta}(q^{\prime},\nu^{\prime})}\big(\mathsf{wt}(\delta)+t\,\mathsf{wt}(q)+V_{i}(q^{\prime},\nu^{\prime})\big)&\text{if }q\in Q_{\mathsf{Min}}\\ \max_{\delta}\sup_{(q,\nu)\xrightarrow{t,\delta}(q^{\prime},\nu^{\prime})}\big(\mathsf{wt}(\delta)+t\,\mathsf{wt}(q)+V_{i}(q^{\prime},\nu^{\prime})\big)&\text{if }q\in Q_{\mathsf{Max}}\end{cases}$
a given transition $\delta$ in $\pi$ (or $\rho$). More generally, for
For a fixed valuation $\nu$, and once the transition $\delta$ in the minimum or maximum has been chosen,
path $\pi$ in $\mathcal{G}$ from an initial valuation $\nu$ of the clock:
and valuation $\nu$, there exists $t\in\mathbb{R}_{\geq 0}$
B
Motivated by the mathematically intricate CO problem, a multi-task learning based analog beam selection (namely MTL-ABS) framework is developed to solve the beam selection problem in a low-complexity way for the RIS-enabled THz MU-MIMO systems. The MTL technique is a promising paradigm in machine learning communities and aims to exploit the useful information in multiple related tasks to improve the generalization performance of all the tasks with low complexity [30, 31, 32]. With more training data, MTL is able to learn more robust and general representations for multiple tasks, obtaining better performance and a lower overfitting risk for each task. Benefiting from the advantages of MTL, the major goal of this paper is to formulate the beam selection problem with inter-related tasks as an MTL problem. The main contributions can be summarized as follows.
We first formulate a codebook-based beam selection problem for the RIS-enabled THz MU-MIMO system, where the subarray architecture is employed at the BS and RIS, respectively. In light of this system model, we derive a novel sum-rate metric to measure the beam selection performance.
In addition, both active MIMO and passive RIS possess an extremely large number of array elements at THz band, but current research works tend to optimize the phase shifts of RIS elements one by one during the signal processing stage, which leads to high latency and heavy computational complexity for RIS-aided THz communication systems [22]. Hence, it is more practical for massive MIMO and RIS techniques to adopt the subarray architecture, in which each radio frequency (RF) chain only connects to a subset of antennas [23]. Given the subarray structure, selecting analog beams for the transceiver and RIS is an efficient way to guarantee high sum-rate performance [24]. It should be noted that conventional MIMO systems only carry out the beam selection procedure at the transceiver sides [25, 26]. By contrast, the RIS-aided THz multi-user MIMO (MU-MIMO) system requires analog beam selection at the base station (BS), the RIS and the users, respectively. In other words, there are three different kinds of beam selection targets that need to be considered simultaneously. To cope with this complex combinatorial optimization (CO) problem, heuristic beam search algorithms are time-consuming and computationally infeasible in real-time communications.
With regard to the system model and channel model mentioned above, the sum-rate of the RIS-aided THz MU-MIMO system can be formulated as
In this section, we introduce the RIS-enabled THz MU-MIMO system model and channel model. Based on these assumptions, the beam selection problem is formulated.
A
$[\widehat{\bm{\Gamma}}(1)]_{j,l}=\frac{1}{T}\sum_{t=1}^{T}\langle\widehat{\bm{F}}_{j,t}^{-1}-id\,,\,\widehat{\bm{F}}_{l,t-1}^{-1}-id\rangle_{Leb}.$
For the estimator $\widetilde{\bm{A}}$ to be well defined, strictly speaking, we need to assume that $\widetilde{\bm{\Gamma}}(0)$ is nonsingular, as in the case of classical least squares estimators.
Under the conditions of Lemma 4.1, and assuming $\widehat{\bm{\Gamma}}(0)$ is nonsingular, where we recall
As before, we assume that $\widehat{\bm{\Gamma}}(0)$ is invertible.
$\widetilde{\bm{A}}=\widetilde{\bm{\Gamma}}(1)\left[\widetilde{\bm{\Gamma}}(0)\right]^{-1},$
C
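The estimator discussed above, $\widetilde{\bm{A}}=\widetilde{\bm{\Gamma}}(1)[\widetilde{\bm{\Gamma}}(0)]^{-1}$ built from lag-0 and lag-1 covariances, has a direct finite-dimensional analogue. The sketch below illustrates that analogue on a simulated vector AR(1) process; it is not the paper's functional (quantile-based) implementation.

```python
# AR(1) coefficient estimate: A_hat = Gamma(1) @ Gamma(0)^{-1}.
import numpy as np

def ar1_estimate(X):
    # X: array of shape (T, d), rows are (centred) observations over time.
    X0, X1 = X[:-1], X[1:]
    Gamma0 = X0.T @ X0 / X0.shape[0]              # lag-0 covariance
    Gamma1 = X1.T @ X0 / X0.shape[0]              # lag-1 cross-covariance
    return Gamma1 @ np.linalg.inv(Gamma0)         # requires Gamma(0) nonsingular

rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1], [0.0, 0.3]])
X = np.zeros((2000, 2))
for t in range(1, 2000):
    X[t] = A_true @ X[t - 1] + rng.normal(size=2)
print(ar1_estimate(X))                            # close to A_true
```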
In the pre-training stage, we use the same strategy as in EP [32] for the base training method and train the SSL models (SemCo [24], FlexMatch [50] and MarginMatch [39]) using the default hyperparameter settings in the official codes.
In the episodic finetuning stage, for miniImageNet and tieredImageNet, 5 classes are randomly sampled per episode, where in each class we select 5 and 15 instances for the support and query set, respectively.
Most FSL methods use a meta-learning manner (episodic sampling) to generate many few-shot tasks for training. Specifically, to generate an $M$-way $K$-shot task, we randomly select $M$ classes from the base set and then randomly choose $N_k+N_q$ samples from each selected class, where $MN_k$ and $MN_q$ samples are used to build the support set $\mathcal{S}$ and query set $\mathcal{Q}$, respectively.
To investigate the impact of the class-prior selection strategy, i.e., first picking some classes and then selecting several samples from these classes, we retrain two representative approaches, SPN [31] and M-PL [17], using the original and our new setting with different label numbers. Then, we evaluate them by using the semi-supervised inference method with 15 queries and 100 unlabeled samples on miniImageNet and tieredImageNet. As shown in Table IV, the accuracy of SPN-N drops sharply, while M-PL-N shows a slight reduction. This indicates that we cannot correctly infer the performance in the new setting based on the results obtained in the original setting. Therefore, we need a new setting to reevaluate the methods.
To investigate the effect of meta-learning on SSL models, we finetune SemCo and FlexMatch with our PLML, and evaluate them on the testing set of base classes in miniImageNet and tieredImageNet. As shown in Fig. 4, our approaches achieve superior performance over the baselines in most evaluation tasks of the three datasets. Specifically, the highest gains are up to 3.04% in miniImageNet (SemCo-trans with 20 labels) and 2.83% in tieredImageNet (FlexMatch-trans with 200 labels) over state-of-the-art SSL algorithms. This indicates the great potential of meta-learning for SSL, since our approach is an initial and simple attempt to utilize meta-learning for SSL. More techniques can be introduced to combine the two topics.
A
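A minimal sketch of the episodic M-way K-shot sampling described above (dataset layout and names are hypothetical):

```python
# Build one M-way K-shot episode with N_k support and N_q query samples per class.
import random

def sample_episode(data_by_class, M, N_k, N_q, rng=random):
    # data_by_class: dict mapping class label -> list of samples.
    classes = rng.sample(sorted(data_by_class), M)
    support, query = [], []
    for c in classes:
        chosen = rng.sample(data_by_class[c], N_k + N_q)
        support += [(x, c) for x in chosen[:N_k]]   # M*N_k support samples
        query   += [(x, c) for x in chosen[N_k:]]   # M*N_q query samples
    return support, query

toy = {c: list(range(c * 100, c * 100 + 30)) for c in range(10)}
S, Q = sample_episode(toy, M=5, N_k=5, N_q=15)
print(len(S), len(Q))                               # 25 75
```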
For initial (base model) training, we use a multilayer perceptron (MLP) model architecture for each feature set for comparison. The MLP hyperparameters are tuned using 5-fold cross-validation on the training set where the search space is a grid of combinations of: hidden layer size 64, 128, or 512 (2 hidden layers chosen independently); dropout rate 0.25 or 0.5; and training duration 100 or 500 epochs. All models use a mean squared error (MSE) loss with a batch size of 256 (or the size of the dataset, if smaller), rectified linear unit (ReLU) activations, and Adam optimization with a learning rate of 0.001. Data is also scaled to zero mean and unit variance independently for each feature. For the Half-life and VDss datasets, we scale the targets in the training set by taking the natural logarithm before learning — the outputs are exponentiated before measuring performance metrics. Note, the base model for the $z$-baseline is effectively equivalent to the target attribute estimator used in the PerturbLearn step to make predictions for generated samples.
Table 2 shows the results of experiments using the FreeSolv dataset. Each row shows mean absolute error (MAE) results (mean $\pm$ standard deviation ($\sigma$)) averaged over 10 runs of fine-tuning the MLP base model (learned from the “train” split) with a GP regressor using either the PerturbLearn Markov blanket (also learned from the “train” split) or various baselines, while each column shows a different number of fine-tuning samples, $n$, from the “test” split. The $n=0$ column shows the performance of the base model, without fine-tuning, on the “test” split. The best performing model on the “validation” split with $n=7$ is used to choose PerturbLearn hyperparameters.
Table 3: Spearman correlation ($\rho\pm\sigma$; higher is better) of different predictors (with input dimension “size”) on the Half-life benchmark with varying numbers of test samples included in fine-tuning ($n$). Best results in bold; second-best is underlined. Note: since the dummy regressor output is constant, the Spearman correlation is undefined for that row.
Table 2: Mean absolute error (MAE $\pm\sigma$; lower is better) of different predictors (with input dimension “size”) on the FreeSolv benchmark with varying numbers of test samples included in fine-tuning ($n$). Best results in bold; second-best is underlined.
Table 4: Fine-tuning results for TDC datasets at $n=25$. FreeSolv, Caco-2, and PPBR use MAE ($\pm\sigma$) while Half-life, VDss, and the Clearance datasets (Hepato. and Micro.) use Spearman correlation ($\pm\sigma$) as the performance metric. Best result in bold; next-best is underlined.
C
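For concreteness, a hedged PyTorch sketch of an MLP base model matching the description above (two hidden layers, ReLU, dropout, MSE loss, batch size 256, Adam with learning rate 0.001); the cross-validation grid, feature scaling, and log-target handling described in the text are omitted, and the dimensions are placeholders.

```python
# Minimal MLP regressor and one training step, per the described setup.
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden1=128, hidden2=128, dropout=0.25):
    return nn.Sequential(
        nn.Linear(in_dim, hidden1), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden1, hidden2), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden2, 1),
    )

model = make_mlp(in_dim=200)                       # placeholder input dimension
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 200)                          # one toy batch of 256 samples
y = torch.randn(256, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```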
Matching
$0.006^{**}$
$0.615^{**}$
$0.006^{***}$
We collected donation transactions between 1:00 AM on Feb. 26, 2022 and 6:00 PM on Mar. 3, 2022, from the public wallets of Ukraine to focus on the airdrop. We calculated the USD value of donation contributions using the historical prices of Bitcoin and Ethereum based on the daily opening prices. There were 14,903 donation transactions on the Bitcoin blockchain and 69,709 donation transactions on the Ethereum blockchain during this observation window. In total, $9,874,757 was raised from the Bitcoin blockchain, and $16,043,036 was raised from the Ethereum blockchain. We excluded extreme donations above the 99.9th percentile ($7727.13). We sum up donations from the same wallet if they are transacted within the same minute. We further removed transactions between 2:00 AM on Mar. 1, 2022 and 2:00 AM on Mar. 2 because this period is associated with a potential airdrop. We are left with 67,615 donation transactions.
B
We introduce a novel diffusion mechanism for machine learning models called FedDif, which aims to reduce weight divergence caused by non-IID data. In this mechanism, local models accumulate the personalized data distributions from different users, achieving a similar effect to training on IID data.
It can be easily seen that the diffusion efficiency maximization problem in (16) is a combinatorial optimization problem. It is difficult to obtain a solution directly because the set of feasible solutions is discrete. Therefore, based on auction theory, we design a diffusion strategy to find a feasible solution that simultaneously minimizes the IID distance and required spectral resources. In the following section, we provide a theoretical basis for feasible solutions that can solve the weight-divergence problem and minimize IID distance.
We design the diffusion strategy based on auction theory to balance the enhancement of learning performance with the reduction of communication costs. We formulate an optimization problem to find the trade-off, and the auction provides a feasible solution based on the proposed winner selection algorithm.
Although the diffusion mechanism can mitigate the effects of non-IID data, excessive diffusion can substantially increase the total training time and deteriorate the performance of communication systems. In other words, there is a trade-off between improving learning performance and reducing communication costs. Immoderate diffusion can deteriorate network performance because users can over-occupy the bandwidth required to send their models. Conversely, passive diffusion requires more time-domain resources to achieve the targeted performance. Therefore, an efficient scheduling method should be designed for communication-efficient diffusion by considering these trade-offs. We first construct an optimization problem between IID distance and communication cost to find the optimal scheduling policy. Then, we provide a theoretical analysis of FedDif by demonstrating that FedDif can mitigate weight divergence. Our analysis provides a guideline for optimization, in which the scheduling policy should assign the next users who can minimize the IID distance of the models. Finally, we propose a diffusion strategy to find a feasible solution to the optimization problem based on auction theory.
There is a trade-off between the communication cost of diffusion and the learning performance of the global model. For example, FedDif requires more communication resources than typical FL in the short term. However, in the long term, the entire number of iterations required to obtain the required performance of the global model may decrease because the optimization trajectory of the global model after diffusion can become much closer to the optimal trajectory than the typical FL. FedDif aims to coordinate the trade-off by determining the optimal diffusion chain whose PUEs maximize the variance of IID distance and minimize the communication cost.
B
held by owner; held by third party; wallet with a hardware root of trust that enforces rules on behalf of an authority or issuer
online services that store digital assets but for which the owner is responsible for their management and any transactions
held by owner; held by third party; wallet with a hardware root of trust that enforces rules on behalf of an authority or issuer
online services that store digital assets but for which the owner is responsible for their management and any transactions
online services that store digital assets and conduct transactions on behalf of the owner
D
Let us consider the (static) network whose topology at round $t$ is the complete graph $G_t=K_n$, i.e., each process receives messages from all other processes at every round. We can prove by induction that all nodes of the history tree other than the root have exactly one child. This is because any two processes with the same input always receive equal multisets of messages, and are therefore always indistinguishable. Thus, the history tree is completely determined by the multiset $\mu_\lambda$ of all processes’ inputs; moreover, a process’ view at any given round only depends on the process’ own input and on $\mu_\lambda$. By the fundamental theorem of history trees (Theorem 3.1), this is enough to conclude that if a process’ output stabilizes, that output must be a function of the process’ own input and of $\mu_\lambda$, which is the defining condition of a multiset-based function.
The following result justifies the assumption made in Sections 4.4 and 5 that processes have a-priori knowledge of the number of leaders $\ell$ in the system.
The following result justifies the assumptions made in Section 4.3 that processes have knowledge of an upper bound on $n$ or on the dynamic diameter $d$ of the network.
Knowledge of the processes. Our algorithms assume that the processes have a-priori knowledge about certain properties of the network only when the absence of such knowledge would render the Average Consensus or Counting problems unsolvable.
The stabilizing algorithms for both functions give the correct output within $2\tau n$ communication rounds regardless of the number of leaders, and do not require any knowledge of the dynamic disconnectivity $\tau$ or the number of processes $n$. Our terminating algorithm for leaderless networks runs in $\tau(n+N)$ communication rounds with knowledge of $\tau$ and an upper bound $N\geq n$; the terminating algorithm for $\ell\geq 1$ leaders runs in $\tau(\ell^{2}+\ell+1)n$ communication rounds with no knowledge of $n$ (note that the case where all processes are leaders is not equivalent to the case with no leaders, because processes do not have the information that $\ell=n$, and have to “discover” that there are no non-leader processes in the network). The latter running time is reasonable (i.e., linear) in most applications, as $\ell$ is typically a constant or negligible compared to $n$.
A
Since $L=O(\log(n/\varepsilon))$ under our assumptions $\log(C)=O(\log n)$ and $\log\log(1/\eta)=O(\log n)$, we obtain the sample complexity upper bound from the theorem.
For $k=2$ the sample complexity is obtained in the same way, using Lemma 4.10 instead.
We first consider general $k\geq 2$ in this subsection, and then give an improved sample complexity bound for $k=2$ in the next subsection.
For $k=2$, the sample complexity of the identity testing algorithm is
If $k=2$, i.e., we have a binary domain $\mathcal{K}=\{0,1\}$, then the sample complexity for the KL tester is better.
A
In this framework, we can define the problem $\mathtt{partialWordsNonUniv}$,
Considering an instance of the $3$-CNF-Sat problem,
By this construction, we obtain that the instance of the $3$-CNF-Sat problem is satisfiable if and only if there exists a word $v$ of length $L$ which is not compatible with any of the words $w_{i}$. Therefore, $\mathtt{partialWordsNonUniv}$ is NP-hard (and the completeness now follows trivially). Moreover, as $L$, the length of the partial words in the instance of our problem, equals the number of variables in the $3$-CNF-Sat instance, the conditional lower bound holds as well.
The first part of the following result was shown in [68] via a reduction from $3$-CNF-Sat, and it can be complemented by a conditional lower bound.
Ultimately, we have a reduction from $3$-CNF-Sat to $\mathtt{partialWordsNonUniv}$ (from Theorem 4.1)
C
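To make the problem behind the reduction concrete, here is a brute-force (exponential-time) checker for partialWordsNonUniv: given partial words of length L with a wildcard symbol, it decides whether some total word is compatible with none of them. It is only for intuition about the definition, not part of the reduction.

```python
# Brute-force decision of partialWordsNonUniv ('?' is the wildcard symbol).
from itertools import product

def compatible(v, w):
    # v matches w when every non-wildcard position of w agrees with v.
    return all(b == '?' or a == b for a, b in zip(v, w))

def partial_words_non_univ(words, alphabet, L):
    return any(not any(compatible(v, w) for w in words)
               for v in product(alphabet, repeat=L))

# The partial words below only cover words starting with 0, so v = "10"
# witnesses non-universality.
print(partial_words_non_univ(["0?", "00"], "01", 2))   # True
```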
A novel deep learning-based framework is proposed for point cloud geometry inter-frame encoding similar to P-frame encoding in video compression.
We propose a novel deep learning-based inter-frame predictor network that can predict the latent representation of the current frame from the previously reconstructed frame as shown in Fig. 4.
The proposed inter-prediction module employs a specific version of generalized sparse convolution [36] with different input and output coordinates denoted as GSConv to perform motion estimation in the feature domain.
We propose a novel inter-prediction module (predictor network) that learns a feature embedding of the current PC frame from the previous PC frame. The network utilizes hierarchical multiscale feature extraction and employs a generalized sparse convolution (GSConv) with arbitrary input and output coordinates to perform motion compensation in the feature domain by mapping the latent features from the coordinates of the first frame to the coordinates of the second frame. The inter-prediction module is the first deep learning module that successfully enables the effective transfer of features between point cloud frames with different coordinates.
The proposed inter-frame compression framework employs an encoder and decoder network similar to PCGCv2 along with a novel inter-prediction module to predict the feature embedding of the current PC frame from the previous PC frame.
C
$KL[\mathrm{Dir}(\mathbf{p}_{i}|\alpha_{i})\,||\,\mathrm{Dir}(\mathbf{p}_{i}|\mathbf{1})]=\log\left(\frac{\Gamma(\alpha_{i0})}{\Gamma(K)\prod_{k=1}^{K}\Gamma(\alpha_{ik})}\right)+\sum_{k=1}^{K}(\alpha_{ik}-1)\left[\psi(\alpha_{ik})-\psi(\alpha_{i0})\right].$
The weighted KL divergence term provides a regularization to penalize the case in Fig. 1. The overall loss function can be written as:
where pi⁢jsubscript𝑝𝑖𝑗p_{ij}italic_p start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT is the predicted probability for it⁢hsuperscripti𝑡ℎ\textit{i}^{th}i start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT sample of class j𝑗jitalic_j, 𝐲isubscript𝐲𝑖\mathbf{y}_{i}bold_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is a one-hot vector indicating the ground-truth class of observation 𝐱isubscript𝐱𝑖\mathbf{x}_{i}bold_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, if the sample 𝐱isubscript𝐱𝑖\mathbf{x}_{i}bold_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT belongs to kt⁢hsuperscript𝑘𝑡ℎk^{th}italic_k start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT class, yi⁢j=0subscript𝑦𝑖𝑗0y_{ij}=0italic_y start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT = 0 for j≠k𝑗𝑘j\neq kitalic_j ≠ italic_k and yi⁢j=1subscript𝑦𝑖𝑗1y_{ij}=1italic_y start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT = 1 for j=k𝑗𝑘j=kitalic_j = italic_k. ψ⁢(⋅)𝜓⋅\psi(\cdot)italic_ψ ( ⋅ ) is the digamma function. Unfortunately, the loss function in Eq. 6 cannot provide a good optimization process in meta-stage. The case in Fig. 1 shows that the model is confident(low uncertainty) for the inaccurate prediction. To penalize this case, we propose a KL divergence term, which can be denoted as:
For clarity, we provide a toy example under a triplet classification task to illustrate the difference from softmax classifiers. To calibrate the predictive uncertainty, the model is encouraged to learn a sharp simplex for accurate prediction(Fig. 1), and to produce a flat distribution for inaccurate prediction in Fig. 1. The case in Fig. 1 and Fig. 1 are unexpected.
how prior evidence influences posterior evidence. In the case that the pre-trained network provides a good class-agnostic embedding, $\eta$ should be higher, and vice versa. According to Eq. 2, the posterior evidence and the parameters of the Dirichlet distribution can be written as:
A
This is enabled by adding redundancy in the form of check symbols to the data representation.
Hamming codes can be implemented in systematic or non-systematic form, and conversion between the two forms takes only elementary matrix transformations. While the coverage is similar to classical TMR,
criterion directly restricts the type of applicable ECCs. This criterion essentially calls for a homomorphic operation, which guarantees that computation on raw data can always be mapped to computation on check symbols without any ambiguity.
Two types of self-checking circuits exist: Type-I features systematic codes; Type-II, non-systematic codes.
Check symbols in a codeword can be totally isolated from the raw data bits (systematic ECCs); or interleaved (non-systematic ECCs). In the following, we stick to systematic ECCs where data to be protected can be accessed directly, which by construction enables a more modular design, especially useful in the PiM context.
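As a hedged, minimal illustration of a systematic ECC (our own example, not taken from this work), the sketch below encodes 4 data bits into a systematic Hamming(7,4) codeword in which the raw data bits remain directly accessible and the three check symbols are appended separately.

```python
import numpy as np

# Systematic Hamming(7,4): codeword = [d1 d2 d3 d4 | p1 p2 p3].
# Generator matrix G = [I_4 | P] over GF(2).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
G = np.hstack([np.eye(4, dtype=np.uint8), P])

def encode(data_bits: np.ndarray) -> np.ndarray:
    """Encode 4 data bits; the first 4 codeword positions carry the raw data."""
    return (data_bits @ G) % 2

def syndrome(codeword: np.ndarray) -> np.ndarray:
    """Parity-check syndrome; all-zero means no detected single-bit error."""
    H = np.hstack([P.T, np.eye(3, dtype=np.uint8)])  # H = [P^T | I_3]
    return (H @ codeword) % 2

data = np.array([1, 0, 1, 1], dtype=np.uint8)
cw = encode(data)
print("codeword:", cw, "data part:", cw[:4], "check symbols:", cw[4:])
print("clean syndrome:", syndrome(cw))
cw[2] ^= 1  # inject a single-bit error
print("faulty syndrome:", syndrome(cw))  # non-zero syndrome flags the error
```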
C
The proposed method presents $2^k$ GCAS with array size $2^{kr}\times 2^{ks}$ and set size $2^k$, where $k,r,s$ are positive integers. Therefore, the GCAS proposed in (wang2021new12, Th. 7) appears as a special case of our proposed construction.
Section 2 provides useful definitions. Section 3 describes 2D-CCC construction. Section 4 examines the PMEPR of row and column sequences in 2D-CCC arrays and provides generalizations of the proposed 2D-CCC. Section 5 compares the proposed 2D-CCC to the current state-of-the-art. Section 6 concludes.
The proposed construction yields 2D-CCC with array size $M^2\times K^2$ and set size $MK$, where $M,K\geq 2$; therefore, the proposed work generalizes the 2D-CCC given in farkas2003two.
In this section, we derive the PMEPR bound of the 2D-CCC arrays given in Theorem 1.
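As a hedged numerical illustration of PMEPR (our own sketch, not the derivation of this section), the code below estimates the PMEPR of the members of a binary Golay complementary pair from an oversampled OFDM envelope; the classical bound for such sequences is 2.

```python
import numpy as np

def pmepr(seq, oversample=16):
    """Estimate the PMEPR of a sequence from its oversampled OFDM envelope."""
    n = len(seq)
    padded = np.concatenate([seq.astype(complex), np.zeros((oversample - 1) * n)])
    # Time-domain envelope s(t) = sum_k c_k exp(j 2 pi k t / T), sampled densely.
    signal = np.fft.ifft(padded) * len(padded)
    instantaneous_power = np.abs(signal) ** 2
    mean_power = np.sum(np.abs(seq) ** 2)
    return instantaneous_power.max() / mean_power

# A length-8 binary Golay complementary pair (standard recursive construction).
a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])
print(pmepr(a), pmepr(b))  # each is bounded by 2 for a Golay pair
```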
As a special case of the design, we have come up with 2D-GCAS with any array size and flexible set size of the form $\prod_{i=1}^{a}p_{i}\prod_{j=1}^{b}q_{j}$. We have also successfully used the proposed 2D-CCC as precoding matrices in OP-based massive MIMO URA systems, and we have studied the BER performance of the proposed scheme. Moreover, the PMEPR of the row and column sequences has been examined. The 2D-CCC given in farkas2003two, the GCP given in (davis1999peak, Th. 3), the CCC given in chen2008complete, rathinakumar2008complete, and sarkar2021multivariable, and the GCAS given in (pai2022two, Th. 1) appear as special cases of the proposed construction.
A
Suppose that from the following time series $\{y_1, y_2, \ldots, y_n\}$ we want to estimate a function $f:\mathbb{R}^p\rightarrow\mathbb{R}$ to predict the next observation based on its $p$ lags. How can this be achieved, and how can it be used to make $H$-step-ahead point predictions from an abstract regression algorithm $\mathcal{A}$?
The MIMO strategy also involves converting time series data into a supervised learning problem, as shown in (6). However, unlike the recursive strategy, the target variable is a vector rather than a scalar, so $F:\mathbb{R}^p\rightarrow\mathbb{R}^H$ is multi-output. This has the advantage of avoiding error accumulation over a long forecast horizon. To make a forecast for the entire horizon, the model only needs the past $p$ observations, from which it predicts the whole horizon in a single step.
First, we should convert the time series to a supervised learning problem as follows
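A hedged sketch of this conversion, together with the MIMO strategy described above (the window length $p$, horizon $H$, and the random-forest stand-in for the abstract algorithm $\mathcal{A}$ are our own illustrative choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_supervised(y, p, H):
    """Turn a univariate series into (lags, next-H-values) training pairs."""
    X, Y = [], []
    for t in range(p, len(y) - H + 1):
        X.append(y[t - p:t])      # p most recent lags
        Y.append(y[t:t + H])      # next H observations (MIMO target vector)
    return np.array(X), np.array(Y)

# Toy series and a MIMO forecaster F: R^p -> R^H.
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
p, H = 10, 5
X, Y = make_supervised(y, p, H)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
forecast = model.predict(y[-p:].reshape(1, -1))[0]  # whole horizon in one step
print(forecast)
```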
Although this paper solely tackles univariate time series, AEnbMIMOCQR can be easily generalized to cope with multivariate time series and hints are provided to do so. Furthermore, AEnbMIMOCQR can be employed as a replacement for CQR in volatile regression settings, not necessarily time series, and for unsupervised anomaly detection tasks.
We will now establish the key attributes of a high-quality PI, not exclusively tied to time series forecasting but applicable to any regression task. To this end, consider an unseen pair of covariates and target, denoted $(\bm{x}_{n+1}, y_{n+1})$. The foremost and arguably most crucial requirement that a high-quality PI should meet is validity, as outlined in Definition (1): the alignment between the desired and observed nominal coverage. A valid, sometimes also called calibrated, PI ensures the specified nominal coverage, neither more nor less. For instance, for a 90% PI, around 90% of real observations should fall within it. Another requirement is sharpness: a sharp PI is as narrow as possible so as to be informative. Moreover, a frequently overlooked yet essential criterion is the ability to handle heteroscedasticity, which occurs when the error variance varies across the covariate space. A clear understanding of the distinction between marginal and conditional coverage is vital in order to grasp why marginally valid PIs may not be sufficient. Essentially, conditionally valid PIs adapt to the varying error variance in the covariate space, as stipulated in Definition (2). To aid comprehension of these concepts, Figures 2 and 2 highlight the differences between the two types of PIs: while in Figure 2 the PIs adapt to the local uncertainty of the given input, in Figure 2 they fail to do so. Additionally, ensuring conditional coverage implies that miscoverage of the target variable is equally likely across the covariate space. This becomes clearer if we increase $\alpha$ from 0.01 to 0.1, as shown in Figures 5 and 5, where we notice that in Figure 5 the ratio of miscoverage for $x\geq 40$ is much greater than for $x<40$.
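For concreteness, here is a hedged sketch of how a marginally valid PI can be produced with split conformal prediction (the absolute-residual score, the linear model, and the toy heteroscedastic data are our own choices, not the procedure proposed in this work); note how the resulting constant-width interval achieves marginal but not conditional coverage.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 80, size=(2000, 1))
y = 0.5 * X[:, 0] + rng.normal(0, 1 + X[:, 0] / 40)   # heteroscedastic noise

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

model = LinearRegression().fit(X_tr, y_tr)
scores = np.abs(y_cal - model.predict(X_cal))          # conformity scores

alpha = 0.1
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

x_new = np.array([[60.0]])
pred = model.predict(x_new)[0]
print(f"90% PI: [{pred - q:.2f}, {pred + q:.2f}]")
# Marginal coverage holds on average, but the interval width is constant,
# so it cannot adapt to the larger noise for x >= 40 (no conditional coverage).
```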
B
The same construction as in [OS24, Example 4.1] shows that the dissimilarity $\widehat{d_B}$ on $\mathscr{R}^2$-barcodes does not satisfy the triangle inequality, even when restricted to signed barcodes arising as the rank exact decomposition of a fp $\mathscr{R}^2$-persistence module.
Although the example deals with the Betti signed barcode, that is, the signed barcode associated to the usual exact structure on fp $\mathscr{R}^2$-persistence modules, it applies without any changes to the rank exact decomposition, since the (usual) minimal resolutions of the modules considered in [OS24, Example 4.1] coincide with the minimal rank projective resolutions of the modules.
The prototypical example comes from the usual exact structure on the category of fp $\mathscr{P}$-persistence modules, which has as exact sequences the usual exact sequences.
The notion of minimality of the minimal rank decomposition by rectangles—which requires the positive and negative barcodes to be disjoint—is replaced by the requirement that the signed barcode comes from a minimal projective resolution in the so-called rank exact structure, also described below.
The minimal rank projective resolution of Fig. 1(a) is used to define the rank exact decomposition of the $\mathscr{R}^2$-persistence module $M$, which, in this case, is an interval module.
A
However, the test inputs and the mocks produced by these techniques are either synthetic or manually written by developers per their assumptions of how the system behaves.
These approaches do not guarantee that the generated mocks reflect realistic behaviors as observed in production contexts.
Second, for projects that already contain automated tests, rick can contribute with unit tests that reflect realistic behavior, as observed in production.
Contrary to these approaches, rick monitors applications in production in order to generate mocks. Consequently, the generated tests reflect the behavior of an application with respect to actual user interactions.
This may result in incomplete or unfaithful program states within the generated tests, which do not reflect the ones observed in production.
A
Modeling heterogeneity, which is the focus of this paper, has gained interest [16, 17]. Heterogeneity comes in many forms, e.g., differences in roles [18], robotic capabilities and/or sensors [19], dynamics [20], and even teams of air and ground robots [21]. Heterogeneity has been defined [22] for systems in the finite and discrete setting [23, 24], but algorithmic methods that can tackle large-scale heterogeneity have been difficult to build.
We use a mathematical model (Section III) of the human adaptive immune system first proposed in [1] to understand how a defending team can optimally allocate its resources to minimize the harm incurred from a heterogeneous team of attackers. We focus our analysis on two situations in Section IV: (i) when no single type of defender can defend against every type of attacker, and (ii) when defenders have limited resources that they should devote optimally to tackle attackers.
We used a mathematical model to understand what an optimal defender team composition should be. The key property of this model is cross-reactivity, which enables defender agents of a given type to recognize attackers of a few different types. This allows the defender distribution to be supported on a discrete set, even if the shape-space, i.e., the number of different types of agents, is very large. Cross-reactivity is also fundamentally responsible for the defender team being able to estimate an unknown and evolving attacker distribution. This model points to a number of guiding principles for the design of multi-agent systems across many different scales and problem settings: the immune system, whose population-level dynamics of attackers and defenders were used to formulate the model; the experimental settings where we evaluated the model with a finite number of agents; and competition dynamics that allow effective decentralized control policies and reinforcement learning of such controllers.
Task assignment with heterogeneous agents [25] is another similar problem to ours, but a desired trait distribution is necessitated by the objective instead of calculating it explicitly. In comparison, the present paper uses a simple formulation to understand what distribution is best and how to allocate heterogeneous agents optimally in a multi-agent interaction problem.
In the context of the above literature, the place of the present paper is to study a theoretical model where large heterogeneous multi-agent interaction problems can be analyzed precisely.
C
[Diagram: relations between the classes Happy, Simple, Proper, Strict, and Non-strict under reachability-preserving transformations (⪯/⪰), via the semaphore, dilation, and saturation constructions, as established by Lemmas 2-5, Corollary 2, and Corollary 6; Open Problems 1 and 2 mark the unresolved comparisons.]
Does “non-strict” ⪯precedes-or-equals\preceq⪯ “simple & strict”? In other words, is there a reachability-preserving transformation from the former to the latter?
Does “simple & non-strict” ⪯precedes-or-equals\preceq⪯ “simple & strict”? In other words, is there a reachability-preserving transformation from the former to the latter?
Finally, since there is a reachability-preserving transformation from “non-strict” to “strict” (the saturation technique), and some reachability graphs from “simple & strict” are unrealizable in “non-strict” (by Lemma 3), we also have
Similarly, combining the fact that “simple & non-strict” is strictly contained in “non-strict” (by Corollary 2) with the existence of a reachability-preserving (in fact, support-preserving) transformation from “non-strict” to “proper”, we also have that
A
Nado et al. (2020) propose using batch statistics during inference from the target domain instead of the training statistics acquired from the source domain.
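A hedged sketch of this idea (our own minimal illustration rather than Nado et al.'s exact recipe): keep only the BatchNorm layers in training mode at test time, so normalization uses the statistics of the incoming target-domain batch instead of the running statistics accumulated on the source domain.

```python
import torch
import torch.nn as nn

def use_target_batch_stats(model: nn.Module) -> nn.Module:
    """Switch only the BatchNorm layers to training mode so that, at inference,
    they normalize with the statistics of the current (target-domain) batch."""
    model.eval()  # everything else (dropout, etc.) stays in eval mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()
    return model

# Toy segmentation-style model and a batch from the target domain.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
                      nn.Conv2d(8, 2, 1))
target_batch = torch.randn(16, 3, 64, 64)
with torch.no_grad():
    logits = use_target_batch_stats(model)(target_batch)
print(logits.shape)  # torch.Size([16, 2, 64, 64])
```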
In this work, we study the generalization problem of semantic segmentation from synthetic data (Richter et al., 2016; Ros et al., 2016) through the lens of adaptation.
To our surprise, we found that no previous work on domain generalization for semantic segmentation has yet fulfilled all of these principles.
A more comprehensive approach by (Yue et al., 2019) considered a number of target domains.
Our comprehensive empirical study complements these results by demonstrating improved generalization of semantic segmentation models.
D
Then, by Claim 18, we can obtain two disjoint independent sets $V_1$ and $V_2$ in $G$ with $|V_1|+|V_2|>2\varepsilon n$. Hence, we can find an independent set of size at least $\varepsilon n$, which contradicts Theorem 16.
If no ties are allowed at all in the preference orders (i.e., the preferences are strict), then we obtain the standard stable marriage problem, where all stable matchings have the same size by the so-called “rural hospitals theorem” [39, 17], and the Gale–Shapley algorithm finds one efficiently. In fact, we can obtain a stable matching of a max-smti or max-smpo instance by modifying each agent’s preferences to an arbitrary total order that is consistent with the agent’s partial order, and running the Gale–Shapley algorithm with the resulting total orders. However, the size of the stable matching obtained depends on the choice of total orders, and it can be as small as half the optimum.
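A hedged sketch of that procedure: a textbook proposer-optimal Gale–Shapley on strict preference lists, assuming ties have already been broken into total orders (the data structures and the toy instance are our own).

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Men propose; women tentatively accept their best proposer so far.
    Preferences are strict total orders given as lists of partner names."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}          # index of next woman to propose to
    engaged_to = {}                                  # woman -> man
    free_men = deque(men_prefs)
    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:    # w prefers the new proposer
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free_men.append(m)
    return {m: w for w, m in engaged_to.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # {'a': 'x', 'b': 'y'} is stable
```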
The aim of this paper is to bring together two directions in which the problem has been extended. One is the design of approximation algorithms for finding a maximum stable matching when ties are allowed in the preference lists. The other is the generalization of the stable marriage problem to matroid intersection, in particular, the matroid kernel problem introduced and solved by Fleiner [14] using an abstract version of the Gale–Shapley algorithm.
The last author was supported by JST PRESTO Grant Number JPMJPR212B and JST ERATO Grant Number JPMJER2301, and the joint project of Kyoto University and Toyota Motor Corporation, titled “Advanced Mathematical Science for Mobility Society”.
Some of our results were obtained at the Emléktábla Workshop in Gárdony, July 2022. We would like to thank Tamás Fleiner, Zsuzsanna Jankó, and Ildikó Schlotter for the fruitful discussions. We thank the anonymous reviewers of the previous versions for their helpful feedback. The work was supported by the Lendület Programme of the Hungarian Academy of Sciences – grant number LP2021-1/2021, by the Hungarian National Research, Development and Innovation Office – NKFIH, financed under the ELTE TKP2021-NKTA-62 funding scheme and grant K143858. The first author was supported by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation fund, financed under the KDP-2023 funding scheme (grant number C2258525).
D
In this section, the paper proposes two approaches to transform the original non-convex problem into two equivalent convex forms, adapting to the two different offloading scenarios, i.e., pure BAC-NOMA offloading and hybrid BAC-NOMA offloading. If the BDs are able to finish offloading within $t_0$, the pure BAC-NOMA scheme is feasible and the second time duration $t_a=0$. Otherwise, the hybrid BAC-NOMA scheme is adopted.
where $\mu$ is an auxiliary variable determined by the proposed iterative algorithm in a later section.
Therefore, an algorithm is proposed to iteratively update the power allocation solution and the iterative variable $\mu$. The procedure is summarized in Algorithm 1. As $l$ increases, the Dinkelbach iterative variable $\mu$ converges to the $\varepsilon$-optimal $\mu^*$, which is reached once $F(\mu^*)\leq\varepsilon$. The convergence of the Dinkelbach-based algorithm was proved in [9], and simulation results also show that the proposed algorithm converges within a few iterations, as presented in the next section.
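As a hedged illustration of the Dinkelbach-style outer loop (a generic fractional-programming toy with a closed-form inner step; it is not the paper's Algorithm 1 and omits the power-allocation subproblem):

```python
def dinkelbach(numerator, denominator, solve_inner, mu0=0.0, eps=1e-6, max_iter=50):
    """Maximize numerator(x)/denominator(x) by iterating on
    F(mu) = max_x numerator(x) - mu * denominator(x) until F(mu) <= eps."""
    mu = mu0
    for _ in range(max_iter):
        x = solve_inner(mu)                      # inner problem for the current mu
        F = numerator(x) - mu * denominator(x)
        if F <= eps:
            return x, mu
        mu = numerator(x) / denominator(x)       # Dinkelbach update
    return x, mu

# Toy example: maximize (2x + 1) / (x + 2) over x in [0, 4].
num = lambda x: 2 * x + 1
den = lambda x: x + 2
# Inner maximizer of num(x) - mu*den(x): linear in x, so it sits at an endpoint.
solve_inner = lambda mu: 4.0 if (2 - mu) >= 0 else 0.0
x_opt, mu_opt = dinkelbach(num, den, solve_inner)
print(x_opt, mu_opt)  # x = 4, ratio = 9/6 = 1.5
```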
Based on the above two schemes, an iterative algorithm is proposed to obtain the resource allocation efficiently.
Different from the existing works, which mainly focus on the EE maximization [6] and sum-rate maximization [5, 8] of BAC-NOMA schemes, this paper considers the delay minimization problem of a hybrid BAC-NOMA assisted MEC offloading scenario. In particular, the signal of the downlink transmission can excite the circuits of the BDs to upload tasks to the MEC server. Due to FD interference, those BDs may not be able to finish offloading while the downlink transmission is active. Therefore, a second time duration is scheduled to continue the offloading using the NOMA uplink transmission. By applying the Dinkelbach method and the quadratic transformation, an iterative algorithm is proposed to obtain a sub-optimal solution to the original non-convex problem. Compared with the benchmarks, simulation results demonstrate that the proposed protocol can significantly reduce the offloading latency.
C
The Tor network is based on onion routing, a communication technique developed in the 1990s to ensure both private and anonymous communication. Messages are first encrypted several times before being sent across nodes (onion routers) in the network, which successively decrypt and pass along the encrypted message object. Anonymity is preserved in this process since every node knows only the previous and successive nodes and the state of the received message at any time, while encryption ensures that privacy is preserved (although the system is still vulnerable to other types of attacks). Tor was developed in the early 2000s and improved upon the limitations identified in early onion routing and the first tools that implemented it [1].
In 2008, the Tor browser was deployed to enable easier and more widespread use of the Tor network. Tor has since been noted for its impact on societal and sociopolitical causes: as a key communication tool during the Arab Spring uprisings, for instance, or to actively evade censors like China’s Great Firewall. Improvements in Tor have been made in anticipation and as a consequence of its use in these specific scenarios [7]. Thus, Tor provides an interesting landscape to examine developments over time: from its origins in academic cryptographic research to its current position as the most widely known and usable tool to protect privacy and anonymity as much as possible.
Tor is widely used as a medium to circumvent censorship. Websites that contain propaganda, pornographic content, social media, etc., are the kinds of sites typically censored when their content is not approved by the authorities in a country. Over the years, governments have become aware of Tor and the methodologies it uses for censorship circumvention, so such countries often attempt to restrict users from accessing censored content via Tor. Countries like China have been known to restrict access to the Tor network and have kept up with the technological advances the Tor project introduces to make Tor more accessible where it is censored. The Tor project introduced pluggable transports in the Tor network to counter such censorship attempts. However, China has managed to detect and block many of the pluggable transport techniques over the years. Figure 7 shows the trend in the number of bridge users that use different pluggable transports in China to access the Tor network. As seen in the graph, use of the ⟨OR⟩ and obfs4 pluggable transport mechanisms has declined over the years (with sudden spikes at some points), because China has found ways to suppress these mechanisms. However, the meek pluggable transport is still widely used, because meek obfuscates Tor traffic in TLS, which makes it much harder to detect. Also, the meek-supported bridges are hosted by AWS and Azure in China; blocking the meek pluggable transport would require blocking the AWS bridges, which would consequently result in the blockage of the entire AWS and Azure IP range, and many Chinese companies rely heavily on AWS and Azure to host their cloud-based infrastructure.
From its origins as a research project and early use mostly by the technical community, Tor has evolved into being a usable tool with significant societal impact. Recent events in the past month alone highlight the continued relevance of Tor to such studies. It was revealed this past month that a threat actor was running thousands of Tor relays in an attempt to deanonymize parts of the network. Also this past month, Russia attempted to prevent the use of the network by blocking access to the torproject website and directing ISPs to block proxy access[17][18]. Measuring Tor, therefore, poses interesting questions and challenges outside of the solely technical.
Tor is a circuit-based low-latency anonymous communication service [1] [2]. Over the years, the usage of Tor has increased, and it is being used in a variety of scenarios, both good and bad (legal and illegal). Current trends show that Tor, owing to its relatively low latency, is well suited for circumventing censorship and as a medium for hiding illegal online activity [3]. However, there are many cases where users use Tor strictly to protect their private data as it is transmitted over the internet. It is used as a privacy-enhancing technology by privacy advocates, journalists, activists, and law enforcement, as well as for censorship circumvention. Studying usage patterns in Tor is critical to get a realistic and precise overview of the Tor infrastructure in terms of its privacy and anonymity guarantees, and it paves the way for understanding the future updates and changes that Tor will undergo.
A
We defined Prevent by benefitting from the lessons learned with PreMiSe [43], EmBeD [47, 46], and Loud [42], three representative techniques to predict and localize failures.
The supervised PreMiSe approach, which we developed as a joint project with industrial partners, gives us important insights about the strong limitations of supervised approaches in many industrially relevant domains.
In this paper we discuss advantages and limitations of the approaches reported in the literature, and propose Prevent, a novel approach that overcomes the main limitations of current approaches.
PreMiSe indicates that supervised approaches can indeed precisely and timely predict failures, localize faults and identify the fault types. It also highlights the strong limitations of training systems with seeded faults in production, as supervised approaches require.
RQ2 focuses on the advantages and limitations of the unsupervised Prevent approach with respect to state-of-the-art (supervised) approaches. Unsupervised approaches do not require training with data collected during failing executions, thus they can be used in the many industrially relevant cases where it is not possible to seed failures during operations, as required for supervised approaches.
A
Stakeholders may have different value preferences, or their values may conflict with norms (Jakesch et al., 2022).
Floridi (2018a) proposes that ethical evaluation can be understood in terms of hard ethics and soft ethics.
However, there may be cases where ambiguities arise that hard ethics cannot provide an answer for.
Soft ethics examines what ought to be done over and above existing norms, such as in cases where competing values and interests need to be balanced, or existing regulations provide no guidance (Floridi, 2018b).
What are existing gaps in ethics research in AI and computer science, specifically in relation to operationalising principles in reasoning capacities?
C
$\binom{d+2}{2}>\deg([D+\mathcal{A}]_{+}).$
common denominator for all the elements of $\mathcal{L}(D)$.
In other words, common denominators $H$ of $\mathcal{L}(D)$ do exist in
If a common denominator $H$ is known for $\mathcal{L}(D)$, then $G_1/H,\ldots,G_{\ell}/H$ is a $\mathbb{K}$-basis of $\mathcal{L}(D)$ whenever
dimensions of $\mathcal{L}(D)$ and $\mathcal{L}(\bar{D})$ can be estimated
B
Fig. 2(A) shows the profiles of $k(x)$ and $f(x)$ considered. In Fig. 2(B), we show the profiles of typical eigenmodes $\bm{\phi}^h_i$ ($\phi_i(x)$; $i\in\{1,5,10\}$) of the solution $u(x)$ and the corresponding loading vectors ($\mathbf{A}^h\bm{\phi}^h_i/\|\mathbf{A}^h\bm{\phi}^h_i\|$), where mode 1 refers to the lowest frequency and mode 10 to a relatively high frequency. In Fig. 2(C), we present the numerical results for three setups: (M1) Jacobi solver; (M2) Jacobi solver initialized with a single DeepONet inference; (M3) HINTS-Jacobi (1/25).
Mode 1 has the lowest spatial frequency, while mode 10 has a relatively high spatial frequency. (C) Numerical results. We consider three setups, each shown in one row: (M1) Jacobi solver only; (M2) Jacobi solver with DeepONet initializer, i.e., one-time usage of DeepONet followed by Jacobi iterations; (M3) HINTS-Jacobi (with a DeepONet-to-Jacobi ratio of 1:14). The second column shows key snapshots of the iterative solution. The third column shows the histories of the norms of the residual and error of the approximate solution, with the snapshots in the second column marked correspondingly. The fourth column shows the history of the norm of error in eigenmodes 1, 5, and 10.
The remaining results in the first rows of Figs. 4A-B, similar to the 1D cases, show the histories of the norms of the residual, the error, and the mode-wise error. The second rows of Figs. 4A-B show the error at three key snapshots during the iterations, together with the true/converged solution. For the case of the 3D Helmholtz equation, we display the cross section at $z=0.5$. In both cases, DeepONet iterations significantly reduce the errors of low-frequency modes, hence accelerating the convergence of the solution.
The third column shows the histories of the norms of residual and error of the approximate solution, with the snapshots in the second column marked correspondingly. The fourth column shows the history of the norm of error for eigenmodes 1, 5, and 10.
We show key snapshots of the approximate solution and the norms of residual and error of the approximate solution, where the reference solution is obtained using a direct solver.
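A hedged sketch of this hybrid iteration pattern on a 1D Poisson problem (our own stand-in: the learned DeepONet correction is mimicked by solving the residual equation only on a few low-frequency eigenmodes, which is the regime such surrogates handle well):

```python
import numpy as np

# 1D Poisson: A u = f with Dirichlet BCs, discretized on n interior points.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

def jacobi_step(u, A, f, omega=0.8):
    """One damped-Jacobi sweep."""
    D = np.diag(A)
    return u + omega * (f - A @ u) / D

def surrogate_correction(u, A, f, n_modes=5):
    """Placeholder for the learned correction: remove the error only along the
    few lowest-frequency eigenmodes of A, mimicking a smooth-error surrogate."""
    vals, vecs = np.linalg.eigh(A)          # ascending eigenvalues
    r = f - A @ u
    low = vecs[:, :n_modes]
    return u + low @ ((low.T @ r) / vals[:n_modes])

u = np.zeros(n)
ratio = 25          # apply the surrogate once every `ratio` iterations
for it in range(1, 201):
    if it % ratio == 0:
        u = surrogate_correction(u, A, f)
    else:
        u = jacobi_step(u, A, f)
print("final residual norm:", np.linalg.norm(f - A @ u))
```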
D
The aforementioned theoretical result necessitates that all weights undergo changes across $\mathbf{u}$, as constrained by assumption (v) in Theorem 1. However, in practical applications, this assumption may not hold true. Consequently, two fundamental questions naturally arise: Is this assumption necessary for identifiability in the absence of any supplementary assumptions? Alternatively, can we obtain partial identifiability results if only some of the weights change across $\mathbf{u}$? In fact, when only part of the weights change, we can still provide partial identifiability results, as outlined below.
Suppose latent causal variables $\mathbf{z}$ and the observed variable $\mathbf{x}$ follow the generative models defined in Eq. (1)-Eq. (3). Under the condition that the assumptions (i)-(iv) in
Suppose latent causal variables $\mathbf{z}$ and the observed variable $\mathbf{x}$ follow the generative models defined in Eq. (1)-Eq. (3),
Intuitively, variant causal influences among latent causal variables cannot be ‘absorbed’ by an invariant nonlinear mapping from $\mathbf{z}$ to $\mathbf{x}$; this breaks the transitivity and results in identifiable causal representations. Specifically, we explore latent causal generative models where the observed data $\mathbf{x}$ is generated by the latent causal variables $\mathbf{z}$, allowing for any potential graph structure among $\mathbf{z}$.
As discussed in Section 3.2, the key factor that impedes identifiable causal representations is the transitivity in the latent space. Note that the transitivity arises because the causal influences among the latent causal variables may be ‘absorbed’ by the nonlinear mapping from the latent variables $\mathbf{z}$ to the observed variable $\mathbf{x}$. To address this issue, motivated by identifying causal structures with the change of causal influences in the observed space (Ghassami et al., 2018; Huang et al., 2019, 2020), we allow causal influences among latent causal variables to be modulated by an additionally observed variable $\mathbf{u}$, represented by the red edge in Figure 4.
A
One may wonder which of the two postulates is responsible for conditional entropy becoming negative in the quantum world. Interestingly, we identify that it is the extensivity postulate that does so. To arrive at this conclusion, we provide an example of a non-negative measure of conditional entropy that satisfies the monotonicity postulate of conditional entropy (i.e., the one related to the second law) and exhibits weak additivity for tensor-product states. This measure, essentially a regularized version of the “measured” conditional entropy, does not exhibit full additivity under arbitrary tensor-product states. This distinction is important, as it highlights that fully quantum additivity under tensor-product states commands negativity. See Appendix H for further details.
As additional findings, we show that all plausible quantum conditional entropies cannot be smaller than the quantum conditional min-entropy, thus justifying once and for all the name of the latter quantity as the smallest plausible quantum conditional entropy. We also establish a logical equivalence between the non-negativity of all quantum conditional entropies and the well known reduction criterion [24] from entanglement theory. As a corollary, it follows that all separable states have non-negative quantum conditional entropy.
The following statement is a direct consequence of Theorem 1, indicating that every plausible conditional entropy being non-negative is equivalent to the reduction criterion [24], well known in entanglement theory. Thus, Corollary 1 provides a direct link between every conditional entropy (including the conditional min-entropy) and the reduction criterion.
Although all plausible conditional entropies are non-negative on all separable states, this does not mean that the conditional entropies are only non-negative on separable states. In fact, some entangled states have non-negative conditional entropy on all choices of conditional entropy functions. In light of this, Corollary 1 is helpful because it provides a simple sufficient criterion for a bipartite state to have non-negative quantum conditional entropy.
This quantity was previously given the name conditional min-entropy because it is known to be the least among all Rényi conditional entropies [15, 17, 18]. As part of our main result, we strengthen this observation by proving that all plausible quantum conditional entropies are not smaller than the conditional min-entropy; that is, the conditional min-entropy is the smallest plausible conditional entropy.
A
Applying non-parametric Chi-square test and parametric Gaussian mixture model approaches, [19] proposes mobile network outage prediction from logs.
A decentralized online clustering algorithm for anomaly detection from resource usage logs of supercomputer clusters has been explored in [21]. Disk failure prediction in data centers using different ML techniques, such as online random forests [22] and autoencoders [23], has also been proposed. A predictive learning mechanism to detect latent errors and localize faults in micro-service applications has been explored in [24]. Rare failure prediction in aircraft using autoencoders and bidirectional gated RNNs to mine failure logs has been explored in [25]. An FP-Growth algorithm, along with an adaptive sliding-window division method, to mine patterns in logs and predict failures has been proposed in [26].
For high-performance computing (HPC), [9] presents a long short-term memory (LSTM) based recurrent neural network (RNN) making use of log files to predict lead time to failures. An LSTM-based solution for mission-critical information systems analyzing logs has been presented in [10]. [11] presents a mechanism to predict failure sequences in logs in the context of telemetry and automobile sectors using multilayer perceptron, radial basis, and linear kernels. To predict events (leading to failure) in the system, analyzing multiple independent sources of times series data using different ML methods has been investigated in [12]. Several proposals have been put forward to predict vulnerability and security-related events in systems. A detailed survey of this topic has been summarized in [3]. Run time anomalies in applications using logs and ML-based techniques have been proposed in [13]. Failure prediction in network core routers analyzing logs, building a data set, and then applying support vector machines (SVM) has been proposed in [14]. A multimodal anomaly detection applying unsupervised learning using a microphone, thermal camera, and logs in data center storage has been proposed in [15]. A lightweight training-free online error prediction for cloud storage applying tensor decomposition to analyze storage error-event logs has been proposed in [16]. A multi-layer bidirectional LSTM-based method for task failure prediction in a cloud computing environment using log files has been proposed in [17].
System logs contain a wealth of information. Several research proposals have been put forward over decades to mine logs and predict system failures [1]. Detecting anomalies in systems by applying deep learning to logs has been proposed [2]. Detecting security vulnerabilities in systems through log analysis has also been explored extensively [3]. Predicting different types of failures in high-performance computing (HPC) systems has been widely deployed [4]. Typically, these proposals use different kinds of recurrent neural networks (RNNs), e.g., long short-term memory (LSTM), and techniques such as autoencoders, Gaussian mixture models, and support vector machines. All of these are data-intensive techniques in which large amounts of logs are mined to construct data sets, which are then used to predict failures. Unlike the method proposed in this paper, none of the above techniques work without the statistics of the actual data.
An ML mechanism to predict job and task failures in multiple large-scale production systems from system logs has been described in [20].
D
(Chundawat et al., 2022b) proposed error-minimizing noise to generate an approximate version of Drsubscript𝐷𝑟D_{r}italic_D start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT such that the impaired model could be trained to mitigate the performance degradation.
As the Fisher-based method aims to approximate the model without the deleted data, there can be no guarantee that all the influence of the deleted data has been removed. Although injecting noise can help mitigate information leaks, the model’s performance may be affected by the noise (Cong and Mahdavi, 2022a).
et al., 2022). However, one has to rely on a customized learning algorithm that optimizes a perturbed version of the regularized empirical risk, where the added noise is drawn from a standard normal distribution. This normalized noise allows conventional convex optimization techniques to solve the learning problem with perturbation. As a result, the unlearning request can be done by computing the model perturbation towards the regularized empirical risk on the remaining data. The final trick is that this perturbation can be approximated by the influence function (Koh et al., 2017), which is computed by inverting the Hessian on training data and the gradient of the data to be forgotten (Marchant
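A hedged sketch of such an influence-function unlearning step for an $\ell_2$-regularized logistic regression (the model choice, the first-order Newton-style update, and the toy data are our own simplifications, not the cited method's exact procedure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y, lam):
    """Gradient of the regularized logistic loss (mean over the given samples)."""
    p = sigmoid(X @ theta)
    return X.T @ (p - y) / len(y) + lam * theta

def hessian(theta, X, lam):
    p = sigmoid(X @ theta)
    W = p * (1 - p)
    return (X * W[:, None]).T @ X / len(X) + lam * np.eye(X.shape[1])

def unlearn(theta, X_train, y_train, forget_idx, lam=1e-2):
    """Approximate removal of the forget set via a single influence-function
    (Newton-style) step: theta' = theta + H^{-1} (sum of forgotten gradients) / n."""
    H = hessian(theta, X_train, lam)
    Xf, yf = X_train[forget_idx], y_train[forget_idx]
    g = (Xf.T @ (sigmoid(Xf @ theta) - yf)) / len(X_train)  # per-sample loss gradients
    return theta + np.linalg.solve(H, g)

# Toy data; theta is assumed to (approximately) minimize the full training risk.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + 0.1 * rng.normal(size=500) > 0).astype(float)
theta = np.zeros(5)
for _ in range(200):                      # simple gradient descent to (near) optimum
    theta -= 0.5 * grad(theta, X, y, 1e-2)
theta_unlearned = unlearn(theta, X, y, forget_idx=np.arange(10))
print(theta, theta_unlearned)
```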
Unfortunately, the above error max-min approach yields poor unlearning outcomes, as the generated noise is somewhat ad hoc. Hence, inspired by (Micaelli
Tarun et al. (2021) proposed an unlearning method for class removal based on data augmentation. The basic concept is to introduce noise into the model such that the classification error is maximized for the target class(es). The model is updated by training on this noise without the need to access any samples of the target class(es). Since such an impair step may disturb the model weights and degrade classification performance for the remaining classes, a repair step is needed to train the model for one or a few more epochs on the remaining data.
C
We present two proofs of the theorem, both relying on the preliminaries in Section 2. The first proof is given in Section 3.
Algorithm 1 shows the algorithm of [9] for the factorization of a generic motion polynomial $M$. Let us explain the notation there. If $F\in\mathbb{R}[t]$ is an irreducible quadratic factor of the norm polynomial $\nu(M)$, the remainder $R=r_1t+r_0$ when dividing $M$ by $F$ has, in general, a unique right zero $h=-r_1^{-1}r_0$. More precisely, the zero is unique if and only if the leading coefficient $r_1$ is invertible. In this case, we denote it by $\operatorname{czero}(M,F)\coloneqq h$. A necessary and sufficient condition for $r_1$ to be invertible is that $F$ does not divide the primal part $P$ of $M$, i.e., $\operatorname{realgcd}(P,F)=1$. Because $M$ is generic, this condition is always fulfilled and the computation of $h$ in Step 7 of Algorithm 1 will always work. In Step 9, two lists are concatenated and the procedure is called recursively. Non-uniqueness of the factorization comes from the possibility of selecting the quadratic factor $F$ in Step 6.
The necessity of the factorization condition is Proposition 7, and the sufficiency is Proposition 8 there.
It is easy to give examples of “non-generic” motion polynomials that admit infinitely many or no factorization. In [22], the authors showed that for any bounded motion polynomial $M$ (cf. Definition 2 below; the name refers to the bounded trajectories of the underlying rational motion), there exists a real polynomial $S$ such that $MS$ admits a factorization. From the viewpoint of kinematics, this is an attractive property as $M$ and $MS$ describe the same rational motion. A variant of this theme can be found in [20], where the factorizability of $MT$ with a quaternion polynomial $T$ is studied to construct linkages to mechanically “draw” a prescribed curve.
For a generic motion polynomial, the factorization exists and there is a simple algorithm to construct it:
B