licenses: sequence of strings (1–3 entries)
version: string (636 distinct values)
tree_hash: string (length 40)
path: string (length 5–135)
type: string (2 distinct values)
size: string (length 2–8)
text: string (length 25–67.1M)
package_name: string (length 2–41)
repo: string (length 33–86)
[ "MIT" ]
0.4.0
80c3e8639e3353e5d2912fb3a1916b8455e2494b
docs/src/index.md
docs
2311
# DensityInterface.jl

```@meta
DocTestSetup = quote
    struct SomeDensity end
    log_of_d_at(x) = x^2
    x = 4
end
```

```@docs
DensityInterface
```

This package defines an interface for mathematical/statistical densities and objects associated with a density in Julia. The interface comprises the type [`DensityKind`](@ref) and the functions [`logdensityof`](@ref)/[`densityof`](@ref)[^1] and [`logfuncdensity`](@ref)/[`funcdensity`](@ref).

The following methods must be provided to make a type (e.g. `SomeDensity`) compatible with the interface:

```jldoctest a
import DensityInterface

@inline DensityInterface.DensityKind(::SomeDensity) = IsDensity()
DensityInterface.logdensityof(object::SomeDensity, x) = log_of_d_at(x)

object = SomeDensity()
DensityInterface.logdensityof(object, x) isa Real

# output

true
```

`object` may be/represent a density itself (`DensityKind(object) === IsDensity()`) or it may be something that can be said to have a density (`DensityKind(object) === HasDensity()`)[^2]. In statistical inference applications, for example, `object` might be a likelihood, prior or posterior.

DensityInterface automatically provides `logdensityof(object)`, equivalent to `x -> logdensityof(object, x)`. This is a convenient way of passing a (log-)density function to algorithms like optimizers, samplers, etc.:

```jldoctest a
using DensityInterface

object = SomeDensity()
log_f = logdensityof(object)
log_f(x) == logdensityof(object, x)

# output

true
```

```julia
SomeOptimizerPackage.maximize(logdensityof(object), x_init)
```

Conversely, a given log-density function `log_f` can be converted to a DensityInterface-compatible density object using [`logfuncdensity`](@ref):

```julia
object = logfuncdensity(log_f)

DensityKind(object) === IsDensity() && logdensityof(object, x) == log_f(x)

# output

true
```

[^1]: The function names `logdensityof` and `densityof` were chosen to convey that the target object may either *be* a density or something that can be said to *have* a density. They also have less naming-conflict potential than `logdensity` and especially `density` (the latter already being exported by Plots.jl).

[^2]: The package [`Distributions`](https://github.com/JuliaStats/Distributions.jl) supports `DensityInterface` for `Distributions.Distribution`.
DensityInterface
https://github.com/JuliaMath/DensityInterface.jl.git
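To make the interface contract in the documentation above concrete, here is a minimal sketch of a custom type implementing the two required methods. The `NormalDensity` type and its fields are hypothetical and not part of the package:

```julia
# Hypothetical example type; a minimal sketch of the two-method interface above.
using DensityInterface

struct NormalDensity
    mu::Float64
    sigma::Float64
end

DensityInterface.DensityKind(::NormalDensity) = IsDensity()
DensityInterface.logdensityof(d::NormalDensity, x) =
    -0.5*((x - d.mu)/d.sigma)^2 - log(d.sigma) - 0.5*log(2pi)

d = NormalDensity(0.0, 1.0)
log_f = logdensityof(d)            # single-argument form, provided automatically
log_f(0.5) == logdensityof(d, 0.5) # true
```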
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
R_example/R_interface.jl
code
2784
using SpecialFunctions, BesselK, ForwardDiff, StaticArrays

import ForwardDiff: derivative

# Convenience tool that will remove allocs:
@inline pv(scale, range, v) = @SVector [scale, range, v]

# Raw Bessel functions. Note also that there is the `adbesselkxv` method in
# BesselK.jl that gives you (x^v)*besselk(v,x) directly, sometimes with at least
# a slight gain to accuracy and speed. That isn't bound here, but obviously you
# could just slightly tweak this version.
R_besselk(v, x)       = BesselK.adbesselk(v, x)
R_besselk_dx(v, x)    = derivative(_x->R_besselk(v, _x), x)
R_besselk_dv(v, x)    = derivative(_v->R_besselk(_v, x), v)
R_besselk_dx_dx(v, x) = derivative(_x->R_besselk_dx(v, _x), x)
R_besselk_dx_dv(v, x) = derivative(_x->R_besselk_dv(v, _x), x)
R_besselk_dv_dv(v, x) = derivative(_v->R_besselk_dv(_v, x), v)

# Unlike the Julia-native version, this takes a pre-computed distance instead of
# two coordinates x and y, because if we have to take them as straight
# Vector{Float64} items then that will make allocations in the kernel calls and
# totally kill performance. In Julia this is avoided by using StaticArrays, but
# I don't think I can ask R users to deal with that interface. If you care
# enough about performance and need the additional flexibility that that
# provides, you could extend this code...or just switch to Julia more fully.
#
# Note here that the parameter order is:
# (sigma (scale), rho (range), nu (smoothness))
#
# You can of course change all of this up, but then be careful to also update
# the book-keeping stuff in the derivative functions.
#
# TODO (cg 2022/01/02 17:54): what is the most common signature for Julia stuff
# here? As it stands, the parameters can be passed in as a tuple, but should the
# R version take all parameters as scalars to again avoid using straight
# Vector{Float64}?
function matern(dist, params)
  (sg, rho, nu) = params
  iszero(dist) && return sg*sg
  arg = sqrt(2*nu)*dist/rho
  (sg*sg*(2^(1-nu))/gamma(nu))*BesselK.adbesselkxv(nu, arg)
end

# First derivatives:
matern_d1(dist, p) = matern(dist, (sqrt(2*p[1]), p[2], p[3]))
matern_d2(dist, p) = derivative(_p->matern(dist, pv(p[1], _p, p[3])), p[2])
matern_d3(dist, p) = derivative(_p->matern(dist, pv(p[1], p[2], _p)), p[3])

# Second derivatives:
matern_d1_d1(dist, p) = derivative(_p->matern_d1(dist, pv(_p, p[2], p[3])), p[1])
matern_d1_d2(dist, p) = derivative(_p->matern_d1(dist, pv(p[1], _p, p[3])), p[2])
matern_d1_d3(dist, p) = derivative(_p->matern_d1(dist, pv(p[1], p[2], _p)), p[3])
matern_d2_d2(dist, p) = derivative(_p->matern_d2(dist, pv(p[1], _p, p[3])), p[2])
matern_d2_d3(dist, p) = derivative(_p->matern_d2(dist, pv(p[1], p[2], _p)), p[3])
matern_d3_d3(dist, p) = derivative(_p->matern_d3(dist, pv(p[1], p[2], _p)), p[3])
BesselK
https://github.com/cgeoga/BesselK.jl.git
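For orientation, a small Julia usage sketch of the R-facing bindings defined in R_example/R_interface.jl above. The values are arbitrary and it assumes the file has been included:

```julia
# Usage sketch for the R-facing kernel above (arbitrary test values).
params = (1.0, 0.5, 1.25) # (sigma, rho, nu)
dist   = 0.3

k       = matern(dist, params)    # Matern covariance at distance 0.3
dk_drho = matern_d2(dist, params) # derivative w.r.t. the range rho
dk_dnu  = matern_d3(dist, params) # derivative w.r.t. the smoothness nu
```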
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
examples/example.jl
code
300
using StaticArrays, BesselK, ForwardDiff

# As simple as this:
const x_point = @SVector rand(2)
const y_point = @SVector rand(2)
const params  = [1.0, 1.0, 1.0] # scale, range, smoothness.

# All derivatives, including the smoothness!
ForwardDiff.gradient(p->matern(x_point, y_point, p), params)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/efish.jl
code
545
# Conclusion: for e-fish, pretty similar.

function efish_replicates(dfns, parms, pts, lendatav)
  fish = zeros(3,3)
  _S = Symmetric([matern(x, y, parms) for x in pts, y in pts])
  S  = cholesky!(_S)
  for j in 1:3
    dfj = dfns[j]
    Sj  = Symmetric([dfj(x, y, parms) for x in pts, y in pts])
    fish[j,j] = tr(S\(Sj*(S\Sj)))/2
    for k in (j+1):3
      dfk = dfns[k]
      Sk  = Symmetric([dfk(x, y, parms) for x in pts, y in pts])
      fish[j,k] = tr(S\(Sj*(S\Sk)))/2
      fish[k,j] = fish[j,k]
    end
  end
  fish*lendatav
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
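In formula form, `efish_replicates` above computes the standard expected Fisher information for n iid mean-zero Gaussian replicates:

```latex
% Expected Fisher information, with \Sigma_j := \partial_{\theta_j}\Sigma:
\mathcal{I}_{jk}
  = \frac{n}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\Sigma_j\,\Sigma^{-1}\Sigma_k\right)
```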
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/fit.jl
code
2260
# As is discussed in the paper, you can see that the FD derivatives here are so
# inaccurate that Ipopt can't even converge. The FD Hessians are worse than
# garbage and completely unusable without very high-order and adaptive methods,
# which are so brutal to performance that they aren't even worth considering.

using Ipopt, Serialization

include("shared.jl")
include("gradient.jl")
include("ipopt_helpers.jl")

function fitter(objects_initval)
  ((case, gradfun, hesfun, maxiter), ini) = objects_initval
  is_bfgs = in(case, (:FD_BFGS, :AD_BFGS))
  println("Case: $case")
  cache = Vector{Float64}[]
  box_l = is_bfgs ? [1e-2, 1e-2, 0.25] : [0.0, 0.0, 0.25]
  prob  = createProblem(3, box_l, fill(1e22, (3,)), 0,
                        Float64[], Float64[], 0, div(3*4, 2),
                        _p->caching_nll(_p, cache),
                        (args...)->nothing,
                        gradfun,
                        (args...)->nothing,
                        (x,m,r,c,o,l,v)->ipopt_hessian(x,m,r,c,o,l,v, hesfun, Function[], 0))
  addOption(prob, "tol", 1e-5)
  addOption(prob, "max_iter", maxiter)
  if is_bfgs
    addOption(prob, "hessian_approximation", "limited-memory")
    addOption(prob, "nlp_scaling_method", "none")
  end
  prob.x = deepcopy(ini) # for safety to avoid weird persistent pointer games.
  try
    @time status = solveProblem(prob)
    _h = is_bfgs ? fill(NaN, 3, 3) : hesfun(prob.x) # (was `fil`, a typo)
    return (prob, status, case, deepcopy(prob.x), _nll(prob.x), _h, cache, ini)
  catch er
    println("\n\nOptimization failed with error $er\n\n")
    return (prob, :FAIL_OPT_ERR, case, deepcopy(prob.x), NaN, fill(NaN, 3, 3), cache, ini)
  end
end

const cases = ((:FD_FISH, grad_fd!, fishfd,     100),
               (:AD_FISH, grad_ad!, fishad,     100),
               (:FD_HESS, grad_fd!, hessfd,     100),
               (:AD_HESS, grad_ad!, _nllh,      100),
               (:FD_BFGS, grad_fd!, no_hessian, 100),
               (:AD_BFGS, grad_ad!, no_hessian, 100))

const inits = (ones(3), [1.0, 0.1, 2.0])

const test_settings = vec(collect(Iterators.product(cases, inits)))

if !isinteractive()
  const res = map(fitter, test_settings)
  serialize("fit_results.serialized", res)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/gradient.jl
code
504
function gradient_replicates!(gstore, dfns, parms, pts, datav)
  fill!(gstore, zero(eltype(gstore)))
  @assert length(gstore) == length(dfns) "Length of gstore and number of derivative functions don't match."
  _S = Symmetric([matern(x, y, parms) for x in pts, y in pts])
  S  = cholesky(_S)
  for j in eachindex(gstore)
    dfj = dfns[j]
    Sj  = Symmetric([dfj(x, y, parms) for x in pts, y in pts])
    gstore[j] = (tr(S\Sj)*length(datav) - sum(z->dot(z, S\(Sj*(S\z))), datav))/2
  end
  gstore
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
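The loop body in `gradient_replicates!` above is the textbook gradient of the Gaussian negative log-likelihood over iid replicates z_1, ..., z_n:

```latex
% Gradient of the Gaussian NLL, with \Sigma_j := \partial_{\theta_j}\Sigma:
\frac{\partial \ell}{\partial \theta_j}
  = \frac{1}{2}\left( n\,\mathrm{tr}\!\left(\Sigma^{-1}\Sigma_j\right)
  - \sum_{i=1}^{n} z_i^{T}\,\Sigma^{-1}\Sigma_j\Sigma^{-1} z_i \right)
```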
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/hessian.jl
code
992
function hessian_replicates(dfns, d2fns, parms, pts, datav)
  hess = zeros(3,3)
  _S = Symmetric([matern(x, y, parms) for x in pts, y in pts])
  S  = cholesky(_S)
  for j in 1:3
    dfj = dfns[j]
    Sj  = Symmetric([dfj(x, y, parms) for x in pts, y in pts])
    for k in j:3
      # first derivative matrix:
      dfk = dfns[k]
      Sk  = Symmetric([dfk(x, y, parms) for x in pts, y in pts])
      # second derivative matrix:
      dfjk = d2fns[j][k-j+1]
      Sjk  = Symmetric([dfjk(x, y, parms) for x in pts, y in pts])
      # Compute the complicated derivative of the qform (not efficient or
      # thoughtful, so don't use this code for real somewhere):
      dqf  = -(S\(Sk*(S\(Sj*inv(_S)))))
      dqf +=   S\(Sjk*inv(_S))
      dqf -=   S\(Sj*(S\(Sk*inv(_S))))
      # compute the term:
      term  = (tr(S\Sjk) - tr(S\(Sj*(S\Sk))))*length(datav)/2
      term -= sum(z->dot(z, dqf, z), datav)/2
      hess[j,k] = term
      hess[k,j] = term
    end
  end
  hess
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
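Written out, the quantity assembled by `hessian_replicates` above is the Hessian of the Gaussian negative log-likelihood. With Σ_j := ∂Σ/∂θ_j and Σ_jk := ∂²Σ/∂θ_j∂θ_k, and noting that `dqf` in the code is the negative of the second derivative of Σ⁻¹:

```latex
\frac{\partial^2 \ell}{\partial \theta_j \partial \theta_k}
  = \frac{n}{2}\Big(\mathrm{tr}(\Sigma^{-1}\Sigma_{jk})
    - \mathrm{tr}(\Sigma^{-1}\Sigma_j\Sigma^{-1}\Sigma_k)\Big)
  + \frac{1}{2}\sum_{i=1}^{n} z_i^{T}\,
    \partial^2_{\theta_j\theta_k}\!\big(\Sigma^{-1}\big)\, z_i,
\qquad
\partial^2_{\theta_j\theta_k}\!\big(\Sigma^{-1}\big)
  = \Sigma^{-1}\Sigma_j\Sigma^{-1}\Sigma_k\Sigma^{-1}
  + \Sigma^{-1}\Sigma_k\Sigma^{-1}\Sigma_j\Sigma^{-1}
  - \Sigma^{-1}\Sigma_{jk}\Sigma^{-1}
```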
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/ipopt_helpers.jl
code
1129
hes_reshape(h, len) = [h[i,j] for i=1:len for j=1:i]

function hes_structure!(rows, cols, len)
  idx = 1
  for row in 1:len
    for col in 1:row
      rows[idx] = row
      cols[idx] = col
      idx += 1
    end
  end
  nothing
end

function ipopt_hessian(xarg, mode, rows, cols, obj_factor, lams, values,
                       hessfn, constr_hessv, nconstr)
  (mode == :Structure) && return hes_structure!(rows, cols, length(xarg))
  @assert length(lams) == nconstr "Disagreement in lengths of lambdas and constraint functions."
  h = hessfn(xarg)
  values .= hes_reshape(h, length(xarg)).*obj_factor
  for (lj, hesconstrj) in zip(lams, constr_hessv)
    constrj_hes = hesconstrj(xarg)
    values .+= lj*hes_reshape(constrj_hes, length(xarg))
  end
end

function jac_structure!(rows, cols, len, nconstr)
  for (j, ix) in enumerate(Iterators.partition(1:(len*nconstr), len))
    rows[ix] .= j
    cols[ix] .= collect(1:len)
  end
  nothing
end

function ipopt_constr_jac(xarg, mode, rows, cols, values, g_jac, nconstr)
  (mode == :Structure) && return jac_structure!(rows, cols, length(xarg), nconstr)
  values .= g_jac(xarg)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
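To make the packing order in the helpers above concrete, here is a tiny check of `hes_reshape` with arbitrary values: for a symmetric matrix it emits the lower triangle row by row, matching the index pattern written by `hes_structure!`:

```julia
# Tiny sanity check of the lower-triangle packing used above (values arbitrary).
H = [1.0 2.0 3.0;
     2.0 4.0 5.0;
     3.0 5.0 6.0]
hes_reshape(H, 3) # == [1.0, 2.0, 4.0, 3.0, 5.0, 6.0], i.e. (1,1),(2,1),(2,2),(3,1),(3,2),(3,3)
```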
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/shared.jl
code
3358
# Setting: imagine that we have a bunch of iid replicates of very correlated
# smooth data. Let's fit it and look at the resulting CIs.

# (StaticArrays/ForwardDiff/FiniteDifferences are used below; in the repo they
# presumably come into scope via the includes.)
using LinearAlgebra, StableRNGs, StaticArrays, ForwardDiff, FiniteDifferences

include("../../examples/matern.jl")
include("gradient.jl")
include("efish.jl")
include("hessian.jl")

# A likelihood function, slightly specialized for speed:
function nll_replicates(parms, pts, datav)
  K  = Symmetric([matern(x, y, parms) for x in pts, y in pts])
  Kf = cholesky!(K) # in-place
  out  = logdet(Kf)*length(datav)/2 # only compute logdet once.
  out += sum(z->sum(abs2, Kf.L\z), datav)/2 # all the solve terms.
  out
end

# For plotting a likelihood surface. This uses the "profile likelihood", which
# takes advantage of the fact that, given all other parameters, the
# likelihood-minimizing variance can be expressed as
#
# sig_implied = dot(datav, K(sigma=1, other_parms...)\datav)/length(datav[1]).
#
# Since the full likelihood can be written with sigma pulled out of the matrix,
# you can actually just plug this right back in to the normal Gaussian
# likelihood and optimize all parameters at once still, but now you have a k-1
# dimensional problem instead of a k-dimensional problem.
#
# See, for example, the definition at the top of the second column on page 5 of
# the Geoga et al JCGS citation, although that's of course not the first time
# the idea has been used, and it dates back to at least the 90s.
function profile_nll_replicates(parms, pts, datav)
  n  = length(first(datav))
  _p = @SVector [one(eltype(parms)), parms[1], parms[2]]
  K  = Symmetric([matern(x, y, _p) for x in pts, y in pts])
  Kf = cholesky!(K) # in-place
  out  = logdet(Kf)*length(datav)/2 # only compute logdet once.
  out += n*sum(z->log(sum(abs2, Kf.L\z)), datav)/2 # all the solve terms.
  out
end

# simulate some data, using the "let" syntax to avoid keeping global K after
# we're done:
const TRU_P = @SVector [1.5, 2.5, 1.3]
const SEED  = StableRNG(12345)
const N_REP = 10
const N_DAT = 512
const PTS   = [SVector{2,Float64}(rand(SEED, 2)...) for _ in 1:N_DAT]
const SIMS  = let
  K  = Symmetric([matern(x, y, TRU_P) for x in PTS, y in PTS])
  Kf = cholesky(K)
  [Kf.L*randn(SEED, size(Kf, 2)) for _ in 1:N_REP]
end

# A second simulation of a single draw at more points for the likelihood
# surface:
const N_DAT_SURF = 1600
const TRU_P_SURF = @SVector [1.5, 0.5, 1.3]
const PTS_SURF   = [SVector{2,Float64}(rand(SEED, 2)...) for _ in 1:N_DAT_SURF]
const SIM_SURF   = let
  K  = Symmetric([matern(x, y, TRU_P_SURF) for x in PTS_SURF, y in PTS_SURF])
  Kf = cholesky(K)
  Kf.L*randn(SEED, size(Kf, 2))
end

_nll(p)  = nll_replicates(p, PTS, SIMS)
_nllh(p) = ForwardDiff.hessian(_nll, p)

function caching_nll(p, cache=nothing)
  if !isnothing(cache)
    push!(cache, deepcopy(p))
  end
  _nll(p)
end

const HIGHFD = central_fdm(10,1)
function high_fd_hessian(p)
  gfun = _p -> FiniteDifferences.grad(HIGHFD, _nll, _p)
  FiniteDifferences.jacobian(HIGHFD, gfun, p)[1]
end

no_hessian(x) = throw(error("This function should not have been called!"))

grad_fd!(p, store) = gradient_replicates!(store, FD_DFNS, p, PTS, SIMS)
grad_ad!(p, store) = ForwardDiff.gradient!(store, _nll, p)

hessfd(p) = hessian_replicates(FD_DFNS, FD_D2FNS, p, PTS, SIMS)
fishfd(p) = efish_replicates(FD_DFNS, p, PTS, length(SIMS))
fishad(p) = efish_replicates(AD_DFNS, p, PTS, length(SIMS))
BesselK
https://github.com/cgeoga/BesselK.jl.git
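In equation form, the variance profiled out by `profile_nll_replicates` above (for a single replicate z of length n, with K₁ the kernel matrix evaluated at σ = 1) is:

```latex
% Profile-likelihood plug-in for the variance:
\hat{\sigma}^2(\rho, \nu) = \frac{z^{T} K_1(\rho,\nu)^{-1} z}{n},
\qquad K_1 := \text{kernel matrix with } \sigma = 1
```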
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/standalone.jl
code
1811
# Bring a few packages into scope (LinearAlgebra added here: norm, cholesky!,
# and logdet below all come from it):
using BesselK, ForwardDiff, StaticArrays, LinearAlgebra

import BesselK: gamma, adbesselkxv

# Matern covariance function, using the rescaled (x^v)*besselk(v, x):
function matern(x, y, params)
  (sg, rho, nu) = params
  dist = norm(x-y)
  iszero(dist) && return sg*sg
  arg = sqrt(2*nu)*dist/rho
  (sg*sg*(2^(1-nu))/gamma(nu))*adbesselkxv(nu, arg)
end

# Slightly fancy matrix assembly and factorization:
function assemble_matrix(points, params)
  buf = Matrix{eltype(params)}(undef, length(points), length(points))
  Threads.@threads for k in eachindex(points) # nice multi-threading
    ptk = points[k]
    buf[k,k] = matern(ptk, ptk, params)
    @inbounds for j in 1:(k-1) # turns off array bounds checking
      buf[j,k] = matern(points[j], ptk, params)
    end
  end
  cholesky!(Symmetric(buf)) # in-place Cholesky factorization to avoid heap allocs.
end

# Negative log-likelihood, with a slight trick for the quadratic form to just
# have to solve with the triangular factor once:
function nll(points, data, params)
  Sigma = assemble_matrix(points, params)
  (logdet(Sigma) + sum(abs2, Sigma.U'\data))/2
end

# Sample locations and data, using stack-allocated arrays via StaticArrays.jl to
# make sure that the autodiff derivatives don't make any heap allocations.
# This is just a random example to demonstrate how to create the single-arg
# closure that you can pass to ForwardDiff.
const LOCS = [@SVector rand(2) for _ in 1:1000]
const DAT  = randn(length(LOCS))

# create single-argument closure for the log-likelihood:
objective(p) = nll(LOCS, DAT, p)

# AD-generated gradient and hessian. It's that easy! Plug in to your favorite
# optimizer and you're good to go.
objective_grad(p) = ForwardDiff.gradient(objective, p)
objective_hess(p) = ForwardDiff.hessian(objective, p)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/surface.jl
code
785
include("shared.jl") include("../plotting/gnuplot_utils.jl") BLAS.set_num_threads(1) # Small neighborhood of the MLE. const RANGE_GRID = range(0.25, 5.0, length=80) const SMOOTH_GRID = range(1.2, 1.5, length=70) function gensurf() out = zeros(length(RANGE_GRID), length(SMOOTH_GRID)) Threads.@threads for j in eachindex(RANGE_GRID) @inbounds for k in eachindex(SMOOTH_GRID) out[j,k] = profile_nll_replicates((RANGE_GRID[j], SMOOTH_GRID[k]), PTS_SURF, (SIM_SURF,)) end end out .- minimum(out) end # Will need to figure out the right color scale here. if !isinteractive() const surf = gensurf() gnuplot_save_matrix!("../plotdata/profile_surface.csv", surf, RANGE_GRID, SMOOTH_GRID) end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/demo/writehessians.jl
code
1285
using Ipopt, Printf, Serialization

# You need to run fit.jl before running this script.
include("shared.jl")

const res = deserialize("fit_results.serialized")

function printf_sym_matrix(M::Matrix{Float64})
  (s1, s2) = size(M)
  for j in 1:(s1-1)
    for k in 1:(s2-1)
      if k >= j
        @printf "%1.2f & " M[j,k]
      else
        @printf "\\cdot & "
      end
    end
    @printf "%1.2f\\\\ \n" M[j,end]
  end
  for k in 1:(s2-1)
    @printf "\\cdot & "
  end
  @printf "%1.2f" M[end,end]
end

function write_matrix_to_file(fname, M, prefix="../../../../manuscript/tables/")
  open(prefix*fname, "w") do io
    redirect_stdout(io) do
      printf_sym_matrix(M)
    end
  end
end

#=
# look at hess vs e-fish at initializer:
write_matrix_to_file("fdefish_at_ones.tex", fishfd(ones(3)))
write_matrix_to_file("fdhess_at_ones.tex",  hessfd(ones(3)))
write_matrix_to_file("adhess_at_ones.tex",  _nllh(ones(3)))
write_matrix_to_file("refhess_at_ones.tex", high_fd_hessian(ones(3)))

# look at hess vs e-fish at MLE:
write_matrix_to_file("fdefish_at_mle.tex", fishfd(res[3][4]))
write_matrix_to_file("fdhess_at_mle.tex",  hessfd(res[3][4]))
write_matrix_to_file("adhess_at_mle.tex",  _nllh(res[3][4]))
write_matrix_to_file("refhess_at_mle.tex", high_fd_hessian(res[3][4]))
=#
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/plotting/gnuplot_utils.jl
code
927
using DelimitedFiles # for writedlm, used throughout this file

function tablesave!(name, M::Matrix; rlabels=nothing, clabels=nothing, siunits=false)
  out = siunits ? map(x->"\\num{"*string(x)*"}", M) : string.(M)
  if !isnothing(rlabels)
    out = hcat(string.(rlabels), out)
  end
  if !isnothing(clabels)
    is_row_nothing = isnothing(rlabels)
    rw1 = string.(clabels)
    if !is_row_nothing
      pushfirst!(rw1, "")
    end
    rw1[end] = rw1[end] * "\\\\"
    writedlm("header_"*name, reshape(rw1, 1, length(rw1)), '&')
  end
  out[:,end] .= out[:,end] .* repeat(["\\\\"], size(out,1))
  writedlm(name, out, '&')
end

function gnuplot_save_matrix!(name, M::Matrix{Float64}, row_pts, col_pts, delim=',')
  out = Array{Any}(undef, size(M,1)+1, size(M,2)+1)
  out[1,1] = ""
  out[1,2:end] .= col_pts
  out[2:end,1] .= row_pts
  out[2:end,2:end] .= M
  writedlm(name, out, delim)
end

function gnuplot_save_vector!(name, M, pts, delim=',')
  writedlm(name, hcat(pts, M), delim)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/accuracy.jl
code
1539
using BesselK, SpecialFunctions, ArbNumerics

include("shared.jl")

function wbesselk(v,x)
  try
    return BesselK._besselk(v,x)
  catch
    return NaN
  end
end

rbesselk(v,x) = Float64(ArbNumerics.besselk(ArbFloat(v), ArbFloat(x)))
abesselk(v,x) = SpecialFunctions.besselk(v, x)

const BASELINE = [rbesselk(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]
const AMOS     = [abesselk(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]
const OURSOL   = [wbesselk(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]

const TOLS_A  = atolfun.(zip(BASELINE, AMOS))
const TOLS_U  = atolfun.(zip(BASELINE, OURSOL))
const TOLS_AU = rtolfun.(zip(AMOS, OURSOL))
const RTOLS_A = rtolfun.(zip(BASELINE, AMOS))
const RTOLS_U = rtolfun.(zip(BASELINE, OURSOL))

@assert iszero(length(findall(isnan, OURSOL))) "There were NaNs in our attempts!"

# quick simple test to find the worst atol when B(x) < 1:
let
  ix  = findall(x->x<=one(x), BASELINE)
  res = findmax(abs, TOLS_U[ix])
  _ix = res[2]
  println("Worst atol when true besselk(v,x) <= 1: $(res[1])")
  println("True value: $(BASELINE[ix][_ix])")
end

gnuplot_save_matrix!("../plotdata/atols_amos.csv",      TOLS_A,  VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_ours.csv",      TOLS_U,  VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/rtols_amos.csv",      RTOLS_A, VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/rtols_ours.csv",      RTOLS_U, VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/rtols_amos_ours.csv", TOLS_AU, VGRID, XGRID)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/derivative_accuracy.jl
code
1100
using SpecialFunctions, ForwardDiff, FiniteDifferences

include("shared.jl")

const BIG_FD = central_fdm(10,1)

dwbesselk(v,x) = ForwardDiff.derivative(_v->BesselK._besselk(_v,x,100,1e-12,false), v)
dbesselk(v, x) = BIG_FD(_v->besselk(_v,x), v)

const _h = 1e-6
fastfdbesselk(v, x) = (besselk(v+_h, x) - besselk(v, x))/_h

const BASELINE = [dbesselk(z[1], z[2])      for z in Iterators.product(VGRID, XGRID)]
const OURSOL   = [dwbesselk(z[1], z[2])     for z in Iterators.product(VGRID, XGRID)]
const FASTFD   = [fastfdbesselk(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]

const TOLS    = atolfun.(zip(BASELINE, OURSOL))
const FDTOLS  = atolfun.(zip(BASELINE, FASTFD))
const DIFTOLS = log10.(TOLS) .- log10.(FDTOLS)

let
  res = findmax(DIFTOLS)
  println("Worst case derivative difference: $(res[1])")
  println("Value of dfun: $(BASELINE[res[2]])")
end

gnuplot_save_matrix!("../plotdata/atols_deriv_fd.csv",   FDTOLS,  VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv_ad.csv",   TOLS,    VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv_fdad.csv", DIFTOLS, VGRID, XGRID)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/secderivative_accuracy.jl
code
1283
using SpecialFunctions, ForwardDiff, FiniteDifferences

include("shared.jl")

const BIG_FD    = central_fdm(10,1)
const BIG_FD_O2 = central_fdm(10,2)

besselkdv(v,x)  = ForwardDiff.derivative(_v->BesselK._besselk(_v,x,100,1e-12,false), v)
besselkdv2(v,x) = ForwardDiff.derivative(_v->besselkdv(_v,x), v)

dbesselkdv(v, x)  = BIG_FD(_v->besselk(_v,x), v)
dbesselkdv2(v, x) = BIG_FD_O2(_v->besselk(_v,x), v)
fastdbesselkdv2(v, x) = (besselk(v+2e-6, x) - 2*besselk(v+1e-6, x) + besselk(v, x))/1e-12

const BASELINE = [dbesselkdv2(z[1], z[2])     for z in Iterators.product(VGRID, XGRID)]
const OURSOL   = [besselkdv2(z[1], z[2])      for z in Iterators.product(VGRID, XGRID)]
const FASTFD   = [fastdbesselkdv2(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]

const TOLS    = atolfun.(zip(BASELINE, OURSOL))
const FDTOLS  = atolfun.(zip(BASELINE, FASTFD))
const DIFTOLS = log10.(TOLS) .- log10.(FDTOLS)

let
  res = findmax(DIFTOLS)
  println("Worst case derivative difference: $(res[1])")
  println("Value of dfun: $(BASELINE[res[2]])")
end

gnuplot_save_matrix!("../plotdata/atols_deriv2_fd.csv",   FDTOLS,  VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv2_ad.csv",   TOLS,    VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv2_fdad.csv", DIFTOLS, VGRID, XGRID)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/shared.jl
code
1097
include("../../examples/matern.jl") include("../plotting/gnuplot_utils.jl") const VGRID = range(0.25, 10.0, length=101) # to hit "near-integer" v, which is the hardest. const XGRID = range(0.0, 30.0, length=201)[2:end] # First and second derivatives with FD: fdbesselkdv(v, x) = (besselk(v+1e-6, x) - besselk(v+1e-6, x))/1e-6 fdbesselkdvdv(v, x) = (besselk(v+2e-6, x) - 2*besselk(v+1e-6, x) + besselk(v, x))/1e-12 # First and second derivatives with AD: adbesselkdv(v, x) = ForwardDiff.derivative(_v->BesselK._besselk(_v, x), v) adbesselkdvdv(v, x) = ForwardDiff.derivative(_v->adbesselkdv(_v, x), v) function assemble_matrix(fn, pts, p) out = Array{Float64}(undef, length(pts), length(pts)) Threads.@threads for k in 1:length(pts) out[k,k] = p[1] @inbounds for j in 1:(k-1) out[j,k] = fn(pts[j], pts[k], p) end end Symmetric(out) end atolfun(tru, est) = isnan(est) ? NaN : (isinf(tru) ? 0.0 : abs(tru-est)) atolfun(tru_est) = atolfun(tru_est[1], tru_est[2]) rtolfun(tru, est) = atolfun(tru, est)/abs(tru) rtolfun(tru_est) = rtolfun(tru_est[1], tru_est[2])
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/speed1.jl
code
1378
using BesselK, BenchmarkTools, Printf

include("shared.jl")

const PAIRS = ((0.5,   1.0, "half-integer"),
               (1.0,   1.0, "whole integer"),
               (3.001, 1.0, "near-integer order"),
               (3.001, 8.0, "near-integer order, borderline arg"),
               (1.85,  1.0, "small order"),
               (1.85,  8.0, "small order, borderline arg"),
               (1.85, 14.0, "intermediate arg"),
               (1.85, 29.0, "large intermediate arg"),
               (1.85, 35.0, "large argument"))

for (j, (v, x, descriptor)) in enumerate(PAIRS)
  t_us    = (@belapsed BesselK._besselk($v, $x) samples=1_000)*1e9 # our code
  t_am    = (@belapsed BesselK.besselk($v, $x)  samples=1_000)*1e9 # AMOS
  t_us_d  = (@belapsed adbesselkdv($v, $x)      samples=1_000)*1e9 # our code
  t_am_d  = (@belapsed fdbesselkdv($v, $x)      samples=1_000)*1e9 # AMOS
  t_us_d2 = (@belapsed adbesselkdvdv($v, $x)    samples=1_000)*1e9 # our code
  t_am_d2 = (@belapsed fdbesselkdvdv($v, $x)    samples=1_000)*1e9 # AMOS
  if j != length(PAIRS)
    @printf "(%1.3f, %1.0f) & %1.0f & %1.0f & %1.0f & %1.0f & %1.0f & %1.0f & %s \\\\\n" v x t_us t_am t_us_d t_am_d t_us_d2 t_am_d2 descriptor
  else
    @printf "(%1.3f, %1.0f) & %1.0f & %1.0f & %1.0f & %1.0f & %1.0f & %1.0f & %s\n" v x t_us t_am t_us_d t_am_d t_us_d2 t_am_d2 descriptor
  end
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/speed2.jl
code
1557
using LinearAlgebra, BesselK, BenchmarkTools, Printf, StaticArrays

include("shared.jl")

const GRIDN  = 24
const GRID1D = range(0.0, 1.0, length=GRIDN)
const PTS    = map(x->SVector{2,Float64}(x[1], x[2]),
                   vec(collect.(Iterators.product(GRID1D, GRID1D))))

function checkcovmat(fn, p, v)
  _p = pv(1.0, p, v)
  M  = assemble_matrix(fn, PTS, _p)
  em = eigmin(M)
  Mf = cholesky!(M, check=false)
  if issuccess(Mf)
    ld = logdet(Mf)
  else
    ld = NaN
  end
  (issuccess(Mf) ? "S" : "F", em, ld)
end

const VRANGE = (0.4, 1.25, 3.5)
const PRANGE = (0.01, 1.0, 100.0)

# For now, I might even keep the timing in seconds.
for (j, (v, p)) in enumerate(Iterators.product(VRANGE, PRANGE))
  _p = pv(1.0, p, v)
  # test 1: assembly time.
  t_u = @belapsed assemble_matrix(matern_us,   $PTS, $_p) samples=8
  t_a = @belapsed assemble_matrix(matern_amos, $PTS, $_p) samples=8
  # test 2: Cholesky success or failure:
  (s_u, em_u, ld_u) = checkcovmat(matern_us,   p, v)
  (s_a, em_a, ld_a) = checkcovmat(matern_amos, p, v)
  # PRINTING:
  # (v,p) pair
  # times (u,p)
  # eigmins (u,p)
  # eigmin difference
  # logdets (u,p)
  # logdet difference
  if j != length(VRANGE)*length(PRANGE)
    @printf "(%1.3f, %1.2f) & %1.1e & %1.1e & %1.2e & %1.2e & %1.2e & %1.2e & %1.2e & %1.2e \\\\\n" p v t_u t_a em_u em_a abs(em_u-em_a) ld_u ld_a abs(ld_u-ld_a)
  else
    @printf "(%1.3f, %1.2f) & %1.1e & %1.1e & %1.2e & %1.2e & %1.2e & %1.2e & %1.2e & %1.2e\n" p v t_u t_a em_u em_a abs(em_u-em_a) ld_u ld_a abs(ld_u-ld_a)
  end
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/checkposdef.jl
code
1504
using LinearAlgebra include("shared.jl") const GRIDN = 24 function checkcovmat(fn, p, v; display_small_piece=false) grid1d = range(0.0, 1.0, length=GRIDN) pts = vec(collect.(Iterators.product(grid1d, grid1d))) M = assemble_matrix(fn, pts, [1.0, p, v]) em = eigmin(M) if display_small_piece # just a debugging thing, really. println("Principle 5x5 minor:") display(M[1:5, 1:5]) end Mf = cholesky!(M, check=false) (issuccess(Mf), em) end translate_result(res) = res ? :SUCCESS : :FAILURE const VRANGE = (0.4, 0.55, 0.755, 0.99, 1.01, 1.55, 2.05, 3.05, 4.05) const PRANGE = (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0) for (v, p) in Iterators.product(VRANGE, PRANGE) println("\n(v,p) = ($v, $p):") print("AMOS:") amos_succ = :PLACEHOLDER us_succ = :PLACEHOLDER (amos_em, us_em) = (0.0, 0.0) try (amos_succ, amos_em) = checkcovmat(matern_amos, p, v) println("$(translate_result(amos_succ)), $amos_em") catch amos_succ = :FAILURE println(amos_succ) end print("US: ") try (us_succ, us_em) = checkcovmat(matern_us, p, v) println("$(translate_result(us_succ)), $us_em") catch us_succ = :FAILURE println(us_succ) end println("Eigmin difference: $(abs(amos_em-us_em))") println("Eigmin rtol: $(abs(amos_em-us_em)/abs(amos_em))") if amos_succ && !us_succ println("######################") println("!!!!WE FAILED WHERE AMOS SUCCEEDED, INVESTIGATE!!!") println("######################") end end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/derivative_speed.jl
code
1163
using BenchmarkTools, SpecialFunctions, FiniteDifferences, ForwardDiff, StaticArrays

include("shared.jl")

# NOTE: this "notinuse" file never defines the FD step `h` (or `ch` below); the
# value here is an assumption so the file at least runs standalone.
const h = 1e-6
# (was `const FDFAST(fn, x) = ...`, which doesn't parse; `const` dropped.)
FDFAST(fn, x) = (fn(x+h)-fn(x))/h
const FD2 = central_fdm(2, 1)

dbesselk_fdfast(v, x) = FDFAST(_v->besselk(_v, x), v)
dbesselk_fd2(v, x)    = FD2(_v->besselk(_v, x), v)
dbesselk_ad(v, x)     = ForwardDiff.derivative(_v->_besselk(_v, x), v)

#=
# And note zero allocations for the AD version. Pretty good. And faster by
# significant margins---like a factor of five---than even the most reckless FD.
for (v, x) in Iterators.product((0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.25, 3.0, 4.25),
                                (1e-4, 0.25, 2.5, 5.0, 7.5, 12.0, 25.0))
  println("(v,x) = ($v, $x):")
  print("FDFAST:")
  @btime dbesselk_fdfast($v, $x)
  print("FD2:   ")
  @btime dbesselk_fd2($v, $x)
  print("AD:    ")
  @btime dbesselk_ad($v, $x)
  print("\n\n")
end
=#

# matern function:
const oo = @SVector ones(2)
const zz = @SVector zeros(2)

@inline pv(v) = @SVector [1.1, 1.1, v]

function matern_d3_ad(v)
  ForwardDiff.derivative(_v->BesselK.matern(oo, zz, pv(_v)), v)
end

function matern_d3_fd(v)
  FDFAST(_v->BesselK.matern(oo, zz, pv(_v)), v)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/general_speed.jl
code
1392
# Observations:
#
# By and large, we really smoke AMOS in speed. Especially for large arg.
# Assembling a reasonably large matrix (1024 x 1024) gets done in about half the
# time. Which may seem like nothing, but a factor of two never hurts...

using BenchmarkTools, SpecialFunctions, StaticArrays

include("../besk.jl")
include("shared.jl")

const NUs = (0.1, 0.25, 0.5, 0.75, 0.999, 1.0, 1.001, 1.25, 1.5, 1.75, 2.0,
             2.05, 2.75, 3.1, 3.8, 4.3, 4.9)
const TINY_X  = 1e-4
const SMALL_X = 0.25
const MID_X   = 7.5
const LARGE_X = 20.0
const Xs = (1e-4, 0.25, 5.0, 7.5, 8.5, 14.0, 17.0, 29.0, 31.0, 50.0)
const PTS  = [rand(3).*10.0 for _ in 1:1024]
const PRMS = @SVector [1.0, 1.0, 1.25]

print("\n\n")
println("##################")
println("HEAT ONE: pointwise timings.")
println("##################")
print("\n\n")

for (v, x) in Iterators.product(NUs, Xs)
  println("(v,x) = ($v, $x):")
  print("Timing for AMOS:")
  @btime SpecialFunctions.besselk($v, $x)
  print("Timing for us:")
  try
    @btime _besselk($v, $x)
  catch
    println("FAILURE/ERROR OUT.")
  end
  print("\n\n")
end

print("\n\n")
println("##################")
println("HEAT TWO: kernel matrix assembly.")
println("##################")
print("\n\n")

println("Timings for AMOS:")
@btime assemble_matrix(matern_amos, $PTS, $PRMS)
println("Timings for US:")
@btime assemble_matrix(matern_us, $PTS, $PRMS)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/matern_accuracy.jl
code
1812
using LinearAlgebra include("shared.jl") const GRIDPTS = let GRIDN = 24 grid1d = range(0.0, 1.0, length=GRIDN) pts = vec(collect.(Iterators.product(grid1d, grid1d))) end const RANDPTS = [rand(2) for _ in 1:(24*24)] function assemblecovmat(fn, p, pts) M = assemble_matrix(fn, pts, p, v) em = eigmin(M) if display_small_piece # just a debugging thing, really. println("Principle 5x5 minor:") display(M[1:5, 1:5]) end Mf = cholesky(M, check=false) (issuccess(Mf), em) end translate_result(res) = res ? :SUCCESS : :FAILURE const VRANGE = (0.4, 0.55, 0.755, 0.99, 1.01, 1.55, 2.05, 3.05, 4.05) const PRANGE = (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0) # Cases to look more into: # ((RAND,GRID), v=(3.05, 4.05), p=0.1) # # otherwise, things look good: rtols can be larger than you might expect at # times, but that appears to be happening when both eigenvalues are exact to # more or less eps() precision where atol is more relevant anyway. # for (case, v, p) in Iterators.product((:GRID, :RAND), VRANGE, PRANGE) pts = (case == :GRID) ? GRIDPTS : RANDPTS M_amos = assemble_matrix(matern_amos, pts, (1.0, p, v)) M_us = assemble_matrix(matern_us, pts, (1.0, p, v)) # pointwise checks: println("($case, v=$v, p=$p):") println("atol difference: $(maximum(abs, M_amos - M_us))") println("rtol difference: $(maximum(abs, tolfun.(zip(M_amos, M_us))))") # eigenvalue checks, assuming successful cholesky: cholflag = issuccess(cholesky(M_amos, check=false)) if cholflag ev_amos = eigvals(M_amos) ev_us = eigvals(M_us) println("largest eig atol: $(maximum(abs, ev_amos-ev_us))") println("largest eig rtol: $(maximum(abs, tolfun.(zip(ev_amos, ev_us))))") else println("Cholesky for M_amos failed, skipping eigenvalue checks.") end end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/matern_derivative_speed.jl
code
1696
using BenchmarkTools, StaticArrays

include("shared.jl")

const oo = @SVector ones(2)
const zz = @SVector zeros(2)

const TESTING_PARAMS = (@SVector [1.25, 0.25, 0.5],
                        @SVector [1.25, 5.25, 0.5],
                        @SVector [1.25, 0.25, 0.75],
                        @SVector [1.25, 5.25, 0.75],
                        @SVector [1.25, 0.25, 1.0],
                        @SVector [1.25, 5.25, 1.0],
                        @SVector [1.25, 0.25, 1.75],
                        @SVector [1.25, 5.25, 1.75],
                        @SVector [1.25, 0.25, 3.0],
                        @SVector [1.25, 5.25, 3.0],
                        @SVector [1.25, 0.25, 4.75],
                        @SVector [1.25, 5.25, 4.75])

for pp in TESTING_PARAMS
  println("\n###")
  println("Parameters $pp")
  println("###\n")
  println("Fast finite diff, h=$h:")
  @btime matern_fdfast_d1($oo, $zz, $pp)
  @btime matern_fdfast_d2($oo, $zz, $pp)
  @btime matern_fdfast_d3($oo, $zz, $pp)
  println("Adaptive finite diff, order 2:")
  @btime matern_fd2_d1($oo, $zz, $pp)
  @btime matern_fd2_d2($oo, $zz, $pp)
  @btime matern_fd2_d3($oo, $zz, $pp)
  println("Adaptive finite diff, order 5:")
  @btime matern_fd5_d1($oo, $zz, $pp)
  @btime matern_fd5_d2($oo, $zz, $pp)
  @btime matern_fd5_d3($oo, $zz, $pp)
  println("Complex step, h=$ch:")
  try
    @btime matern_cstep_d1($oo, $zz, $pp)
    @btime matern_cstep_d2($oo, $zz, $pp)
    @btime matern_cstep_d3($oo, $zz, $pp)
  catch
    println("Failure/error. Probably at integer values.")
  end
  println("Autodiff:")
  @btime matern_ad_d1($oo, $zz, $pp)
  @btime matern_ad_d2($oo, $zz, $pp)
  @btime matern_ad_d3($oo, $zz, $pp)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/xv_accuracy.jl
code
1133
using BesselK, SpecialFunctions, ArbNumerics

include("shared.jl")

function wbesselkxv(v,x)
  try
    return BesselK.adbesselkxv(v,x)
  catch
    return NaN
  end
end

function rbesselkxv(v,x)
  _v = ArbReal(v)
  _x = ArbReal(x)
  xv = _x^_v
  Float64(xv*ArbNumerics.besselk(ArbFloat(v), ArbFloat(x)))
end

abesselkxv(v,x) = (x^v)*SpecialFunctions.besselk(v, x)

const VGRID = range(0.25, 10.0, length=100)
const XGRID = range(0.0, 8.0, length=201)[2:end] # since other impls can't do x=0.

const BASELINE = [rbesselkxv(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]
const AMOS     = [abesselkxv(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]
const OURSOL   = [wbesselkxv(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]

const TOLS_A  = atolfun.(zip(BASELINE, AMOS))
const TOLS_U  = atolfun.(zip(BASELINE, OURSOL))
const TOLS_AU = rtolfun.(zip(AMOS, OURSOL))

#=
gnuplot_save_matrix!("../plotdata/atols_amos.csv", TOLS_A, VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_ours.csv", TOLS_U, VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/rtols_amos_ours.csv", TOLS_AU, VGRID, XGRID)
=#
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/testing/notinuse/xvsecderivative_accuracy.jl
code
1339
using SpecialFunctions, ForwardDiff, FiniteDifferences

include("shared.jl")

const BIG_FD    = central_fdm(10,1)
const BIG_FD_O2 = central_fdm(10,2)

xvbesselkdv(v,x)  = ForwardDiff.derivative(_v->BesselK.adbesselkxv(_v,x), v)
xvbesselkdv2(v,x) = ForwardDiff.derivative(_v->xvbesselkdv(_v,x), v)

beskxv(v,x) = besselk(v,x)*(x^v)

dxvbesselkdv(v, x)  = BIG_FD(_v->beskxv(_v,x), v)
dxvbesselkdv2(v, x) = BIG_FD_O2(_v->beskxv(_v,x), v)
fastdxvbesselkdv2(v, x) = (beskxv(v+2e-6, x) - 2*beskxv(v+1e-6, x) + beskxv(v, x))/1e-12

const VGRID = range(0.25, 10.0, length=101) # to avoid integer v.
const XGRID = range(0.0, 50.0, length=201)[2:end] # to avoid zero x.

const BASELINE = [dxvbesselkdv2(z[1], z[2])     for z in Iterators.product(VGRID, XGRID)]
const OURSOL   = [xvbesselkdv2(z[1], z[2])      for z in Iterators.product(VGRID, XGRID)]
const FASTFD   = [fastdxvbesselkdv2(z[1], z[2]) for z in Iterators.product(VGRID, XGRID)]

const TOLS    = atolfun.(zip(BASELINE, OURSOL))
const FDTOLS  = atolfun.(zip(BASELINE, FASTFD))
const DIFTOLS = log10.(TOLS) .- log10.(FDTOLS)

gnuplot_save_matrix!("../plotdata/atols_deriv2_xv_fd.csv",   FDTOLS,  VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv2_xv_ad.csv",   TOLS,    VGRID, XGRID)
gnuplot_save_matrix!("../plotdata/atols_deriv2_xv_fdad.csv", DIFTOLS, VGRID, XGRID)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/BesselK.jl
code
999
module BesselK

  using Bessels

  export adbesselk, adbesselkxv, matern

  # Here's a work-around since ForwardDiff#481 got merged, making iszero(x)
  # check that the value AND partials of x are zero. Conceptually, I'm
  # sympathetic that this is the more correct choice. It just doesn't quite work
  # for the way this code needs to branch.
  _iszero(x) = (0 <= x) & (x <= 0)

  include("gamma.jl")      # gamma function, for the moment ripped from Bessels.jl
  include("besk_ser.jl")   # enhanced direct series. The workhorse for small-ish args.
  include("besk_as.jl")    # asymptotic expansion for large arg.
  include("uk_polys.jl")   # Uk polynomials, now generated statically ahead of time.
  include("besk_asv.jl")   # uniform expansion for large order.
  include("besk_temme.jl") # Temme recurrence series for small-ish args. For AD.
  include("besk.jl")       # putting it all together with appropriate branching.
  include("matern.jl")     # a basic Matern covariance function

end
BesselK
https://github.com/cgeoga/BesselK.jl.git
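A small usage sketch of the API exported by src/BesselK.jl above (arbitrary test values). `adbesselk` and `adbesselkxv` are AD-compatible, so ForwardDiff can differentiate in the order v:

```julia
# Usage sketch for the exported API (arbitrary test values).
using BesselK, ForwardDiff

v, x = 1.3, 2.7
k   = adbesselk(v, x)                                 # K_v(x), AD-compatible in v
kxv = adbesselkxv(v, x)                               # (x^v)*K_v(x), finite at x = 0
dk  = ForwardDiff.derivative(_v->adbesselk(_v, x), v) # derivative of K_v(x) w.r.t. v
```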
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/besk.jl
code
2125
@inline isnearint(v, tol) = abs(v-round(v)) < tol

# Unlike the previous version of this function, this inner function now ASSUMES
# that v isa Dual, and so it only hits branches that are relevant for AD.
function _besselk(v, x, maxit, tol, order)
  if abs(x) < 4 && (v < 2.85) && !isnearint(v, 0.01)
    return _besselk_ser(v, x, maxit, tol, false)
  elseif abs(x) < 8.5
    return _besselk_temme(v, x, maxit, tol, false)
  elseif abs(x) < 15.0
    return _besselk_asv(v, x, Val(12), Val(false))
  elseif abs(x) < 30.0
    return _besselk_asv(v, x, Val(8), Val(false))
  elseif abs(v) > 1.5
    return _besselk_asv(v, x, Val(6), Val(false))
  else
    return _besselk_as(v, x, order)
  end
end

# Just has some different cutoffs, which for whatever reason work a bit better.
# At some point this function could be improved a lot, which is part of why I'm
# okay with splitting it like this for the moment.
function _besselkxv(v, x, maxit, tol, order)
  if abs(x) < 4 && (v < 5.75) && !isnearint(v, 0.01)
    return _besselk_ser(v, x, maxit, tol, true)
  elseif abs(x) < 6.0
    return _besselk_temme(v, x, maxit, tol, true)
  elseif abs(x) < 15.0
    return _besselk_asv(v, x, Val(12), Val(true))
  elseif abs(x) < 30.0
    return _besselk_asv(v, x, Val(8), Val(true))
  elseif abs(v) > 1.5
    return _besselk_asv(v, x, Val(6), Val(true))
  else
    return _besselk_as(v, x, order)*exp(v*log(x)) # temporary, until float pows in 1.9.
  end
end

adbesselk(v::AbstractFloat, x::AbstractFloat) = Bessels.besselk(v, x)
adbesselk(v, x) = _besselk(v, x, 100, 1e-12, 6)

# TODO (cg 2022/09/09 12:22): with newer julia and/or package versions, I'm
# getting allocations in the second derivative if I'm not careful. So for now
# I'm going back to this, which unfortunately is still NaN at zero, even though
# that value is well-defined. I suppose I could put the limit in if x is zero,
# but I don't love that.
function adbesselkxv(v::AbstractFloat, x::AbstractFloat)
  iszero(x) && return _gamma(v)*2^(v-1)
  Bessels.besselk(v, x)*(x^v)
end
adbesselkxv(v, x) = _iszero(x) ? _gamma(v)*2^(v-1) : _besselkxv(v, x, 100, 1e-12, 6)
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/besk_as.jl
code
1137
# This is an amalgam of my original asymptotic series expansion and the
# improvements provided by Michael Helton and Oscar Smith in Bessels.jl, where
# this is more or less a pending PR (#48).

# To be replaced with Bessels.SQRT_PID2 when PR is merged.
const SQRT_PID2 = sqrt(pi/2)

# For now, no exponential improvement. It requires the exponential integral
# function, which would either need to be lifted from SpecialFunctions.jl or
# re-implemented. And with an order of, like, 10, this seems to be pretty
# accurate and still faster than the uniform asymptotic expansion.
function _besselk_as(v::V, x::T, order) where {V,T}
  fv = 4*v*v
  _z = x
  ser_v     = one(T)
  floatj    = one(T)
  ak_numv   = fv - floatj
  factj     = one(T)
  twofloatj = one(T)
  eightj    = T(8)
  for _ in 1:order
    # add to the series:
    term_v = ak_numv/(factj*_z*eightj)
    ser_v += term_v
    # update ak and _z:
    floatj    += one(T)
    twofloatj += T(2)
    factj     *= floatj
    fourfloatj = twofloatj*twofloatj
    ak_numv   *= (fv - fourfloatj)
    _z        *= x
    eightj    *= T(8)
  end
  pre_multiply = SQRT_PID2*exp(-x)/sqrt(x)
  pre_multiply*ser_v
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
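For reference, the routine in src/besk_as.jl above implements the standard large-argument asymptotic series (see e.g. DLMF 10.40.2):

```latex
% Large-argument asymptotic series for K_\nu(x):
K_\nu(x) \sim \sqrt{\frac{\pi}{2x}}\, e^{-x} \sum_{k \ge 0} \frac{a_k(\nu)}{x^{k}},
\qquad a_k(\nu) = \frac{\prod_{j=1}^{k}\left(4\nu^2 - (2j-1)^2\right)}{k!\,8^{k}},
\quad a_0(\nu) = 1
```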
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/besk_asv.jl
code
613
@generated function _besselk_vz_asv(v, z, ::Val{N}, ::Val{M}) where{N,M}
  quote
    ez = sqrt(1+z*z)+log(z/(1+sqrt(1+z*z)))
    pz = inv(sqrt(1+z*z))
    out = sqrt(pi/(2*v))/sqrt(sqrt(1+z*z))
    mulval = M ? exp(v*log(z*v)-v*ez) : exp(-v*ez)
    (ser, sgn, _v) = (zero(z), one(z), one(v))
    evaled_polys = tuple($([:($(Symbol(:uk_, j, :_poly))(pz)) for j in 0:(N-1)]...))
    Base.Cartesian.@nexprs $N j -> begin
      ser += sgn*evaled_polys[j]/_v
      sgn *= -one(z)
      _v  *= v
    end
    mulval*out*ser
  end
end

_besselk_asv(v, z, maxorder, modify) = _besselk_vz_asv(v, z/v, maxorder, modify)
BesselK
https://github.com/cgeoga/BesselK.jl.git
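The generated function in src/besk_asv.jl above evaluates the uniform large-order expansion (DLMF 10.41.4): `ez` is η, `pz` is p, and the `@nexprs` loop accumulates the alternating U_k sum:

```latex
% Uniform large-order expansion, with p = (1+z^2)^{-1/2} and
% \eta = \sqrt{1+z^2} + \log\frac{z}{1+\sqrt{1+z^2}}:
K_\nu(\nu z) \sim \sqrt{\frac{\pi}{2\nu}}\,
\frac{e^{-\nu\eta}}{(1+z^2)^{1/4}}
\sum_{k \ge 0} (-1)^k \frac{U_k(p)}{\nu^k}
```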
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/besk_ser.jl
code
1337
function _besselk_ser(v, x, maxit, tol, modify)
  T    = promote_type(typeof(x), typeof(v))
  out  = zero(T)
  oneT = one(T)
  twoT = oneT+oneT
  # precompute a handful of things:
  xd2  = x/twoT
  xd22 = xd2*xd2
  half = oneT/twoT
  if modify
    e2v    = exp2(v)
    xd2_v  = (x^(2*v))/e2v
    xd2_nv = e2v
  else
    lxd2   = log(xd2)
    xd2_v  = exp(v*lxd2)
    xd2_nv = exp(-v*lxd2)
  end
  gam_v    = _gamma(v)
  gam_nv   = _gamma(-v)
  gam_1mv  = -gam_nv*v # == gamma(one(T)-v)
  gam_1mnv = gam_v*v   # == gamma(one(T)+v)
  xd2_pow  = oneT
  fact_k   = oneT
  floatk   = convert(T, 0)
  (gpv, gmv) = (gam_1mnv, gam_1mv)
  # One final re-compression of a few things:
  _t1 = gam_v*xd2_nv*gam_1mv
  _t2 = gam_nv*xd2_v*gam_1mnv
  # now the loop using Oana's series expansion, with term function manually
  # inlined for max speed:
  for _j in 0:maxit
    t1   = half*xd2_pow
    tmp  = _t1/(gmv*fact_k)
    tmp += _t2/(gpv*fact_k)
    term = t1*tmp
    out += term
    ((abs(term) < tol) && _j>5) && return out
    # Use the trick that gamma(1+k+1+v) == gamma(1+k+v)*(1+k+v) to skip gamma calls:
    (gpv, gmv) = (gpv*(oneT+v+floatk), gmv*(oneT-v+floatk))
    xd2_pow *= xd22
    fact_k  *= (floatk+1)
    floatk  += T(1)
  end
  throw(error("$maxit iterations reached without achieving atol $tol."))
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
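A reconstruction of the series summed by `_besselk_ser` above, for noninteger ν and the unmodified branch (the `modify` branch additionally scales by x^ν); this matches the `_t1`/`_t2` terms and the Pochhammer-style updates of `gpv`/`gmv` in the loop:

```latex
% Direct series for K_\nu(x), noninteger \nu:
K_\nu(x) = \frac{1}{2} \sum_{k \ge 0} \frac{(x^2/4)^k}{k!}
\left[ \frac{\Gamma(\nu)\,\Gamma(1-\nu)}{\Gamma(1-\nu+k)}\left(\frac{x}{2}\right)^{-\nu}
     + \frac{\Gamma(-\nu)\,\Gamma(1+\nu)}{\Gamma(1+\nu+k)}\left(\frac{x}{2}\right)^{\nu} \right]
```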
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/besk_temme.jl
code
7251
const _GA = MathConstants.γ # because I don't like unicode...
const QUADGAMMA1 = -2.40411380631918857  # ψ^(2)(1)
const HEXGAMMA1  = -24.88626612344087823 # ψ^(4)(1)

# special methods for cosh(mu(v,x))*(x^vf) and sinh(...).
@inline coshmuxv(v, x) = (exp2(v) + exp2(-v)*x^(2*v))/2
@inline sinhmuxv(v, x) = (exp2(v) - exp2(-v)*x^(2*v))/2

const TPCOEF = (1, 0, (pi^2)/6, 0, (7*pi^4)/360, 0, (31*pi^6)/15120)
@inline tp_taylor0(v) = evalpoly(v, TPCOEF) # trig part

const G1COEF = (_GA, 0,
                (2*_GA^3 - _GA*pi^2 - 2*QUADGAMMA1)/12, 0,
                (12*_GA^5 - 20*(_GA^3)*pi^2 + _GA*pi^4 - 120*(_GA^2)*QUADGAMMA1 + 20*(pi^2)*QUADGAMMA1 - 12*HEXGAMMA1)/1440)
@inline g1_taylor0(v) = -evalpoly(v, G1COEF)

const G2COEF = (1.0, 0,
                (_GA^2 - (pi^2)/6)/2, 0,
                (60*_GA^4 - 60*(_GA*pi)^2 + pi^4 - 240*_GA*QUADGAMMA1)/1440)
@inline g2_taylor0(v) = evalpoly(v, G2COEF)

const SHCOEF = (1, 0, 1/6, 0, 1/120, 0, 1/5040, 0, 1/362880)
@inline sh_taylor0(v) = evalpoly(v, SHCOEF) # sinh part

# TODO: there is still a problem when v is not zero but z is zero. The NaN might
# be mathematically correct, and it isn't a branch that adbesselkxv hits.
@inline function f0_expansion0(v::V, z, modify) where{V}
  _t  = tp_taylor0(v)
  _g1 = g1_taylor0(v)
  _g2 = g2_taylor0(v)
  if !modify
    mu  = v*log(2/z)
    _cm = cosh(mu)
    _sm = sinh(mu)
    _sh = sh_taylor0(mu)*log(2/z)
  else
    _cm = coshmuxv(v, z)
    if _iszero(v)
      # Because of the branching here, if v is zero, then I know that z is NOT zero.
      mu  = v*log(2/z)
      _sh = (z^v)*sh_taylor0(mu)*log(2/z)
    else
      _sh = sinhmuxv(v, z)/v
    end
  end
  _t*(_g1*_cm + _g2*_sh)
end

# EVEN Cheby coefs for computing the gamma pair thing when v is not near zero
# (but |v| is <= 1/2).
const A2N = (1.843740587300906, -0.076852840844786, 0.001271927136655,
             -0.000004971736704, -0.000000033126120, 0.000000000242310,
             -0.000000000000170, -0.000000000000001)

# ODD Cheby coefs for computing the gamma pair thing when v is not near zero
# (but |v| is <= 1/2). (The comment previously said "EVEN" again, a copy-paste slip.)
const A2Np1 = (-0.283876542276024, 0.001706305071096, 0.000076309597586,
               -0.000000865920800, 0.000000001745136, 0.000000000009161,
               -0.000000000000034)

# This gives g1 and g2 directly when v is not near zero and completely avoids
# calls to gamma functions.
@inline function temmegammas(v)
  twov = one(v)+one(v)
  _v   = twov*v
  (tv_even, tv_odd)   = (one(v), _v)
  (ser_even, ser_odd) = (A2N[1]/2, A2Np1[1]*tv_odd)
  @inbounds for n in 2:7
    # get next even value. Note that tv_even is now T_2(_v).
    tv_even = twov*_v*tv_odd - tv_even
    # add to the even term series:
    ser_even += A2N[n]*tv_even
    # get the next odd term:
    tv_odd = twov*_v*tv_even - tv_odd
    # add to the odd term series:
    ser_odd += A2Np1[n]*tv_odd
  end
  # one more term for the evens:
  tv_even = twov*_v*tv_odd - tv_even
  ser_even += A2N[8]*tv_even
  # now return:
  (ser_odd/v, ser_even)
end

function temme_pair(v, z, maxit, tol, modify=false)
  @assert abs(v) <= 1/2 "This internal routine is only for |v|<=1/2."
  # Some very low-level objects:
  onez = one(z)
  twoz = onez+onez
  zd2  = z/twoz
  _2dz = twoz/z
  # Creating the necessary things to get initial f, p, q:
  if abs(v) < 0.001
    g1 = g1_taylor0(v)
    g2 = g2_taylor0(v)
  else
    (g1, g2) = temmegammas(v)
  end
  (gp, gm) = (inv(-(g1*twoz*v - g2*twoz)/twoz), inv((g1*twoz*v + g2*twoz)/twoz))
  # p0 and q0 terms, branches for if we're modifying by (x^vf):
  if !modify
    p0 = (exp(-v*log(zd2))/twoz)*gp
    q0 = (exp(v*log(zd2))/twoz)*gm
  else
    p0 = exp2(v-one(v))*gp
    q0 = exp2(-v-one(v))*(z^(2*v))*gm
  end
  # cosh and sinh terms for f0, branches for if we're modifying by (x^vf):
  if !modify
    mu  = v*log(_2dz)
    _cm = cosh(mu)
    _sm = sinh(mu)
  else
    _cm = coshmuxv(v, z)
    _sm = sinhmuxv(v, z)
  end
  # One more branch for f0, which is to check if v is near zero or z is near two.
  if _iszero(z) && modify
    f0 = one(z)
  else
    if abs(v) < 0.001
      f0 = f0_expansion0(v, z, modify)
    elseif abs(z-2) < 0.001
      _s = sh_taylor0(v*log(_2dz)) # manually plug in mu.
      #f0 = (v*pi/sinpi(v))*(g1*cosh(mu) + g2*log(_2dz)*_s)
      f0 = (v*pi/sinpi(v))*(g1*_cm + g2*log(_2dz)*_s)
    else
      # Temme's form is:
      #f0 = (v*pi/sinpi(v))*(g1*cosh(mu) + g2*log(_2dz)*sinh(mu)/mu)
      # But if I modify to this, I get rid of a near singularity as z->0:
      f0 = (v*pi/sinpi(v))*(g1*_cm + g2*_sm/v)
    end
  end
  (_f, _p, _q) = (f0, p0, q0)
  # a few other odds and ends for efficient looping:
  (ser_kv, ser_kvp1) = (f0, _p)
  (factk, _floatk)   = (onez, onez)
  (v2, _zd4, _z)     = (v*v, z*z/(twoz + twoz), z*z/(twoz + twoz))
  for k in 1:maxit
    _f  = (k*_f + _p + _q)/(_floatk^2 - v2)
    _p /= (_floatk-v)
    _q /= (_floatk+v)
    ck  = _z/factk
    # update term for besselk(v, z).
    term_v  = ck*_f
    ser_kv += term_v
    # update term for besselk(v+1, z).
    term_vp1  = ck*(_p - k*_f)
    ser_kvp1 += term_vp1
    if max(abs(term_v), abs(term_vp1)) < tol
      if !modify
        return (ser_kv, ser_kvp1*_2dz)
      else
        return (ser_kv, ser_kvp1*2) # note that I'm multiplying by a z, so cancel manually.
      end
    end
    _floatk += onez
    factk   *= _floatk
    _z      *= _zd4
  end
  throw(error("Term tolerance $tol not reached in $maxit iters for (v,z) = ($v, $z)."))
end

# NOTE: In the modified scaling of (x^v)*besselk(v,x), there is a problem for
# integer v: (x^0)*besselk(0,x) is just besselk(0,x). And that is still Inf for
# x=0. Which really breaks the whole strategy of integer derivatives for the
# rescaled Bessel here. BUT: there is a workaround! You don't actually need
# besselk(0, x) for the modified recurrence, so we just throw away that value.
function _besselk_temme(v, z, maxit, tol, mod)
  @assert v > -1/2 "This routine does not presently handle the case of v <= -1/2."
  _p = floor(v)
  (v - _p > 1/2) && (_p += one(_p))
  vf   = v - _p
  twov = one(v)+one(v)
  _v   = vf
  kvp2 = zero(v)
  (kv, kvp1) = temme_pair(_v, z, maxit, tol, mod)
  # check if any recurrence is necessary:
  v <= one(v)/twov && return kv
  v <= (twov + one(v))/twov && return kvp1
  # if it is necessary, perform it:
  if mod
    # not necessarily the "right" way to handle this, but seems to stop the
    # propagation of NaNs in AD.
    #
    # a slightly different recurrence for (x^v)*besselk(v,x).
    if _iszero(z)
      # special case for z=0:
      for _ in 1:(Int(_p)-1)
        kvp2 = twov*(_v+one(v))*kvp1
        _v  += one(v)
        kv   = kvp1
        kvp1 = kvp2
      end
    else
      z2 = z*z
      for _ in 1:(Int(_p)-1)
        kvp2 = twov*(_v+one(v))*kvp1 + z2*kv
        _v  += one(v)
        kv   = kvp1
        kvp1 = kvp2
      end
    end
  else
    for _ in 1:(Int(_p)-1)
      kvp2 = ((twov*(_v+one(v)))/z)*kvp1 + kv
      _v  += one(v)
      kv   = kvp1
      kvp1 = kvp2
    end
  end
  kvp2
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/gamma.jl
code
2778
#
# The below file is adapted from Bessels.jl
# (https://github.com/JuliaMath/Bessels.jl/blob/master/src/gamma.jl),
# with this particular implementation entirely due to Michael Helton.
# In Bessels.jl, it is hard-typed to only be F64 or F32, as they point out that
# AD of the functions is better handled by rules than by passing it through
# routines directly. This is definitely true, but for this package I don't want
# to put ForwardDiff into the dependency tree, and so I can't manually apply any
# rules. For the moment, I have removed the type restrictions.
#

##############################################################################
########################## Begin file ########################################
##############################################################################

# Adapted from Cephes Mathematical Library (MIT license
# https://en.smath.com/view/CephesMathLibrary/license) by Stephen L. Moshier

const SQ2PI = 2.5066282746310007

function _gamma(x)
  if x > zero(x)
    return _gamma_pos(x)
  else
    isinteger(x) && throw(DomainError(x, "NaN result for non-NaN input."))
    xp1 = abs(x) + 1.0
    return π / sinpi(xp1) / _gamma_pos(xp1)
  end
end

# only have a Float64 implementation
function _gamma_pos(x)
  if x > 11.5
    return large_gamma(x)
  elseif x <= 11.5
    return small_gamma(x)
  elseif isnan(x)
    return x
  end
end

function large_gamma(x::T) where{T}
  isinf(x) && return x
  w = inv(x)
  s = (8.333333333333331800504e-2, 3.472222222230075327854e-3,
       -2.681327161876304418288e-3, -2.294719747873185405699e-4,
       7.840334842744753003862e-4, 6.989332260623193171870e-5,
       -5.950237554056330156018e-4, -2.363848809501759061727e-5,
       7.147391378143610789273e-4)
  w = w * evalpoly(w, s) + one(T)
  # lose precision on following block
  y = exp((x))
  # avoid overflow
  v = x^(0.5 * x - 0.25)
  y = v * (v / y)
  return SQ2PI * y * w
end

function small_gamma(x::T) where{T}
  P = (1.000000000000000000009e0, 8.378004301573126728826e-1,
       3.629515436640239168939e-1, 1.113062816019361559013e-1,
       2.385363243461108252554e-2, 4.092666828394035500949e-3,
       4.542931960608009155600e-4, 4.212760487471622013093e-5)
  Q = (9.999999999999999999908e-1, 4.150160950588455434583e-1,
       -2.243510905670329164562e-1, -4.633887671244534213831e-2,
       2.773706565840072979165e-2, -7.955933682494738320586e-4,
       -1.237799246653152231188e-3, 2.346584059160635244282e-4,
       -1.397148517476170440917e-5)
  z = one(T)
  while x >= 3.0
    x -= one(T)
    z *= x
  end
  while x < 2.0
    z /= x
    x += one(T)
  end
  x -= T(2)
  p = evalpoly(x, P)
  q = evalpoly(x, Q)
  return z * p / q
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/matern.jl
code
752
# A standard Matern covariance function. This is not the best parameterization,
# but it is the most popular, so here we are.

_norm(t) = sqrt(sum(z->z*z, t))

"""
    matern(x, y, params)

computes σ² * \\mathcal{M}_{ν}(||x-y||/ρ), where params = (σ, ρ, ν) and
\\mathcal{M} is the Matern covariance function, parameterized as

\\mathcal{M}_{ν}(t) = σ^2 2^{1 - ν} Γ(ν)^{-1} (\\sqrt{2 ν} t / ρ)^{ν} \\mathcal{K}_{ν}(\\sqrt{2 ν} t / ρ).

For more information, see Stein (1999), Interpolation of Spatial Data: Some
Theory for Kriging.
"""
function matern(x, y, params)
  (sg, rho, nu) = (params[1], params[2], params[3])
  dist = _norm(x-y)
  _iszero(dist) && return sg^2
  arg = sqrt(2*nu)*dist/rho
  (sg*sg*(2^(1-nu))/_gamma(nu))*adbesselkxv(nu, arg)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/uk_polys.jl
code
9265
struct UkPolynomial{N,P}
  coef_skip_zeros::P
  constant::Float64
end

function UkPolynomial(coefv::Vector{T}) where{T}
  # find the first non-zero coefficient, assuming the first coefficient is a
  # zero-order constant:
  first_nonzero_index = findfirst(!iszero, coefv)
  # now take the coefficients that are not zero, which is every other one:
  if first_nonzero_index == 1
    constant = coefv[1]
    next_nonzero_index = findfirst(!iszero, coefv[2:end])
    if isnothing(next_nonzero_index)
      if length(coefv) > 1
        throw(error("These coefficients don't correspond to a U_k polynomial."))
      end
      return UkPolynomial{0,Nothing}(nothing, coefv[1])
    end
    first_nonzero_index = next_nonzero_index+1
  else
    constant = 0.0
  end
  nzcoef = vcat(zero(T), coefv[first_nonzero_index:2:end])
  coef_skip_zeros = length(nzcoef) < 20 ? tuple(nzcoef...) : nzcoef
  (N,P) = (first_nonzero_index-1, typeof(coef_skip_zeros))
  UkPolynomial{N,P}(coef_skip_zeros, constant)
end

# the case of a simple polynomial with no leading zeros.
(Uk::UkPolynomial{0,P})(x) where{P} = evalpoly(x^2, Uk.coef_skip_zeros)/x + Uk.constant

# the case of a zero-order polynomial:
(Uk::UkPolynomial{0,Nothing})(x) = Uk.constant

# the nontrivial case of a polynomial that DOES have leading zeros:
function (Uk::UkPolynomial{N,P})(x) where{N,P}
  pre_multiply = x^(N-2)
  pv = evalpoly(x^2, Uk.coef_skip_zeros)
  pre_multiply*pv + Uk.constant
end

const uk_0_poly=UkPolynomial([1.0, ])
const uk_1_poly=UkPolynomial([0.0, 0.125, 0.0, -0.20833333333333334, ])
const uk_2_poly=UkPolynomial([0.0, 0.0, 0.0703125, 0.0, -0.4010416666666667, 0.0, 0.3342013888888889, ])
const uk_3_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0732421875, 0.0, -0.8912109375, 0.0, 1.8464626736111112, 0.0, -1.0258125964506173, ])
const uk_4_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.112152099609375, 0.0, -2.3640869140625, 0.0, 8.78912353515625, 0.0, -11.207002616222995, 0.0, 4.669584423426247, ])
const uk_5_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.22710800170898438, 0.0, -7.368794359479631, 0.0, 42.53499874538846, 0.0, -91.81824154324003, 0.0, 84.63621767460074, 0.0, -28.212072558200244, ])
const uk_6_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5725014209747314, 0.0, -26.491430486951554, 0.0, 218.1905117442116, 0.0, -699.5796273761327, 0.0, 1059.9904525279999, 0.0, -765.2524681411816, 0.0, 212.5701300392171, ])
const uk_7_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.7277275025844574, 0.0, -108.09091978839464, 0.0, 1200.9029132163525, 0.0, -5305.646978613405, 0.0, 11655.393336864536, 0.0, -13586.550006434136, 0.0, 8061.722181737308, 0.0, -1919.4576623184068, ])
const uk_8_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.074042001273483, 0.0, -493.915304773088, 0.0, 7109.514302489364, 0.0, -41192.65496889756, 0.0, 122200.46498301747, 0.0, -203400.17728041555, 0.0, 192547.0012325315, 0.0, -96980.5983886375, 0.0, 20204.29133096615, ])
const uk_9_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 24.380529699556064, 0.0, -2499.830481811209, 0.0, 45218.76898136274, 0.0, -331645.1724845636, 0.0, 1.2683652733216248e6, 0.0, -2.813563226586534e6, 0.0, 3.763271297656404e6, 0.0, -2.998015918538106e6, 0.0, 1.311763614662977e6, 0.0, -242919.18790055133, ])
const uk_10_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 110.01714026924674, 0.0, -13886.089753717039, 0.0, 308186.40461266245, 0.0, -2.785618128086455e6, 0.0, 1.328876716642182e7, 0.0, -3.756717666076335e7, 0.0, 6.634451227472903e7, 0.0, -7.410514821153264e7, 0.0, 5.095260249266463e7, 0.0, -1.970681911843222e7, 0.0, 3.2844698530720375e6, ])
const uk_11_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 551.3358961220206, 0.0, -84005.43360302408, 0.0, 2.24376817792245e6, 0.0, -2.4474062725738734e7, 0.0, 1.420629077975331e8, 0.0, -4.958897842750303e8, 0.0, 1.1068428168230145e9, 0.0, -1.621080552108337e9, 0.0, 1.5535968995705795e9, 0.0, -9.39462359681578e8, 0.0, 3.255730741857656e8, 0.0, -4.932925366450995e7, ])
const uk_12_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3038.0905109223845, 0.0, -549842.3275722886, 0.0, 1.739510755397817e7, 0.0, -2.2510566188941535e8, 0.0, 1.5592798648792577e9, 0.0, -6.563293792619284e9, 0.0, 1.79542137311556e10, 0.0, -3.302659974980072e10, 0.0, 4.128018557975397e10, 0.0, -3.463204338815877e10, 0.0, 1.868820750929582e10, 0.0, -5.866481492051846e9, 0.0, 8.14789096118312e8, ])
const uk_13_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 18257.75547429318, 0.0, -3.871833442572612e6, 0.0, 1.4315787671888906e8, 0.0, -2.167164983223796e9, 0.0, 1.763473060683497e10, 0.0, -8.786707217802325e10, 0.0, 2.879006499061506e11, 0.0, -6.453648692453765e11, 0.0, 1.008158106865382e12, 0.0, -1.098375156081223e12, 0.0, 8.19218669548577e11, 0.0, -3.990961752244664e11, 0.0, 1.1449823773202577e11, 0.0, -1.4679261247695614e10, ])
const uk_14_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 118838.42625678328, 0.0, -2.918838812222081e7, 0.0, 1.247009293512711e9, 0.0, -2.1822927757529232e10, 0.0, 2.0591450323241003e11, 0.0, -1.1965528801961816e12, 0.0, 4.612725780849132e12, 0.0, -1.2320491305598287e13, 0.0, 2.334836404458184e13, 0.0, -3.1667088584785152e13, 0.0, 3.056512551993531e13, 0.0, -2.051689941093443e13, 0.0, 9.109341185239896e12, 0.0, -2.406297900028503e12, 0.0, 2.8646403571767896e11, ])
const uk_15_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 832859.3040162894, 0.0, -2.3455796352225152e8, 0.0, 1.1465754899448242e10, 0.0, -2.2961937296824658e11, 0.0, 2.4850009280340854e12, 0.0, -1.663482472489248e13, 0.0, 7.437312290867914e13, 0.0, -2.3260483118893994e14, 0.0, 5.230548825784446e14, 0.0, -8.574610329828949e14, 0.0, 1.0269551960827622e15, 0.0, -8.894969398810261e14, 0.0, 5.427396649876595e14, 0.0, -2.2134963870252512e14, 0.0, 5.417751075510603e13, 0.0, -6.019723417234003e12, ])
const uk_16_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.252951493434797e6, 0.0, -2.0016469281917765e9, 0.0, 1.1099740513917906e11, 0.0, -2.5215584749128555e12, 0.0, 3.1007436472896465e13, 0.0, -2.3665253045164925e14, 0.0, 1.2126758042503475e15, 0.0, -4.3793258383640155e15, 0.0, 1.1486706978449752e16, 0.0, -2.226822513391114e16, 0.0, 3.213827526858623e16, 0.0, -3.4447226006485136e16, 0.0, 2.705471130619707e16, 0.0, -1.5129826322457674e16, 0.0, 5.705782159023669e15, 0.0, -1.301012723549699e15, 0.0, 1.3552215870309362e14, ])
const uk_17_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0069589531988926e7, 0.0, -1.807822038465807e10, 0.0, 1.1287091454108745e12, 0.0, -2.886383763141477e13, 0.0, 4.000444570430363e14, 0.0, -3.450385511846272e15, 0.0, 2.0064271476309532e16, 0.0, -8.270945651585064e16, 0.0, 2.4960365126160426e17, 0.0, -5.62631788074636e17, 0.0, 9.575335098169137e17, 0.0, -1.233611693196069e18, 0.0, 1.1961991142756303e18, 0.0, -8.592577980317544e17, 0.0, 4.4347954614171885e17, 0.0, -1.5552983504313898e17, 0.0, 3.3192764720355212e16, 0.0, -3.2541926196426675e15, ])
const uk_18_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.2593921650476694e8, 0.0, -1.7228323871735056e11, 0.0, 1.2030115826419195e13, 0.0, -3.4396530474307606e14, 0.0, 5.33510697870884e15, 0.0, -5.160509319348522e16, 0.0, 3.376676249790609e17, 0.0, -1.5736434765189596e18, 0.0, 5.402894876715981e18, 0.0, -1.3970803516443374e19, 0.0, 2.7572829816505184e19, 0.0, -4.1788614446568374e19, 0.0, 4.859942729324835e19, 0.0, -4.301555703831442e19, 0.0, 2.8465212251676553e19, 0.0, -1.3639420410571586e19, 0.0, 4.4702009640123085e18, 0.0, -8.966114215270461e17, 0.0, 8.301957606731907e16, ])
const uk_19_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.836255180230434e9, 0.0, -1.7277040123530002e12, 0.0, 1.3412416915180642e14, 0.0, -4.2619355104269e15, 0.0, 7.351663610930973e16, 0.0, -7.921651119323832e17, 0.0, 5.789887667664653e18, 0.0, -3.0255665989903716e19, 0.0, 1.1707490535797255e20, 0.0, -3.434621399768417e20, 0.0, 7.756704953461136e20, 0.0, -1.3602037772849937e21, 0.0, 1.8571089321463448e21, 0.0, -1.9677247077053117e21, 0.0, 1.601689857369359e21, 0.0, -9.824438427689853e20, 0.0, 4.39279220088871e20, 0.0, -1.3512175034359957e20, 0.0, 2.556380296052923e19, 0.0, -2.242438856186774e18, ])
const uk_20_poly=UkPolynomial([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.646840080706557e10, 0.0, -1.8187262038511043e13, 0.0, 1.5613123930484675e15, 0.0, -5.484033603883292e16, 0.0, 1.0461721131134348e18, 0.0, -1.2483700995047234e19, 0.0, 1.0126774169536592e20, 0.0, -5.891794135069496e20, 0.0, 2.548961114664971e21, 0.0, -8.40591581710835e21, 0.0, 2.1487414815055883e22, 0.0, -4.302534303482378e22, 0.0, 6.7836616429518815e22, 0.0, -8.423222750084318e22, 0.0, 8.194331005435126e22, 0.0, -6.173206302884411e22, 0.0, 3.5284358439034075e22, 0.0, -1.478774352843361e22, 0.0, 4.285296082829493e21, 0.0, -7.671943936729004e20, 0.0, 6.393286613940834e19, ])
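# Sanity-check sketch (an addition for illustration, not part of the original
# file): with the conventions above, uk_1_poly should match the closed form
# U_1(p) = (3p - 5p^3)/24 from the uniform asymptotic expansion (DLMF 10.41.10,
# an external reference).
let p = 0.3
  @assert abs(uk_1_poly(p) - (3*p - 5*p^3)/24) < 1e-15
end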
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/genUk/generate_uk_polys_hardcoded.jl
code
953
using Polynomials

function Uk_polynomials(max_order)
  P0 = Polynomial([1.0])
  out = [P0]
  mul_int = Polynomial([1.0, 0.0, -5.0])
  mul_frnt = Polynomial([0.0, 0.0, 1.0, 0.0, -1.0])/2
  for j in 1:max_order
    Pjm1 = out[end]
    Pjm1_int = integrate(mul_int*Pjm1)/8
    Pjm1_drv = derivative(Pjm1)
    newP = mul_frnt*Pjm1_drv + Pjm1_int - Pjm1_int(0.0)/8
    push!(out, newP)
  end
  out
end

open("uk_polys.jl", "w") do out
  uk_polys = Uk_polynomials(20)
  names = String[]
  redirect_stdout(out) do
    run(`cat ukpoly.jl`)
    println("\n\n\n")
    for (j,pj) in enumerate(uk_polys)
      c = pj.coeffs
      stem = string("uk_", j-1)
      pnm = string(stem, "_poly")
      push!(names, string(pnm, ","))
      str1 = string("const ", pnm, "=UkPolynomial([", map(x->string(x, ", "), c)..., "])")
      println(str1)
    end
    println()
    println(string("const UK_POLYS = [", reduce(*, names), "]"))
    println()
  end
end
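# Sanity-check sketch (an addition for illustration, not part of the original
# script): the recursion above should reproduce the known closed form
# U_1(t) = (3t - 5t^3)/24 (DLMF 10.41.10, an external reference).
let u1 = Uk_polynomials(1)[2], t = 0.3
  @assert abs(u1(t) - (3*t - 5*t^3)/24) < 1e-14
end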
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/genUk/ukpoly.jl
code
1450
struct UkPolynomial{N,P}
  coef_skip_zeros::P
  constant::Float64
end

function UkPolynomial(coefv::Vector{T}) where{T}
  # find the first non-zero coefficient, assuming the first coefficient is a
  # zero-order constant:
  first_nonzero_index = findfirst(!iszero, coefv)
  # now take the coefficients that are not zero, which is every other one:
  if first_nonzero_index == 1
    constant = coefv[1]
    next_nonzero_index = findfirst(!iszero, coefv[2:end])
    if isnothing(next_nonzero_index)
      if length(coefv) > 1
        throw(error("These coefficients don't correspond to a U_k polynomial."))
      end
      return UkPolynomial{0,Nothing}(nothing, coefv[1])
    end
    first_nonzero_index = next_nonzero_index+1
  else
    constant = 0.0
  end
  nzcoef = vcat(zero(T), coefv[first_nonzero_index:2:end])
  coef_skip_zeros = length(nzcoef) < 20 ? tuple(nzcoef...) : nzcoef
  (N,P) = (first_nonzero_index-1, typeof(coef_skip_zeros))
  UkPolynomial{N,P}(coef_skip_zeros, constant)
end

# the case of a simple polynomial with no leading zeros.
(Uk::UkPolynomial{0,P})(x) where{P} = evalpoly(x^2, Uk.coef_skip_zeros)/x + Uk.constant

# the case of a zero-order polynomial:
(Uk::UkPolynomial{0,Nothing})(x) = Uk.constant

# the nontrivial case of a polynomial that DOES have leading zeros:
function (Uk::UkPolynomial{N,P})(x) where{N,P}
  pre_multiply = x^(N-2)
  pv = evalpoly(x^2, Uk.coef_skip_zeros)
  pre_multiply*pv + Uk.constant
end
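# Usage sketch (an addition for illustration, not part of the original file):
# construct U_1 from its dense coefficient vector -- the same input the
# generated uk_polys.jl uses -- and evaluate it. With the skip-zero storage
# above, this is equal to 0.125*x - (5/24)*x^3.
U1_example = UkPolynomial([0.0, 0.125, 0.0, -0.20833333333333334])
U1_example(0.3)  # = 0.3/8 - 5*(0.3^3)/24 = 0.031875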
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/notinuse/besk.jl
code
6683
#
# NOTE (cg 2022/09/09 10:48): this is my original implementation of this
# function. But Bessels.jl has been progressing and developing better routines,
# some of which are derived from this code, and so I'm now completely letting
# that package handle the case when v isa AbstractFloat. I leave this code here
# only for people coming from the paper that want to see it.
#

@inline isnearint(v, tol) = abs(v-round(v)) < tol

# TODO (cg 2021/10/22 17:04): The cutoffs chosen here are actually to some
# degree chosen as a balance between pointwise accuracy AND AD derivative
# accuracy. For example, the series gets worse faster for derivatives than it
# does for raw evals. Not entirely sure what to make of that, but here we are.
#
# TODO (cg 2021/10/27 10:28): the weak point is clearly with |z| between, say, 8
# and 15. We only get atols on the order of 1e-12 for that, which is not quite
# good enough to declare total victory. I can only manage to match the speed of
# AMOS in that part of the domain, too.
function _besselk(v, x, maxit=100, tol=1e-12, order=6)
  @assert x >= zero(x) DomainError("x needs to be non-negative.")
  iszero(x) && return Inf
  (real(v) < 0) && return _besselk(-v, x, maxit, tol, order)
  # TODO (cg 2021/11/16 16:44): this is not the right way to test if you're
  # doing AD. But I don't want to hard-bake the ForwardDiff.Dual type in here.
  is_ad = !(v isa AbstractFloat)
  # Special cases, now just half-integers:
  #
  # TODO (cg 2021/12/16 12:41): for the moment, specifically at half
  # integer values the second derivatives are more accurate using the direct
  # series, even at values in which for direct evaluations v is large enough
  # that the series isn't so great. It isn't earth-shattering and the temme one
  # is much more expensive (300 ns vs 100 ns on my machine), but in the interest
  # of maximum safety I'm switching to this behavior.
  #
  # What would really be nice is some way of checking if (v isa Dual{T,V,N}
  # where T<: Dual). But again, I'm worried about getting too stuck with
  # ForwardDiff.
  if isinteger(v-1/2)# && (x < 8.5)
    if is_ad && (x < 8.5)
      return _besselk_ser(v, x, maxit, tol, false)
    elseif !is_ad
      # Probably don't need the is_ad correction here.
      return _besselk_as(v, x, Int(ceil(v)), false)
    end
  end
  #
  # General cases:
  #
  # TODO (cg 2021/11/01 18:03): These branches are not perfect. If you go into
  # ./testing/accuracy.jl and track down the largest rtols between this code and
  # AMOS, you will be able to fiddle around with what version you use and get a
  # better rtol/atol. But I'm at the point where whenever I tweak something like
  # that, something else gets worse and makes it a wash. I think for the time
  # being I have to stop playing with things.
  if abs(x) < 8.5 # (x < 9)
    if (v > 2.85) || isnearint(v, 0.01) || ((x > 4) && is_ad)
      return _besselk_temme(v,x,maxit,tol,false) # direct series.
    else
      return _besselk_ser(v,x,maxit,tol,false) # direct series.
    end
  elseif abs(x) < 15.0
    return _besselk_asv(v,x,12) # uniform large order expn.
  elseif abs(x) < 30.0
    return _besselk_asv(v,x,8)
  else
    if abs(v) > 1.5 || is_ad
      return _besselk_asv(v,x,6)
    else
      if is_ad
        return _besselk_as(v,x,order)
      else
        return _besselk_as(v,x,order,false)
      end
    end
  end
end

# A very simple wrapper that will use AMOS when possible, but fall back to this
# code. So now you can use AD on this but still get AMOS for direct evals.
#
# TODO (cg 2021/11/05 16:57): what's the most sensible naming thing here?
# Calling it besselk and not exporting it seems reasonable enough, but users
# will obviously want to import it. So not obvious what's best to do here.
function adbesselk(v, x, maxit=100, tol=1e-12, order=5)
  if (v isa AbstractFloat) && isinteger(v) && in(typeof(x), (Float32, Float64))
    return _besselk_int(v, x)
  elseif (v isa AbstractFloat) && isinteger(v-1/2) && in(typeof(x), (Float32, Float64))
    return _besselk_halfint(v, x)
  elseif v isa AbstractFloat
    SpecialFunctions.besselk(v, x)
  else
    _besselk(v, x, maxit, tol)
  end
end

# Not exactly a taylor series, but accurate enough.
#
# TODO (cg 2021/11/10 18:33): an enhancement here would be something that also
# worked for integers. But that seems hard. I did the whole Temme thing because
# integers are hard. But as it turns out, we really need it, because the Temme
# recursion depends on (x^v)*besselk(v,x) ->_{z->0} some finite number. But for
# v=0, which is needed to compute for _all_ integer v, that doesn't hold! An
# expansion like this that is valid for all v, including integer v, but is ALSO
# AD-compatible would take care of that entirely, because the K_{v+1}(x) term in
# the Temme series doesn't need f0.
@inline function besselkxv_t0(v, x)
  gv = gamma(v)
  _2v = 2^(v-1)
  cof = (gv*_2v, zero(v), (_2v/4)*gv/(v-1), zero(v), (_2v/16)*gv/(v*v - 3*v + 2))
  evalpoly(x, cof)
end

# Note that the special series cutoffs are low compared to the above function. In
# general, the adbesselk* functions that are exported really should be pretty
# near machine precision here or should just fall back to AMOS when possible. It
# turns out that the *xv modifications in the code are really only helpful when
# the argument is pretty small.
function adbesselkxv(v, x, maxit=100, tol=1e-12, order=5)
  (iszero(v) && iszero(x)) && return Inf
  is_ad = !(v isa AbstractFloat)
  xcut = is_ad ? 6.0 : 2.0
  if !isinteger(v) && (abs(x) <= 1e-8) # use Taylor at zero.
    return besselkxv_t0(v, x)
  elseif (x < xcut) && (v < 5.75) && !isnearint(v, 0.01)
    return _besselk_ser(v, x, maxit, tol, true)
  elseif (x < xcut)
    return _besselk_temme(v, x, maxit, tol, true)
  elseif is_ad && (x > xcut) && (x < 15.0)
    return _besselk_asv(v, x, 12, true)
  else
    return adbesselk(v, x, maxit, tol, order)*(x^v)
  end
end

# Unlike adbesselkxv, this function is pure BesselK.jl, including the cases in
# which special branches for (x^v)*besselk(v,x) don't come up. This is primarily
# used for testing.
function _besselkxv(v, x, maxit=100, tol=1e-12, order=5)
  (iszero(v) && iszero(x)) && return Inf
  is_ad = !(v isa AbstractFloat)
  xcut = is_ad ? 6.0 : 2.0
  if !isinteger(v) && (abs(x) <= 1e-8) # use Taylor at zero.
    return besselkxv_t0(v, x)
  elseif (x < xcut) && (v < 5.75) && !isnearint(v, 0.01)
    return _besselk_ser(v, x, maxit, tol, true)
  elseif (x < xcut)
    return _besselk_temme(v, x, maxit, tol, true)
  elseif is_ad && (x > xcut) && (x < 15.0)
    return _besselk_asv(v, x, 12, true)
  else
    return _besselk(v, x, maxit, tol, order)*(x^v)
  end
end
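# Usage sketch (an addition for illustration, not part of the original file;
# assumes SpecialFunctions is loaded, as the calls above require): for plain
# floating-point order that is neither an integer nor a half-integer, the
# wrapper above defers to AMOS via SpecialFunctions.besselk, so the two agree:
let (v, x) = (1.3, 4.2)
  @assert adbesselk(v, x) == SpecialFunctions.besselk(v, x)
end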
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
src/notinuse/besk_as.jl
code
3036
# A maximally fast upper branch incomplete gamma function when the first
# argument is a non-positive integer.
#
# (_gamma_upper_negative_integer)
@inline function _g_u_n_i(s::Int64, x, g0, expnx)
  n = -s
  ser = zero(x)
  _x = one(x)
  _sgn = copy(_x)
  facn = convert(typeof(x), factorial(n))
  fac = facn/n
  for k in 0:(n-1)
    term = _sgn*fac*_x
    ser += term
    _x *= x
    _sgn *= -one(x)
    fac /= (n-k-1)
  end
  ((expnx/_x)*ser + _sgn*g0)/facn
end

# The best speed I was able to accomplish with this thing was just to
# compartmentalize it into its own function. Definitely not ideal, but so it is.
#
# TODO (cg 2021/10/26 15:58): get rid of all the remaining factorial calls so
# that this won't literally break for order greater than, like, 19.
function exponential_improvement(v, x, l, m)
  onex = one(x)
  twox = onex+onex
  fv = 4*v*v
  ser = zero(x)
  _z = x
  floatj = onex
  ak_num = fv - floatj
  factj = onex
  twofloatj = onex
  eightj = 8
  expx = exp(2*x)
  expnx = exp(-2*x)
  expintx = -expinti(-2*x)
  ser = expx*factorial(l-1)*_g_u_n_i(1-l, 2*x, expintx, expnx)/(2*pi)
  for j in 1:(m-1)
    # add to the series:
    s = Int(l-j)
    _g = expx*factorial(Int(s-1))*_g_u_n_i(1-s, 2*x, expintx, expnx)/(2*pi)
    term = ak_num/(factj*_z*eightj)*_g
    ser += term
    # update ak and _z:
    floatj += onex
    twofloatj += twox
    factj *= floatj
    ak_num *= (fv - twofloatj^2)
    _z *= x
    eightj *= 8
  end
  _sgn = isodd(l) ? -one(x) : one(x)
  ser*_sgn*twox*cospi(v)
end

function _besselk_as(v, x, order, use_remainder=true, modify=false)
  onex = one(x)
  twox = onex+onex
  fv = 4*v*v
  ser = zero(x)
  _z = x
  ser = onex #zero(x) #onex/_z
  floatj = onex
  ak_num = fv - floatj
  factj = onex
  twofloatj = onex
  eightj = 8
  for j in 1:order
    # add to the series:
    term = ak_num/(factj*_z*eightj)
    ser += term
    # update ak and _z:
    floatj += onex
    twofloatj += twox
    factj *= floatj
    ak_num *= (fv - twofloatj^2)
    _z *= x
    eightj *= 8
  end
  if use_remainder
    _rem = exponential_improvement(v, x, Int(order+1), Int(order))
    ser += _rem
  end
  # if you're modifying as (x^v)*besselk(v,x), since the series part is pretty
  # stable numerically, what we want to deal with is the (x^v)*exp(-x). That's
  # the problem of potentially huge*tiny.
  if modify
    mulval = exp(v*log(x)-x)
  else
    mulval = exp(-x)
  end
  sqrt(pi/(x*twox))*mulval*ser
end

# A refined version of my (CG) base version, thanks to Michael Helton and Oscar
# Smith (see https://github.com/heltonmc/Bessels.jl/issues/25)
SQRT_PID2(::Type{Float64}) = 1.2533141373155003
function _besselk_halfint(v::T, x) where{T}
  v = abs(v)
  invx = inv(x)
  b0 = b1 = SQRT_PID2(Float64)*sqrt(invx)*exp(-x)
  twodx = 2*invx
  _v = T(1/2)
  while _v < v
    b0, b1 = b1, muladd(b1, twodx*_v, b0)
    _v += one(T)
  end
  b1
end
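# Sanity-check sketch (an addition for illustration, not part of the original
# file): K_{1/2}(x) has the closed form sqrt(pi/(2x))*exp(-x), which is exactly
# the value the recursion above starts from:
let x = 2.0
  @assert abs(_besselk_halfint(0.5, x) - sqrt(pi/(2*x))*exp(-x)) < 1e-14
end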
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
test/runtests.jl
code
4735
using Test, BenchmarkTools, BesselK, SpecialFunctions, FiniteDifferences, ForwardDiff

const VGRID = range(0.25, 10.0, length=100)
const XGRID = range(0.0, 50.0, length=201)[2:end]
const VX = collect(Iterators.product(VGRID, XGRID))
const REF_FD1 = central_fdm(10,1)
const REF_FD2 = central_fdm(10,2)

atolfun(tru, est) = isnan(est) ? NaN : (isinf(tru) ? 0.0 : abs(tru-est))

besselkxv(v,x) = besselk(v,x)*(x^v)

fd_dbesselk_dv(v, x) = REF_FD1(_v->besselk(_v, x), v)
fd2_dbesselk_dv_dv(v, x) = REF_FD2(_v->besselk(_v, x), v)
fd_dbesselkxv_dv(v, x) = REF_FD1(_v->besselkxv(_v, x), v)
fd2_dbesselkxv_dv_dv(v, x) = REF_FD2(_v->besselkxv(_v, x), v)

ad_dbesselk_dv(v, x) = ForwardDiff.derivative(_v->adbesselk(_v, x), v)
ad2_dbesselk_dv_dv(v, x) = ForwardDiff.derivative(_v->ad_dbesselk_dv(_v, x), v)
ad_dbesselkxv_dv(v, x) = ForwardDiff.derivative(_v->adbesselkxv(_v, x), v)
ad2_dbesselkxv_dv_dv(v, x) = ForwardDiff.derivative(_v->ad_dbesselkxv_dv(_v, x), v)

# direct accuracy:
@testset "direct eval" begin
  println("\nDirect evaluations:")
  for (ref_fn, cand_fn, case) in ((besselk, adbesselk, :standard),
                                  (besselkxv, adbesselkxv, :rescaled))
    amos_ref = map(vx->ref_fn(vx[1], vx[2]), VX)
    candidate = map(vx->cand_fn(vx[1], vx[2]), VX)
    atols = map(a_c->atolfun(a_c[1], a_c[2]), zip(amos_ref, candidate))
    ix = findall(x-> x <= 1000.0, amos_ref)
    thresh = case == :standard ? 5e-11 : 2e-12
    (maxerr, maxix) = findmax(abs, atols[ix])
    (maxerr_v, maxerr_x) = VX[ix][maxix]
    println("Case $case:")
    println("worst (v,x): ($maxerr_v, $maxerr_x)")
    println("Ref value: $(amos_ref[ix][maxix])")
    println("Est value: $(candidate[ix][maxix])")
    println("Abs error: $(round(maxerr, sigdigits=3))")
    @test maxerr < thresh
  end
  println()
end

# test derivative accuracy:
@testset "first derivative" begin
  println("\nFirst derivatives:")
  for (ref_fn, cand_fn, case) in ((fd_dbesselk_dv, ad_dbesselk_dv, :standard),
                                  (fd_dbesselkxv_dv, ad_dbesselkxv_dv, :rescaled))
    amos_ref = map(vx->ref_fn(vx[1], vx[2]), VX)
    candidate = map(vx->cand_fn(vx[1], vx[2]), VX)
    atols = map(a_c->atolfun(a_c[1], a_c[2]), zip(amos_ref, candidate))
    ix = findall(x-> x <= 1000.0, amos_ref)
    thresh = case == :standard ? 4e-9 : 2e-6
    (maxerr, maxix) = findmax(abs, atols[ix])
    (maxerr_v, maxerr_x) = VX[ix][maxix]
    println("Case $case:")
    println("worst (v,x): ($maxerr_v, $maxerr_x)")
    println("Ref value: $(amos_ref[ix][maxix])")
    println("Est value: $(candidate[ix][maxix])")
    println("Abs error: $(round(maxerr, sigdigits=3))")
    @test maxerr < thresh
  end
  println()
end

# test second derivative accuracy:
@testset "second derivative" begin
  println("\nSecond derivatives:")
  for (ref_fn, cand_fn, case) in ((fd2_dbesselk_dv_dv, ad2_dbesselk_dv_dv, :standard),
                                  (fd2_dbesselkxv_dv_dv, ad2_dbesselkxv_dv_dv, :rescaled))
    amos_ref = map(vx->ref_fn(vx[1], vx[2]), VX)
    candidate = map(vx->cand_fn(vx[1], vx[2]), VX)
    atols = map(a_c->atolfun(a_c[1], a_c[2]), zip(amos_ref, candidate))
    ix = findall(x-> x <= 100.0, amos_ref)
    thresh = case == :standard ? 5e-7 : 5e-6
    (maxerr, maxix) = findmax(abs, atols[ix])
    (maxerr_v, maxerr_x) = VX[ix][maxix]
    println("Case $case:")
    println("worst (v,x): ($maxerr_v, $maxerr_x)")
    println("Ref value: $(amos_ref[ix][maxix])")
    println("Est value: $(candidate[ix][maxix])")
    println("Abs error: $(round(maxerr, sigdigits=3))")
    @test maxerr < thresh
  end
  println()
end

# Testing the _xv versions really slows down the test script, and in general
# there are no new routines.
@testset "confirm no allocations" begin
  VGRID_ALLOC = (0.25, 1.0-1e-8, 1.0, 1.5, 2.1, 3.0, 3.5, 4.8)
  XGRID_ALLOC = range(0.0, 50.0, length=11)[2:end]
  VX_ALLOC = collect(Iterators.product(VGRID_ALLOC, XGRID_ALLOC))
  ad_alloc_test(v,x) = @ballocated ad_dbesselk_dv($v,$x) samples=1
  ad2_alloc_test(v,x) = @ballocated ad2_dbesselk_dv_dv($v,$x) samples=1
  ad_alloc_test_xv(v,x) = @ballocated ad_dbesselkxv_dv($v,$x) samples=1
  ad2_alloc_test_xv(v,x) = @ballocated ad2_dbesselkxv_dv_dv($v,$x) samples=1
  ad_allocs = map(vx->ad_alloc_test(vx[1], vx[2]), VX_ALLOC)
  ad2_allocs = map(vx->ad2_alloc_test(vx[1], vx[2]), VX_ALLOC)
  ad_allocs_xv = map(vx->ad_alloc_test_xv(vx[1], vx[2]), VX_ALLOC)
  ad2_allocs_xv = map(vx->ad2_alloc_test_xv(vx[1], vx[2]), VX_ALLOC)
  @test all(iszero, ad_allocs)
  @test all(iszero, ad2_allocs)
  @test all(iszero, ad_allocs_xv)
  @test all(iszero, ad2_allocs_xv)
end
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
README.md
docs
7541
# BesselK.jl

[build-latest-img]: https://github.com/cgeoga/BesselK.jl/workflows/CI/badge.svg
[build-url]: https://github.com/cgeoga/BesselK.jl/actions?query=workflow

[![][build-latest-img]][build-url]

This package implements one function: the modified second-kind Bessel function
Kᵥ(x). It is designed specifically to be automatically differentiable **with
ForwardDiff.jl**, including providing derivatives with respect to the order
parameter `v` **that are fast and non-allocating in the entire domain for both
first and second order**.

Derivatives with respect to ν are significantly faster than any finite
differencing method, including the most naive fixed-step minimum-order method,
and in almost all of the domain are meaningfully more accurate. Particularly
near the origin you should expect to gain at least 3-5 digits. Second
derivatives are even more dramatic, both in terms of the speedup and accuracy
gains, now commonly giving 10+ more digits of accuracy.

As a happy accident/side-effect, if you're willing to give up the last couple
digits of accuracy, you could also use `ForwardDiff.jl` on this code for
derivatives with respect to argument for an order-of-magnitude speedup. In some
casual testing the argument-derivative errors with this code are never worse
than `1e-12`, and they turn 1.4 μs with allocations into 140 ns without any
allocations.

In order to avoid naming conflicts with other packages, this package exports
three functions:
* `matern`: the Matern covariance function in its most common parameterization.
  See the docstrings for more info.
* `adbesselk`: Gives Kᵥ(x), using `Bessels.jl` if applicable and our more
  specialized order-AD codes otherwise.
* `adbesselkxv`: Gives Kᵥ(x)*(x^v), using `Bessels.jl` if applicable and our
  more specialized order-AD codes otherwise.

Here is a very basic demo:
```julia
using ForwardDiff, SpecialFunctions, BesselK

(v, x) = (1.1, 2.1)

# For regular evaluations, you get what you're used to getting:
@assert isapprox(besselk(v, x), adbesselk(v, x))
@assert isapprox((x^v)*besselk(v, x), adbesselkxv(v, x))

# But now you also get good (and fast!) derivatives:
@show ForwardDiff.derivative(_v->adbesselk(_v, x), v)   # good to go.
@show ForwardDiff.derivative(_v->adbesselkxv(_v, x), v) # good to go.
```

# A note to people coming from the paper

You'll see that this repo defines a great deal of specific derivative functions
in the files in `./paperscripts`. **This is only because we specifically tested
those quantities in the paper**. If you're just here to fit a Matern covariance
function, then you should **not** be doing that. Your code, at least in the
simplest case, should probably look more like this:
```julia
using ForwardDiff, BesselK

function my_covariance_function(loc1, loc2, params)
  ... # your awesome covariance function, presumably using adbesselk somewhere.
end

const my_data = ...      # load in your data
const my_locations = ... # load in your locations

# Create your likelihood and use ForwardDiff for the grad and Hessian:
function nll(params)
  K = cholesky!(Symmetric([my_covariance_function(x, y, params)
                           for x in my_locations, y in my_locations]))
  0.5*(logdet(K) + dot(my_data, K\my_data))
end
nllg(params) = ForwardDiff.gradient(nll, params)
nllh(params) = ForwardDiff.hessian(nll, params)

my_mle = some_optimizer(init_params, nll, nllg, nllh, ...)
```
Or something like that. You of course do not *have* to do it this way, and
could manually implement the gradient and Hessian of the likelihood after
manually creating derivatives of the covariance function itself (see
`./example/matern.jl` for a demo of that), and manual implementations,
particularly for the Hessian, will be faster if they are thoughtful enough. But
what I mean to emphasize here is that in general you should *not* be doing
manual chain rule or derivative computations of your covariance function
itself. Let the AD handle that for you and enjoy the power that Julia's
composability offers.

# Limitations

For the moment there are two primary limitations:

* **AD compatibility with `ForwardDiff.jl` only**. The issue here is that in one
  particular case I use a different function branch depending on whether one is
  taking a derivative with respect to `v` or just evaluating `besselk(v, x)`.
  The way that is currently checked in the code is with
  `if (v isa AbstractFloat)`, which may not work properly for other methods.
* **Only derivatives up to the second are checked and confirmed accurate.** The
  code uses a large number of local polynomial expansions at slightly hairy
  values of internal intermediate functions, and so at some sufficiently high
  level of derivative those local polynomials won't give accurate partial
  information.

# Also consider: `Bessels.jl`

This software package was written with the pretty specific goal of computing
derivatives of Kᵥ(x) with respect to the order using `ForwardDiff.jl`. While it
is in general a bit faster than AMOS, we give up a few digits of accuracy here
and there in the interest of better and faster derivatives. If you just want
the fastest possible Kᵥ(x) for floating point order and argument (as in, you
don't need to do AD), then you would probably be better off using
[`Bessels.jl`](https://github.com/heltonmc/Bessels.jl). This code now uses
`Bessels.jl` whenever possible, so now the only question is really about
whether you need AD. If you need AD with respect to order, use this package. If
you don't, then this package offers nothing beyond what `Bessels.jl` does.

# Implementation details

See the reference for an entire paper discussing the implementation. But in a
word, this code uses several routines to evaluate Kᵥ accurately on different
parts of the domain, and has to use some non-standard tricks to maintain AD
compatibility and correctness. When `v` is an integer or half-integer, for
example, a lot of additional work is required. The code is also pretty
well-optimized, and you can benchmark for yourself or look at the paper to see
that in several cases the `ForwardDiff.jl`-generated derivatives are faster
than a single call to `SpecialFunctions.besselk`. To achieve this performance,
particularly for second derivatives, some work was required to make sure that
all of the function calls are non-allocating, which means switching from raw
`Tuple`s to `Polynomial` types in places where the polynomials are large enough
and things like that. Again this arguably makes the code look a bit
disorganized or inconsistent, but to my knowledge it is all necessary. If
somebody looking at the source finds a simplification, I would love to see it,
either in terms of an issue or a PR or an email or a patch file or anything.
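For a concrete taste of that composability, here is a minimal sketch (it
assumes the `(σ, ρ, ν)` parameter order used in the example scripts; check the
`matern` docstring before copying it):
```julia
using ForwardDiff, BesselK, StaticArrays

# Gradient of the Matern covariance in (sigma, rho, nu) at a pair of points:
p1 = @SVector [0.0, 0.0]
p2 = @SVector [0.5, 0.5]
ForwardDiff.gradient(p -> matern(p1, p2, p), [1.0, 1.0, 1.25])
```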
# Citation

If you use this package in your research that gets compiled into some kind of
report/article/poster/etc, please cite [this paper](https://arxiv.org/abs/2201.00090):
```
@misc{GMSS_2022,
      title={Fitting Mat\'ern Smoothness Parameters Using Automatic Differentiation},
      author={Christopher J. Geoga and Oana Marin and Michel Schanen and Michael L. Stein},
      year={2022},
      journal={Statistics and Computing}
}
```

While this package ostensibly only covers a single function, putting all of
this together and making it this fast and accurate was really a lot of work. I
would *really* appreciate you citing this paper if this package was useful in
your research. Like, for example, if you used this package to fit a Matern
smoothness parameter with second order optimization methods.
BesselK
https://github.com/cgeoga/BesselK.jl.git
[ "MIT" ]
0.5.6
0a2aba1fa92200ac4ecd1c49b9a73100e4b34816
paperscripts/README.md
docs
1526
This folder is a sort of haphazard collection of scripts that we used in the
paper. Not all of them ended up providing results that went directly into the
paper, and a lot of them were also absorbed into the extensive testing in the
package itself. But we include them all here for the curious people who would
like to obtain results from the paper. Of course, as the versions of BesselK.jl
change the exact numbers here will also change a bit, although if I did
everything right the first tagged release of this code (or maybe the initial
commit) should be almost the exact source that was used to generate the results
in v1 of the paper.

Should anything come up that you'd like to discuss, please don't hesitate to
contact me. You can find my email addresses on my website, which you can find
by googling my name (Chris Geoga).

Some misc notes:

-- I would _not_ suggest using the fitting scripts in `./demo/` as the basis of
your own code for estimating parameters. You could certainly do much worse and
it does leverage my generic go-to package `GPMaxlik.jl`, which has a ton of
nice features (not that I'm biased). But I'd sooner suggest looking at the
example files in that repo as a template.

-- I've re-organized the code a bit and some code lives in `../examples/`. I've
done my best to make sure that all of these tests still run as they should
after the re-org, but if something doesn't, please open an issue or email me or
something. It's probably just an accident that I will be able to resolve
immediately.
BesselK
https://github.com/cgeoga/BesselK.jl.git