Define some essential mathematical functions.
Root Finding
ACFlow.secant — Function
secant(func, x0, args...)
It implements the well-known secant algorithm to locate the root of a given polynomial function. Here, func means the function, x0 is the initial guess, and args... denotes the required arguments for the function call to func. In addition, the maximum number of iterations and the tolerance are controlled by maxiter and tol, respectively. Note that func must be a single-variable function.
Arguments
See above explanations.
Returns
- 𝑝 -> The solution.
See also: newton.
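A minimal usage sketch (illustrative only; it assumes that maxiter and tol take their default values):
f(x) = x^2 - 2.0              # Single-variable function with a root at √2
root = ACFlow.secant(f, 1.0)  # x0 = 1.0 is the initial guess
# Expect root ≈ 1.4142135...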
ACFlow.newton — Function
newton(
    fun::Function,
    guess,
    kwargs...;
    maxiter::I64 = 20000,
    mixing::F64 = 0.5
)
It implements the well-known Newton algorithm to locate the root of a given polynomial function. Here, fun means the function, guess is the initial solution, and kwargs... denotes the required arguments for fun. Please be careful, fun is a multiple-variable function. It not only returns the value, but also the Jacobian matrix of the function.
Arguments
See above explanations.
Returns
- sol -> Solution.
- call -> Counter for the function calls to fun().
See also: secant.
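A minimal sketch (the system below is made up; it assumes that fun returns both the value and the Jacobian, as stated above):
# Solve x₁² + x₂ = 3 and x₁ - x₂ = -1, whose root is (1, 2).
function fun(x)
    f = [x[1]^2 + x[2] - 3.0, x[1] - x[2] + 1.0]
    J = [2.0*x[1] 1.0; 1.0 -1.0]  # Jacobian matrix ∂fᵢ/∂xⱼ
    return f, J
end
sol, call = ACFlow.newton(fun, [2.0, 2.0])
# Expect sol ≈ [1.0, 2.0]; call counts the calls to fun().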
Numerical Integrations
ACFlow.trapz — Function
trapz(x::AbstractMesh, y::AbstractVector{T}) where {T<:N64}
Perform numerical integration by using the composite trapezoidal rule.
Arguments
- x -> Real frequency mesh.
- y -> Function values at real axis.
Returns
- ℐ -> The value of the integral.
See also: simpson.
trapz(
    x::AbstractVector{S},
    y::AbstractVector{T},
    linear::Bool = false
) where {S<:Number, T<:Number}
Perform numerical integration by using the composite trapezoidal rule. Note that it supports arbitrary precision via BigFloat.
Arguments
- x -> Real frequency mesh.
- y -> Function values at real axis.
- linear -> Whether the given mesh is linear.
Returns
- ℐ -> The value of the integral.
See also: simpson.
ACFlow.simpson — Function
simpson(
    x::AbstractVector{S},
    y::AbstractVector{T}
) where {S<:Number, T<:Number}
Perform numerical integration by using Simpson's rule. Note that the lengths of x and y must be odd numbers, and x must be a linear and uniform mesh.
Arguments
- x -> Real frequency mesh.
- y -> Function values at real axis.
Returns
- ℐ -> The value of the integral.
See also: trapz.
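As an illustration, both routines should reproduce ∫₀^π sin(x) dx = 2 on a linear mesh with an odd number of points (a standalone, made-up example):
x = collect(LinRange(0.0, π, 101))  # Linear mesh with odd length (required by simpson)
y = sin.(x)
ACFlow.trapz(x, y, true)  # ≈ 2.0, fast path for linear meshes
ACFlow.simpson(x, y)      # ≈ 2.0, higher-order accuracy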
Numerical Derivatives
ACFlow.second_derivative — Function
second_derivative(x::AbstractVector, y::AbstractVector)
Compute the second derivative y''(x). If the length of x and y is N, the length of the returned vector is N - 2.
Arguments
- x -> Real frequency mesh.
- y -> Function values at real axis.
Returns
- val -> y''(x).
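For example (illustrative only), for y = sin(x) the returned vector should approximate -sin(x) at the interior points:
x = collect(LinRange(0.0, 2.0 * π, 201))
y = sin.(x)
d2 = ACFlow.second_derivative(x, y)  # Length is 199, i.e. N - 2
# Expect d2 ≈ -sin.(x[2:end-1])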
ACFlow.gradient_via_fd — Function
gradient_via_fd(f, x)
Compute ∂f/∂x via the finite difference method. It is less accurate and much slower than the automatic differentiation approach. Actually, we won't use this function to calculate the gradient. The Zygote.gradient() function is always a better choice.
Arguments
- f -> The function to be differentiated.
- x -> The point at which the gradient is evaluated.
Returns
- val -> ∂f/∂x.
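A minimal sketch, assuming f is a function of a vector argument (consistent with the comparison to Zygote.gradient() above; the example itself is made up):
f(v) = sum(abs2, v)                        # f(v) = v₁² + v₂²
g = ACFlow.gradient_via_fd(f, [1.0, 2.0])  # Finite-difference gradient
# Expect g ≈ [2.0, 4.0]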
Interpolations
Structs
ACFlow.AbstractInterpolation — Type
AbstractInterpolation
It represents an abstract interpolation engine, which is used to build the internal type system.
ACFlow.LinearInterpolation — Type
LinearInterpolation
It represents the linear interpolation algorithm.
ACFlow.QuadraticInterpolation — Type
QuadraticInterpolation
It represents the quadratic interpolation algorithm.
ACFlow.CubicSplineInterpolation — Type
CubicSplineInterpolation
It represents the cubic spline interpolation algorithm.
Constructors
ACFlow.LinearInterpolation — Method
LinearInterpolation(u::AbstractVector, t::AbstractVector)
Create a LinearInterpolation struct. Note that u and t denote f(x) and x, respectively.
ACFlow.QuadraticInterpolation — Method
QuadraticInterpolation(u::AbstractVector, t::AbstractVector)
Create a QuadraticInterpolation struct. Note that u and t denote f(x) and x, respectively.
ACFlow.CubicSplineInterpolation — Method
CubicSplineInterpolation(u::AbstractVector, t::AbstractVector)
Create a CubicSplineInterpolation struct. Note that u and t denote f(x) and x, respectively.
Functions
ACFlow.munge_data — Function
munge_data(u::AbstractVector{<:Real}, t::AbstractVector{<:Real})
Preprocess the input data. Note that u and t denote f(x) and x, respectively.
munge_data(u::AbstractVector, t::AbstractVector)
Preprocess the input data. Note that u and t denote f(x) and x, respectively.
ACFlow._interp — Function
_interp(A::LinearInterpolation{<:AbstractVector}, t::Number)
It implements the linear interpolation algorithm.
See also: LinearInterpolation.
_interp(A::QuadraticInterpolation{<:AbstractVector}, t::Number)
It implements the quadratic interpolation algorithm.
See also: QuadraticInterpolation.
_interp(A::CubicSplineInterpolation{<:AbstractVector{<:Number}}, t::Number)
It implements the cubic spline interpolation algorithm.
See also: CubicSplineInterpolation.
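A minimal end-to-end sketch of this internal API (made-up data):
t = collect(0.0:0.1:1.0)  # x points
u = sin.(t)               # f(x) values
A = ACFlow.CubicSplineInterpolation(u, t)
ACFlow._interp(A, 0.55)   # ≈ sin(0.55)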
Einstein Summation Convention
ACFlow.@einsum — Macro
@einsum(ex)
Perform Einstein summation-like operations on Julia Arrays.
Examples
Basic matrix multiplication can be implemented as:
@einsum A[i, j] := B[i, k] * C[k, j]
If the destination array is preallocated, then use =:
A = ones(5, 6, 7) # Preallocated space
X = randn(5, 2)
Y = randn(6, 2)
Z = randn(7, 2)
# Store the result in A, overwriting as necessary:
@einsum A[i, j, k] = X[i, r] * Y[j, r] * Z[k, r]
If the destination is not preallocated, then use := to automatically create a new array for the result:
X = randn(5, 2)
Y = randn(6, 2)
Z = randn(7, 2)
# Create new array B with appropriate dimensions:
@einsum B[i, j, k] := X[i, r] * Y[j, r] * Z[k, r]
Curve Fitting
Structs
ACFlow.OnceDifferentiable — Type
OnceDifferentiable
Mutable struct. It is used for objectives and solvers where the gradient is available/exists.
Members
- ℱ! -> Objective. It is actually a function call that returns the objective.
- 𝒥! -> It is a function call as well; it returns the Jacobian of the objective.
- 𝐹 -> Cache for ℱ! output.
- 𝐽 -> Cache for 𝒥! output.
ACFlow.LMOptimizationResults — Type
LMOptimizationResults{T,N}
It is used to save the optimization results of the levenberg_marquardt algorithm.
Members
- x₀ -> Initial guess for the solution.
- minimizer -> Final results for the solution.
- minimum -> Residual.
- iterations -> Number of iterations.
- xconv -> Whether convergence criterion 1 is satisfied.
- gconv -> Whether convergence criterion 2 is satisfied.
ACFlow.LsqFitResult — Type
LsqFitResult
It encapsulates the results for curve fitting.
Members
- param -> Fitted results, i.e., the fitting parameters.
- resid -> Residuals.
- jacobian -> Jacobian matrix.
- converged -> Whether the curve-fitting algorithm has converged.
Constructors
ACFlow.OnceDifferentiable — Method
OnceDifferentiable(𝑓, p0::AbstractArray, 𝐹::AbstractArray)
Constructor for the OnceDifferentiable struct. 𝑓 is the function, p0 is the initial guess, and 𝐹 = 𝑓(p0) is the cache for 𝑓's output.
Functions
ACFlow.value — Function
value(obj::OnceDifferentiable)
Return obj.𝐹. obj will not be affected.
value(obj::OnceDifferentiable, 𝐹, x)
Return 𝑓(x). obj will not be affected, but 𝐹 is updated.
value(obj::BFGSDifferentiable)
Return obj.𝐹. obj will not be affected.
ACFlow.value! — Function
value!(obj::OnceDifferentiable, x)
Return 𝑓(x). obj.𝐹 will be updated and returned.
ACFlow.jacobian — Function
jacobian(obj::OnceDifferentiable)
Return obj.𝐽. obj will not be affected.
ACFlow.jacobian! — Function
jacobian!(obj::OnceDifferentiable, x)
Return the Jacobian. obj.𝐽 will be updated and returned.
ACFlow.levenberg_marquardt — Function
levenberg_marquardt(df::OnceDifferentiable, x₀::AbstractVector{T})
Returns the argmin over x of sum(f(x).^2) using the Levenberg-Marquardt algorithm. The function f is encoded in df. x₀ is an initial guess for the solution.
See also: OnceDifferentiable.
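A minimal sketch (the residual function below is made up; it assumes that 𝑓(p) returns the residual vector, matching the OnceDifferentiable constructor above):
𝑓(p) = [p[1] - 1.0, 2.0 * (p[2] + 0.5)]       # Residual vector, zero at (1.0, -0.5)
p0 = zeros(2)
df = ACFlow.OnceDifferentiable(𝑓, p0, 𝑓(p0))  # Wrap 𝑓 together with its output cache
res = ACFlow.levenberg_marquardt(df, p0)
# Expect res.minimizer ≈ [1.0, -0.5]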
ACFlow.curve_fit — Function
curve_fit(model, x, y, p0)
Fit data to a non-linear model. p0 is an initial model parameter guess. The return object is a composite type (LsqFitResult).
See also: LsqFitResult.
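For instance, fitting an exponential decay (a made-up example; the model(x, p) calling convention is an assumption here, following LsqFit.jl-style interfaces):
model(x, p) = p[1] .* exp.(-p[2] .* x)  # Non-linear model with parameters p
x = collect(0.0:0.1:2.0)
y = 2.0 .* exp.(-1.5 .* x)              # Synthetic data
fit = ACFlow.curve_fit(model, x, y, [1.0, 1.0])
# Expect fit.param ≈ [2.0, 1.5] and fit.converged == true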
Numerical Optimization
Structs
ACFlow.BFGSDifferentiable — Type
BFGSDifferentiable
Mutable struct. It is used for objectives and solvers where the gradient is available/exists.
Members
- ℱ! -> Objective. It is actually a function call that returns the objective.
- 𝒟! -> It is a function call as well; it returns the derivative of the objective.
- 𝐹 -> Cache for ℱ! output.
- 𝐷 -> Cache for 𝒟! output.
ACFlow.BFGSState — Type
BFGSState
Mutable struct. It is used to trace the history of states visited.
Members
- x -> Current position.
- ls -> Current search direction.
- δx -> Changes in position.
- δg -> Changes in gradient.
- xₚ -> Previous position.
- gₚ -> Previous gradient.
- fₚ -> Previous f (i.e., f at xₚ).
- H⁻¹ -> Current inverse Hessian matrix.
- alpha -> An internal parameter to control the BFGS algorithm.
ACFlow.BFGSOptimizationResults — Type
BFGSOptimizationResults
It is used to save the optimization results of the BFGS algorithm.
Members
- x₀ -> Initial guess for the solution.
- minimizer -> Final results for the solution.
- minimum -> Objective at the final solution.
- iterations -> Number of iterations.
- δx -> Absolute change in x.
- Δx -> Relative change in x.
- δf -> Absolute change in f.
- Δf -> Relative change in f.
- resid -> Maximum gradient of f at the final solution.
- gconv -> Whether the convergence criterion is satisfied.
Constructors
ACFlow.BFGSDifferentiable — Method
BFGSDifferentiable(f, df, x::AbstractArray)
Constructor for the BFGSDifferentiable struct. f is the function, df is the derivative of the objective, and x is the initial guess.
Functions
ACFlow.value — Method
value(obj::BFGSDifferentiable)
Return obj.𝐹. obj will not be affected.
ACFlow.gradient — Function
gradient(obj::BFGSDifferentiable)
Return obj.𝐷. obj will not be affected.
ACFlow.value_gradient! — Function
value_gradient!(obj::BFGSDifferentiable, x)
Evaluate the objective and its derivative at x. obj.𝐹 and obj.𝐷 should be updated. Note that here obj.𝒟! is actually nac.jl/smooth_norm().
ACFlow.maxdiff — Function
maxdiff(x::AbstractArray, y::AbstractArray)
Return the maximum difference between two arrays, x and y. Note that the sizes of x and y should be the same.
ACFlow.eval_δf — Function
eval_δf(d::BFGSDifferentiable, s::BFGSState)
Evaluate the absolute change in f.
ACFlow.eval_Δf — Function
eval_Δf(d::BFGSDifferentiable, s::BFGSState)
Evaluate the relative change in f.
ACFlow.eval_δx — Function
eval_δx(s::BFGSState)
Evaluate the absolute change in x.
ACFlow.eval_Δx — Function
eval_Δx(s::BFGSState)
Evaluate the relative change in x.
ACFlow.eval_resid — Function
eval_resid(d::BFGSDifferentiable)
Evaluate the residual (maximum gradient of f at the current position).
ACFlow.optimize — Function
optimize(f, g, x₀::AbstractArray; max_iter::I64 = 1000)
Return the argmin over x of f(x) using the BFGS algorithm. Here, f is the function call, g returns the gradient of f, and x₀ is an initial guess for the solution.
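A minimal sketch (made-up objective; it assumes g(x) returns the gradient vector, as stated above):
f(x) = (x[1] - 1.0)^2 + 4.0 * (x[2] + 2.0)^2     # Quadratic bowl with minimum at (1, -2)
g(x) = [2.0 * (x[1] - 1.0), 8.0 * (x[2] + 2.0)]  # Analytic gradient of f
res = ACFlow.optimize(f, g, zeros(2))
# Expect res.minimizer ≈ [1.0, -2.0]; check with converged(res) (documented below).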
ACFlow.init_state — Function
init_state(d::BFGSDifferentiable, x₀::AbstractArray)
Create a BFGSState object. Note that d should be updated in this function (d.𝐹 and d.𝐷). x₀ is an initial guess for the solution.
See also: BFGSDifferentiable, BFGSState.
ACFlow.update_state! — Function
update_state!(d::BFGSDifferentiable, s::BFGSState)
Evaluate the line search direction and the change of x. The new position and the old gradient are saved in s.
See also: BFGSDifferentiable, BFGSState.
ACFlow.update_g! — Function
update_g!(d::BFGSDifferentiable, s::BFGSState)
Update the function value and gradient (d.𝐹 and d.𝐷 are changed).
See also: BFGSDifferentiable, BFGSState.
ACFlow.update_h! — Function
update_h!(d::BFGSDifferentiable, s::BFGSState)
Try to evaluate the new Hessian matrix.
See also: BFGSDifferentiable, BFGSState.
ACFlow.trace — Function
trace(d::BFGSDifferentiable, iter::I64, curr_time::F64)
Print some useful information about the optimization procedure.
ACFlow.linesearch! — Function
linesearch!(d::BFGSDifferentiable, s::BFGSState)
Evaluate the line search direction. Actually, s.alpha, s.fₚ, and s.xₚ will be updated in this function.
See also: BFGSDifferentiable, BFGSState.
ACFlow.converged — Function
converged(r::BFGSOptimizationResults)
Check whether the optimization has converged.
See also: BFGSOptimizationResults.
Line Search
Structs
ACFlow.LineSearchException — Type
LineSearchException
Mutable struct. It contains information about the error that occurred in the line search.
Members
- message -> Error message.
- alpha -> A key parameter used to control line search.
Functions
ACFlow.LS — Function
LS(state::BFGSState, alpha::T, scaled::Bool)
Line search: initial and static version.
LS(df::BFGSDifferentiable,
   x::Vector{C64}, s::Vector{C64},
   c::F64, phi_0::F64, dphi_0::F64)
Line search: Hager-Zhang algorithm.