

Constructing Segmented Differentiable Quadratics to Determine Algorithmic Run Times and Model Non-Polynomial Functions

Ananth Goyal, Dougherty Valley High School
Abstract

We propose an approach to determine the continual progression of algorithmic efficiency, as an alternative to standard calculations of time complexity, likely, but not exclusively, when dealing with data structures with unknown maximum indexes and with algorithms that depend on multiple variables beyond input size. The proposed method can effectively determine the run time behavior $F$ at any given index $x$, as well as $\frac{\partial F}{\partial x}$, as a function of one or multiple arguments, by combining $\frac{n}{2}$ quadratic segments, based upon the principles of Lagrangian polynomials and their respective secant lines. Although the approach is designed for analyzing the efficiency of computational algorithms, it can also be used within pure mathematics as a novel way to construct non-polynomial functions, such as $\log_{2}{n}$ or $\frac{n+1}{n-2}$, as a series of segmented differentiable quadratics that model functional behavior and recurring natural patterns. After testing, our method had an average accuracy above 99% with regard to functional resemblance.

keywords:
Time Complexity; Algorithmic Run Time; Polynomials; Lagrangian Interpolation

1 Introduction

Runtime, and its theoretical subset, time complexity, are imperative to understanding the speed and continual efficiency of all algorithms [Nasar, Aho], particularly because runtime information allows for thorough comparisons between the performance of competing approaches. Due to the varying environments in which algorithms are executed, time complexity is implemented as a function of the inputted arguments [Sipser] rather than accounting for the situational execution time [Pusch]; this removes the need to address every extraneous factor that affects the speed of such algorithms [Dean]. There are countless methods for determining the formulaic runtime complexity [Qi, Guzman], particularly because, from a theoretical perspective, the true runtime can never be determined without thoroughly examining the algorithm itself [ullman]; however, this does not mean that the process cannot be expedited or simplified.

The goal is to produce a function $\mathcal{O}(T(n))$ that can model the time complexity of any given algorithm [Mohr], primarily one whose runtime is defined as a function of more than a single variable. We define $E(foo(args))$, where $foo(args)$ is any given algorithm and $E$ denotes its execution in a controlled environment. The following method can be used to determine run time with respect to several variables (not just element size) by evaluating CPU time with respect to an input size. Any confounding variables, such as CPU type, computing power, and programming language, are bypassed because they remain controlled during testing. The constructed polynomial series, a piece-wise function of segmented quadratics, will then produce the same asymptotic behavior as the true time complexity $\mathcal{O}(T(n))$, which can then be independently determined through correlation with its respective parent function. In addition, the methods for computing such runtimes have profound mathematical implications for representing various non-polynomial functions as differentiable quadratic segments, similar, but not identical, to the outcome of evaluating Taylor series [Jumarie, Corliss]. In short, we do this by taking reference points of any given non-polynomial and developing a quadratic (using polynomial interpolation [Boor]) over a particular segment that accurately matches the true functional behavior.

2 Methods

Our primary condition is the following:

\exists_{x\in\mathbb{R}}\left[F(x+c)-f(x)=\int_{x}^{x+c}\frac{\partial f}{\partial x}\right]

Additionally,

\forall(n\in\mathbb{R}:n>0)\exists\frac{\partial}{\partial n}\mathcal{O}(T(n))

This ensures that the targeted time complexity function must be constructed of only real numbers and be differentiable throughout, except at segmented bounds. It is important to note that $F(x)\neq\mathcal{O}(T(n))\vee F(x)\not\approx\mathcal{O}(T(n))$. We also define $E(foo(args))=k\mathcal{O}(T(n))$, where $k$ is any constant of proportionality that converts the predicted time complexity into execution time or vice-versa.

2.1 Lagrangian Polynomial Principles

We first construct a single line of intersection amongst every consecutive ordered pair of segmented indexes and respective computing time (or any alternative performance modeling metric). We use the following standard point slope formula to do so:

y=\frac{y_{i}-y_{i-1}}{x_{i}-x_{i-1}}(x-x_{i})+y_{i} (2.1)

The polynomial of any given segment can be constructed using the explicit formula below [sauer, Rashed]. Here the first three indexes within a data set are used; however, this applies for any given 3-point segment within the data set, defined as $\forall(x\in(x_{j},x_{k}))\,|\,(k=j+2)$. Note: the proof behind the following formulas is given in Section 2.4 (Lemma 2.2).

\forall(x\in(x_{0},x_{2})):f(x)=y_{0}\frac{(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}+y_{1}\frac{(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}+y_{2}\frac{(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})} (2.2)
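Eq. (2.2) can be sketched directly in code. The following is a minimal Python illustration (function name is ours, not the paper's) of the Lagrange quadratic through three sample points:

```python
def lagrange_quadratic(pts):
    """Return f(x) interpolating three (x, y) points, term by term as in Eq. (2.2)."""
    (x0, y0), (x1, y1), (x2, y2) = pts

    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

    return f

# Three samples of y = x^2; the interpolant reproduces the parabola exactly.
f = lagrange_quadratic([(1, 1), (2, 4), (3, 9)])
```

Because three points determine a quadratic uniquely, interpolating samples of $x^{2}$ recovers $x^{2}$ itself.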

We then combine the polynomial model above with the respective secant line equation to construct the explicit form of the initial 3-point segment.

\forall(x\in(x_{0},x_{2})):f(x)=y_{0}\frac{(x-x_{1})(x-x_{2})}{(x_{0}-x_{1})(x_{0}-x_{2})}+y_{1}\frac{(x-x_{0})(x-x_{2})}{(x_{1}-x_{0})(x_{1}-x_{2})}+y_{2}\frac{(x-x_{0})(x-x_{1})}{(x_{2}-x_{0})(x_{2}-x_{1})}+\left[\frac{y_{i}-y_{i-1}}{x_{i}-x_{i-1}}(x-x_{i})+y_{i}\right]\Big|\,(i=(1\vee 2)) (2.3)

Before we implement this method, we must account for any given segment; to do so, we simplify the method of polynomial construction. First we define $F(x)$ to be dependent on our $f_{j}$ outputs.

F(x):=\sum_{j=0}^{k}y_{j}f_{j}(x) (2.4)

These outputs are determined accordingly (Note: k = 3 in our case; however, the model would work for any value of k):

f_{j}(x):=\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}=\frac{(x-x_{0})}{(x_{j}-x_{0})}\cdots\frac{(x-x_{j-1})}{(x_{j}-x_{j-1})}\,\frac{(x-x_{j+1})}{(x_{j}-x_{j+1})}\cdots\frac{(x-x_{k})}{(x_{j}-x_{k})} (2.5)

Such that,

\forall(j\neq i):f_{j}(x_{i})=\prod_{m\neq j}\frac{x_{i}-x_{m}}{x_{j}-x_{m}}=0 (2.6)

2.2 Estimating $\mathcal{O}(T(n))$ as a Function of Quadratic Segments

We can then average this with the constructed Lagrangian polynomial to get our model for any given 3-point segment. Note: $\because(\lim_{x\to k^{-}}F(x)=\lim_{x\to k^{+}}F(x))\wedge(\lim_{x\to k^{-}}\frac{\partial F}{\partial x}\neq\lim_{x\to k^{+}}\frac{\partial F}{\partial x})\therefore\nexists(\frac{\partial F}{\partial x}|_{x=k})$. We can simplify the given expression to [Berrut]:

\forall(x\in(x_{j},x_{k})):F(x)=\frac{1}{2}\sum_{j}^{k=j+3}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}+\frac{1}{2}\left[\frac{y_{k}-y_{k-1}}{x_{k}-x_{k-1}}(x-x_{k})+y_{k}\right] (2.7)
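The per-segment model of Eq. (2.7), the average of the Lagrange quadratic and the secant through the segment's last two sample points, can be sketched as follows (helper names are ours):

```python
def segment_model(pts):
    """0.5 * (Lagrange quadratic) + 0.5 * (secant through the last two points)."""
    (x0, y0), (x1, y1), (x2, y2) = pts

    def lagrange(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

    def secant(x):
        # point-slope line through (x1, y1) and (x2, y2), per Eq. (2.1)
        return (y2 - y1) / (x2 - x1) * (x - x2) + y2

    return lambda x: 0.5 * lagrange(x) + 0.5 * secant(x)

# Segment built from three samples of y = x^2.
model = segment_model([(0, 0), (1, 1), (2, 4)])
```

Since both the quadratic and the secant pass through the last two base points, the averaged model still reproduces those points exactly.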

Such that,

\left.\frac{\partial}{\partial x}\left[\sum_{j}^{k=j+2}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}+\left[\frac{y_{k}-y_{k-1}}{x_{k}-x_{k-1}}(x-x_{k})+y_{k}\right]\right]\right|_{x=x_{j+1}}\approx\left.\frac{\partial}{\partial x}\left(\frac{1}{k}\right)E(foo(x_{j+1}))\right|_{x=x_{j+1.5}} (2.8)

as well as the segmented average [Comenetz],

\frac{1}{2x_{k}-2x_{j+1}}\int_{x_{j}}^{x_{k}}\left[\sum_{j}^{k=j+2}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}+\left[\frac{y_{k}-y_{k-1}}{x_{k}-x_{k-1}}(x-x_{k})+y_{k}\right]\right]\approx\int_{x_{j+1}}^{x_{k}}\left(\frac{1}{k}\right)E(foo(n)) (2.9)

We then implement the proposed method for each selected segment to construct the function over every iteration of natural numbers by redefining $F(x)$ from a single constructed polynomial to a multi-layered, piece-wise construction of the primary segments of such polynomials.

\forall(x\in\mathbb{R}:x>0):F(x)=\begin{cases}\frac{1}{2}\sum_{0}^{2}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq 2\\ m\neq 0\end{smallmatrix}}\frac{x-x_{m}}{x_{0}-x_{m}}+\frac{1}{2}\left[\frac{y_{2}-y_{1}}{x_{2}-x_{1}}(x-x_{2})+y_{2}\right]&x_{1}\leq x\leq x_{2}\\ \frac{1}{2}\sum_{1}^{3}y_{j}\prod_{\begin{smallmatrix}1\leq m\leq 3\\ m\neq 1\end{smallmatrix}}\frac{x-x_{m}}{x_{1}-x_{m}}+\frac{1}{2}\left[\frac{y_{3}-y_{2}}{x_{3}-x_{2}}(x-x_{3})+y_{3}\right]&x_{2}\leq x\leq x_{3}\\ \cdots&\cdots\\ \frac{1}{2}\sum_{n-2}^{n}y_{j}\prod_{\begin{smallmatrix}n-2\leq m\leq n\\ m\neq n-2\end{smallmatrix}}\frac{x-x_{m}}{x_{n-2}-x_{m}}+\frac{1}{2}\left[\frac{y_{n}-y_{n-1}}{x_{n}-x_{n-1}}(x-x_{n})+y_{n}\right]&x_{n-1}\leq x\leq x_{n}\end{cases} (2.10)
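The piecewise construction of Eq. (2.10) can be sketched as follows. This is our reading of the paper's $\frac{n}{2}$-segment scheme (consecutive 3-point segments taken with stride 2 over sorted sample points); all names are ours:

```python
import bisect

def _segment(pts):
    # Per-segment model of Eq. (2.7): average of the Lagrange quadratic
    # through the three points and the secant through the last two.
    (x0, y0), (x1, y1), (x2, y2) = pts

    def f(x):
        lag = (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
               + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
               + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
        sec = (y2 - y1) / (x2 - x1) * (x - x2) + y2
        return 0.5 * (lag + sec)

    return f

def piecewise_model(xs, ys):
    """Build F(x) from consecutive 3-point segments of sorted samples."""
    bounds, fns = [], []
    for j in range(0, len(xs) - 2, 2):
        bounds.append(xs[j + 2])          # right edge of this segment
        fns.append(_segment(list(zip(xs[j:j + 3], ys[j:j + 3]))))

    def F(x):
        i = min(bisect.bisect_left(bounds, x), len(fns) - 1)
        return fns[i](x)

    return F

xs = [0, 1, 2, 3, 4]
ys = [x * x for x in xs]                  # samples of y = x^2
F = piecewise_model(xs, ys)
```

Evaluating $F(i)$ at a sampled index returns the measured value, while interior points are filled in by the averaged quadratics.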

In order to retrieve the complexity of the algorithm at a particular index $i$, we can now simply compute $F(i)$. Note: $\nexists\frac{\partial F}{\partial x}|_{x=x_{j}\vee x_{k}}$, but $\forall(x\in(x_{j},x_{k}))\exists\frac{\partial F}{\partial x}|_{x}$. The proposed method, when graphed, nevertheless constructs a continuous function, making it easy to determine the true runtime of the algorithm as $\mathcal{O}(T(n))$.

2.3 Evaluating Quadratic Segments of Multi-variable Algorithms

2.3.1 Run Times with Non-Composite Operations

The following method suffices if, and only if, the arguments are not directly correlated through any mathematical operation, excluding addition, subtraction, or any other non-composite operation. For example, suppose the unknown time complexity of algorithm $foo(x,b)$ was $\mathcal{O}(\log_{2}(x)+b)$. We must first evaluate the execution time with respect to a single variable. We use $E(foo())$ to denote the execution time of the given function; this can be determined by implementing a computing timer into the algorithm. In this case we evaluate the algorithm accordingly:

Y_{0}=E(foo(x,b))\,|\,\{(x\in\mathbb{N}:x>0)\wedge(b=0)\} (2.11)

Such that,

Y_{0}=\{y_{0_{0}}\vee E(foo(x_{0},0)),\,y_{0_{1}}\vee E(foo(x_{1},0)),\,\cdots,\,y_{0_{n}}\vee E(foo(x_{n},0))\} (2.12)

And the same for the other argument:

X_{0}=E(foo(x,b))\,|\,\{(b\in\mathbb{N}:b>0)\wedge(x=0)\} (2.13)

Such that,

X_{0}=\{\chi_{0_{0}}\vee E(foo(0,b_{0})),\,\chi_{0_{1}}\vee E(foo(0,b_{1})),\,\cdots,\,\chi_{0_{n}}\vee E(foo(0,b_{n}))\} (2.14)
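Collecting the sample sets of Eqs. (2.11)-(2.14) amounts to timing the algorithm while one argument varies and the other is pinned at 0. A hedged sketch follows; `foo` here is a stand-in routine with roughly logarithmic work (not an algorithm from the paper), and the sampling helper is ours:

```python
import time

def foo(x, b):
    # Stand-in algorithm: the loop runs ~log2(x) times; b only adds a
    # constant shift, mirroring the non-composite case of Section 2.3.1.
    total = 0
    n = max(x, 1)
    while n > 1:
        n //= 2
        total += 1
    return total + b

def sample_runtimes(xs, b=0, reps=200):
    """Return (input, mean CPU time) pairs with the second argument fixed."""
    samples = []
    for x in xs:
        t0 = time.perf_counter()
        for _ in range(reps):
            foo(x, b)
        samples.append((x, (time.perf_counter() - t0) / reps))
    return samples

# Y0 of Eq. (2.12): runtimes over increasing x with b = 0.
Y0 = sample_runtimes([2 ** k for k in range(4, 10)])
```

Repeating the call with `x` fixed and `b` varying yields the $X_{0}$ set of Eq. (2.14).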

In this particular case, we first isolate $F(x,b)$ in terms of $x$. To do so we must first ensure $x$ and $b$ are independent of each other. Since, in our sample scenario,

E(foo(x,b))=\log_{2}(x)+b (2.15)

Now, we can conclude that,

E(foo(x,0))=\log_{2}(x)+0=\log_{2}(x) (2.16)

And,

E(foo(0\vee(\forall\in\mathbb{R}>0),b))=\log_{2}(0\vee(\forall\in\mathbb{R}>0))+b (2.17)

Now, we can evaluate $E(foo(x,b))$ over a set of fixed data points. First with respect to $x$:

F(x,0)\vee F_{x}(x,b)=\frac{1}{2}\sum_{j}^{k=j+3}y_{0_{j}}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}+\frac{1}{2}\left[\frac{y_{0_{k}}-y_{0_{k-1}}}{x_{k}-x_{k-1}}(x-x_{k})+y_{0_{k}}\right] (2.18)

Then with respect to b:

F(0,b)\vee F_{b}(x,b)=\frac{1}{2}\sum_{j}^{k=j+3}\chi_{0_{j}}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{b-b_{m}}{b_{j}-b_{m}}+\frac{1}{2}\left[\frac{\chi_{0_{k}}-\chi_{0_{k-1}}}{b_{k}-b_{k-1}}(b-b_{k})+\chi_{0_{k}}\right] (2.19)

Once we have computed our segmented quadratics with respect to a particular index group, we can construct our piece-wise function of $E(foo(x,b))=\log_{2}(x)+b$ as two independent graphical representations.

F(x,b)=\begin{cases}\forall(x>0):F_{x}=\begin{cases}\frac{1}{2}\sum_{0}^{2}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq 2\\ m\neq 0\end{smallmatrix}}\frac{x-x_{m}}{x_{0}-x_{m}}+\frac{1}{2}\left[\frac{y_{2}-y_{1}}{x_{2}-x_{1}}(x-x_{2})+y_{2}\right]&x_{1}\leq x\leq x_{2}\\ \cdots&\cdots\\ \frac{1}{2}\sum_{n-2}^{n}y_{j}\prod_{\begin{smallmatrix}n-2\leq m\leq n\\ m\neq n-2\end{smallmatrix}}\frac{x-x_{m}}{x_{n-2}-x_{m}}+\frac{1}{2}\left[\frac{y_{n}-y_{n-1}}{x_{n}-x_{n-1}}(x-x_{n})+y_{n}\right]&x_{n-1}\leq x\leq x_{n}\end{cases}\\ \forall(b>0):F_{b}=\begin{cases}\frac{1}{2}\sum_{0}^{2}\chi_{j}\prod_{\begin{smallmatrix}0\leq m\leq 2\\ m\neq 0\end{smallmatrix}}\frac{b-b_{m}}{b_{0}-b_{m}}+\frac{1}{2}\left[\frac{\chi_{2}-\chi_{1}}{b_{2}-b_{1}}(b-b_{2})+\chi_{2}\right]&b_{1}\leq b\leq b_{2}\\ \cdots&\cdots\\ \frac{1}{2}\sum_{n-2}^{n}\chi_{j}\prod_{\begin{smallmatrix}n-2\leq m\leq n\\ m\neq n-2\end{smallmatrix}}\frac{b-b_{m}}{b_{n-2}-b_{m}}+\frac{1}{2}\left[\frac{\chi_{n}-\chi_{n-1}}{b_{n}-b_{n-1}}(b-b_{n})+\chi_{n}\right]&b_{n-1}\leq b\leq b_{n}\end{cases}\end{cases} (2.20)

Although our method produces non-differentiable points at segmented bounds, we can still compute partial derivatives at points $\forall(x\in\mathbb{R}:x>0)$ such as:

\left.\frac{\partial}{\partial x}\left(\frac{1}{2}\sum_{j}^{k=j+3}y_{0_{j}}\prod_{\begin{smallmatrix}0\leq m\leq k\\ m\neq j\end{smallmatrix}}\frac{x-x_{m}}{x_{j}-x_{m}}+\frac{1}{2}\left[\frac{y_{0_{k}}-y_{0_{k-1}}}{x_{k}-x_{k-1}}(x-x_{k})+y_{0_{k}}\right]\right)\right|_{x=x_{j+1}}\approx\left.\frac{\partial}{\partial x}(\log_{2}x+b)\right|_{x=x_{j+1}} (2.21)

We can justify the accuracy of $F_{x}$ against $E(foo(x))$, assuming only one inputted argument, accordingly:

\because\nexists\left(\frac{\partial F}{\partial x}\Big|_{x=j}\right)\wedge\nexists\left(\frac{\partial F}{\partial x}\Big|_{x=k}\right)\wedge\left(\lim_{x\to k^{-}}F_{x}(x)=\lim_{x\to k^{+}}F_{x}(x)=F_{x}(x)\right) (2.22)

We can conclude that:

\frac{1}{n}\sum_{\forall(x\in\mathbb{N}:x>x_{k})}^{n}E(foo(x))\approx\frac{1}{x_{2}-x_{1}}\int_{x_{1}}^{x_{2}}F_{x}(x)+\frac{1}{x_{3}-x_{2}}\int_{x_{2}}^{x_{3}}F_{x}(x)+\cdots+\frac{1}{x_{n}-x_{n-1}}\int_{x_{n-1}}^{x_{n}}F_{x}(x) (2.23)

Alternatively,

\frac{1}{n}\sum_{\forall(x\in\mathbb{N}:x>x_{k})}^{n}E(foo(x))\approx\sum_{\forall(x\in\mathbb{N}:x>x_{k})}^{n}\frac{1}{x_{k}-x_{j}}\int_{x_{j}}^{x_{k}}F_{x}(x) (2.24)

2.3.2 Run Times with Composite Operations

In order to explain the approach used in cases with unknown runtime functions that consist of composite operations, we must implement the following proof.

Theorem 2.1.

If the unknown run time function consists of composite operations, such as in $M(x,b)=\frac{1}{b}\log_{2}(x)$, this can be instantly determined if the functional difference across a set of input values is not just a graphical translation.

Proof of Theorem 2.1.

If,

G(x,b)=\log_{2}(x)+b\wedge M(x,b)=\frac{1}{b}\log_{2}(x) (2.25)

Then,

G(x,0)=\log_{2}(x)+0=\log_{2}(x)=G_{x}(x,b)\vee G(x) (2.26)

Additionally,

G(x,0)=G_{x}(x,b)\vee G(x) (2.27)

But,

M(x,0)\neq M_{x}(x,b)\vee M(x) (2.28)

Due to the non-composite operations of $G$, the value of $b$ does not directly impact the value of $x$, but rather only the output of the multivariable function. The same can be done conversely with other variables; however, if they are directly correlated, as in $M(x,b)$, the difference is prevented from being just a translation.

M(x,0\vee(\forall(b\in\mathbb{R}:b>0)))\neq\log_{2}(x) (2.29)

Above, it is clear that both independent variables cannot be determined through inputting a constant of 0, causing a non-linear intervariable relationship. ∎

In order to construct the primary segmented function with behavior equivalent to the multivariable runtime $E(foo(x,b))$, or one with any number of arguments, we must run execution tests with respect to each variable while the remaining variables are treated as constants. If, as in the example stated earlier, the unknown runtime function was $\frac{1}{b}\log_{2}x$, then when graphically modeled over $n$ tests in terms of $x$, a set of skewed logarithmic curves would be constructed, whereas with respect to $b$, a set of hyperbolic functions would be produced. By treating each temporary, non-functional, constant argument as $k_{n}$, the graphical differences can be factored in when creating the single time complexity formula. Although several $k$ values could be used, to keep the methodology consistent we take as the select constant the input value that produces the average functional value over a given segment. Although the true function is unknown, we can use the constructed segmented quadratic to do so.

\frac{1}{\delta_{k}-\delta_{j}}\int_{\delta_{j}}^{\delta_{k}}F_{\delta}(\delta_{x},k_{b},\cdots,k_{z})\,\partial\delta=a\delta^{2}+b\delta+c (2.30)

Then we can simply solve for the value of $\delta$ accordingly [Irving], such that $(\delta\in\mathbb{R})\wedge(\delta>0)$:

\delta=\frac{-b\pm\sqrt{b^{2}-4a\left(c-\frac{1}{\delta_{k}-\delta_{j}}\int_{\delta_{j}}^{\delta_{k}}F_{\delta}(\delta_{x},k_{b},\cdots,k_{z})\,\partial\delta\right)}}{2a} (2.31)
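Given a segment's quadratic coefficients and its average value, the representative constant of Eq. (2.31) is the positive real root of the shifted quadratic. A minimal sketch, assuming $(a, b, c)$ and the segment average `mean` have already been computed (function name is ours):

```python
import math

def representative_input(a, b, c, mean):
    """Smallest positive real root of a*d^2 + b*d + (c - mean) = 0."""
    disc = b * b - 4 * a * (c - mean)
    if disc < 0:
        return None                       # no real-valued representative
    roots = ((-b + math.sqrt(disc)) / (2 * a),
             (-b - math.sqrt(disc)) / (2 * a))
    positive = [r for r in roots if r > 0]
    return min(positive) if positive else None
```

For example, with $a=1$, $b=0$, $c=0$ and a segment average of 4, the positive root is $\delta=2$.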

While this process is still comprehensible, once we introduce functions with more than two arguments we must test their values in planes with multiple dimensional layers, rather than just one or two dimensions. To do so, we determine the intervariable relationships between every potential pair of arguments and construct a potential runtime formula accordingly. Note: the higher the dimensional order of the function, the more convoluted the formulaic determination becomes.

Suppose the intervariable runtime function is $E(foo(x,b,c,\cdots,z))$, with corresponding segmented quadratic function $F(x,b,c,\cdots,z)$. We would evaluate the unknown $E(foo(x,b,\cdots,z))$ with respect to a single variable, treating the remaining variables as constants. Using the example above, we would first plug constant values into $x$ and $b$ while graphically modeling the rate of change of $c$ as an independent function. Then we adjust $b$ in increments of $i$ to determine their respective transformational relationship. We repeat this process for every potential pair: $(x,b)$, $(x,c)$, $(b,c)$, and so forth.

F_{x,b}(x,b,\cdots,z)=F_{x}(x,\forall(b\in\mathbb{R}:b>0:b=b+i),\cdots,k_{z}) (2.32)
F_{b,c}(x,b,\cdots,z)=F_{b}(k_{x},b,\forall(c\in\mathbb{R}:c>0:c=c+i),\cdots,k_{z}) (2.33)
\cdots
F_{c,z}(x,b,\cdots,z)=F_{c}(k_{x},k_{b},c,\forall(z\in\mathbb{R}:z>0:z=z+i),\cdots,k_{z}) (2.34)

From there we can use the graphical model to help deduce the formulaic runtime with respect to all variables.

Similar to the analysis with respect to a single variable, we can justify the accuracy by approximately equating the average integrated value of each independent segment with its true algorithmic counterpart. Note: we define $l$ as the total number of input arguments.

\frac{1}{(n)(l)}\sum_{\forall(x\in\mathbb{N}:x>x_{k})}^{(n)(l)}E(foo(x,b,\cdots,z))\approx\frac{1}{x_{2}-x_{1}}\int_{x_{1}}^{x_{2}}F_{x}(x,b,\cdots,z)\,\partial x+\cdots+\frac{1}{x_{n}-x_{n-1}}\int_{x_{n-1}}^{x_{n}}F_{x}(x,b,\cdots,z)\,\partial x
+\frac{1}{b_{2}-b_{1}}\int_{b_{1}}^{b_{2}}F_{b}(x,b,\cdots,z)\,\partial b+\cdots+\frac{1}{b_{n}-b_{n-1}}\int_{b_{n-1}}^{b_{n}}F_{b}(x,b,\cdots,z)\,\partial b
+\cdots+\frac{1}{z_{2}-z_{1}}\int_{z_{1}}^{z_{2}}F_{z}(x,b,\cdots,z)\,\partial z+\cdots+\frac{1}{z_{n}-z_{n-1}}\int_{z_{n-1}}^{z_{n}}F_{z}(x,b,\cdots,z)\,\partial z (2.35)

This method can be simplified accordingly:

\frac{1}{(n)(l)}\sum_{\forall(x\in\mathbb{N}:x>x_{k})}^{(n)(l)}E(foo(x,b,\cdots,z))\approx\sum_{x_{0}}^{z_{n}}\sum_{\forall(\delta\in\mathbb{R}:\delta>\delta_{k})}^{n}\frac{1}{\delta_{k}-\delta_{j}}\int_{\delta_{j}}^{\delta_{k}}F(x,b,\cdots,z)\,\partial\delta (2.36)

2.4 Segmented Quadratics to Construct Non-Polynomials

The following subsection will discuss the mathematical applications of this method, and will focus on the proofs behind constructing non-polynomials as piece-wise functions built upon segmented quadratics.

Lemma 2.2.

Given $n$ values of $x\in\mathbb{R}$ with corresponding $n$ values of $y\in\mathbb{R}$, a representative polynomial $P$ can be constructed such that $\deg(P)<n\wedge P(x_{k})=y_{k}$.

Proof of Lemma 2.2.

Let,

P_{1}(x)=\frac{(x-x_{2})(x-x_{3})\cdots(x-x_{n})}{(x_{1}-x_{2})(x_{1}-x_{3})\cdots(x_{1}-x_{n})} (2.37)

Therefore,

P_{1}(x_{1})=1\wedge P_{1}(x_{2})=P_{1}(x_{3})=\cdots=P_{1}(x_{n})=0 (2.38)

Then evaluate,

P_{2},P_{3},\cdots,P_{n}\,|\,P_{j}(x_{j})=1\wedge P_{j}(x_{i})=0\ \forall(i\neq j) (2.39)

Therefore, $P(x)=\sum y_{i}P_{i}(x)$ is a constructed polynomial such that $\forall(x_{i}\in\mathbb{R}:\exists P(x_{i}))\wedge\forall(i\in\mathbb{N}:i<n)$. It is built upon subsidiary polynomials of degree $n-1$; $\therefore\deg(P)<n$. ∎

Theorem 2.3.

Referencing Lemma 2.2, given any real non-polynomial, an approximate quadratic piece-wise function $F(x)$ can be constructed using $\frac{n}{2}$ segments, each produced by 3 values of $x\in\mathbb{R}$ (and defined over 2 of them) together with their corresponding outputs, such that $F(x)$ is continuous at all $x$ values, including respective transition points, but not necessarily differentiable at such values.

Proof of Theorem 2.3.

Since the initial portion of the construction is based upon Lemma 2.2, it is clear that 3 base points will construct a quadratic polynomial, unless the three points are collinear, in which case a sloped line is produced. The method is shown below:

F(x)=\begin{cases}\frac{1}{2}\sum_{0}^{2}y_{j}\prod_{\begin{smallmatrix}0\leq m\leq 2\\ m\neq 0\end{smallmatrix}}\frac{x-x_{m}}{x_{0}-x_{m}}+\frac{1}{2}\left[\frac{y_{2}-y_{1}}{x_{2}-x_{1}}(x-x_{2})+y_{2}\right]&x_{1}\leq x\leq x_{2}\\ \frac{1}{2}\sum_{1}^{3}y_{j}\prod_{\begin{smallmatrix}1\leq m\leq 3\\ m\neq 1\end{smallmatrix}}\frac{x-x_{m}}{x_{2}-x_{m}}+\frac{1}{2}\left[\frac{y_{3}-y_{2}}{x_{3}-x_{2}}(x-x_{3})+y_{3}\right]&x_{2}\leq x\leq x_{3}\\ \cdots&\cdots\\ \frac{1}{2}\sum_{n-2}^{n}y_{j}\prod_{\begin{smallmatrix}n-2\leq m\leq n\\ m\neq n-2\end{smallmatrix}}\frac{x-x_{m}}{x_{n-2}-x_{m}}+\frac{1}{2}\left[\frac{y_{n}-y_{n-1}}{x_{n}-x_{n-1}}(x-x_{n})+y_{n}\right]&x_{n-2}\leq x\leq x_{n}\end{cases} (2.40)

When simplified, the function would be defined accordingly:

F(x)=\begin{cases}a_{0}x^{2}+b_{0}x+c_{0}&x_{1}\leq x\leq x_{2}\\ a_{1}x^{2}+b_{1}x+c_{1}&x_{2}\leq x\leq x_{3}\\ \cdots&\cdots\\ a_{n}x^{2}+b_{n}x+c_{n}&x_{n-1}\leq x\leq x_{n}\end{cases} (2.41)

By definition, any polynomial is continuous throughout its designated bounds [Cucker]; therefore, for all values within each segment, $F(x)$ is continuous. And, since the bounded values of each segment are equivalent, we can conclude that the produced function is continuous everywhere. Formally, $\lim_{x\to t^{-}}F(x)=\lim_{x\to t^{+}}F(x)=F(x)$, where $t$ is any bounded point. However, this does not guarantee that $\lim_{x\to t^{-}}\frac{\partial F(x)}{\partial x}=\lim_{x\to t^{+}}\frac{\partial F(x)}{\partial x}$; therefore its derivative at the bounded point is likely undefined. ∎
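Each averaged segment is itself an ordinary quadratic, so the simplified $ax^{2}+bx+c$ form of Eq. (2.41) can be recovered by expanding the quadratic through three of the segment's values. A sketch under our own choice of example: the segment below is built from samples of $\log_{2}x$ at the base points 8, 16, 32, and the coefficient expansion uses Newton's divided differences (helper names are ours):

```python
def seg(x):
    # Averaged segment through (8, 3), (16, 4), (32, 5): samples of log2(x).
    lag = (3 * (x - 16) * (x - 32) / ((8 - 16) * (8 - 32))
           + 4 * (x - 8) * (x - 32) / ((16 - 8) * (16 - 32))
           + 5 * (x - 8) * (x - 16) / ((32 - 8) * (32 - 16)))
    sec = (5 - 4) / (32 - 16) * (x - 32) + 5   # secant through last two points
    return 0.5 * (lag + sec)

def quad_coeffs(p0, p1, p2):
    """Expand the quadratic through three points into (a, b, c)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    f01 = (y1 - y0) / (x1 - x0)               # first divided differences
    f12 = (y2 - y1) / (x2 - x1)
    f012 = (f12 - f01) / (x2 - x0)            # leading coefficient a
    return (f012,
            f01 - f012 * (x0 + x1),
            y0 - f01 * x0 + f012 * x0 * x1)

a, b, c = quad_coeffs((8, seg(8)), (16, seg(16)), (32, seg(32)))
```

The negative sign of `a` reflects the downward concavity of the logarithmic segment, anticipating Theorem 2.4 below.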

Theorem 2.4.

Given any segmented quadratic $F(x)=ax^{2}+bx+c$ constructed through averaging the Lagrangian interpolation with its respective secant, the graphical concavity can be determined by the sign of the coefficient $a$, where $(a\in\mathbb{R})\wedge(b\in\mathbb{R})\wedge(c\in\mathbb{R})$.

Proof of Theorem 2.4.

Since $F(x)$ is constructed with three base points, the only polynomial functions produced are segmented quadratics. Upward concavity exists $\forall(x\in(x_{j},x_{k}))\,|\,\frac{\partial^{2}F}{\partial x^{2}}>0$, while downward concavity exists $\forall(x\in(x_{j},x_{k}))\,|\,\frac{\partial^{2}F}{\partial x^{2}}<0$. However, this process can be expedited without the need to compute second derivatives.

Since our segmented polynomial is constructed using only three points we can conclude that,

\forall(x\in(x_{j},x_{k})):\{F_{x}(x)=ax^{2}+bx+c\}\,|\,\{(a\in\mathbb{R}:a>0)\wedge(b\in\mathbb{R}:b>0)\wedge(c\in\mathbb{R}:c>0)\} (2.42)

Therefore,

\left.\frac{\partial^{2}F}{\partial x^{2}}\right|_{\forall(x\in(x_{j},x_{k}))}=2a\therefore\iint 2a\,\partial x\,\partial x=F_{x}(x)\therefore\int_{x_{j}}^{x_{k}}\frac{\partial^{2}F}{\partial x^{2}}=\frac{a}{|a|}\left|\int_{x_{j}}^{x_{k}}\frac{\partial^{2}F}{\partial x^{2}}\right| (2.43)

Thus, the sign of $a$ is the only value needed to determine the segmented concavity of $F_{x}(x)$. Determining the functional concavity of the Lagrangian construction is important because, for certain functions, primarily those with upward concavity, it may not be necessary to compute secant line averages. ∎

When testing the mathematical accuracy $A$ of our approach, we use the segmented average value and compare it to that of the original function $G(x)$. In cases where $\int_{a}^{b}G(x)>\int_{a}^{b}F(x)$, we simply compute the reciprocal of the following function. We use $a$ and $b$ as placeholder variables to represent the segmented bounds.

A=\frac{\sum_{0}^{n}\int_{a}^{b}G(x)}{\sum_{0}^{n}\int_{a}^{b}F(x)} (2.44)
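The accuracy metric of Eq. (2.44) can be sketched numerically. The trapezoidal rule below is our choice of quadrature (the paper does not fix one), and the reciprocal is applied so that $A\leq 1$; names are ours:

```python
def accuracy(G, F, bounds, steps=1000):
    """Ratio of summed segment integrals of G and F, reciprocated so A <= 1."""
    def integral(f, a, b):
        # composite trapezoidal rule over [a, b]
        h = (b - a) / steps
        return h * (sum(f(a + i * h) for i in range(1, steps))
                    + 0.5 * (f(a) + f(b)))

    g = sum(integral(G, a, b) for a, b in bounds)
    f = sum(integral(F, a, b) for a, b in bounds)
    ratio = g / f
    return min(ratio, 1 / ratio)
```

For instance, a model that doubles the true function everywhere scores $A=0.5$, while a perfect match scores $A=1$.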

3 Results

We divide the results section into the primary perspective (algorithmic implications) and the secondary perspective (pure mathematical implications).

3.1 Algorithmic Test Results

We tested our method on four algorithms (two single-variable and two multivariable) and compared the produced time complexity formulas to the true, known complexities to see how accurate our formulations were, and the extremity of deviations, if any. The first algorithm (single variable) was a binary search of $x$ elements, with a complexity of $\mathcal{O}(\log x)$. The second (single variable) was a sort of $n$ elements, with a complexity of $\mathcal{O}(x\log x)$. The third (multivariable) was a combined search-sort algorithm of $n$ unsorted elements and $b$ select sorted elements, with a complexity of $\mathcal{O}(b+\log x)$. The fourth (multivariable) was a custom algorithm of $n$ elements, with a complexity of $\mathcal{O}(mx+\log\log x)$. Although coefficients and additional constants appear in the predicted complexity, because time is the only output variable, the only relevant component is the contents of the big-$O$, as it represents the asymptotic behavior of the algorithmic runtime regardless of any confounding variables. For multivariable algorithms, as stated in the methods, runtime complexities were computed with respect to each variable and combined into the final predicted complexity.

Table 1: Predicted Runtime Functions

True complexity $\mathcal{O}(\log{x})$. Constructed polynomial:

F(x)=\begin{cases}-\frac{17}{2400000}x^{2}+\frac{203}{120000}x+\frac{1947}{6400}&45\leq x\leq 95\\ -\frac{3}{2200000}x^{2}+\frac{149}{220000}x+\frac{6133}{17600}&95\leq x\leq 205\\ -\frac{151}{574200000}x^{2}+\frac{15289}{57420000}x+\frac{270373}{696000}&205\leq x\leq 385\end{cases}

Predicted complexity: $\frac{1}{11}\mathcal{O}(\log{x})+0.22$

True complexity $\mathcal{O}(x\log{x})$. Constructed polynomial:

F(x)=\begin{cases}\frac{19}{625000}x^{2}+\frac{4}{3125}x+\frac{69}{500}&50\leq x\leq 100\\ \frac{3}{312500}x^{2}+\frac{123}{25000}x-\frac{9}{500}&100\leq x\leq 125\\ \frac{19}{625000}x^{2}+\frac{4}{3125}x+\frac{69}{500}&125\leq x\leq 150\end{cases}

Predicted complexity: $\frac{1}{350}\mathcal{O}(x\log{x})$

True complexity $\mathcal{O}(mx\log{x})$. Constructed polynomials: $F_{x}(x,m)$ identical to the $F(x)$ above for $\mathcal{O}(x\log{x})$, and

F_{m}(x,m)=\begin{cases}1892m&0\leq m\leq 10\\ 1837m-33461&10\leq m\leq 20\\ 2066m-56184&20\leq m\leq 30\end{cases}

Predicted complexity: $\frac{1159}{3750}\mathcal{O}(mx\log{x})$

True complexity $\mathcal{O}(mx+\log\log{b})$. Constructed polynomials: $F_{x}(x,m,b)$ and $F_{m}(x,m,b)$ identical to $F_{x}$ and $F_{m}$ above, and

F_{b}(x,m,b)=\begin{cases}-\frac{13}{12000000}x^{2}+\frac{863}{1200000}x+\frac{138949}{160000}&5\leq x\leq 305\\ -\frac{3}{8000000}x^{2}+\frac{363}{800000}x+\frac{283677}{320000}&305\leq x\leq 505\\ -\frac{1}{15000000}x^{2}+\frac{51}{250000}x+\frac{560389}{600000}&505\leq x\leq 705\end{cases}

Predicted complexity: $\mathcal{O}(mx\log{x}+\log\log{b})+0.06$

3.2 Mathematical Test Results

We tested our method against various non-polynomial functions, selecting one example of each common type so that each result is representative of the method's accuracy across that functional family. We removed any additive constants beforehand, since they would only translate the function graphically. The functions used were $\log_{2}x$ (logarithmic family), $\frac{x-1}{x}$ (rational family), $2^{x}$ (exponential family), and $\cos(\pi x)$ (trigonometric family); although we could have chosen more convoluted functions, we wanted to showcase performance on functions close to their parent functions to obtain a holistic perspective. In most cases, the Lagrangian constructions tend to exceed the vertical level of their respective non-polynomials, making secant-line averages most useful under downward concavity. To stay consistent, we defined each piecewise function up to a nearby whole-valued bound that keeps the coefficients simple; the accuracy remains indicative of performance beyond that bound, owing to the natural progression of such functions.
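To make the construction concrete, here is a minimal sketch of the interpolation step: a single quadratic segment fitted through three sampled points by plain Lagrange interpolation. The full method also incorporates secant-line averages, which this sketch omits, and the function name `lagrange_quadratic` is ours.

```python
from fractions import Fraction

def lagrange_quadratic(p0, p1, p2):
    """Return coefficients (a, b, c) of the quadratic a*x^2 + b*x + c
    passing through three (x, y) sample points, obtained by expanding
    the three quadratic Lagrange basis polynomials."""
    (x0, y0), (x1, y1), (x2, y2) = [
        (Fraction(x), Fraction(y)) for x, y in (p0, p1, p2)
    ]
    # Denominators of the Lagrange basis polynomials L0, L1, L2.
    d0, d1, d2 = (x0 - x1) * (x0 - x2), (x1 - x0) * (x1 - x2), (x2 - x0) * (x2 - x1)
    # Expanding y_i * L_i(x) and collecting powers of x gives a, b, c.
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

# One segment of 2^x through the integer samples (3, 8), (4, 16), (5, 32):
segment = lagrange_quadratic((3, 8), (4, 16), (5, 32))
```

Fed these samples of $2^{x}$, the sketch returns the exact coefficients $(4,-20,32)$, i.e. the segment $4x^{2}-20x+32$ on $[4,5]$.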

Table 2: Accuracy of Constructed Polynomials Compared to Base Function
Function | Constructed Polynomial | Calculated Accuracy
\log_{2}x | F(x)=\begin{cases}-\frac{1}{196}x^{2}+\frac{1}{4}x+\frac{4}{3}&8\leq x\leq 16\\ -\frac{1}{768}x^{2}+\frac{1}{8}x+\frac{7}{3}&16\leq x\leq 32\\ -\frac{1}{3072}x^{2}+\frac{1}{16}x+\frac{10}{3}&32\leq x\leq 64\\ \end{cases} | 99.964\%
\cos(\pi{}x) | F(x)=\begin{cases}-\frac{414}{125}x^{2}-\frac{43}{125}x+1&0\leq x\leq 0.5\\ \frac{414}{125}x^{2}-\frac{871}{125}x+\frac{332}{125}&0.5\leq x\leq 1\\ \frac{414}{125}x^{2}-\frac{157}{25}x+\frac{246}{125}&1\leq x\leq 1.5\\ \end{cases} | 99.79\%
2^{x} | F(x)=\begin{cases}2x^{2}-6x+8&3\leq x\leq 4\\ 4x^{2}-20x+32&4\leq x\leq 5\\ 8x^{2}-56x+112&5\leq x\leq 6\\ \end{cases} | 98.92\%
\frac{x-1}{x} | F(x)=\begin{cases}-\frac{1}{16}x^{2}+\frac{1}{2}x+\frac{1}{4}&2\leq x\leq 4\\ -\frac{1}{128}x^{2}+\frac{1}{8}x+\frac{3}{8}&4\leq x\leq 8\\ -\frac{1}{1024}x^{2}+\frac{1}{32}x+\frac{11}{16}&8\leq x\leq 16\\ \end{cases} | 99.34\%
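The resemblance scores in Table 2 can be sanity-checked numerically. The sketch below samples the second $\log_{2}x$ segment against the true function and averages the pointwise ratio; this is a simplified stand-in for the paper's segment-average scoring, and the helper name `resemblance` and the sampling density are our assumptions.

```python
import math
from fractions import Fraction

# Second log2(x) segment from Table 2, valid on [16, 32].
a, b, c = Fraction(-1, 768), Fraction(1, 8), Fraction(7, 3)

def resemblance(a, b, c, lo, hi, steps=200):
    """Mean ratio of the constructed quadratic to log2(x), sampled
    uniformly on [lo, hi]: a value near 1 indicates close
    functional resemblance."""
    total = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        total += (float(a) * x * x + float(b) * x + float(c)) / math.log2(x)
    return total / (steps + 1)

score = resemblance(a, b, c, 16, 32)
```

On this segment the score lands within a fraction of a percent of 1, consistent with the high accuracy reported in the table.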
Figure 1: Graphical Representations of \log_{2}x and F(x): (a) Without Partitions, (b) With Partitions
Figure 2: Graphical Representations of \cos(\pi x) and F(x): (a) Without Partitions, (b) With Partitions
Figure 3: Graphical Representations of 2^{x} and F(x): (a) Without Partitions, (b) With Partitions
Figure 4: Graphical Representations of \frac{x-1}{x} and F(x): (a) Without Partitions, (b) With Partitions

4 Discussion and Implications

4.1 Algorithmic Discussion

After testing the proposed approach against several known algorithms, we were able to swiftly determine the runtime functions corresponding to their true time complexities. We tested the approach on two single-variable algorithms and two multivariable algorithms so that the produced complexity behavior could be compared against the true, known complexity. In practice, the method will be applied to algorithms whose complexities are unknown, to help determine their runtime functions; experimentally, however, we needed to know the true complexity beforehand to deduce the comparative accuracy. In all cases, our method produced the correct big-O runtime function. This was determined through the automated construction of segmented polynomial models from a set of input data. Treating each variable independently and graphing its correlation with runtime made it straightforward to deduce the respective time complexity. To reiterate, any external coefficients and constants result from the particular test environment and from time being the output value; the only component relevant to the method's accuracy is the content of the big-O function. Most of the predicted runtime complexity functions followed the form k\mathcal{O}(T(n))+C, where k is the constant of proportionality between execution time and standard time complexity and C is the translation that aligns the constructed curve with its true counterpart. While these values help us overlay our constructions onto their parent functions, they do not affect the accuracy of the approach, as the asymptotic behavior of the construction is the same regardless. We are confident that the proposed method can significantly expedite the process of determining functional time complexities for both single-variable and multivariable algorithms.
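The automated, segment-by-segment construction described above can be sketched as follows: each consecutive, endpoint-sharing triple of (input size, measured time) samples yields one quadratic segment, and the segments join into a continuous piecewise model. This is an illustrative reconstruction using plain Lagrange interpolation; the paper's method additionally averages secant lines, and the helper names are ours.

```python
from fractions import Fraction

def fit_segment(p0, p1, p2):
    # Lagrange quadratic (a, b, c) through three (x, y) points.
    (x0, y0), (x1, y1), (x2, y2) = [(Fraction(x), Fraction(y)) for x, y in (p0, p1, p2)]
    d0, d1, d2 = (x0 - x1) * (x0 - x2), (x1 - x0) * (x1 - x2), (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

def build_runtime_model(samples):
    """samples: (input_size, measured_time) pairs sorted by size,
    with an odd count of at least three.  Consecutive triples share
    endpoints, so the resulting piecewise model F is continuous."""
    segments = []
    for i in range(0, len(samples) - 2, 2):
        a, b, c = fit_segment(*samples[i:i + 3])
        segments.append((samples[i][0], samples[i + 2][0], a, b, c))
    def F(x):
        for lo, hi, a, b, c in segments:
            if lo <= x <= hi:
                return a * x * x + b * x + c
        raise ValueError("input size outside the modelled range")
    return F

# Exponential-looking timing data: two segments covering [2, 6].
F = build_runtime_model([(2, 4), (3, 8), (4, 16), (5, 32), (6, 64)])
```

The model reproduces the measured values at the sample points and interpolates smoothly between them; plotting F against each variable in turn is what lets the big-O behavior be read off.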

4.2 Mathematical Discussion

After reviewing the results, we were able to confirm the accuracy of the proposed approach in constructing matching segmented differentiable quadratics for arbitrary non-polynomials, including logarithmic, exponential, trigonometric, and rational functions. To determine the approach's accuracy on the selected functions, we calculated the average value of the formulated function over each segment. After doing so, and after reviewing the formulaic relationships between computed segments, we found a collective functional-resemblance score greater than 99\% and began to notice profound mathematical implications. After testing just a few data points, we can produce a rule that constructs the next consecutive segmented polynomial from the functional patterns that surface. For example, for \log_{2}x we determined that every consecutive segment is equivalent to \frac{a}{4}x^{2}+\frac{b}{2}x+(c+1), where a, b, and c are the coefficients of the previous segment, so the accuracy of any additional segments remains identical. Not only is this an effective method for accurate polynomial replication; its sheer simplicity avoids flaws found in leading methods (primarily with non-sinusoidal functions), most notably Taylor series. Using these methods, mathematicians and scientists can construct accurate, differentiable functions to represent patterned data, non-polynomials, and functions found in higher theoretical dimensions. A similar approach could also be used to model the natural progression of repetitious systems such as natural disasters, planetary orbits, or pandemic-related death tolls, leading to a better understanding of their nature, since in theory their physical attributes and properties are built upon recurring natural functions.
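The segment-to-segment rule for \log_{2}x follows from the identity \log_{2}x=\log_{2}(x/2)+1: substituting x/2 into a quadratic scales a by \frac{1}{4} and b by \frac{1}{2}, and the +1 shifts the constant term. A minimal sketch (the function name is ours):

```python
from fractions import Fraction

def next_log2_segment(a, b, c):
    """Map the quadratic a*x^2 + b*x + c modelling log2(x) on
    [2^k, 2^(k+1)] to the segment for the doubled interval
    [2^(k+1), 2^(k+2)], using log2(x) = log2(x/2) + 1."""
    return a / 4, b / 2, c + 1

# Applying the rule to the second log2 segment from Table 2
# reproduces the third segment exactly.
seg3 = next_log2_segment(Fraction(-1, 768), Fraction(1, 8), Fraction(7, 3))
```

Exact rational arithmetic makes the reproduction coefficient-for-coefficient, which is why the accuracy of additional segments generated this way does not degrade.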

4.3 Conclusion

In this paper we proposed an approach that uses segmented quadratic construction, based upon the principles of Lagrangian interpolation, to help determine algorithmic runtimes and to model non-polynomials, with foreseeable applications in pure mathematics and in pattern modeling and recognition across science and nature. We hope to build upon this approach by refining it and by finding new ways to apply this research in computational and mathematical fields.

Acknowledgments

I would like to thank Professor Jeffrey Ullman, Mr. Sudhir Kamath, Mr. Robert Gendron, Mr. Phillip Nho, and Ms. Katie MacDougall for their continual support of my research work.

References

  • (1) Nasar, A. A. (2016). The history of Algorithmic complexity. The Mathematics Enthusiast, 13(3), 217-242.
  • (2) Aho, A., Lam, M., Sethi, R., & Ullman, J. (2007). Compilers: Principles, Techniques and Tools, 2nd Edition.
  • (3) Sipser, M. (1996). Introduction to the Theory of Computation. ACM Sigact News, 27(1), 27-29.
  • (4) Puschner, P., & Koza, C. (1989). Calculating the maximum execution time of real-time programs. Real-Time Systems, 1(2), 159-176.
  • (5) Dean, W. (2015). Computational complexity theory.
  • (6) Qi, Q., Weise, T., & Li, B. (2017, July). Modeling optimization algorithm runtime behavior and its applications. In Proceedings of the Genetic and Evolutionary Computation Conference Companion (pp. 115-116).
  • (7) Guzman, J. P., & Limoanco, T. (2017). An Empirical Approach to Algorithm Analysis Resulting in Approximations to Big Theta Time Complexity. JSW, 12(12), 946-976.
  • (8) Aho, A. V., & Ullman, J. D. (1994). Foundations of Computer Science. WH Freeman & Co.
  • (9) Mohr, A. (2014). Quantum computing in complexity theory and theory of computation. Carbondale, IL.
  • (10) Jumarie, G. (2006). Modified Riemann-Liouville derivative and fractional Taylor series of nondifferentiable functions further results. Computers & Mathematics with Applications, 51(9-10), 1367-1376.
  • (11) Corliss, G., & Chang, Y. F. (1982). Solving ordinary differential equations using Taylor series. ACM Transactions on Mathematical Software (TOMS), 8(2), 114-144.
  • (12) De Boor, C., & Ron, A. (1990). On multivariate polynomial interpolation. Constructive Approximation, 6(3), 287-302.
  • (13) Sauer, T., & Xu, Y. (1995). On multivariate Lagrange interpolation. Mathematics of computation, 64(211), 1147-1170.
  • (14) Rashed, M. T. (2004). Lagrange interpolation to compute the numerical solutions of differential, integral and integro-differential equations. Applied Mathematics and computation, 151(3), 869-878.
  • (15) Berrut, J. P., & Trefethen, L. N. (2004). Barycentric lagrange interpolation. SIAM review, 46(3), 501-517.
  • (16) Comenetz, M. (2002). Calculus: the elements. World Scientific Publishing Company.
  • (17) Irving, R. (2020). Beyond the Quadratic Formula (Vol. 62). American Mathematical Soc.
  • (18) Cucker, F., & Corbalan, A. G. (1989). An alternate proof of the continuity of the roots of a polynomial. The American Mathematical Monthly, 96(4), 342-345.