LOGEST_q
Updated 2024-02-13 20:10:01.487000
Syntax
SELECT * FROM [westclintech].[wct].[LOGEST_q](
<@Matrix_RangeQuery, nvarchar(max),>
,<@Y_ColumnNumber, int,>
,<@Lconst, bit,>)
Description
Use the table-valued function LOGEST_q to calculate the exponential curve that best fits a series of x- and y-values. The LOGEST_q function returns the statistics that describe the calculated solution, including the coefficients (m), the standard error of the coefficients (se), the t statistic for each coefficient (tstat) and the associated p-values (pval), the coefficient of determination (rsq), the adjusted r-square value (rsqa) and the multiple r-square value (rsqm), the standard error of the y estimate (sey), the F-observed value (F), the residual degrees of freedom (df), the regression sum of squares (ss_reg), and the residual sum of squares (ss_resid).
In the case where we have one column of x-values and a column of y-values, the LOGEST solution is:
y = b * m^x
For purposes of multi-linear regression, where there are multiple columns of x-values (and still a single column of y-values), the formula for the solution is described by the following equation:
y=m_n^{x_n} * m_{n-1}^{x_{n-1}} * m_{n-2}^{x_{n-2}} * ... * m_1^{x_1}*m_0
where n is the number of x-columns and m_0 is b. Taking the natural logarithm of both sides expresses the exponential regression as a linear regression on the natural logarithms:
\ln y = x_n * \ln m_n + x_{n-1} * \ln m_{n-1} + x_{n-2} * \ln m_{n-2} + ... + x_1 * \ln m_1 + \ln m_0
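The log transform is the whole trick: fit a straight line to ln y, then exponentiate the coefficients. As a minimal sketch (pure Python, independent of LOGEST_q), fitting y = b * m^x by ordinary least squares on the logarithms of noise-free data:

```python
import math

# Toy data generated from y = 2 * 1.5**x, so the fit should recover
# b = 2 and m = 1.5 (up to floating-point error).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * 1.5 ** x for x in xs]

# Regress ln(y) on x with ordinary least squares: ln y = ln b + x * ln m
ln_ys = [math.log(y) for y in ys]
n = len(xs)
x_bar = sum(xs) / n
ly_bar = sum(ln_ys) / n
slope = sum((x - x_bar) * (ly - ly_bar) for x, ly in zip(xs, ln_ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
intercept = ly_bar - slope * x_bar

m = math.exp(slope)      # growth factor (m in y = b * m^x)
b = math.exp(intercept)  # constant term (b, i.e. m0)
```

With noisy data the recovered b and m minimize squared error in log space, which is also why the function requires positive y-values.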
The function expects the input to be in row-column (spreadsheet) format rather than in third normal form. The input therefore consists of a SELECT statement, passed in as a string, that specifies the columns containing the x- and y-values and the table or view that contains the data, together with a 1-based index identifying which column of the resultant table contains the y-values. The function executes the SQL dynamically, and the resultant table is used as input into the OLS calculations.
LOGEST_q automatically detects collinearity and removes the rightmost collinear column, resulting in a regression coefficient of 0 for that column.
Arguments
@Matrix_RangeQuery
The SELECT statement, as a string, which, when executed, creates the resultant table of x- and y-values used in the calculation.
@Y_ColumnNumber
The index into the resultant table identifying the column containing the y-values. The index value must be between 1 and n, where n is the number of columns in the resultant table created by @Matrix_RangeQuery. @Y_ColumnNumber must be of the type int or of a type that implicitly converts to int.
@Lconst
A logical value specifying whether the constant b (m0) should be calculated normally ('True') or forced to 1 ('False'). @Lconst must be of the type bit or of a type that implicitly converts to bit.
Return Type
table
{"columns": [{"field": "colName", "headerName": "Name", "header": "name"}, {"field": "colDatatype", "headerName": "Type", "header": "type"}, {"field": "colDesc", "headerName": "Description", "header": "description", "minWidth": 1000}], "rows": [{"id": "d479f67d-9679-43a5-8557-6c850aab22d7", "colName": "stat_name", "colDatatype": "nvarchar(max)", "colDesc": "Identifies the statistic being returned: m = the estimated coefficient; se = the standard error of the estimated coefficient; tstat = the t statistic; pval = the p-value (t distribution) for the t statistic; rsq = the coefficient of determination (r2); rsqa = the adjusted r-square; rsqm = the multiple r-square; sey = the standard error for the y estimate; f = the f-observed value; df = the residual degrees of freedom; ss_reg = the regression sum of squares; ss_resid = the residual sum of squares"}, {"id": "f4e601b8-125d-4fae-81a2-64f1f5a5d35d", "colName": "idx", "colDatatype": "int", "colDesc": "Identifies the subscript for the estimated coefficient, the standard error of the estimated coefficient, the t statistic, and the p-value. For example, the stat_name m with an idx of 0, specifies that the stat_val is for m0, or the y-intercept (which is b in y = mx + b). An idx of 1 for the same stat_name identifies m1. The stat_name se with an idx of 0 identifies the standard error of the m0 coefficient (which is sometimes referred to as the standard error of b or seb). idx values are only supplied for the m, se, tstat, and pval stat_names. All others will have an idx of NULL."}, {"id": "cee72cda-4bf8-48c5-8c71-cf8ea0bea489", "colName": "stat_val", "colDatatype": "float", "colDesc": "the calculated value of the statistic."}, {"id": "eb276889-c875-4686-a3d5-c2a4e1e07e02", "colName": "col_name", "colDatatype": "nvarchar(4000)", "colDesc": "the column name from the resultant table produced by the dynamic SQL. col_name values are produced only for the m, se, tstat, and pval statistics; all other stat_names have NULL for col_name."}]}
Remarks
If @Lconst is NULL then @Lconst is set to 'True'.
If @Lconst is true then the number of rows must be greater than the number of columns.
If @Lconst is false then the number of rows must be greater than or equal to the number of columns.
If a y-value is < 0 then NULL will be returned.
For simpler queries, you can try the LOGEST function.
Examples
Example #1
We put the x- and y-data into a temp table, #xy.
SELECT *
INTO #xy
FROM ( VALUES (508.78, 0.34, 3.95, 1.75, 4.41, 49),
(276.59, 0.39, 3.72, 1.17, 3.48, 46.72),
(82.07, 0.41, 2.35, 1.64, 3.98, 13.03),
(225.66, 0.54, 5.08, 1.57, 3.21, 16.32),
(176.03, 0.35, 3.31, 1.19, 3.73, 34.16),
(148.2, 0.37, 4.46, 1.4, 4.36, 14.62),
(357.56, 0.38, 5.51, 1.63, 3.74, 27.97),
(470.35, 0.38, 3.74, 1.72, 3.94, 48.8),
(308.7, 0.48, 4.76, 1.55, 3.37, 25.84),
(219.6, 0.37, 5.03, 1.53, 4.37, 17.6)) n (y, x0, x1, x2, x3, x4);
This is what the data look like.
{"columns":[{"field":"y","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"x0","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"x1","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"x2","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"x3","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"x4","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"}],"rows":[{"y":"508.78","x0":"0.34","x1":"3.95","x2":"1.75","x3":"4.41","x4":"49.00"},{"y":"276.59","x0":"0.39","x1":"3.72","x2":"1.17","x3":"3.48","x4":"46.72"},{"y":"82.07","x0":"0.41","x1":"2.35","x2":"1.64","x3":"3.98","x4":"13.03"},{"y":"225.66","x0":"0.54","x1":"5.08","x2":"1.57","x3":"3.21","x4":"16.32"},{"y":"176.03","x0":"0.35","x1":"3.31","x2":"1.19","x3":"3.73","x4":"34.16"},{"y":"148.20","x0":"0.37","x1":"4.46","x2":"1.40","x3":"4.36","x4":"14.62"},{"y":"357.56","x0":"0.38","x1":"5.51","x2":"1.63","x3":"3.74","x4":"27.97"},{"y":"470.35","x0":"0.38","x1":"3.74","x2":"1.72","x3":"3.94","x4":"48.80"},{"y":"308.70","x0":"0.48","x1":"4.76","x2":"1.55","x3":"3.37","x4":"25.84"},{"y":"219.60","x0":"0.37","x1":"5.03","x2":"1.53","x3":"4.37","x4":"17.60"}]}
To invoke the table-valued function LOGEST_q, we enter the following SQL.
SELECT *
FROM wct.LOGEST_q( 'SELECT y,x0,x1,x2,x3,x4 FROM #xy', --@Matrix_RangeQuery
1, --@Y_ColumnNumber
'True' --@Lconst
);
Essentially, the function dynamically creates a SQL statement to SELECT the column names from the #xy table. Since we are selecting all the columns in this particular table, we can simplify the first parameter by entering the following:
SELECT *
FROM wct.LOGEST_q( 'SELECT * FROM #xy', --@Matrix_RangeQuery
1, --@Y_ColumnNumber
'True' --@Lconst
);
The following results are produced.
{"columns":[{"field":"stat_name"},{"field":"idx"},{"field":"stat_val","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"col_name"}],"rows":[{"stat_name":"m","idx":"0","stat_val":"4.21565610446703","col_name":"Intercept"},{"stat_name":"m","idx":"1","stat_val":"2.85133142337479","col_name":"x0"},{"stat_name":"m","idx":"2","stat_val":"1.38181447807267","col_name":"x1"},{"stat_name":"m","idx":"3","stat_val":"1.88972141616803","col_name":"x2"},{"stat_name":"m","idx":"4","stat_val":"1.08094701123002","col_name":"x3"},{"stat_name":"m","idx":"5","stat_val":"1.03536022300628","col_name":"x4"},{"stat_name":"se","idx":"0","stat_val":"0.860087843692011","col_name":"Intercept"},{"stat_name":"se","idx":"1","stat_val":"1.07785882850791","col_name":"x0"},{"stat_name":"se","idx":"2","stat_val":"0.028420841607503","col_name":"x1"},{"stat_name":"se","idx":"3","stat_val":"0.175789083450687","col_name":"x2"},{"stat_name":"se","idx":"4","stat_val":"0.144735344427569","col_name":"x3"},{"stat_name":"se","idx":"5","stat_val":"0.0025961044875186","col_name":"x4"},{"stat_name":"tstat","idx":"0","stat_val":"1.67285847523053","col_name":"Intercept"},{"stat_name":"tstat","idx":"1","stat_val":"0.972099521362486","col_name":"x0"},{"stat_name":"tstat","idx":"2","stat_val":"11.3788845233473","col_name":"x1"},{"stat_name":"tstat","idx":"3","stat_val":"3.62041491349439","col_name":"x2"},{"stat_name":"tstat","idx":"4","stat_val":"0.537792060915596","col_name":"x3"},{"stat_name":"tstat","idx":"5","stat_val":"13.3852115290248","col_name":"x4"},{"stat_name":"pval","idx":"0","stat_val":"0.169670906822841","col_name":"Intercept"},{"stat_name":"pval","idx":"1","stat_val":"0.386047045013165","col_name":"x0"},{"stat_name":"pval","idx":"2","stat_val":"0.000340185528319693","col_name":"x1"},{"stat_name":"pval","idx":"3","stat_val":"0.0223489706841389","col_name":"x2"},{"stat_name":"pval","idx":"4","stat_val":"0.619246709635609","col_name":"x3"},{"stat_name":"pval","idx":"5","stat_val":"0.000180160941957814","col_name":"x4"},{"stat_name":"rsq","idx":"NULL","stat_val":"0.991381099586957","col_name":"NULL"},{"stat_name":"sey","idx":"NULL","stat_val":"0.0769397569469466","col_name":"NULL"},{"stat_name":"F","idx":"NULL","stat_val":"92.0192648321283","col_name":"NULL"},{"stat_name":"df","idx":"NULL","stat_val":"4","col_name":"NULL"},{"stat_name":"ss_reg","idx":"NULL","stat_val":"2.72364426422275","col_name":"NULL"},{"stat_name":"ss_resid","idx":"NULL","stat_val":"0.0236789047962209","col_name":"NULL"},{"stat_name":"rsqm","idx":"NULL","stat_val":"0.995681223879891","col_name":"NULL"},{"stat_name":"rsqa","idx":"NULL","stat_val":"0.980607474070653","col_name":"NULL"}]}
The results are returned in third normal form. You can use standard SQL commands to re-format the results. For example, if you wanted to produce the coefficients in a format similar to the output of the Excel Data Analysis Regression Tool, you could use the following SQL[1].
;WITH mycte
as (SELECT *
FROM wct.LOGEST_q('SELECT * FROM #xy', 1, 1) p )
SELECT d.col_name,
d.m,
d.se,
d.tstat,
d.pval,
LOG(m) - wct.T_INV_2T(.05, m.stat_val) * se as [Lower Confidence Level],
LOG(m) + wct.T_INV_2T(.05, m.stat_val) * se as [Upper Confidence Level]
FROM mycte p
PIVOT ( MAX(stat_val)
FOR stat_name in (m, se, tstat, pval)) d
CROSS JOIN mycte m
WHERE m.stat_name = 'df'
AND d.col_name IS NOT NULL;
This produces the following result.
{"columns":[{"field":"col_name"},{"field":"m","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"se","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"tstat","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"pval","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"Lower Confidence Level","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"Upper Confidence Level","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"}],"rows":[{"col_name":"Intercept","m":"4.21565610446703","se":"0.860087843692011","tstat":"1.67285847523053","pval":"0.169670906822841","Lower Confidence Level":"1.82766942080822","Upper Confidence Level":"6.60364278812584"},{"col_name":"x0","m":"2.85133142337479","se":"1.07785882850791","tstat":"0.972099521362486","pval":"0.386047045013165","Lower Confidence Level":"-0.141284445130228","Upper Confidence Level":"5.8439472918798"},{"col_name":"x1","m":"1.38181447807267","se":"0.028420841607503","tstat":"11.3788845233473","pval":"0.000340185528319693","Lower Confidence Level":"1.30290557150592","Upper Confidence Level":"1.46072338463942"},{"col_name":"x2","m":"1.88972141616803","se":"0.175789083450687","tstat":"3.62041491349439","pval":"0.0223489706841389","Lower Confidence Level":"1.40165267587416","Upper Confidence Level":"2.3777901564619"},{"col_name":"x3","m":"1.08094701123002","se":"0.144735344427569","tstat":"0.537792060915596","pval":"0.619246709635609","Lower Confidence Level":"0.679097272644978","Upper Confidence Level":"1.48279674981506"},{"col_name":"x4","m":"1.03536022300628","se":"0.0025961044875186","tstat":"13.3852115290248","pval":"0.000180160941957814","Lower Confidence Level":"1.02815228140933","Upper Confidence Level":"1.04256816460324"}]}
Similarly, if you wanted to reformat the results to produce the equivalent of the ANOVA table from the Excel Data Analysis Regression tool, you could use the following SQL.
;WITH mycte
as (SELECT df as [Residual df],
Obs - df - 1 as [Regression df],
ss_reg as [Regression SS],
ss_resid as [Residual SS],
F
FROM ( SELECT stat_name,
stat_val
FROM wct.LOGEST_q('SELECT * FROM #xy', 1, 1)
WHERE stat_name in ( 'ss_reg', 'ss_resid', 'F', 'df' )
UNION
SELECT 'Obs',
COUNT(*)
FROM #xy) d
PIVOT ( MAX(stat_val)
FOR stat_name in (df, F, ss_reg, ss_resid, Obs)) p)
SELECT 'Regression',
[Regression df] as DF,
[Regression SS] as SS,
[Regression SS] / [Regression df] as MS,
F,
wct.F_DIST_RT(F, [Regression df], [Residual df]) as [Significance F]
FROM mycte
UNION ALL
SELECT 'Residual',
[Residual df],
[Residual SS],
[Residual SS] / [Residual df] as [Residual MS],
NULL,
NULL
FROM mycte
UNION ALL
SELECT 'Total',
[Regression df] + [Residual df],
[Regression SS] + [Residual SS],
NULL,
NULL,
NULL
FROM mycte;
This produces the following result.
{"columns":[{"field":"(No column name)"},{"field":"DF","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"SS","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"MS"},{"field":"F"},{"field":"Significance F"}],"rows":[{"(No column name)":"Regression","DF":"5","SS":"2.72364426422275","MS":"0.544728852844551","F":"92.0192648321283","Significance F":"0.000322202218457691"},{"(No column name)":"Residual","DF":"4","SS":"0.0236789047962209","MS":"0.00591972619905522","F":"NULL","Significance F":"NULL"},{"(No column name)":"Total","DF":"9","SS":"2.74732316901898","MS":"NULL","F":"NULL","Significance F":"NULL"}]}
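The ANOVA rows are simple arithmetic over the statistics LOGEST_q returns: each mean square is SS divided by its degrees of freedom, and F is the ratio of the two mean squares. A quick Python check using the ss_reg, ss_resid, and df values from the result above:

```python
# Statistics returned by LOGEST_q in Example #1.
ss_reg = 2.72364426422275      # regression sum of squares
ss_resid = 0.0236789047962209  # residual sum of squares
df_resid = 4                   # residual degrees of freedom
obs = 10                       # rows in #xy
df_reg = obs - df_resid - 1    # regression degrees of freedom (with intercept)

ms_reg = ss_reg / df_reg       # regression mean square
ms_resid = ss_resid / df_resid # residual mean square
F = ms_reg / ms_resid          # reproduces the F value LOGEST_q returned
```

With an intercept, the regression degrees of freedom equal the number of x-columns (5 here); Example #2 shows how this changes when the constant is suppressed.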
Example #2
Using the same data as in Example #1, if we wanted to calculate the coefficients without a y-intercept, we would modify the function call to make the last parameter 'False'. This time, however, we will put the results into a temporary table, #L0, and reformat the results using the #L0 table.
SELECT *
INTO #L0
FROM wct.LOGEST_q( 'SELECT * FROM #xy', --@Matrix_RangeQuery
1, --@Y_ColumnNumber
'False' --@Lconst
);
The #L0 table should contain the following data.
stat_name idx stat_val col_name
---------- ----------- ---------------------- --------------------------------------------------------------------------------------------------------------------------------
m 0 4.21565610446699 Intercept
m 1 2.85133142337482 x0
m 2 1.38181447807267 x1
m 3 1.88972141616803 x2
m 4 1.08094701123002 x3
m 5 1.03536022300628 x4
se 0 0.860087843692011 Intercept
se 1 1.07785882850791 x0
se 2 0.0284208416075031 x1
se 3 0.175789083450687 x2
se 4 0.144735344427569 x3
se 5 0.00259610448751859 x4
tstat 0 1.67285847523052 Intercept
tstat 1 0.972099521362497 x0
tstat 2 11.3788845233472 x1
tstat 3 3.62041491349439 x2
tstat 4 0.537792060915604 x3
tstat 5 13.3852115290248 x4
pval 0 0.169670906822843 Intercept
pval 1 0.38604704501316 x0
pval 2 0.000340185528319697 x1
pval 3 0.0223489706841388 x2
pval 4 0.619246709635603 x3
pval 5 0.000180160941957813 x4
rsq NULL 0.991381099586957 NULL
sey NULL 0.0769397569469467 NULL
F NULL 92.0192648321303 NULL
df NULL 4 NULL
ss_reg NULL 2.72364426422282 NULL
ss_resid NULL 0.023678904796221 NULL
rsqm NULL 0.995681223879891 NULL
rsqa NULL 0.980607474070653 NULL
We can use the same technique as in Example #1 to reformat the coefficient statistics.
SELECT d.col_name,
d.m,
d.se,
d.tstat,
d.pval,
LOG(m) - wct.T_INV_2T(.05, m.stat_val) * se as [Lower Confidence Level],
LOG(m) + wct.T_INV_2T(.05, m.stat_val) * se as [Upper Confidence Level]
FROM #L0 p
PIVOT ( MAX(stat_val)
FOR stat_name in (m, se, tstat, pval)) d
CROSS JOIN #L0 m
WHERE m.stat_name = 'df'
AND d.col_name IS NOT NULL;
This produces the following result.
col_name m se tstat pval Lower Confidence Level Upper Confidence Level
-------------------------------------------------------------------------------------------------------------------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
Intercept 4.21565610446699 0.860087843692011 1.67285847523052 0.169670906822843 -0.949181444895887 3.82679192242173
x0 2.85133142337482 1.07785882850791 0.972099521362497 0.38604704501316 -1.94482981721613 4.04040191979389
x1 1.38181447807267 0.0284208416075031 11.3788845233472 0.000340185528319697 0.244488568141366 0.402306381274874
x2 1.88972141616803 0.175789083450687 3.62041491349439 0.0223489706841388 0.148360679060511 1.12449815964825
x3 1.08094701123002 0.144735344427569 0.537792060915604 0.619246709635603 -0.324012219418008 0.479687257752072
x4 1.03536022300628 0.00259610448751859 13.3852115290248 0.000180160941957813 0.0275414661199339 0.0419573493138399
Note that even though we specified no intercept, an intercept row is still created, with a regression coefficient of 1 (keeping the results consistent with the Excel Data Analysis Regression tool). It is simple enough to exclude this row from the output (as R does) by adding another condition to the WHERE clause.
SELECT d.col_name,
d.m,
d.se,
d.tstat,
d.pval,
LOG(m) - wct.T_INV_2T(.05, m.stat_val) * se as [Lower Confidence Level],
LOG(m) + wct.T_INV_2T(.05, m.stat_val) * se as [Upper Confidence Level]
FROM #L0 p
PIVOT ( MAX(stat_val)
FOR stat_name in (m, se, tstat, pval)) d
CROSS JOIN #L0 m
WHERE m.stat_name = 'df'
AND d.col_name IS NOT NULL
AND d.col_name <> 'Intercept';
The following SQL can be used to reproduce the ANOVA table.
;WITH mycte
as (SELECT df as [Residual df],
--,Obs - df - 1 as [Regression df]
Obs - df as [Regression df],
ss_reg as [Regression SS],
ss_resid as [Residual SS],
F
FROM ( SELECT stat_name,
stat_val
FROM #L0
WHERE stat_name in ( 'ss_reg', 'ss_resid', 'F', 'df' )
UNION
SELECT 'Obs',
COUNT(*)
FROM #xy) d
PIVOT ( MAX(stat_val)
FOR stat_name in (df, F, ss_reg, ss_resid, Obs)) p)
SELECT 'Regression',
[Regression df] as DF,
[Regression SS] as SS,
[Regression SS] / [Regression df] as MS,
F,
wct.F_DIST_RT(F, [Regression df], [Residual df]) as [Significance F]
FROM mycte
UNION ALL
SELECT 'Residual',
[Residual df],
[Residual SS],
[Residual SS] / [Residual df] as [Residual MS],
NULL,
NULL
FROM mycte
UNION ALL
SELECT 'Total',
[Regression df] + [Residual df],
[Regression SS] + [Residual SS],
NULL,
NULL,
NULL
FROM mycte;
This produces the following result.
{"columns":[{"field":"(No column name)"},{"field":"DF","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"SS","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"MS"},{"field":"F"},{"field":"Significance F"}],"rows":[{"(No column name)":"Regression","DF":"6","SS":"2.72364426422282","MS":"0.453940710703804","F":"92.0192648321303","Significance F":"0.000307444910578283"},{"(No column name)":"Residual","DF":"4","SS":"0.023678904796221","MS":"0.00591972619905524","F":"NULL","Significance F":"NULL"},{"(No column name)":"Total","DF":"10","SS":"2.74732316901904","MS":"NULL","F":"NULL","Significance F":"NULL"}]}
Note that the calculation of the regression degrees of freedom needs to be adjusted to reflect the absence of an intercept. It is also worth noting that the calculation in this SQL produces a Significance F that agrees with R, while Excel produces a different value (which seems to be caused by subtracting 1 from the residual degrees of freedom).
Example #3
Instead of just having a matrix of x- and y-values in our table, we are going to add a column, testid, which is a way of grouping x- and y-values together for purposes of doing the LOGEST calculation. This allows us to compute the ordinary least squares values for multiple sets of data in a single SELECT statement.
SELECT *
INTO #xy
FROM ( VALUES ('Test1', 3.93, 57.91),
('Test1', 4.4, 76.2),
('Test1', 3.33, 20.78),
('Test1', 1.88, 5.06),
('Test1', 3.68, 32.53),
('Test1', 4.81, 55.32),
('Test1', 4.27, 57.83),
('Test1', 3.11, 15.62),
('Test1', 3.81, 22.93),
('Test1', 3.36, 20.58),
('Test2', 12.56, 5.19),
('Test2', 544.52, 66.68),
('Test2', 955.35, 96.94),
('Test2', 478.54, 61.1),
('Test2', 565.07, 68.28),
('Test2', 879.44, 91.75),
('Test2', 988.58, 99.19),
('Test2', 218.15, 36.2),
('Test2', 523.75, 64.97),
('Test2', 653.03, 75.2),
('Test3', 10011.32, 73.6),
('Test3', 14224.21, 86.71),
('Test3', 12616.39, 81.99),
('Test3', 731.51, 21.7),
('Test3', 83.07, 7.84),
('Test3', 7420.4, 64),
('Test3', 3470.08, 44.89),
('Test3', 4320.23, 49.72),
('Test3', 257.08, 13.31),
('Test3', 4083.32, 48.43),
('Test4', 6.75, 33.34),
('Test4', 10.66, 95.33),
('Test4', 8.78, 64.92),
('Test4', 8.46, 57.19),
('Test4', 8.58, 58.07),
('Test4', 8.13, 64.62),
('Test4', 8.83, 74.75),
('Test4', 8.56, 59.43),
('Test4', 9.35, 73.64),
('Test4', 7.4, 46.11),
('Test5', 67.86, 26.69),
('Test5', 198.1, 66.08),
('Test5', 46.65, 18.53),
('Test5', 246.06, 79.92),
('Test5', 82.08, 32.66),
('Test5', 306.95, 97.34),
('Test5', 315.58, 98.06),
('Test5', 231.01, 76.41),
('Test5', 214.47, 71.35),
('Test5', 155.25, 54.05)) n (testid, y, x0);
Let’s say we wanted to run LOGEST_q for all the data where the testid is equal to 'Test3'. We could simply enter the following statement.
SELECT *
FROM wct.LOGEST_q('SELECT y,x0 FROM #xy WHERE testid = ''Test3''', 1, 'True');
This produces the following result.
{"columns":[{"field":"stat_name"},{"field":"idx"},{"field":"stat_val","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"col_name"}],"rows":[{"stat_name":"m","idx":"0","stat_val":"135.488598412649","col_name":"Intercept"},{"stat_name":"m","idx":"1","stat_val":"1.06164344089933","col_name":"x0"},{"stat_name":"se","idx":"0","stat_val":"0.361584667292727","col_name":"Intercept"},{"stat_name":"se","idx":"1","stat_val":"0.00646345630933807","col_name":"x0"},{"stat_name":"tstat","idx":"0","stat_val":"13.5760388541288","col_name":"Intercept"},{"stat_name":"tstat","idx":"1","stat_val":"9.25481979763138","col_name":"x0"},{"stat_name":"pval","idx":"0","stat_val":"8.32872857005995E-07","col_name":"Intercept"},{"stat_name":"pval","idx":"1","stat_val":"1.50855967470192E-05","col_name":"x0"},{"stat_name":"rsq","idx":"NULL","stat_val":"0.914577088316788","col_name":"NULL"},{"stat_name":"sey","idx":"NULL","stat_val":"0.543507567190628","col_name":"NULL"},{"stat_name":"F","idx":"NULL","stat_val":"85.6516894866305","col_name":"NULL"},{"stat_name":"df","idx":"NULL","stat_val":"8","col_name":"NULL"},{"stat_name":"ss_reg","idx":"NULL","stat_val":"25.3015498097353","col_name":"NULL"},{"stat_name":"ss_resid","idx":"NULL","stat_val":"2.3632038047478","col_name":"NULL"},{"stat_name":"rsqm","idx":"NULL","stat_val":"0.95633523845814","col_name":"NULL"},{"stat_name":"rsqa","idx":"NULL","stat_val":"0.903899224356387","col_name":"NULL"}]}
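Because the underlying fit is just OLS on ln y, the Test3 result can be reproduced outside the database. Here is a pure-Python sketch (not part of the library) using the ten Test3 rows from #xy and the closed-form simple-regression formulas; the variable names are illustrative:

```python
import math

# The ten (y, x0) pairs where testid = 'Test3'.
data = [(10011.32, 73.6), (14224.21, 86.71), (12616.39, 81.99),
        (731.51, 21.7), (83.07, 7.84), (7420.4, 64.0),
        (3470.08, 44.89), (4320.23, 49.72), (257.08, 13.31),
        (4083.32, 48.43)]

xs = [x for _, x in data]
ln_ys = [math.log(y) for y, _ in data]
n = len(data)
x_bar = sum(xs) / n
ly_bar = sum(ln_ys) / n

# OLS slope and intercept of ln(y) on x0.
slope = sum((x - x_bar) * (ly - ly_bar) for x, ly in zip(xs, ln_ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
intercept = ly_bar - slope * x_bar

m1 = math.exp(slope)      # the m coefficient for x0 (idx = 1)
m0 = math.exp(intercept)  # the Intercept, i.e. b (idx = 0)
```

Exponentiating the linear-fit slope and intercept yields the same m and b values that LOGEST_q returns above.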
In the following SQL we return selected values for each test.
SELECT p.testid,
p.rsq,
p.rsqa,
p.rsqm,
p.F,
p.df,
p.ss_reg,
p.ss_resid
FROM ( SELECT n.testid,
k.stat_name,
k.stat_val
FROM (SELECT DISTINCT testid FROM #xy) n
CROSS APPLY wct.LOGEST_q('SELECT y,x0 FROM #xy WHERE testid = ''' + n.testid + '''', 1, 'True') k ) d
PIVOT ( SUM(stat_val)
FOR stat_name IN (F, df, ss_reg, ss_resid, rsq, rsqm, rsqa)) p;
This produces the following result.
{"columns":[{"field":"testid"},{"field":"rsq","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"rsqa","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"rsqm","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"F","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"df","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"ss_reg","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"ss_resid","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"}],"rows":[{"testid":"Test1","rsq":"0.627439146717316","rsqa":"0.58086904005698","rsqm":"0.792110564705026","F":"13.473002140485","df":"8","ss_reg":"0.387585530606066","ss_resid":"0.230140559061539"},{"testid":"Test2","rsq":"0.85805225141019","rsqa":"0.840308782836464","rsqm":"0.926311098611147","F":"48.3587663733775","df":"8","ss_reg":"12.9581267103588","ss_resid":"2.14366538803894"},{"testid":"Test3","rsq":"0.914577088316788","rsqa":"0.903899224356387","rsqm":"0.95633523845814","F":"85.6516894866305","df":"8","ss_reg":"25.3015498097353","ss_resid":"2.3632038047478"},{"testid":"Test4","rsq":"0.919683577592861","rsqa":"0.909644024791969","rsqm":"0.959001343895232","F":"91.6060302517774","df":"8","ss_reg":"0.126304628613475","ss_resid":"0.0110302457832812"},{"testid":"Test5","rsq":"0.963626482306433","rsqa":"0.959079792594737","rsqm":"0.981644784179304","F":"211.940234194476","df":"8","ss_reg":"3.88948917792835","ss_resid":"0.14681456563304"}]}
In this SQL we select the coefficient statistics for each of the tests.
SELECT p.testid,
p.col_name,
p.m,
p.se,
p.tstat,
p.pval
FROM ( SELECT n.testid,
k.stat_name,
k.idx,
k.col_name,
k.stat_val
FROM (SELECT DISTINCT testid FROM #xy) n
CROSS APPLY wct.LOGEST_q('SELECT y,x0 FROM #xy WHERE testid = ''' + n.testid + '''', 1, 'True') k
WHERE k.idx IS NOT NULL) d
PIVOT ( SUM(stat_val)
FOR stat_name IN (m, se, tstat, pval)) p
ORDER BY testid,
idx;
This produces the following result.
{"columns":[{"field":"testid"},{"field":"col_name"},{"field":"m","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"se","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"tstat","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"},{"field":"pval","headerClass":"ag-right-aligned-header","cellClass":"ag-right-aligned-cell"}],"rows":[{"testid":"Test1","col_name":"Intercept","m":"2.57854985167313","se":"0.102860345445229","tstat":"9.20886629163371","pval":"1.56501747188535E-05"},{"testid":"Test1","col_name":"x0","m":"1.00887131799251","se":"0.00240622734641136","tstat":"3.67055883217868","pval":"0.0063043980680377"},{"testid":"Test2","col_name":"Intercept","m":"25.0101922364977","se":"0.432126191953359","tstat":"7.44986879108872","pval":"7.26541556172679E-05"},{"testid":"Test2","col_name":"x0","m":"1.04267468711693","se":"0.00600933926115219","tstat":"6.95404676238072","pval":"0.000117938606421147"},{"testid":"Test3","col_name":"Intercept","m":"135.488598412649","se":"0.361584667292727","tstat":"13.5760388541288","pval":"8.32872857005995E-07"},{"testid":"Test3","col_name":"x0","m":"1.06164344089933","se":"0.00646345630933807","tstat":"9.25481979763138","pval":"1.50855967470192E-05"},{"testid":"Test4","col_name":"Intercept","m":"5.4547907690067","se":"0.0477111687082712","tstat":"35.5575918314532","pval":"4.2844720029747E-10"},{"testid":"Test4","col_name":"x0","m":"1.00707950093233","se":"0.000737068469138532","tstat":"9.57110392022625","pval":"1.17632661453333E-05"},{"testid":"Test5","col_name":"Intercept","m":"37.3588249656326","se":"0.107749158004868","tstat":"33.6018325176432","pval":"6.71854140904409E-10"},{"testid":"Test5","col_name":"x0","m":"1.02344475968034","se":"0.00159183173667064","tstat":"14.5581672677048","pval":"4.85755624173566E-07"}]}
[1] The Excel Regression Tool does not directly support exponential regression. You would need to convert the y-values to ln(y). This means that the coefficients returned by the Regression Tool are the natural logarithms of the LOGEST_q coefficients.
See Also
LINEST_q - the Ordinary Least Squares (OLS) solution for a series of x-values and y-values
LOGEST - Exponential regression
GROWTH - calculate predicted exponential growth using existing values
TRENDMX - Calculate the values along a linear trend for multiple x values