How to report t statistic from R
I'm wondering how to report the result of a t-test from R, given that the degrees of freedom change from run to run even when the two vectors have the same length.
For example,
set.seed(1)
n = 500
x = rnorm(n, 6, 1)
y = rnorm(n, 6, 2)
t = t.test(x,y)
t
t$parameter
gives the output:
> t
Welch Two Sample t-test
data: x and y
t = 1.0924, df = 716.16, p-value = 0.275
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.09130295 0.32035262
sample estimates:
mean of x mean of y
6.022644 5.908119
> t$parameter
df
716.156
Whereas
set.seed(2)
n = 500
x = rnorm(n, 6, 1)
y = rnorm(n, 6, 2)
t = t.test(x,y)
t
t$parameter
gives the output:
> t
Welch Two Sample t-test
data: x and y
t = -0.62595, df = 748.05, p-value = 0.5315
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.2602459 0.1344099
sample estimates:
mean of x mean of y
6.061692 6.124610
> t$parameter
df
748.0475
I'm not sure whether it would be typical to report the first as $t(716.16), p = 0.275$ and the second as $t(748.05), p = 0.53$.
Tags: r, hypothesis-testing, t-test, reporting
2 Answers
If you have to report all the details, then you should also report the actual t-value, not just the degrees of freedom.
About the degrees of freedom: they change because you are using the Welch version of the t-test, which does not assume equal variances and therefore does not pool the variances of the two groups. If your context permits assuming equal variances in both groups, you could call t.test()
in the following way:
t.test(x, y, var.equal=TRUE)
Then you would get the same degrees of freedom in both cases: a whole number that depends only on the number of observations. However, don't do this just to get a round degrees-of-freedom value.
And if the Welch t-test is more appropriate for your data, consider also stating in your report that the Welch t-test was used.
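As a quick sketch of that point (re-using the simulation from the question, so the only new piece is the var.equal = TRUE argument), the pooled test gives a whole-number df that does not depend on the seed:
set.seed(1)
n = 500
x = rnorm(n, 6, 1)                         # same simulated data as the first example
y = rnorm(n, 6, 2)
t_pooled = t.test(x, y, var.equal = TRUE)  # classic Student's t-test with pooled variance
t_pooled$parameter                         # df = n1 + n2 - 2 = 998, whatever the seed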
The Student's t-test assumes that both samples have the same variance, and in that case the degrees of freedom are simply $n_1 + n_2 - 2$. The Welch test, on the other hand, does not make this assumption; its degrees of freedom are calculated from the sample variances, so you do not always get the same degrees of freedom for the same sample sizes. The answer is to report the degrees of freedom as you did, reading them from the R output.
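For reference (this is the standard Welch–Satterthwaite approximation used by R's default t.test(), not something spelled out above), the degrees of freedom are
$$\nu \approx \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}},$$
where $s_1^2$ and $s_2^2$ are the sample variances. Because $\nu$ depends on the observed variances, it varies from sample to sample even when $n_1 = n_2$.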
EDIT
I agree with Karolis Koncevičius that you need to report the t-value as well, of course. For your first example you would report $t(716.16) = 1.09, p = 0.275$. How many decimal places to report depends on the citation format used in your discipline. But I would suggest using the Welch t-test as the default, as R does, "because Welch's t-test performs better than Student's t-test whenever sample sizes and variances are unequal between groups, and gives the same result when sample sizes and variances are equal" (quote from the source linked earlier).
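As a minimal sketch of pulling those numbers straight out of the htest object (using the t object from the question's first example; the format string is just one common convention):
report = sprintf("t(%.2f) = %.2f, p = %.3f",
                 t$parameter, t$statistic, t$p.value)
report                                     # roughly "t(716.16) = 1.09, p = 0.275"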