
Learning & Behavior (Research field: Zoology) (submission via the journal's official website)

Overview
  • Journal abbreviation: not yet set
  • Reference translated title: not yet set
  • Core category: foreign-language journal
  • Impact factor (IF): not yet set
  • Self-citation rate: not yet set
  • Main research fields: not yet set

Learning & Behavior (quarterly) was founded in 2003 and is published in the United States. The journal publishes experimental and theoretical contributions and critical reviews concerning fundamental processes of learning and behavior in nonhuman and human animals. Topics covered include sensation, perception, conditioning, learning, attention, memory, motivation, emotion, development, social behavior, and comparative investigations.
Submission Information

Wanwei notes:

1. Submission method: online submission.

2. Journal website:

http://www.springer.com/psychology/journal/13420

3. Online submission system:

https://mc.manuscriptcentral.com/lb

4. Information note: The information on this journal is collected from the web and covers SCI-indexed core journals and supplemental journals; indexing data are updated every year. This site is a non-profit website that provides free services for contributors. Owing to relevant restrictions, we cannot provide data such as impact factors or JCR journal quartiles for reference; we apologize for any inconvenience.

Thursday, June 1, 2017

                   

 

Author Guidelines

 

Aims and Scope

 

Learning & Behavior publishes experimental and theoretical contributions and critical reviews concerning fundamental processes of learning and behavior in nonhuman and human animals. Topics covered include sensation, perception, conditioning, learning, attention, memory, motivation, emotion, development, social behavior, and comparative investigations.

 

HOW TO SUBMIT

Manuscripts are to be submitted electronically via ScholarOne:

http://mc.manuscriptcentral.com/lb

If you have not submitted via the ScholarOne submission system before, you will first be asked to create an account; otherwise you can use your existing account.

 

AFFIRMATIONS AT THE TIME OF SUBMISSION

(a) the work conforms to Standard 8 of the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct [click on “Standard 8” on http://www.apa.org/ethics/code/index.aspx ], which speaks to the ethics of conducting and publishing research and sharing data for the purpose of verification;

(b) if the manuscript includes any copyrighted material, the author understands that, if the manuscript is accepted for publication, s/he will be responsible for obtaining written permission to use that material;

(c) if any of the authors has a potential conflict of interest pertaining to the manuscript, that conflict has been disclosed in a message to the Editor;

(d) the author(s) understand(s) that before a manuscript can be published in Learning & Behavior the copyright to that manuscript must be transferred to the Psychonomic Society (see http://www.psychonomic.org/psp/access.html for details);

(e) the corresponding author is familiar with the Psychonomic Society’s Statistical Guidelines; please see the “Statistical Guidelines” section below.

 

STATISTICAL GUIDELINES

The Psychonomic Society’s Publications Committee and Ethics Committee and the Editors in Chief of the Society’s six journals worked together (with input from others) to create these guidelines on statistical issues. These guidelines focus on the analysis and reporting of quantitative data. Many of the issues described below pertain to vulnerabilities in null hypothesis significance testing (NHST), in which the central question is whether or not experimental measures differ from what would be expected due to chance. Below we emphasize some steps that researchers using NHST can take to avoid exacerbating those vulnerabilities. Many of the guidelines are long-standing norms about how to conduct experimental research in psychology. Nevertheless, researchers may benefit from being reminded of some of the ways that poor experimental procedure and analysis can compromise research conclusions. Authors are asked to consider the following issues for each manuscript submitted for publication in a Psychonomic Society journal. Some of these issues are specific to NHST, but many of them apply to other approaches as well. We welcome feedback regarding these guidelines via email to info@psychonomic.org with the Subject heading “Statistical Guidelines.”

1. It is important to address the issue of statistical power. Statistical power refers to the probability that a test will reject a false null hypothesis. Studies with low statistical power produce inherently ambiguous results because they often fail to replicate. Thus it is highly desirable to have ample statistical power and to report an estimate of a priori power (not post hoc power) for tests of your main hypotheses. Best practice when feasible is to draw on the literature and/or theory to make a plausible estimate of effect size and then to test a sufficient number of participants to attain adequate power to detect an effect of that size. There is no hard-and-fast rule specifying “adequate” power, and Editors may judge that other considerations (e.g., novelty, difficulty) partially offset low power. If a priori power cannot be calculated because there is no estimate of effect size, then perhaps the analysis should focus on estimation of the effect size rather than on a hypothesis test. In any case, the Method section should make clear what criteria were used to determine the sample size. The main points here are to (a) do what you reasonably can to attain adequate power and (b) explain how the number of participants was determined.
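
For illustration, an a priori power calculation of the kind described above might look like the following sketch, which assumes the Python statsmodels package; the effect size of d = 0.5 and the 80% power target are invented for the example, not values prescribed by these guidelines.

```python
# A priori power analysis for a two-sample t-test (illustrative values only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Plausible standardized effect size (Cohen's d) estimated from prior literature or theory.
effect_size = 0.5

# Solve for the per-group sample size needed for 80% power at alpha = .05 (two-sided).
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.1f}")  # roughly 64 per group
```

Reporting the assumed effect size and the resulting sample size in the Method section addresses point (b) above.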

2. Multiple NHST tests inflate null-hypothesis rejection rates. Tests of statistical significance (e.g., t-tests, analyses of variance) should not be used repeatedly on different subsets of the same data set (e.g., on varying numbers of participants in a study) without statistical correction, because the Type I error rate increases across multiple tests.

A. One concern is the practice of testing a small sample of participants and then analyzing the data and deciding what to do next depending on whether the predicted effect (a) is statistically significant (stop and publish!), (b) clearly is not being obtained (stop, tweak, and start a new experiment), or (c) looks like it might become significant if more participants are added to the sample (test more participants, then reanalyze; repeat as needed). If this “optional stopping rule” has been followed without appropriate corrections, then report that fact and acknowledge that the Type I error rate is inflated by the multiple tests. Depending on the views of the Editor and reviewers, having used this stopping rule may not preclude publication, but unless appropriate corrections to the Type I error rate are made it will lessen confidence in the reported results. Note that Bayesian data analysis methods are less sensitive to problems related to optional stopping than NHST methods.
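
To see the inflation concretely, the following sketch simulates the optional stopping rule described above under a true null hypothesis; the batch size, sample ceiling, and number of simulations are arbitrary choices for the example, and only standard numpy/scipy calls are used.

```python
# Simulate how uncorrected optional stopping inflates the Type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, batch, n_max, alpha = 5000, 10, 50, 0.05
false_positives = 0

for _ in range(n_sims):
    a, b = [], []
    while len(a) < n_max:
        # Both groups come from the same distribution, so the null is true.
        a.extend(rng.normal(0.0, 1.0, batch))
        b.extend(rng.normal(0.0, 1.0, batch))
        if stats.ttest_ind(a, b).pvalue < alpha:   # "significant -- stop and publish"
            false_positives += 1
            break

print(f"Nominal alpha = {alpha}, observed Type I rate = {false_positives / n_sims:.3f}")
# The observed rate is typically well above .05 with this stopping rule.
```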

B. It is problematic to analyze data and then drop some participants or some observations, re-run the analyses, and then report only the last set of analyses. If participants or observations were eliminated, then explicitly indicate why, when, and how this was done and either (a) report or synopsize the results of analyses that include all of the observations or (b) explain why such analyses would not be appropriate.

C. Covariate analyses should either be planned in advance or be described as exploratory. It is inappropriate to analyze data without a covariate, then re-analyze those same data with a covariate and report only the latter analysis as confirmation of an idea. It may be appropriate to conduct multiple analyses in exploratory research, but it is important to report those analyses as exploratory and to acknowledge possible inflations of the Type I error rate.
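
One transparent way to handle this is to report the model both without and with the covariate, rather than only the adjusted analysis. The sketch below assumes the statsmodels formula API; the variable names (score, group, baseline) and the data are invented for illustration.

```python
# Report both the unadjusted and the covariate-adjusted group effect (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 80
df = pd.DataFrame({
    "group": np.repeat([0, 1], n // 2),      # two experimental conditions
    "baseline": rng.normal(50.0, 10.0, n),   # pre-existing covariate
})
df["score"] = 2.0 * df["group"] + 0.3 * df["baseline"] + rng.normal(0.0, 5.0, n)

unadjusted = smf.ols("score ~ group", data=df).fit()
adjusted = smf.ols("score ~ group + baseline", data=df).fit()

# Present both estimates (and label the covariate analysis as planned or exploratory).
print("Unadjusted group effect:", round(unadjusted.params["group"], 2))
print("Adjusted group effect:  ", round(adjusted.params["group"], 2))
```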

D. If multiple dependent variables (DVs) are individually analyzed with NHST, the probability that at least one of them will be “significant” by chance alone grows with the number of DVs. Therefore it is important to inform readers of all of the DVs collected that are relevant to the study. For example, if accuracy, latency, and confidence were measured, but the paper focuses on the accuracy data, then report the existence of the other measures and (if possible) adjust the analyses as appropriate. Similarly, if several different measures were used to tap a construct, then it is important to report the existence of all of those indices, not just the ones that yielded significant effects (although it may be reasonable to present a rationale for why discounting or not reporting detailed results for some of the measures is justified). There is no need to report measures that were available to you (e.g., via a participant pool data base) but that are irrelevant to the study.
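
Where several DVs are tested individually, one common adjustment is a familywise correction such as Holm's step-down procedure. The sketch below assumes the statsmodels package; the three p-values for accuracy, latency, and confidence are invented for the example.

```python
# Holm correction for a family of per-DV p-values (illustrative values).
from statsmodels.stats.multitest import multipletests

p_values = {"accuracy": 0.012, "latency": 0.049, "confidence": 0.340}

reject, p_adjusted, _, _ = multipletests(list(p_values.values()),
                                         alpha=0.05, method="holm")

for (dv, p_raw), p_adj, sig in zip(p_values.items(), p_adjusted, reject):
    print(f"{dv:10s} raw p = {p_raw:.3f}  adjusted p = {p_adj:.3f}  significant: {sig}")
```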

3. Rich descriptions of the data help reviewers, the Editor, and other readers understand your findings. Thus it is important to report appropriate measures of variability around means and around effects (e.g., confidence intervals around means and/or around standardized effect sizes).
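
For example, a 95% confidence interval around a group mean, and an approximate interval around Cohen's d, can be reported alongside the test statistic. The sketch below uses simulated data, and the variance formula for d is a standard large-sample approximation rather than an exact interval.

```python
# 95% CI around a mean and an approximate 95% CI around Cohen's d (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1, g2 = rng.normal(0.4, 1.0, 40), rng.normal(0.0, 1.0, 40)

# CI around the mean of group 1.
m, sem = g1.mean(), stats.sem(g1)
ci_mean = stats.t.interval(0.95, len(g1) - 1, loc=m, scale=sem)

# Cohen's d with a large-sample approximate CI.
n1, n2 = len(g1), len(g2)
sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
d = (g1.mean() - g2.mean()) / sp
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
ci_d = (d - 1.96 * np.sqrt(var_d), d + 1.96 * np.sqrt(var_d))

print(f"Mean of group 1 = {m:.2f}, 95% CI = [{ci_mean[0]:.2f}, {ci_mean[1]:.2f}]")
print(f"Cohen's d = {d:.2f}, approximate 95% CI = [{ci_d[0]:.2f}, {ci_d[1]:.2f}]")
```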

4. Cherry picking experiments, conditions, DVs, or observations can be misleading. Give readers the information they need to gain an accurate impression of the reliability and size of the effect in question.

A. Conducting multiple experiments with the same basic procedure and then reporting only the subset of those studies that yielded significant results (and putting the other experiments in an unpublished “file drawer”) can give a misleading impression of the size and replicability of an effect. If several experiments testing the same hypothesis with the same or very similar methods have been conducted and have varied in the pattern of significant and null effects obtained (as would be expected, if only due to chance), then you should report both the significant and the non-significant findings. Reporting the non-significant findings can actually strengthen evidence for the existence of an effect when meta-analytical techniques pool effect sizes across experiments. It is not generally necessary to report results from exploratory pilot experiments, such as when pilot experiments were used to estimate effect size, provided the final experiment has high power. In contrast, it is not appropriate to run multiple low-powered pilot experiments on a given topic and then report only the experiments that reject the null hypothesis.
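
The pooling mentioned above can be as simple as an inverse-variance weighted (fixed-effect) average of the per-experiment effect sizes; the values below are invented purely to show the mechanics and include one non-significant experiment.

```python
# Fixed-effect (inverse-variance weighted) pooling of effect sizes across experiments.
import numpy as np

d = np.array([0.45, 0.12, 0.38])           # per-experiment Cohen's d (illustrative)
var_d = np.array([0.040, 0.055, 0.048])    # sampling variance of each d

weights = 1.0 / var_d
d_pooled = np.sum(weights * d) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)

print(f"Pooled d = {d_pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```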

B. Deciding whether or not to report data from experimental conditions post hoc, contingent on the outcome of NHST, inflates the Type I error rate. Therefore, please inform readers of all of the conditions tested in the study. If, for example, 2nd, 4th, and 6th graders were tested in a study of memory development then it is appropriate to report on all three of those groups, even if one of them yielded discrepant data. This holds even if there are reasons to believe that some data should be discounted (e.g., due to a confound, a ceiling or floor effect in one condition, etc.). Here again, anomalous results do not necessarily preclude publication (after all, even ideal procedures yield anomalous results sometimes by chance). Failing to report the existence of a condition that did not yield the expected data can be misleading.

C. Deciding to drop participants or observations post hoc contingent on the outcome of NHST inflates the Type I error rate. Best practice is to set inclusion/exclusion criteria in advance and stick to them, but if that is not done then whatever procedure was followed should be reported.

5. Be careful about using null results to infer “boundary conditions” for an effect. A single experiment that does not reject the null hypothesis provides only weak evidence for the absence of an effect. Too much faith in the outcome of a single experiment can lead to hypothesizing after the results are known (HARKing), which can lead to theoretical ideas being defined by noise in experimental results. Unless the experimental evidence for a boundary condition is strong, it may be more appropriate to consider a non-significant experimental finding as a Type II error. Such errors occur at a rate that reflects experimental power (e.g., if power is .80, then 20% of exact replications would be expected to fail to reject the null).

6. Authors should use statistical methods that best describe and convey the properties of their data. The Psychonomic Society does not require authors to use any particular data analysis method. The following sections highlight some important considerations.

A. Statistically significant findings are not a prerequisite for publication in Psychonomic Society journals. Indeed, too many significant findings relative to experimental power can indicate bias. Sometimes strong evidence for null effects can be deeply informative for theorizing and for identifying boundary conditions of an effect.

B. In many scientific investigations the goal of an experiment is to measure the magnitude of an effect with some degree of precision. In such a situation a hypothesis test may be inappropriate as it only indicates whether data appear to differ from some specific theoretical value. Sometimes stronger scientific arguments can be made with confidence intervals (of parameter values or of standardized effect sizes). Moreover, some of the bias issues described above can be avoided by designing experiments to measure effects to a desired degree of precision (range of confidence interval).
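
Planning for precision can be as direct as choosing the sample size so that the expected confidence interval has a target half-width. The sketch below uses a simple normal-approximation formula; the assumed standard deviation and target margin are invented for the example.

```python
# Choose n so that the 95% CI on a mean has a target half-width (normal approximation).
import math

sigma = 10.0    # assumed standard deviation of the measure (illustrative)
margin = 2.0    # desired half-width of the 95% confidence interval
z = 1.96        # critical value for a 95% interval

n = math.ceil((z * sigma / margin) ** 2)
print(f"About {n} participants are needed for a CI half-width of {margin}")  # about 97
```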

C. The Psychonomic Society encourages the use of data analysis methods other than NHST when appropriate. For example, Bayesian data analysis methods avoid some of the problems described above. They can be used instead of traditional NHST methods for both hypothesis testing and estimation.
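
As one concrete alternative, a default Bayes factor for a two-sample comparison can be computed from the t statistic. The sketch below assumes the Python pingouin package and its default JZS prior; the data are simulated, and other Bayesian tools would serve equally well.

```python
# Default (JZS) Bayes factor for a two-sample comparison, assuming pingouin is installed.
import numpy as np
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(4)
g1, g2 = rng.normal(0.5, 1.0, 40), rng.normal(0.0, 1.0, 40)

t_stat = stats.ttest_ind(g1, g2).statistic
bf10 = float(pg.bayesfactor_ttest(t_stat, nx=len(g1), ny=len(g2)))

print(f"t = {t_stat:.2f}, BF10 = {bf10:.2f}")
# BF10 > 1 favors the alternative hypothesis; BF10 < 1 favors the null.
```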

Last Word

Ultimately, journal Editors work with reviewers and authors to promote good scientific practice in publications in Psychonomic Society journals. A publication decision on any specific manuscript depends on much more than the above guidelines, and individual Editors and reviewers may stress some points more than others. Nonetheless, all else being equal, submissions that comply with these guidelines will be better science and will be more likely to be published than submissions that deviate from them.

 

Editor-in-Chief

Jonathon D. Crystal, Indiana University

Associate Editors

Chana Akins, University of Kentucky

Robin A. Murphy, University of Oxford

Federico Sanabria, Arizona State University

Amanda M. Seed, University of St Andrews

 

Consulting Editors

Chana K. Akins, University of Kentucky

Bernard Balleine, University of Sydney

Louise Barrett, University of Lethbridge

Melissa Bateson, Newcastle University

Irina Baetu, University of Adelaide

Michael J. Beran, Georgia State University

Aaron P. Blaisdell, University of California, Los Angeles

Mark E. Bouton, University of Vermont

Sarah F. Brosnan, Georgia State University

Michael F. Brown, Villanova University

Catalin V. Buhusi, Utah State University

Josep Call, University of St Andrews

Victoria D. Chamizo, University of Barcelona

Jackie Chappell, University of Birmingham

Ken Cheng, Macquarie University

Russell M. Church, Brown University

Nicola S. Clayton, University of Cambridge

Andrew R. Delamater, Brooklyn College, CUNY

Karyn M. Frick, University of Wisconsin-Milwaukee

Robert Gerlai, University of Toronto Mississauga

Randolph Grace, University of Canterbury

Geoffrey Hall, University of York

Robert Hampton, Emory University

Justin Harris, University of Sydney

Mark Haselgrove, University of Nottingham

Susan Healy, University of St Andrews

Lucia F. Jacobs, University of California, Berkeley

Keith Jensen, University of Manchester

Jeffrey S. Katz, Auburn University

Debbie M. Kelly, University of Manitoba

Kimberly Kirkpatrick, Kansas State University

Olga F. Lazareva, Drake University

Kenneth J. Leising, Texas Christian University

Mike E. Le Pelley, University of New South Wales

Suzanne E. MacDonald, York University

Armando D. Machado, University of Minho

Robert J. McDonald, University of Lethbridge

Ralph R. Miller, State University of New York at Binghamton

Amy Odum, Utah State University

Irene M. Pepperberg, Harvard University

C. M. S. Plowright, University of Ottawa

William A. Roberts, Western University

David R. Shanks, University College London

Marcia L. Spetch, University of Alberta

Christopher B. Sturdy, University of Alberta

Peter J. Urcuioli, Purdue University

Jennifer Vonk, Oakland University

