
LEARNING & BEHAVIOR (submission via the official website)

Overview
  • Journal abbreviation: LEARN BEHAV
  • Chinese reference title: 《学习与行为》
  • Indexing: SSCI (2024 edition), SCIE (2024 edition), table-of-contents coverage (VIP), foreign-language journal
  • Impact factor:
  • Self-citation rate:
  • Main research areas: PSYCHOLOGY, BIOLOGICAL; PSYCHOLOGY, EXPERIMENTAL


LEARNING & BEHAVIOR (quarterly). Learning & Behavior publishes experimental and theoretical contributions and critical reviews concerning fundamental processes of learning and behavior in nonhuman and human animals.
Call for Papers

Wanwei notes:

1. Submission method: online submission.

2. Official website: https://www.springer.com/journal/13420

3. Submission site: http://mc.manuscriptcentral.com/lb

4. Official email addresses:

Jeff.Davis@springer.com

(for queries concerning manuscript publication or post-publication corrections)

morgan.ryan@springer.com

(for any other queries about the journal, including pre-submission inquiries)

5. Frequency: quarterly, published in the final month of each quarter.

Monday, March 15, 2021

Submission Instructions

[Information from the official website]

Submission guidelines

Instructions for Authors

General Information for Learning & Behavior

Learning & Behavior presents experimental and theoretical contributions and critical reviews concerning fundamental processes of learning and behavior in nonhuman and human animals. Topics covered include sensation, perception, conditioning, learning, attention, memory, motivation, emotion, development, social behavior, and comparative investigations.

How to Submit

Manuscripts are to be submitted electronically via ScholarOne:

http://mc.manuscriptcentral.com/lb

If you have not submitted via the ScholarOne submission system before, you will first be asked to create an account. Otherwise you can use your existing account.

Affirmations at the Time of Submission

(a) if the manuscript includes any copyrighted material, the author understands that if the manuscript is accepted for publication he or she will be responsible for obtaining written permission to use that material;

(b) if any of the authors has a potential conflict of interest pertaining to the manuscript, that conflict has been disclosed in a message to the Editor;

(c) the author(s) understand(s) that before a manuscript can be published in Learning & Behavior the copyright to that manuscript must be transferred to the Psychonomic Society (see http://www.psychonomic.org/psp/access.html for details). This does not apply to open access articles published in the journal, which are published under a different license. Please see Open Access Publishing for more information;

(d) the corresponding author is familiar with the Psychonomic Society's Statistical Guidelines (see the "Statistical Guidelines" section below).

Declarations

All manuscripts must contain the following sections under the heading 'Declarations'.

If any of the sections are not relevant to your manuscript, please include the heading and write 'Not applicable' for that section.


Funding (information that explains whether and by whom the research was supported)

Conflicts of interest/Competing interests (include appropriate disclosures)

Ethics approval (include appropriate approvals or waivers)

Consent to participate (include appropriate statements)

Consent for publication (include appropriate statements)

Availability of data and materials (data transparency)

Code availability (software application or custom code)

Authors' contributions (optional: please review the journal's submission guidelines to determine whether statements are mandatory)

Please see the relevant sections in the submission guidelines for further information as well as various examples of wording. Please revise/customize the sample statements according to your own needs.

Open Practices

Since its inception, the core mission of the Psychonomic Society has been to foster the science of cognition through the advancement and communication of basic research in experimental psychology and allied sciences. To promote replicable research practices, the policy of the Psychonomic Society is to publish papers in which authors follow standards for disclosing all important aspects of the research design and data analysis. The Society does not enforce any single reporting standard, but authors are encouraged to review and adopt guidelines described, for example, by the American Psychological Association (APA).

In 2017, the Society signed on to the Open Science Initiative’s Level 1 Transparency and Openness Guidelines. All authors are required to respond to the questions below, and in addition, all submitted manuscripts must include an Open Practices Statement immediately prior to the References section of the paper. The statement must specify (1) whether data and/or materials are available, and if so, where (as per Level 1 TOP guidelines, URLs are required to have a persistent identifier); and (2) whether any experiments were preregistered, and if so, which.

The following are examples of appropriate Open Practices Statements:

The data and materials for all experiments are available at (url for the site hosting the data and materials) and Experiment 1 was preregistered (url for the preregistration).

None of the data or materials for the experiments reported here is available, and none of the experiments was preregistered.

Statistical Guidelines

The Psychonomic Society’s Publications Committee and Ethics Committee and the Editors-in-Chief of the Society’s seven journals worked together (with input from others) to create these guidelines on statistical issues. These guidelines focus on the analysis and reporting of quantitative data. Many of the issues described below pertain to vulnerabilities in null hypothesis significance testing (NHST), in which the central question is whether or not experimental measures differ from what would be expected due to chance. Below we emphasize some steps that researchers using NHST can take to avoid exacerbating those vulnerabilities. Many of the guidelines are long-standing norms about how to conduct experimental research in psychology. Nevertheless, researchers may benefit from being reminded of some of the ways that poor experimental procedure and analysis can compromise research conclusions. Authors are asked to consider the following issues for each manuscript submitted for publication in a Psychonomic Society journal. Some of these issues are specific to NHST, but many of them apply to other approaches as well.

1. It is important to address the issue of statistical power. Statistical power refers to the sensitivity of a test to detect hypothetical true effects. Studies with low statistical power often produce ambiguous results. Thus it is highly desirable to have ample statistical power for effects that would be of interest to others and to report a priori power at several effect sizes (not post hoc power) for tests of your main hypotheses. Best practice is to determine what effects would be interesting (e.g., those that one would consider non-negligible, useful, or theoretically meaningful) and then to test a sufficient number of participants to attain adequate power to detect an effect of that size. There is no hard-and-fast rule specifying “adequate” power, and Editors may judge that other considerations (e.g., novelty, difficulty) justify a low-powered design. If there is no smallest effect size of interest for an a priori power analysis, then authors can report the effect size at which the design and test have 50% power. Alternatively an analysis might focus on estimation of an effect size rather than on a hypothesis test. In any case, the Method section should make clear what criteria were used to determine the sample size. The main points here are to (a) do what you reasonably can to design an experiment that allows a sensitive test and (b) explain how the number of participants was determined.
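
As a concrete illustration of such an a priori power analysis, the sketch below uses Python's statsmodels package. The tool choice, the smallest effect size of interest (Cohen's d = 0.5), and the sample size are assumptions made purely for illustration; the guidelines do not mandate any particular software.

# A priori power analysis for an independent-samples t-test.
# The smallest effect size of interest (Cohen's d = 0.5) is hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per group needed to detect d = 0.5 with 80% power at alpha = .05.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"n per group for d = 0.5: {n_per_group:.1f}")

# If there is no smallest effect size of interest, the guidelines suggest
# reporting the effect size at which the design has 50% power, e.g. for n = 30:
d_at_50 = analysis.solve_power(nobs1=30, alpha=0.05, power=0.50,
                               alternative="two-sided")
print(f"Effect size detectable with 50% power at n = 30 per group: {d_at_50:.2f}")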

2. Multiple NHST tests inflate null-hypothesis rejection rates. Tests of statistical significance (e.g., t-tests, analyses of variance) should not be used repeatedly on different subsets of the same data set (e.g., on varying numbers of participants in a study) without statistical correction, because the Type I error rate increases across multiple tests.

A. One concern is the practice of testing a small sample of participants and then analyzing the data and deciding what to do next depending on whether the predicted effect (a) is statistically significant (stop and publish!), (b) clearly is not being obtained (stop, tweak, and start a new experiment), or (c) looks like it might become significant if more participants are added to the sample (test more participants, then reanalyze; repeat as needed). If this “optional stopping rule” has been followed without appropriate corrections, then report that fact and acknowledge that the Type I error rate is inflated by the multiple tests. Depending on the views of the Editor and reviewers, having used this stopping rule may not preclude publication, but unless appropriate corrections to the Type I error rate are made it will lessen confidence in the reported results. Note that Bayesian data analysis methods are less sensitive to problems related to optional stopping than NHST methods; see Rouder (2014) for a discussion and pointers to other literature.
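
To see why the optional stopping rule described in (A) inflates the Type I error rate, consider a small simulation under a true null hypothesis. This sketch is an illustration of ours, not part of the Society's guidelines; the batch sizes and thresholds are arbitrary.

# Simulating the optional stopping rule when the null hypothesis is true.
# All numbers (batch sizes, stopping thresholds) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_start, n_step, n_max, alpha = 5000, 20, 10, 100, 0.05

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n_start).tolist()
    b = rng.normal(size=n_start).tolist()
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:                        # (a) significant: stop and "publish"
            false_positives += 1
            break
        if p > 0.25 or len(a) >= n_max:      # (b) clearly null, or budget spent
            break
        a.extend(rng.normal(size=n_step))    # (c) "might become significant":
        b.extend(rng.normal(size=n_step))    #     add participants and re-test

print(f"Empirical Type I error rate: {false_positives / n_sims:.3f} "
      f"(nominal alpha = {alpha})")

The empirical rejection rate comes out well above the nominal .05, which is exactly the inflation the guideline warns about.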

B. It is problematic to analyze data and then drop some participants or some observations, re-run the analyses, and then report only the last set of analyses. If participants or observations were eliminated, then explicitly indicate why, when, and how this was done and either (a) report or synopsize the results of analyses that include all of the observations or (b) explain why such analyses would not be appropriate.

C. Covariate analyses should either be planned in advance (and in that case would benefit from preregistration) or be described as exploratory. It is inappropriate to analyze data without a covariate, then re-analyze those same data with a covariate and report only the latter analysis as confirmation of an idea. It may be appropriate to conduct multiple analyses in exploratory research, but it is important to report those analyses as exploratory and to acknowledge possible inflations of the Type I error rate.

D. If multiple dependent variables (DVs) are individually analyzed with NHST, the probability that at least one of them will be “significant” by chance alone grows with the number of DVs. Therefore it is important to inform readers of all of the DVs collected that are relevant to the study. For example, if accuracy, latency, and confidence were measured, but the paper focuses on the accuracy data, then report the existence of the other measures and (if possible) adjust the analyses as appropriate. Similarly, if several different measures were used to tap a construct, then it is important to report the existence of all of those indices, not just the ones that yielded significant effects (although it may be reasonable to present a rationale for why discounting or not reporting detailed results for some of the measures is justified). There is no need to report measures that were available to you (e.g., via a participant pool data base) but that are irrelevant to the study. The selection of variables would again benefit from preregistration; see the Psychonomic Society’s recent digital event on this issue: https://featuredcontent.psychonomic.org/4527-2/.
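
Where several DVs are analyzed individually, a standard correction can be applied and reported. A minimal sketch, using the three hypothetical measures mentioned above (accuracy, latency, confidence) and made-up p-values:

# Holm correction across several dependent variables.
# The p-values below are fabricated purely for illustration.
from statsmodels.stats.multitest import multipletests

labels = ["accuracy", "latency", "confidence"]
p_values = [0.012, 0.048, 0.210]

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for label, p, p_adj, r in zip(labels, p_values, p_adjusted, reject):
    print(f"{label}: p = {p:.3f}, Holm-adjusted p = {p_adj:.3f}, reject H0: {r}")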

3. Rich descriptions of the data help reviewers, the Editor, and other readers understand your findings. Thus it is important to report appropriate measures of variability around means and around effects (e.g., confidence intervals or Bayesian credible intervals), and ideally to plot the raw data or descriptions of the data (e.g., violin plots, box plots, scatterplots).
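
As a sketch of the kind of plot point 3 recommends (the library choice and the synthetic data are assumptions of ours, not journal requirements):

# Violin plots with the raw observations overlaid, using synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
cond_a = rng.normal(500, 50, size=40)   # hypothetical response times (ms)
cond_b = rng.normal(530, 60, size=40)

fig, ax = plt.subplots()
ax.violinplot([cond_a, cond_b], showmeans=True)
# Overlay the raw data points with slight horizontal jitter.
for i, data in enumerate([cond_a, cond_b], start=1):
    jitter = rng.uniform(-0.05, 0.05, size=data.size)
    ax.scatter(np.full(data.size, i) + jitter, data, s=10, alpha=0.5)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Condition A", "Condition B"])
ax.set_ylabel("Response time (ms)")
plt.show()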

4. Cherry picking experiments, conditions, DVs, or observations can be misleading. Give readers the information they need to gain an accurate impression of the reliability and size of the effect in question.

A. Conducting multiple experiments with the same basic procedure and then reporting only the subset of those studies that yielded significant results (and putting the other experiments in an unpublished “file drawer”) can give a misleading impression of the size and replicability of an effect. If several experiments testing the same hypothesis with the same or very similar methods have been conducted and have varied in the pattern of significant and null effects obtained (as would be expected, if only due to chance), then you should report both the significant and the non-significant findings. Reporting the non-significant findings can actually strengthen evidence for the existence of an effect when meta-analytical techniques pool effect sizes across experiments. It is not generally necessary to report results from exploratory pilot experiments (although their existence might be mentioned), such as when pilot experiments were used to estimate effect size, provided the final experiment is sufficiently informative. In contrast, it is not appropriate to run multiple low-powered pilot experiments on a given topic and then report only the experiments that reject the null hypothesis.

B. Deciding whether or not to report data from experimental conditions post hoc, contingent on the outcome of NHST, inflates the Type I error rate. Therefore, please inform readers of all of the conditions tested in the study. If, for example, 2nd, 4th, and 6th graders were tested in a study of memory development, then it is appropriate to report on all three of those groups, even if one of them yielded discrepant data. This holds even if there are reasons to believe that some data should be discounted (e.g., due to a confound, a ceiling or floor effect in one condition, etc.). Here again, anomalous results do not necessarily preclude publication (after all, even ideal procedures sometimes yield anomalous results by chance). Failing to report the existence of a condition that did not yield the expected data can be misleading.

C. Deciding to drop participants or observations post hoc contingent on the outcome of NHST inflates the Type I error rate. Best practice is to set inclusion/exclusion criteria in advance (and pre-register where appropriate) and stick to them, but if that is not done then whatever procedure was followed should be reported.

5. Be careful about using null results to infer “boundary conditions” for an effect. A single experiment that does not reject the null hypothesis may provide only weak evidence for the absence of an effect. Too much faith in the outcome of a single experiment can lead to hypothesizing after the results are known (HARKing), which can lead to theoretical ideas being defined by noise in experimental results. Unless the experimental evidence for a boundary condition is strong, it may be more appropriate to consider a non-significant experimental finding as a Type II error. Such errors occur at a rate that reflects experimental power (e.g., if power is .80, then 20% of exact replications would be expected to fail to reject the null). Bayesian tests may provide stronger evidence for the null hypothesis than is possible through frequentist statistics.

6. Authors should use statistical methods that best describe and convey the properties of their data. The Psychonomic Society does not require authors to use any particular data analysis method. The following sections highlight some important considerations.

A. Statistically significant findings are not a prerequisite for publication in Psychonomic Society journals. Indeed, too many significant findings relative to experimental power can indicate bias. Sometimes strong evidence for null or negligible effects can be deeply informative for theorizing and for identifying boundary conditions of an effect.

B. In many scientific investigations the goal of an experiment is to measure the magnitude of an effect with some degree of precision. In such a situation, stronger scientific arguments may be available through confidence or credible intervals (of parameter values or of standardized effect sizes). Moreover, some of the bias issues described above can be avoided by designing experiments to measure effects to a desired degree of precision (range of the confidence interval). Confidence intervals must be interpreted with care; see the Psychonomic Society's digital event on this issue: https://featuredcontent.psychonomic.org/confidence-intervals-digital-event-december-2015/.
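
For instance, a confidence interval around a mean difference can be computed and reported directly. The sketch below uses synthetic data and a simple pooled-variance interval; both choices are assumptions for illustration.

# 95% confidence interval for the difference between two group means.
# Synthetic data; equal variances are assumed for the pooled interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.4, 1.0, size=50)

diff = b.mean() - a.mean()
df = a.size + b.size - 2
pooled_var = ((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1)) / df
se = np.sqrt(pooled_var * (1 / a.size + 1 / b.size))
t_crit = stats.t.ppf(0.975, df)
print(f"Mean difference = {diff:.2f}, "
      f"95% CI = [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")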

C. The Psychonomic Society encourages the use of data analysis methods other than NHST when appropriate. For example, Bayesian data analysis methods avoid some of the problems described above. They can be used instead of traditional NHST methods for both hypothesis testing and estimation. The Psychonomic Society’s recent digital event on Bayesian methods may be helpful here: https://featuredcontent.psychonomic.org/bayesinpsych-a-digital-event-on-the-role-of-bayesian-statistics-and-modelling-in-psychology/.
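
As a rough illustration of a Bayesian alternative, the BIC approximation (Wagenmakers, 2007, Psychonomic Bulletin & Review) converts an ordinary t statistic into an approximate Bayes factor. This is only a sketch with fabricated data; dedicated tools (e.g., JASP or the BayesFactor R package) compute exact default Bayes factors.

# Approximate Bayes factor for a two-group t-test via the BIC
# approximation: BF01 = sqrt(n) * (1 + t^2/df)^(-n/2). Illustrative data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, size=40)
b = rng.normal(0.5, 1.0, size=40)

result = stats.ttest_ind(a, b)
n = a.size + b.size
df = n - 2
bf01 = np.sqrt(n) * (1 + result.statistic**2 / df) ** (-n / 2)
print(f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print(f"Approximate BF01 = {bf01:.3g} (BF10 = {1 / bf01:.3g})")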

Last Word

Ultimately, journal Editors work with reviewers and authors to promote good scientific practice in publications in Psychonomic Society journals. A publication decision on any specific manuscript depends on much more than the above guidelines, and individual Editors and reviewers may stress some points more than others. Nonetheless, all else being equal, submissions that comply with these guidelines will reflect better science and be more likely to be published than submissions that deviate from them.

Resources

There are many excellent sources for information on statistical issues. Listed below are some that the 2019 Publications Committee and Editors recommend.

Confidence Intervals

o Masson, M. E. J., & Loftus, G. R. (2003). Using confidence intervals for graphically based data interpretation. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 57, 203-220. doi:10.1037/h0087426

o Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E. J. (2016). The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review, 23, 103-123.

Effect Size Estimates

o Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-analysis and the interpretation of research results. Cambridge University Press. ISBN 978-0-521-14246-5.

o Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2-18.

o Grissom, R. J., & Kim, J. J. (2012). Effect sizes for research: Univariate and multivariate applications (2nd ed.). New York, NY US: Routledge/Taylor & Francis Group.

Meta-analysis

o Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York, NY US: Routledge/Taylor & Francis Group.

o Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic reviews and meta-analysis. New York: Oxford University Press.

Bayesian Data Analysis

o Kruschke, J. K. (2011). Doing Bayesian data analysis: A tutorial with R and BUGS. San Diego, CA US: Elsevier Academic Press.

o Kruschke, J. K. (2013). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General, 142(2), 573-603.

o McElreath, R. (2015). Statistical rethinking: A Bayesian course with examples in R and Stan. Chapman and Hall/CRC.

o Rouder, J. N. (2014). Optional stopping: No problem for Bayesians. Psychonomic Bulletin & Review, 21, 301-308.

Power Analysis

o Faul, F., Erdfelder, E., Lang, A., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191. (See http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/)

Manuscript Style

Manuscripts are to adhere to the conventions described in the Publication Manual of the American Psychological Association (6th ed.). See www.apastyle.org/ for information on APA style, or type “APA style” into a search engine to find numerous online sources of information about APA style. Here we highlight only the most fundamental aspects of that style.

Layout: All manuscripts are to be double spaced and have 1” margins with page numbers in the upper right corner of each page.

Title Page: The title page must include the authors’ names and affiliations and the corresponding author’s address, telephone number, and e-mail address.

Abstract: There must be an abstract of no more than 250 words.

Sections: Manuscripts should be divided into sections (and perhaps subsections) appropriate to their content (e.g., Introduction, Method, Results, etc.), as per APA style.

Acknowledgments: The Author Note should include sources of financial support and any possible conflicts of interest. If desirable, contributions of different authors may be briefly described here. Reviewers and the Editor should not be thanked in the Author Note.

Figures and Tables: Figures and tables are to be designed as per APA style.

Location of Figures, Tables, and Footnotes: In submitted manuscripts, figures and tables can be embedded in the body of the text and footnotes can be placed at the bottom of the page on which the footnoted material is referenced. Note that this is a departure from APA style; if you prefer you can submit the manuscript with the figures, tables, and footnotes at the end, but it is slightly easier for reviewers if these elements appear near the text that refers to them. When a paper is accepted, in the final version that the author submits for production each figure and table must be on a separate page near the end of the manuscript and all footnotes must be listed on a footnote page, as per the APA Publication Manual.

Citations and References: These should conform to APA style.

Acknowledgments and Funding Information

Acknowledgments of people, grants, funds, etc. should be placed in a separate section on the Title page. The names of funding organizations should be written in full. In addition, please provide the funding information in a separate step of the submission process in the peer review system. Funder names should preferably be selected from the standardized list you will see during submission. If the funding institution you need is not listed, it can be entered as free text. Funding information will be published as searchable metadata for the accepted article, whereas acknowledgements are published within the paper.

Color Figures

Authors are encouraged to use color in figures if they believe that doing so improves the clarity of those figures. With the approval of the Editor, color can be used in the online version of the journal at no cost to authors. Moreover, as of 2011, the Editor has a limited budget for printing hard-copy articles with color figures at no expense to authors. The Editor makes the final decision as to whether or not an article will be printed in hard copy with color: the greater the scientific value of using color, the more likely the Editor is to approve its use. Also, authors can pay for printed production of their articles with color figures; the current fee is $1,100 per article (regardless of the number of color figures).

Whether used only online or both in print and online, color figures should (insofar as is possible) be designed such that grayscale versions are interpretable. This is important because readers may wish to print or photocopy articles in grayscale.

Research Data Policy and Data Availability Statements

A submission to the journal implies that materials described in the manuscript, including all relevant raw data, will be freely available to any researcher wishing to use them for non-commercial purposes, without breaching participant confidentiality.

Data availability

All original research must include a data availability statement. Data availability statements should include information on where data supporting the results reported in the article can be found, if applicable. Statements should include, where applicable, hyperlinks to publicly archived datasets analysed or generated during the study. For the purposes of the data availability statement, “data” is defined as the minimal dataset that would be necessary to interpret, replicate and build upon the findings reported in the article. When it is not possible to share research data publicly, for instance when individual privacy could be compromised, data availability should still be stated in the manuscript along with any conditions for access. Data availability statements can take one of the following forms (or a combination of more than one if required for multiple datasets):

1. The datasets generated during and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS]

2. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

3. All data generated or analysed during this study are included in this published article [and its supplementary information files].

4. The datasets generated during and/or analysed during the current study are not publicly available due to [REASON(S) WHY DATA ARE NOT PUBLIC] but are available from the corresponding author on reasonable request.

5. Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

6. The data that support the findings of this study are available from [THIRD PARTY NAME] but restrictions apply to the availability of these data, which were used under licence for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of [THIRD PARTY NAME].

More templates for data availability statements, including examples of openly available and restricted access datasets, are available here:

Data availability statements

Data repositories

This journal strongly encourages that all datasets on which the conclusions of the paper rely are available to readers. We encourage authors to ensure that their datasets are either deposited in publicly available repositories (where available and appropriate) or presented in the main manuscript or additional supporting files whenever possible. Please see Springer Nature’s information on recommended repositories.

List of Repositories

General repositories - for all types of research data - such as figshare and Dryad may be used where appropriate.

Data citation

The journal also requires that authors cite any publicly available data on which the conclusions of the paper rely. Data citations should include a persistent identifier (such as a DOI), should be included in the reference list using the minimum information recommended by DataCite, and follow journal style. Dataset identifiers including DOIs should be expressed as full URLs.

Research data and peer review

Peer reviewers are encouraged to check the manuscript’s Data availability statement, where applicable. They should consider if the authors have complied with the journal’s policy on the availability of research data, and whether reasonable effort has been made to make the data that support the findings of the study available for replication or reuse by other researchers. Peer reviewers are entitled to request access to underlying data (and code) when needed for them to perform their evaluation of a manuscript.

Authors who need help understanding our data sharing policies, help finding a suitable data repository, or help organising and sharing research data can access our Author Support portal for additional guidance.

For more information:

http://www.springernature.com/gp/group/data-policy/faq

English Language Editing

For editors and reviewers to accurately assess the work presented in your manuscript, you need to ensure that the English language is of sufficient quality to be understood. If you need help with writing in English, you should consider:

Asking a colleague who is a native English speaker to review your manuscript for clarity.

Visiting the English language tutorial which covers the common mistakes when writing in English.

Using a professional language editing service where editors will improve the English to ensure that your meaning is clear and identify problems that require your review. Two such services are provided by our affiliates Nature Research Editing Service and American Journal Experts. Springer authors are entitled to a 10% discount on their first submission to either of these services, simply follow the links below.

English language tutorial

Nature Research Editing Service

American Journal Experts

Please note that the use of a language editing service is not a requirement for publication in this journal and does not imply or guarantee that the article will be selected for peer review or accepted.

If your manuscript is accepted it will be checked by our copyeditors for spelling and formal style before publication.

……

