path_bib <- here::here("public", "bibliography.csv")
bib <- data.table::fread(path_bib) %>%
# turn URLs into hyperlinks that open in a new tab:
.[, url := paste0("<a href='", url, "' target='_blank'>URL</a>")] %>%
# remove curly brackets from the title:
.[, title := gsub("[{}]", "", title)] %>%
# convert dates when references were added:
.[, ":="(date_added_converted = as.Date(
get("date-added"), format = "%Y-%m-%d %H:%M:%S %z"))]
bib_all = bib %>%
# select columns:
.[,c("ID", "title", "author", "year", "journal", "url")]
Welcome! This website showcases the bibliography maintained by Lennart Wittkuhn.
At the moment, this bibliography contains 1926 references for publications in neuroscience, psychology, statistics, artificial intelligence, meta-science and more. The website was last updated on 05 November, 2021.
The source code of this website can be found at https://github.com/lnnrtwttkhn/bibliography/.
To understand how the bibliographical data was processed in this document, you can click on the Code tabs on the right.
For example, by clicking on the first Code tab on the right, you will be able to see the R code that was used to load and process the bibliographical information stored in the bibliography.bib file.
The table below lists all references sorted by the first author’s last name, publication year, and journal name (in that order).
You can use the Search bar on the right to search for certain publications (e.g., by author or journal name).
The URL column contains hyperlinks that should bring you to the publisher’s website for the corresponding publication.
When you use the bibliography.bib file in one of your documents (in LaTeX) and you want to cite one of the references, use the citation key in the ID column to cite the relevant publication, e.g., \cite{Wittkuhn2020B}.
If a reference is missing, please get in touch.
Figure 1 shows a table of the entire bibliography.
DT::datatable(
bib_all,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
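The same DT::datatable() display options are repeated for every table in this notebook. As a minimal sketch, they could be factored into a small wrapper; the function name show_table is hypothetical and not part of the original code:
# hypothetical wrapper around DT::datatable() with the display options
# used throughout this notebook:
show_table <- function(data) {
  DT::datatable(
    data,
    class = "cell-border stripe", rownames = FALSE, escape = FALSE,
    options = list(
      scrollX = TRUE,
      pageLength = 5,
      lengthMenu = c(5, 10, 15, 20)
    )
  )
}
# usage: show_table(bib_all)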
In the sections below, I filter the bibliography, e.g., by certain keywords.
Figure 2 lists all publications in the bibliography.bib file that mention the term replay in the abstract or the title:
search_terms = c("replay")
bib_replay = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
filter_at(
.vars = vars(abstract_lower, title_lower),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[, c("ID", "title", "author", "year", "journal", "url")] %>%
setorder(author, year)
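The search pattern above (lower-casing the fields, matching the search terms with stringr::str_detect(), and selecting the display columns) recurs in the sections below. As a hedged sketch, assuming the column names used above, it could be factored into a helper; the function name filter_bib is hypothetical and not part of the original code:
# hypothetical helper: keep rows whose given fields match any search term
# (case-insensitive) and select the display columns used throughout:
filter_bib <- function(bib, search_terms, fields = c("abstract", "title")) {
  pattern <- paste(tolower(search_terms), collapse = "|")
  hits <- Reduce(`|`, lapply(fields, function(f) {
    stringr::str_detect(tolower(bib[[f]]), pattern)
  }))
  bib[which(hits), c("ID", "title", "author", "year", "journal", "url")]
}
# usage: filter_bib(bib, search_terms = c("replay"))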
DT::datatable(
bib_replay,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
Figure 3 shows the number of papers published per year that contain replay-related search terms in the abstract.
Note that the numbers are based only on the publications in the bibliography.
The bibliography likely does not contain all publications on replay or reactivation in the literature.
search_terms = c("replay", "reactivation")
bib_replay_time = bib %>%
transform(abstract = tolower(abstract), title = tolower(title)) %>%
# search for search terms in title and abstract
filter_at(
.vars = vars(abstract, title),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[, by = .(year), .(N = .N)] %>%
setorder(year) %>%
transform(year = as.Date(ISOdate(year, 1, 1)))
title_string = paste("Number of papers published each year\n",
"with abstracts containing the following search terms:\n",
paste(search_terms, collapse = ", " ))
ggplot(data = bib_replay_time, aes(x = year, y = N)) +
geom_bar(stat = "identity") +
ggtitle(title_string) +
ylab("Number of publications per year") +
xlab("Publication year") +
theme(panel.background = element_blank()) +
theme(panel.grid.major = element_blank()) +
theme(panel.grid.minor = element_blank()) +
theme(axis.line = element_line(colour = "black")) +
coord_capped_cart(bottom = "both", left = "both") +
scale_x_date(date_breaks = "1 years", date_labels = "%Y") +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
theme(plot.title = element_text(hjust = 0.5)) +
geom_text(aes(label = N), vjust = -0.5) +
ylim(c(0, 80))
In this section, I filter publications in the bibliography.bib file for literature review papers on replay.
The matching papers are listed in Figure 4, sorted by year in descending order (newest to oldest) and author name (alphabetical).
bib_replay_review = bib %>%
filter(stringr::str_detect(string = tags, pattern = "replay")) %>%
filter(stringr::str_detect(string = tags, pattern = "review")) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")] %>%
setorder(., -year, author)
DT::datatable(
data = bib_replay_review,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
In this section, I filter publications in the bibliography.bib file for papers using intracranial recordings in humans.
The matching papers are listed in Figure 5.
Many (but not all!) of them investigate replay-like signals in the medial temporal lobe and therefore establish an important correspondence to electrophysiological recordings of replay in rodents.
bib_intracranial = bib %>%
filter(stringr::str_detect(string = tags, pattern = "human")) %>%
filter(stringr::str_detect(string = tags, pattern = "intracranial")) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
DT::datatable(
data = bib_intracranial,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "successor representation"
bib_successor_representation = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
# search for search terms in title and abstract
filter_at(
.vars = vars(abstract_lower, title_lower),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
In this section, I filter publications in the bibliography.bib file for papers on the successor representation in reinforcement learning.
The matching papers are listed in Figure 6.
The filter searches for the keyword successor representation in the title or abstract of the bibliography entries.
20 matching papers were found.
DT::datatable(
data = bib_successor_representation,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "cognitive map"
bib_cognitive_map = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
# search for search terms in title, abstract, and tags
filter_at(
.vars = vars(abstract_lower, title_lower, tags),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
In this section, I filter publications in the bibliography.bib file for papers on cognitive maps.
The matching papers are listed in Figure 7.
The filter searches for the keyword cognitive map in the title, abstract, or tags of the bibliography entries.
100 matching papers were found.
DT::datatable(
data = bib_cognitive_map,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "remapping"
bib_remapping = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
# search for search terms in title, abstract, and tags
filter_at(
.vars = vars(abstract_lower, title_lower, tags),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
In this section, I filter publications in the bibliography.bib file for papers on remapping in the hippocampus.
The matching papers are listed in Figure 8.
The filter searches for the keyword remapping in the title, abstract, or tags of the bibliography entries.
32 matching papers were found.
DT::datatable(
data = bib_remapping,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "representation"
bib_representation = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
# search for the search term in the tags field
filter_at(
.vars = vars(tags),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
In this section, I filter publications in the bibliography.bib file for papers on representations and representation learning.
This is a bit of a challenge: if you searched for “representation” in paper titles or abstracts, you would end up with a very long list of papers, since the term is quite ubiquitous in neuroscience, psychology, and machine learning.
Therefore, I here resort to manually annotating publications using the tags field of the bibliography entries.
The filter shown here searches for the keyword representation in the tags field of the bibliography entries.
The matching papers are listed in Figure 9.
In total, 80 matching papers were found.
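Note that stringr::str_detect() matches substrings, so the tag representation would also match entries tagged, e.g., representational drift. As a hedged sketch, assuming the tags field is stored as a comma-separated string (the helper name has_tag and the tag format are assumptions, not part of the original code), whole-tag matching could look like this:
# hypothetical helper: match whole tags instead of substrings, assuming
# the tags field is a comma-separated string:
has_tag <- function(tags, tag) {
  vapply(strsplit(tolower(tags), ","),
         function(x) tag %in% trimws(x), logical(1))
}
# e.g., has_tag("replay, representation", "representation") is TRUE,
# while a substring search would also match "representational drift".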
DT::datatable(
data = bib_representation,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "representational drift"
bib_representational_drift = bib %>%
.[, ":="(abstract_lower = tolower(abstract), title_lower = tolower(title))] %>%
# search for search terms in title, abstract, and tags
filter_at(
.vars = vars(abstract_lower, title_lower, tags),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
In this section, I filter publications in the bibliography.bib file for papers on representational drift.
The filter shown here searches for the keyword representational drift in the abstract, title, and tags fields of the bibliography entries.
The matching papers are listed in Figure 10.
In total, 12 matching papers were found.
DT::datatable(
data = bib_representational_drift,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
search_terms = "zoo"
bib_zoo = bib %>%
# search for the search term in the tags field
filter_at(
.vars = vars(tags),
.vars_predicate = any_vars(
stringr::str_detect(
string = .,
pattern = paste(search_terms, collapse = "|")))) %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
Figure 11 shows relevant references for the Zoo project.
DT::datatable(
data = bib_zoo,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
Figure 12 lists all publications with missing PDFs.
bib_pdf = bib %>%
filter(get("bdsk-file-1") == "") %>%
filter(journal != "Zenodo") %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
DT::datatable(
bib_pdf,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
In some of the sections above, I use information from the abstract to filter publications for specific keywords.
It is therefore important that the abstract information is complete.
In Figure 13, I filter for all publications with missing abstract information in order to continuously update this information.
bib_abstract = bib %>%
filter(get("abstract") == "") %>%
filter(journal != "Zenodo") %>%
setDT(.) %>%
.[,c("ID", "title", "author", "year", "journal", "url")]
DT::datatable(
bib_abstract,
class = "cell-border stripe", rownames = FALSE, escape = FALSE,
options = list(
scrollX = TRUE,
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
),
)
bib_time = bib %>%
setorder(date_added_converted) %>%
.[, by = .(date_added_converted), .(num_added = .N)] %>%
verify(sum(num_added) == nrow(bib))
Figure 14 shows the number of publications added to the bibliography per day. The maximum number of publications added on a single day was 51, on 14 December, 2020.
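As a small sketch, the busiest day reported above can be retrieved from bib_time like this:
# day with the most additions (per the text: 51 publications on 2020-12-14):
bib_time[which.max(num_added)]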
ggplot(data = bib_time, aes(x = date_added_converted, y = num_added)) +
geom_bar(stat = "identity") +
ggtitle("Number of publications added to the bibliography per day") +
ylab("Number of publications added per day") +
xlab("Time (months)") +
theme(panel.background = element_blank()) +
theme(panel.grid.major = element_blank()) +
theme(panel.grid.minor = element_blank()) +
theme(axis.line = element_line(colour = "black")) +
coord_capped_cart(bottom = "both", left = "both") +
scale_x_date(date_breaks = "1 months", date_labels = "%b %Y") +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
theme(plot.title = element_text(hjust = 0.5))
If a reference is missing, please create a new issue and use the issue template for missing publications.
If you have any questions about the bibliography, the repository, if you spotted a bug or would like to make a comment, please also open an issue first, or otherwise email Lennart.
Thanks!
The Python script below (parser.py) reads the bibliography.bib file and uses (1) bibtexparser to read the bibliography contents and (2) pandas to transform the content into a bibliography.csv file that is read into this notebook.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import bibtexparser
import pandas as pd
project_name = 'bibliography'
# candidate locations of the project root; os.getenv('PWD') may be None
# (e.g., outside a shell), so guard against that in the filter below:
path_options = [os.getenv('PWD'), os.getcwd()]
path_root = [x for x in path_options if x and project_name in x]
path_root = list(set(path_root))
assert len(path_root) == 1
path_root = path_root[0]
path_code = os.path.join(path_root, 'code')
# create the output directory for the CSV file if necessary:
if not os.path.exists(os.path.join(path_root, 'public')):
    os.makedirs(os.path.join(path_root, 'public'))
# parse the BibTeX file into a list of entry dicts:
with open(os.path.join(path_code, 'bibliography.bib')) as bibtex_file:
    bib_database = bibtexparser.bparser.BibTexParser(
        common_strings=True).parse_file(bibtex_file)
# convert to a data frame, sort, and write the CSV read by this notebook:
df = pd.DataFrame(bib_database.entries)
df = df.sort_values(by=['author', 'year', 'journal'])
csv_name = "bibliography.csv"
csv_path = os.path.join(path_root, 'public', csv_name)
df.to_csv(csv_path, index=False)
The R Markdown notebook is then rendered on every push to the repository using continuous integration via Travis:
dist: bionic
language: r
sudo: true
latex: false
git:
  submodules: false
before_install:
  # install Python 3.8
  # see https://docs.python-guide.org/starting/install3/linux/
  - sudo apt-get install software-properties-common
  - sudo add-apt-repository --yes ppa:deadsnakes/ppa
  - sudo apt-get update
  - sudo apt-get install python3.8 python3-pip python3-pandas python3-setuptools
  - pip3 install --upgrade pip
  - pip install -r requirements.txt
  #- sudo apt-get install python3 python3-pip python3-pandas python3-setuptools
  #- pip3 install bibtexparser==1.2.0
  #- pip3 install pyparsing==2.4.7
  #- pip3 install future==0.18.2
# command to run tests
script:
  - make all
cache: packages
r_packages:
  - pacman
  - here
  - data.table
  - DT
  - tidyverse
  - lemon
  - bookdown
  - assertr
deploy:
  # use the GitHub Pages deploy process
  provider: pages
  # keep built pages
  skip-cleanup: true
  # directory where the generated files are located
  local_dir: public
  # GitHub security/auth token (added in the Travis settings)
  github-token: $GITHUB_TOKEN
  # incremental commit to keep old builds/files from previous deployments
  keep-history: true
  # git branch to which it should deploy (master, gh-pages, foo, ...)
  target_branch: gh-pages
  on:
    # the branch whose commits/pushes trigger deployment
    branch: master
The continuous integration executes a simple Makefile that first runs parser.py and then renders the bibliography.Rmd notebook:
all: bibliography.Rmd

bibliography.Rmd: bibliography.csv
	R -e "rmarkdown::render('code/bibliography.Rmd', output_file = '../public/index.html')"

bibliography.csv:
	python3 code/parser.py
The dependencies of the Python code are listed in requirements.txt:
appnope==0.1.0
backcall==0.2.0
bibtexparser==1.2.0
decorator==4.4.2
future==0.18.2
ipykernel==5.3.4
ipython==7.16.1
ipython-genutils==0.2.0
jedi==0.17.2
jupyter-client==6.1.7
jupyter-core==4.7.0
numpy==1.19.4
pandas==1.1.4
parso==0.7.1
pexpect==4.8.0
pickleshare==0.7.5
prompt-toolkit==3.0.8
ptyprocess==0.6.0
Pygments==2.7.4
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.4
pyzmq==20.0.0
six==1.15.0
tornado==6.1
traitlets==4.3.3
wcwidth==0.2.5
This R Markdown notebook was built using the following computational environment:
## [1] "en_US.UTF-8"
## R version 4.0.2 (2020-06-22)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.5 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/openblas/libblas.so.3
## LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.2.20.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] assertr_2.8 bookdown_0.24 lemon_0.4.5 forcats_0.5.1
## [5] stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4 readr_2.0.2
## [9] tidyr_1.1.4 tibble_3.1.5 ggplot2_3.3.5 tidyverse_1.3.1
## [13] DT_0.19 data.table_1.14.2 here_1.0.1
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.7 lattice_0.20-41 lubridate_1.8.0 assertthat_0.2.1
## [5] rprojroot_2.0.2 digest_0.6.28 utf8_1.2.2 plyr_1.8.6
## [9] R6_2.5.1 cellranger_1.1.0 backports_1.3.0 reprex_2.0.1
## [13] evaluate_0.14 highr_0.9 httr_1.4.2 pillar_1.6.4
## [17] rlang_0.4.12 readxl_1.3.1 rstudioapi_0.13 jquerylib_0.1.4
## [21] rmarkdown_2.11 labeling_0.4.2 htmlwidgets_1.5.4 munsell_0.5.0
## [25] broom_0.7.10 compiler_4.0.2 modelr_0.1.8 xfun_0.28
## [29] pkgconfig_2.0.3 htmltools_0.5.2 tidyselect_1.1.1 gridExtra_2.3
## [33] fansi_0.5.0 crayon_1.4.2 tzdb_0.2.0 dbplyr_2.1.1
## [37] withr_2.4.2 grid_4.0.2 jsonlite_1.7.2 gtable_0.3.0
## [41] lifecycle_1.0.1 DBI_1.1.1 pacman_0.5.1 magrittr_2.0.1
## [45] scales_1.1.1 cli_3.1.0 stringi_1.7.5 farver_2.1.0
## [49] fs_1.5.0 xml2_1.3.2 ellipsis_0.3.2 generics_0.1.1
## [53] vctrs_0.3.8 tools_4.0.2 glue_1.4.2 crosstalk_1.2.0
## [57] hms_1.1.1 fastmap_1.1.0 yaml_2.2.1 colorspace_2.0-2
## [61] rvest_1.0.2 knitr_1.36 haven_2.4.3