Implementing a (pre-)reproducible (open) science workflow at an institutional level is very challenging, as it requires that researchers not only have sufficient skills (e.g. programming) but also know a myriad of tools (e.g. version control software, virtual machines, containers, metadata standards, repositories).
While learning some of these skills and tools can sometimes be outsourced to e-learning platforms (e.g. DataCamp, Open Science MOOC, The Turing Way), knowledge exchange at an institutional level (Who? Where? What? When?) is often informal. Consequently, staff turnover leads to a loss of knowledge, as processes and decisions are insufficiently documented.
Creating a knowledge repository helps formalise the documentation process. We tested a workflow for creating a knowledge repo within our BMBF-funded project FAKIN. Information from different sources (e.g. DataCamp, GitHub, Zenodo and EndNote) is collected, links between different objects (e.g. code, projects, people, publications, tools) are generated, and everything is stored in one place. The structure is simple and allows content to be added in the form of text file templates. Everything is based on open-source tools and services such as R(Studio), Hugo and GitHub/GitLab.
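To illustrate the idea, the following minimal R sketch shows how a new entry might be generated from a plain-text template with YAML front matter, as Hugo expects. The function name, field names and paths are illustrative assumptions for this sketch, not our actual implementation.

```r
# Sketch: create a Hugo content page for a new knowledge-repo entry.
# All names and paths below are illustrative assumptions.
new_entry <- function(title, category, links = character()) {
  slug <- gsub("[^a-z0-9]+", "-", tolower(title))          # file-safe slug
  path <- file.path("content", category, paste0(slug, ".md"))
  front_matter <- c(
    "---",
    paste0("title: \"", title, "\""),
    paste0("date: ", Sys.Date()),
    paste0("categories: [", category, "]"),
    # links to related objects (code, projects, people, publications, tools)
    paste0("related: [", paste(links, collapse = ", "), "]"),
    "---",
    "",
    "<!-- add free-text documentation below -->"
  )
  dir.create(dirname(path), recursive = TRUE, showWarnings = FALSE)
  writeLines(front_matter, path)
  invisible(path)
}

# Example: document a tool and link it to a (hypothetical) project and person
new_entry("Hugo", category = "tools",
          links = c("projects/fakin", "people/jane-doe"))
```

Because each entry is a plain Markdown file under version control, the links between objects stay human-readable and the repository can be rebuilt or migrated with standard tools.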
We would like to present our approach, compare it to similar tools we know (the Airbnb Knowledge Repo, TIB VIVO) and discuss with you whether you face similar challenges and how you are trying to solve them at your (small) research institute.