Progress in scientific research depends on the quality and accessibility of software at all levels, and it is now critical to address the many new challenges related to the development, deployment, and maintenance of reusable software. It is equally essential that scientists, researchers, and students be able to learn and adopt a new set of software-related skills and methodologies.
In January 2013, the Wissenschaftsrat (German Council of Science and Humanities) adopted recommendations for a Kerndatensatz Forschung (research core dataset). The Kerndatensatz is an offering to universities and non-university research institutions, intended to support their existing activities in the electronic recording of research activity. It provides a standard for the institutions' own administration of these data; no central data collection takes place.
The Australian National Data Service (ANDS) is a program funded by the Australian Government to develop research data infrastructure and enable more effective use of Australia's research data assets.
If you've spent much time in open source projects, you have probably seen the term "copyleft." Although the term is quite common, many people don't understand it. Learn more about copyleft in this article.
VIMMP provides an easily accessible, user-friendly hub for all tangible and intangible components, such as information, knowledge, services, and tools, that support efficient decision making and the uptake and effective use of materials. At the core of VIMMP is a metadata-enriched data environment that eases the tasks of all actors. In particular, it facilitates the translation of a scientific problem into modelling workflows, ready for simulation using a range of software tools integrated into an open simulation platform and deployed on cloud services. The VIMMP platform is open, so any provider can easily integrate and deploy their software codes as well as services.
N. Micic, D. Neagu, I. Campean, and E. Habib Zadeh (2017). Every industry produces significant data as a product of its working processes, and with the recent advent of big data mining and integrated data warehousing there is a clear case for a robust methodology for assessing data quality for sustainable and consistent processing. In this paper, a review of Data Quality (DQ) across multiple domains is conducted in order to propose connections between their methodologies. This critical review suggests that, in the DQ assessment of heterogeneous data sets, the constituent data types are seldom treated as separate types in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities foreseen in this research area, and propose to address it through degrees of heterogeneity.