Progress in scientific research depends on the quality and accessibility of software at all levels, and it is now critical to address the many new challenges related to the development, deployment, and maintenance of reusable software. In addition, it is essential that scientists, researchers, and students be able to learn and adopt a new set of software-related skills and methodologies.
The Australian National Data Service (ANDS) is a program funded by the Australian Government to develop research data infrastructure and enable more effective use of Australia's research data assets.
If you've spent much time in open source projects, you have probably seen the term "copyleft". While the term is commonly used, many people don't understand what it actually means. This article explains copyleft in more detail.
It is important to ensure that different copies or versions of files, files held in different formats or locations, and information that is cross-referenced between files are all subject to version control.
The Journal of Open Source Software (JOSS) is an academic journal with a formal peer review process designed to improve the quality of the software submitted. Upon acceptance into JOSS, a CrossRef DOI is minted and the paper is listed on the JOSS website.
The Data FAIRport initiative is an open movement started as the practical follow-up to a Lorentz Workshop held in Leiden, The Netherlands, in January 2014, named "Jointly designing a Data FAIRport".
The participants of the workshop represented the worlds of research infrastructure and policy, publishing, the semantic web and life sciences research.
swMATH is a freely accessible, innovative information service for mathematical software. swMATH not only provides access to an extensive database of information on mathematical software, but also includes a systematic linking of software packages with relevant mathematical publications.
N. Micic, D. Neagu, I. Campean, and E. Habib Zadeh (2017). Every industry produces significant data output as a product of its working process, and with the recent advent of big data mining and integrated data warehousing there is a clear need for a robust methodology for assessing data quality to support sustainable and consistent processing. In this paper a review of Data Quality (DQ) across multiple domains is conducted in order to propose connections between their methodologies. This critical review suggests that, within the process of DQ assessment of heterogeneous data sets, the constituent data types are seldom treated as separate kinds of data requiring an alternate data quality assessment framework. We discuss the need for such a directed DQ framework, outline the opportunities foreseen in this research area, and propose to address them through degrees of heterogeneity.