Open Science Is Currently in a Phoenix Phase, Having Swept Through Research Institutions

Scientific research in an interdisciplinary environment brings many challenges, especially regarding data availability and management. In an interview with Associate Professor Hana Tomášková, Vice-Rector for Research, Development, and Knowledge Transfer at the University of Hradec Králové, we discussed how open science operates in the Czech Republic today, where it encounters the biggest obstacles, and why data management is sometimes more challenging than it seems.

8 Jul 2025 Lucie Skřičková


Your research connects systems analysis, process modelling, and applications in various fields, from healthcare to industry. What challenges does working with data in such an interdisciplinary context bring?

The biggest challenge is data availability and sharing. Often, the data either doesn’t exist or isn’t publicly accessible. In such cases, I work with estimates and focus mainly on the model itself, which is meant to demonstrate the principle rather than specific numbers. Where data is available, it is crucial for validating and comparing different modeling approaches. Visualisations also play a significant role, especially Business Process Model and Notation (BPMN), which helps communicate the complexity of a system in a way that is understandable across roles.


How is the perception of open science changing in this context? And what does it mean for researchers themselves?

Open science has changed how we approach data—today we have much better opportunities for access and sharing, which is a huge step forward compared to when I started. At the same time, it places higher demands on data management and research organisation. I have personally realised how important a well-thought-out data management plan is. I used to treat data as "consumables", and later regretted not thinking their management through. Repositories are now very helpful even for retrieving one's own data in a standardised form. Sharing remains a challenge; it's not always possible, but when it is, it's a valuable tool for verification and comparison.


When working with process models in hospitals and companies, you collect data in the field and then analyse it. How do you deal with its limited availability or quality?

The advantage of process modeling is that it can serve simply to illustrate current operations and procedures, and to compare them with potential future scenarios. This allows me, for example, to use only publicly available or shared data from repositories, or just a description of operations from public documentation, without needing to request permissions or handle sensitive data. Very often, process analysis reveals that for some parts of the system, data does not yet exist. In that case, the values are estimated by experts but held constant across all scenarios, so the estimate itself does not influence the comparison of how efficient the process change is.
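As a rough illustration of that last point, the sketch below (in Python, with entirely invented numbers) compares a current and a proposed process while keeping an expert-estimated step duration identical in both, so the estimate affects the absolute figures but not the measured improvement:

```python
# Hypothetical illustration: an expert-estimated value is held constant in both
# the current and the proposed process, so it cancels out of the comparison.

def cycle_time(review_minutes, handoffs, estimated_registration_minutes):
    """Total cycle time of one case: registration + review + handoff overhead."""
    HANDOFF_MINUTES = 5  # assumed overhead per handoff
    return estimated_registration_minutes + review_minutes + handoffs * HANDOFF_MINUTES

# Expert estimate for a step with no measured data; identical in both scenarios.
ESTIMATED_REGISTRATION = 12  # minutes, hypothetical

current  = cycle_time(review_minutes=30, handoffs=4, estimated_registration_minutes=ESTIMATED_REGISTRATION)
proposed = cycle_time(review_minutes=30, handoffs=1, estimated_registration_minutes=ESTIMATED_REGISTRATION)

# The absolute times depend on the estimate, but the improvement does not.
print(f"current:  {current} min")
print(f"proposed: {proposed} min")
print(f"saving:   {current - proposed} min per case")  # 15 min, independent of the estimate
```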


“Sharing remains a challenge; it's not always possible, but when it is, it's a valuable tool for verification and comparison.”

In your work, you have modeled the impacts of Alzheimer's disease, which requires working with sensitive data. Are the conditions in the Czech Republic sufficient for securely storing and meaningfully using such data in research?

In this case, we were fortunate to use publicly available data, specifically statistics from the Czech Statistical Office and reports from health insurance companies. This data is valid nationwide and was ideal for system dynamics modeling.

On the other hand, working with truly sensitive patient data for research purposes was almost impossible, especially during my PhD studies. There were no clear mechanisms for obtaining exceptions or permission to use such data, nor infrastructure to enable its secure processing. Many research projects ended before they even began. Additionally, it would have required enormous effort just to clean and standardize the data, often more than was needed for the modeling itself.

Today, the situation is much better. In some areas, such as the development of learning algorithms, there is now active data sharing and anonymization, which opens up new possibilities. I also see a change in mindset. Where individualism and reluctance to share used to dominate, there is now a greater emphasis on finding better solutions, such as more accurate detection or prediction, in many cases thanks to open access to data.
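For readers less familiar with the system dynamics modeling mentioned above, such models are essentially stocks and flows integrated over time. The sketch below is a deliberately simplified, hypothetical example; the parameters are invented and are not those used in the study discussed here:

```python
# Minimal stock-and-flow sketch of a prevalence model (hypothetical parameters).
# One stock (people living with the condition), an inflow (new cases) and an
# outflow (deaths), integrated with a simple Euler step.

population = 10_500_000   # assumed national population
incidence  = 0.0015       # hypothetical new cases per person per year
mortality  = 0.12         # hypothetical annual outflow rate from the stock
patients   = 150_000      # hypothetical initial stock
dt, years  = 0.25, 20     # quarter-year time step, 20-year horizon

for step in range(int(years / dt)):
    inflow  = incidence * (population - patients)
    outflow = mortality * patients
    patients += (inflow - outflow) * dt

print(f"patients after {years} years: {patients:,.0f}")
```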


How would you characterize the current state of open science at Czech universities and research institutions? Where do you see the most significant progress, and what areas remain problematic?

I would say that over the past year, Open Science has been in a phoenix phase: it has risen from the ashes and flown through research institutions with a radiant appearance. It has planted the seeds of good ideas everywhere; we have embraced them, are working on them, and are building an environment for an "ornamental garden with the future tree of Open Science." I may have gotten a bit poetic, but I really do perceive significant progress in adoption and in reducing resistance.

What is still missing? I would mention the uncertainty around choosing a suitable repository for data sharing. There are many recommended, supported, well-known, and widely used repositories, and this very diversity can be a problem. I worry that a heavy-handed choice or regulation imposed by the state, the ministries, the European Union, or other higher authorities could undermine the trust and stability of the whole community.


“Today, the situation is much better. In some areas, such as the development of learning algorithms, there is now active data sharing and anonymization, which opens up new possibilities.”

What systemic obstacles do you think researchers most commonly encounter when trying to share their data?

Lack of time is a problem for almost everyone, and it’s true that infrastructure and system support could always be better. On the other hand, I often hear that these reasons are more of an excuse than a real obstacle. What truly works in practice is the support of a competent person: someone calm, persistent, and highly knowledgeable who can guide the researcher through the entire process.

Trust plays a big role. If a functional relationship can be created between researchers and the open science department, things become much simpler. At the University of Hradec Králové, this is successfully accomplished by Dr. Lenka Špičanová, who has built this trust very effectively.


Which aspects of research data management do you consider most underestimated?

Thank you for the question. Research data management is a complex process, and the individual aspects often fit so tightly together that it’s hard to separate them. From my own scientific practice, I consider the most underestimated and, at the same time, most postponed aspects to be the standardization and interoperability of data. Data management plans, research data specifications, their organization and storage—researchers now more or less accept these as a normal part of research. But when it comes to formalizing data, structuring them, and aligning them with generally recognized standards, it is often seen as unnecessary extra work. Yet these are precisely the steps that determine whether data will be reusable, understandable to others, and thus sustainable.

If any system is to work, we have to put into it the effort and the data that we later want to draw out of it. Any shortcuts or half-measures undermine the entire framework on which trustworthy science is based.
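To make the point about standardization and interoperability slightly more concrete, the sketch below shows one small piece of what "aligning data with a shared standard" can mean in practice, namely renaming fields and converting units before deposit; the local column names and the target schema are purely hypothetical:

```python
# Hypothetical example: harmonising locally named columns and units to a shared,
# agreed-upon schema before depositing the dataset in a repository.

# Mapping from ad-hoc local column names to hypothetical standard field names.
COLUMN_MAP = {"pac_vek": "patient_age_years", "delka_hosp": "length_of_stay_days"}

# Unit conversions needed to match the standard (here: hours -> days).
UNIT_CONVERSIONS = {"length_of_stay_days": lambda hours: hours / 24}

def standardise(record: dict) -> dict:
    """Rename fields and convert units for one data record."""
    out = {}
    for local_name, value in record.items():
        standard_name = COLUMN_MAP.get(local_name, local_name)
        convert = UNIT_CONVERSIONS.get(standard_name)
        out[standard_name] = convert(value) if convert else value
    return out

print(standardise({"pac_vek": 74, "delka_hosp": 96}))
# {'patient_age_years': 74, 'length_of_stay_days': 4.0}
```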


“Trust plays a big role. If a functional relationship can be created between researchers and the open science department, things become much simpler.”

What has been your experience implementing the FAIR principles, and what challenges do you see for their adoption in the Czech environment?

When I first encountered the FAIR principles, I thought it was a great idea. I immediately thought about what data I could gain and how to use them for modeling or analysis. As a user of the system, I was excited. The real shock came when I found myself in the position of a data provider. Only then did I understand the complexity and necessity of some of the less popular steps in the process.

I set myself the challenge of establishing conditions, auxiliary steps, and support that precisely define the process so it cannot be “gamed.” We Czechs are masters at finding shortcuts and making things easier for ourselves. That’s why, for comprehensive functioning, it’s essential to strictly adhere to the principles and implement them in the purest form possible.


If you could recommend one concrete measure that a research institution could take to significantly support the development of open science, what would it be?

That’s a very important question. My answer may not be very groundbreaking, but I believe that automation would currently have the greatest practical impact. I know how diverse and complex data, formats, and data structures can be, often even within a single field, let alone across disciplines. But that’s exactly where the strength of automation lies. It can significantly simplify the routine and unpopular parts of processes, which are often seen at first as unnecessary administrative burdens.

Every step that makes entry into the system easier, whether filling out metadata, choosing a repository, or validating data, reduces resistance to the entire open science concept. And as soon as people stop “complaining” about details, space opens up for positive acceptance of the whole system.
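As a small illustration of the kind of automation meant here, the sketch below performs one such routine step, checking a metadata record for required fields before deposit; the field list is invented for the example and does not correspond to any particular repository's schema:

```python
# Hypothetical sketch: automated pre-submission check of a metadata record.
# The required fields are invented for illustration and are not tied to any
# specific repository's metadata schema.

REQUIRED_FIELDS = ["title", "creator", "description", "licence", "keywords"]

def missing_fields(metadata: dict) -> list[str]:
    """Return the required fields that are absent or left empty."""
    return [f for f in REQUIRED_FIELDS if not metadata.get(f)]

record = {
    "title": "Process model of outpatient admissions",
    "creator": "Example Author",
    "description": "",
    "licence": "CC-BY-4.0",
}

problems = missing_fields(record)
if problems:
    print("Please complete before depositing:", ", ".join(problems))
else:
    print("Metadata record is complete.")
```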


“We Czechs are masters at finding shortcuts and making things easier for ourselves. That’s why, for comprehensive functioning, it’s essential to strictly adhere to the principles and implement them in the purest form possible.”

Doc. Ing. Hana Tomášková, Ph.D.


is Vice-Rector for Research, Development and Knowledge Transfer at the University of Hradec Králové. She teaches at the Faculty of Informatics and Management, covering courses in Object-Oriented Modelling, Operations Research, Process Modelling, Systems Thinking, and others. She graduated in Informatics at UHK, earning her PhD in Information and Knowledge Management, and was habilitated in 2014 with a thesis focused on modelling and simulation of discrete systems. Her research focuses on systems analysis, process modelling, and dynamics in energy and healthcare. She is the author of more than 90 academic publications and is active in the private sector.

