We all understand the need to evaluate. Evaluation is a process that allows us to improve, critique our work, establish continuous-improvement plans, detect problems, and encourage progress. In this process, different instruments are used to obtain metrics and judgments (something is "good" or "bad") about the product or service being evaluated.
Unfortunately, academic assessment systems rarely use validated assessment instruments. Very few account for the characteristics of the product being evaluated: its contribution to the research area, to the advancement and development of science, its practical utility, the population that could benefit from it, and its promotion of dialogue, discussion, debate, and analysis.
Before adopting a policy for hiring and/or promoting academic staff based on commercial metrics, funders (universities, research centers, and government agencies) must analyze their local and social environment, reflect on their mission and vision, and adapt and/or create academic assessment instruments based on their needs. This will allow their staff to develop high-quality academic products grounded in ethics and in genuine creative processes. It is impossible to make a research product without a creative process, and creativity is born of inspiration. As a result, academic products can be as diverse as inspiration itself.
Some of the academic products that can be used to evaluate the activity of academics and/or researchers are: monographs, book chapters, books, seminars, database design and development, specialized and/or dissemination web pages, videos, posters, journal clubs, academic and/or social programs, curriculum design, exam design, conferences, software design, prototype design and development, educational program criticism, peer review, repository development, brochures, newspapers, notebooks, informative newsletters, technical reports, theses, and articles, among many others.
When evaluating an academic and/or researcher, other activities, such as participation in courses, human resources training, mentoring programs, and management of research groups, should be considered.
If we adopt this erratic policy of academic evaluation, we must face "publish or perish": far from having significant advances in science, we can have many publications with very poor contributions.
Universities and research centers must lose their fear of betting on large projects: those that require years of research, numerous research groups, and the participation of graduate students. In such projects it is difficult to attribute a separate publication to each contributor; doing so would fragment the work. For these large projects, evaluation instruments must be articulated for each member, and each one's contribution to the work must be analyzed.
Researchers, universities, and research centers must together design adequate evaluation instruments: appropriate to our area of expertise, aligned with our mission and vision, promoting the professional and personal development of our research groups, and enabling collaboration with other groups and/or research areas.
That is, to develop a research work with significant impact (measured and qualified in many ways), the time allotted must be what it really is, "relative," and it will depend on the research question to be answered.
The development of appropriate academic evaluation instruments is our responsibility. We must decide which publication schemes we will use and which guidelines must be met to ensure the quality of a publication. The comfort that commercial metrics give us only makes us dependent on them. It is important to remember that we adopted them for the academic evaluation process even though, by nature, they were created for a different purpose.
If we are truly determined to build a comprehensive evaluation system, we should not aim to make it "easy" or "difficult" to develop. It must be necessary and sufficient for the purpose we seek and the result we desire.
One way to start developing evaluation instruments for our organization is to find the relationship between what we want to achieve and what we are doing to achieve it. To do this, we can use a qualitative research method, such as focus groups, to gather information and gain a comprehensive perspective on the problem. We can then develop instruments to evaluate the quality and relevance of the different research products, using, for example: rubrics (analytic or holistic), rating scales, and checklists. Collegiate bodies should also be formed to evaluate the research products, through open or closed (blind) peer review, always ensuring that what matters most is the quality of the research product and its social, technological, and/or scientific application.
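As a purely illustrative sketch of the analytic-rubric idea mentioned above (the criteria, weights, and three-point scale here are hypothetical, not prescribed by any particular institution), a rubric can be modeled as weighted criteria whose per-criterion scores combine into an overall rating:

```python
# Hypothetical analytic rubric: criteria names, weights, and scale are
# illustrative assumptions, not a validated instrument.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float          # relative importance; weights should sum to 1.0
    levels: dict           # score level -> descriptor for the evaluator

RUBRIC = [
    Criterion("quality",         0.4, {1: "weak", 2: "adequate", 3: "strong"}),
    Criterion("relevance",       0.3, {1: "weak", 2: "adequate", 3: "strong"}),
    Criterion("reproducibility", 0.3, {1: "weak", 2: "adequate", 3: "strong"}),
]

def score_product(scores: dict) -> float:
    """Weighted average of per-criterion scores on the 1-3 scale."""
    return sum(c.weight * scores[c.name] for c in RUBRIC)

# Example: a product rated strong on quality, adequate elsewhere.
overall = score_product({"quality": 3, "relevance": 2, "reproducibility": 2})
print(round(overall, 2))  # 2.4
```

The point of the sketch is not the arithmetic but the structure: each criterion carries an explicit descriptor for every level, so the qualitative judgment, not the number, remains the primary output of the evaluation.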
The paradox of institutions that reward the number of publications without evaluating the quality of individual works is that they want to be leaders in an area without knowing its results or findings.
It is exceptionally complex to have complete and comprehensive academic evaluation processes. If we rely solely on metrics and forget the qualitative component of evaluation, the component that yields the rating (in terms of quality, relevance, and reproducibility), then we do not have an academic evaluation process; we have a metric system for research and/or researchers.
Most evaluation systems are currently based on measurements (number of citations, downloads, visits, among others). Not all measurements are bad; what is wrong is to believe that they are synonymous with quality of the work. The big mistake is that, as scientists, we do not question them and accept them blindly.
Designing a comprehensive academic evaluation process is in our hands. It can be as complicated or as easy as we want to see it, and as fast or as slow as we want it to happen. But it is something we must do to achieve "appropriation of science," "added value to research," and "teaching-learning communities." More importantly, training new and better researchers with innovative ideas generates meaningful hypotheses that promote science and better social environments.