Over the next three days the 18th Conference on Science and Technology Indicators (STI) is being held at the Berlin-Brandenburg Academy of Sciences and Humanities. The STI conference is organized under the auspices of the European Network of Indicator Designers (ENID) and hosted by the Institute for Research Information and Quality Assurance (iFQ). STI 2013 is supported by Elsevier and the Stifterverband für die Deutsche Wissenschaft. The conference is a European and worldwide forum for presenting and discussing advances in constructing, using and interpreting STI indicators.

In contrast to previous years, more and more qualitative indicators are being discussed. Perhaps evaluation research policy is going back to its roots in ancient Greece, where Plato observed that "a good decision is based on knowledge and not on numbers", a position that challenges the hardcore positivists here at the STI. Hopefully, by combining methods, we will gain a better understanding of indicators as they are applied in different contexts, ranging from the institutional structures, developmental processes and contexts of science itself to their use as analytical tools in knowledge management and science-policy decision making.

The theme of this year's STI is "Translational Twists and Turns: Science as a socio-economic endeavor", or, more simply put, how we can gain more knowledge of the flows between research and funding agencies and assess whether society is getting value for money. Perhaps "Lost in Translation" would have been a better title, as the discussions point toward the need for good-practice standards for bibliometric analyses: bibliometrics has shifted from a sub-discipline of library science to an evaluation instrument, and the limitations of statistical analyses of publication and citation patterns appear to be lost on the consumer.
The attendee list is a who's who of the scientometric world: hardcore statisticians Thed van Leeuwen, Wolfgang Glänzel, Ludo Waltman and Rodrigo Costas, to name but a few. Michael Schreiber entertained us with his sardonic criticism of policy makers' (mis)use of the h-index as a statistical model for predicting a researcher's future performance, quoting von Neumann's famous quip: "with four parameters I can fit an elephant, with five, I can make him wriggle his trunk". But there was also room for pragmatic bibliometricians, who focused on the effect that indicators used in research-policy decisions have on humanities researchers, rather than on the mathematical foundations of the indicators. Alesia Zuccala, from the University of Amsterdam, tackled the problems of ranking and weighting book publishers in research evaluation policy and problematized the underrepresentation of humanities publishers on policy lists: the book publishers included on authority lists neither represent the wide variety of humanities research nor reflect the citations and attention books actually receive. Selecting the best publisher for one's work is thus imperative for a researcher's career.

Inge van der Weijden from CWTS continued the theme of measuring academic career trajectories through publication and citation analysis. Interestingly, changes in data archiving mean we now have enough recorded data to investigate the impact of receiving a research grant on a career: quantitatively, by analyzing publications and collaborations, but also qualitatively, through interviews that probe the symbolic value of a grant, namely the prestige, expectations and reputation it brings to the researcher. Such changes in information and communication technology also provide new ways for researchers to create and cooperate, as well as opening up new fields of research or increasing specialization in others.
The question, and one that remains unanswered, is whether this increased specialization will come at the expense of diversity. The majority of research funding comes from the public sector and rewards "smart" specialization, namely research that is, of course, immediately relevant to society. Do researchers now have to conform to be funded? The ACUMEN project, in which IVA is an active contributor, naturally has an opinion on this matter, and on the afternoon of the 5th of September we presented our experiences with individual bibliometric assessment. We suggested that by combining bibliometrics with a short narrative, researchers have the opportunity to contextualize the activities reported on their CV: both documenting and supporting their publication choices and enriching the interaction between themselves and the reader of their CV.

Next year's STI conference will be held in Leiden from the 3rd to the 5th of September, where the theme "Context Counts" continues the contextualization of bibliometrics at the individual level. Any researcher interested in the effects research policy and evaluation have on their career is invited to take part in this very relevant conference: partly to understand how proper use of metrics can enhance their profile in an evaluation situation, but more importantly to make their views on evaluation practices heard by policymakers, indicator designers and practitioners of bibliometric evaluations.

Lorna Wildgaard