Much of our science is transferred to clients and/or stakeholders as software products, usually in the form of decision support systems (for large, integrated studies), but also as online reports, standalone models, knowledge systems and the like. While the initial scope and style of these products are usually defined through iterative engagement with the client and/or other stakeholders, it is not as easy to define, or to measure, how these systems are to be evaluated 'in the field'. This difficulty can arise for several reasons, including: lack of control over the fate of the system; dependence on middle agents to broker its deployment; political and economic vagaries; and changes in the product over time. Papers are solicited that describe activities, whether based on case studies or theoretical, that attempt to evaluate the usefulness and/or uptake of modelling tools from one or more stakeholders' perspectives. Papers should include descriptions of the deployment and adoption strategies underpinning uptake (or the lack thereof). The intention of this session is to build a community of practice around stakeholder evaluation metrics.