The Second International Workshop on
Crowd-Based Requirements Engineering


September 4, 2017
Lisbon, Portugal
Co-located with RE 2017

Submissions

Submission site: https://easychair.org/conferences/?conf=crowdre2017

Submission deadline: 9 June 2017 (Anywhere on Earth time, strict!)

We welcome original submissions (4 to 6 pages) from research and practice in the following categories.

  • Technical solution papers describing original research results
  • Experience reports providing insights from RE practice and potential for applications in settings that involve a crowd
  • Competition papers describing a solution idea to the problem scenario described below
  • Problem statements explaining industry problems in settings with a large group of stakeholders
  • Vision statements explaining strongly explorative ideas

Submissions must describe original works that have not been previously published, are not currently submitted elsewhere, and address at least one of the workshop topics listed below.

Submissions must be written in English and formatted according to the IEEE formatting instructions. All accepted papers will be published in the joint RE 2017 workshop proceedings.

At least one author of every accepted manuscript is expected to attend the entire workshop and present their research.

Important Dates
  • Abstract Submission: 9 June 2017
  • Full Paper Submission: 16 June 2017, Noon CET (extended)
  • Notification: 3 July 2017 (was 30 June 2017)
  • Camera Ready: 16 July 2017

All deadlines at 23:59:59 AoE, unless otherwise stated

Problem Scenario for Competition Papers

The software company Greensoft is an SME that develops innovative software allowing its customers to monitor their households’ energy consumption. Because it has a large and highly motivated user base, Greensoft would like to adapt its RE and SE approach and enable its users to contribute to the evolution of its software application. Greensoft is now looking for solution ideas that make use of CrowdRE, and asks you for help. It has come up with some initial ideas on which aspects such a solution idea should cover. Describe your ideas, which should fully or in part cover each of the four aspects of the cycle described below.

  1. Collect: gathering of data both from the end-user and the context. This includes solution ideas describing novel approaches to data gathering, for example approaches that emphasise gathering multi-modal information (e.g., text, screenshots, audio). Furthermore, such solution ideas can focus on novel monitoring techniques, for example to increase the flexibility and configurability of available monitoring components, and to efficiently collect big data from the context.
  2. Analyse: reasoning about the collected data. This includes solution ideas that allow different stakeholders (the crowd) to be involved in the analysis and negotiation of feedback, but also focuses on automated approaches for feedback analysis. Such solution ideas can include approaches for extracting user intentions from feedback in order to understand perceived quality of experience (QoE), and for exploring data mining techniques to derive interesting usage patterns. Another aspect is to investigate ontologies to derive a model of the collected data on which automated reasoning can be performed. Ideally, these solution ideas should be scalable.
  3. Decide: evolving or adapting the software in order to reach a desirable state. This includes solution ideas that enable automated and semi-automated decision-making based on user feedback and big data. Such solution ideas could support the identification of issues to be solved through software maintenance or evolution, or support software adaptation, e.g., personalization to users’ characteristics.
  4. Act: implementing the decided changes at the right moment. This includes solution ideas for the operationalization of the decisions made, both at design time and runtime. At design time, evolution decisions need to be scheduled according to available resources, deadlines, organizational priorities, etc. At runtime, adaptation needs to be undertaken when changes in the context require it.
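To make the four aspects concrete, the cycle above can be sketched as a minimal feedback-processing pipeline. This is purely illustrative: all names (FeedbackItem, collect, analyse, decide, act) and the keyword-counting analysis are hypothetical stand-ins, not part of any existing Greensoft or CrowdRE tool, and a competition paper would replace each stage with its actual solution idea.

```python
# Illustrative sketch of the Collect-Analyse-Decide-Act cycle.
# All names and the trivial analysis logic are hypothetical.
from dataclasses import dataclass, field
from collections import Counter


@dataclass
class FeedbackItem:
    user: str
    text: str
    context: dict = field(default_factory=dict)  # e.g., device, app version


def collect(raw_reports):
    """Collect: turn raw user reports into structured feedback items."""
    return [FeedbackItem(r["user"], r["text"], r.get("context", {}))
            for r in raw_reports]


def analyse(items):
    """Analyse: a trivial stand-in for feedback analysis --
    count keyword occurrences to surface recurring themes."""
    keywords = ("crash", "slow", "chart", "export")
    counts = Counter()
    for item in items:
        for kw in keywords:
            if kw in item.text.lower():
                counts[kw] += 1
    return counts


def decide(theme_counts, threshold=2):
    """Decide: flag themes reported at least `threshold` times
    as candidate maintenance or evolution tasks."""
    return [theme for theme, n in theme_counts.items() if n >= threshold]


def act(decisions):
    """Act: operationalize decisions, here simply as backlog entries."""
    return [f"schedule fix/feature for: {d}" for d in decisions]


reports = [
    {"user": "u1", "text": "App is slow when loading charts"},
    {"user": "u2", "text": "Slow startup on my tablet"},
    {"user": "u3", "text": "Please add CSV export"},
]
backlog = act(decide(analyse(collect(reports))))
```

In a real CrowdRE setting, each stage would of course be far richer (multi-modal collection, NLP-based analysis, negotiated decisions, scheduled or runtime adaptation); the sketch only shows how the four aspects connect into one loop.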

Workshop Topics

Topics of interest for paper submissions include, but are not limited to, the following. Each paper should address at least one of these topics:

  • CrowdRE
  • Analysis of user feedback for RE using Big Data
  • Natural language processing, Information Retrieval, Machine Learning, ontologies
  • Crowd-based monitoring and usage mining approaches
  • Integration of RE and crowd analysis approaches borrowed from other disciplines
  • Application scenarios of CrowdRE
  • The contribution of CrowdRE to prioritization, software adaptation, testing and other software engineering aspects
  • The intersection of RE and domains such as sociology, psychology, human factors, and anthropology
  • Approaches to motivate, steer, and boost creativity in the crowd
  • Automated RE and the role of the requirements engineer
  • Automated RE and data (safeguarding rollback, privacy, traceability and data integrity; measuring validity, reliability, source quality; processing of rejected data)
  • Platforms and tools supporting CrowdRE

Key Questions and Themes of Interest

Submitted papers should ideally provide contributions relevant to answering one or more of the following key questions that CrowdRE will mainly focus on:

  • What are the achievements and contributions of CrowdRE approaches thus far? How do they contribute to improving RE?
  • What are the risks of going beyond the borders of the 'brown field' domain of RE? To what extent are these risks acceptable? What can be done to mitigate these risks?
  • In which parts of the software development lifecycle can CrowdRE play a vital role? Which parts are less suited, and why?
  • How can data from such a large group of stakeholders be obtained and interpreted?
  • Can a sufficient sample size be reached? In what way can crowd members be motivated to contribute the user feedback we require of them? How can the reliability of individual crowd members and of the data in general be determined?
  • Assuming that the stakeholders form a crowd, how are requirements best elicited, documented, validated, negotiated and managed? How are data from the crowd best obtained and interpreted?
  • Which limitations and risks are associated with proposed alternatives, and how can they be overcome?
  • In what way could techniques from Big Data Analytics be leveraged to analyse large, heterogeneous datasets as a source of new or changed requirements?
  • What are common denominators of existing and emerging approaches to make RE more suitable for CrowdRE? How can these approaches complement one another? What are the gaps that have not yet been covered by these solutions?
  • Where do the opportunities to collaborate lie? To what extent can the various fields of work be integrated, and where will approaches remain different?