This session will offer several quick and dynamic presentations covering technical topics and ideas too interesting to ignore:
Is Speech More Productive than MT? – Peter Reynolds (memoQ)
There is consensus within localization that neural machine translation (MT) offers the most productive approach to translation in most circumstances. But is this true? In this MicroTalk we will show the productivity gains that speech recognition can deliver and argue that it may be the most productive technology available for translation, and is certainly the most under-used.
Takeaways: Attendees will gain an understanding of the benefits of voice recognition, see how it is used today and learn how it could be used more widely.
SEER — A Bug Prediction System with Machine Learning – Mike Fang (VMware)
Software quality is one of the most important research areas in software engineering. Engineers spend much of their time identifying risky, bug-prone components and fixing them. Bug prediction techniques were developed to improve software quality, reliability and efficiency, to reduce development costs, and to make testing more targeted in the early phases. In this session, we will discuss using machine learning (ML) to automate bug prediction for check-in requests. We use historical bug data to generate bug models and provide a comparative analysis of different ML techniques for software bug prediction. Building a robust bug prediction model is challenging; many techniques have been proposed, yet adoption remains low, and better tooling will make ML models easier to adopt in the future. We will present a tangible internationalization and functional bug prediction tool based on ML algorithms. Our proposal uses three supervised ML algorithms to predict future software bugs from historical data: Naive Bayes (NB), Decision Tree (DT) and Artificial Neural Networks (ANNs). We train our classifiers to identify certain types of problem areas (such as encoding or decoding issues with non-ASCII strings) based on the developer's coding history and other metrics. When a new check-in request arrives, we compute a score from those same metrics indicating how likely the check-in is to introduce similar issues.
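The workflow described above, training a classifier on historical check-in data and scoring new check-ins for bug risk, can be sketched with one of the named algorithms, Naive Bayes. This is a minimal illustration, not SEER itself: the binary features, the toy history and the scoring function are all hypothetical stand-ins for the real metrics the talk describes.

```python
import math
from collections import defaultdict

# Hypothetical binary features per check-in (illustrative only):
# [touches_non_ascii_strings, modifies_encoding_code, author_new_to_i18n]
# Label: 1 = check-in later linked to a bug, 0 = clean.
history = [
    ([1, 1, 1], 1),
    ([1, 0, 1], 1),
    ([1, 1, 0], 1),
    ([0, 0, 0], 0),
    ([0, 1, 0], 0),
    ([0, 0, 1], 0),
]

def train_bernoulli_nb(data):
    """Estimate class priors and per-feature probabilities (Laplace smoothing)."""
    counts = {0: 0, 1: 0}
    feat_counts = {0: defaultdict(int), 1: defaultdict(int)}
    n_feats = len(data[0][0])
    for feats, label in data:
        counts[label] += 1
        for i, v in enumerate(feats):
            feat_counts[label][i] += v
    model = {}
    for label in (0, 1):
        prior = counts[label] / len(data)
        probs = [(feat_counts[label][i] + 1) / (counts[label] + 2)
                 for i in range(n_feats)]
        model[label] = (prior, probs)
    return model

def bug_score(model, feats):
    """Posterior probability that a new check-in introduces a bug."""
    logp = {}
    for label, (prior, probs) in model.items():
        lp = math.log(prior)
        for v, p in zip(feats, probs):
            lp += math.log(p if v else 1 - p)
        logp[label] = lp
    m = max(logp.values())           # log-sum-exp for numerical stability
    exp = {k: math.exp(v - m) for k, v in logp.items()}
    return exp[1] / (exp[0] + exp[1])

model = train_bernoulli_nb(history)
print(bug_score(model, [1, 1, 1]))  # risky-looking check-in scores high
print(bug_score(model, [0, 0, 0]))  # clean-looking check-in scores low
```

In practice the Decision Tree and ANN classifiers mentioned in the abstract would be trained on the same feature vectors and their scores compared, which is the comparative analysis the session covers.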