
Adam Tornhill

Founder of CodeScene & Author of "Your Code as a Crime Scene"

Adam Tornhill is a programmer who combines degrees in engineering and psychology. He’s the founder of CodeScene, where he designs tools for code analysis. Adam is also the author of multiple technical books, including the best-selling Your Code as a Crime Scene and Software Design X-Rays. Adam’s other interests include modern history, music, retro computing, and martial arts.

Presentations

Code quality fails to gain traction at the business level, leading software companies to prioritize new features over maintaining a healthy codebase. This trade-off results in technical debt that consumes up to 40% of developers' time, causing stress, frustration, and costly delays in product delivery. Despite its importance, it's hard to build a business case for code quality: how do we quantify and communicate the benefits to non-technical stakeholders, or even within our own engineering teams?

In this mini-keynote, Adam presents groundbreaking industry benchmarks and innovative metrics that, for the first time, enable organizations to compare their performance with top industry players. By leveraging statistical models, he demonstrates how you can predict the business gains of technical improvements in terms of increased development velocity and bug reduction. With these actionable recommendations, your organization can ship software faster and gain a competitive edge.

Code quality is an abstract concept that fails to get traction at the business level. Consequently, software companies keep trading code quality for new features. The resulting technical debt is estimated to waste up to 42% of developers' time, causing stress and uncertainty, as well as making our job less enjoyable than it should be. Without clear and quantifiable benefits, it's hard to build a business case for code quality.

In this keynote, Adam takes on the challenge by tuning the code analysis microscope towards a business outcome. We do that by combining novel code quality metrics with analyses of how the engineering organization works with the code. We then take those metrics a step further by connecting them to values like time-to-market, customer satisfaction, and roadmap risks. This makes it possible to a) prioritize the parts of your system that benefit the most from improvements, b) communicate quality trade-offs in terms of actual costs, and c) identify high-risk parts of the application so that we can focus our efforts on the areas that need them the most. All recommendations are supported by data and brand-new real-world research. This is a perspective on software development that will change how you view code. Promise.

Prioritizing technical debt is a hard problem: modern systems might have millions of lines of code and multiple development teams, and no one has a holistic overview. In addition, there's always a trade-off between improving existing code and adding new features, so we need to use our time wisely.

What if we could mine the collective intelligence of all contributing programmers and start making decisions based on information from how the organization actually works with the code?

In this workshop, you'll learn how easily obtained version-control data lets you uncover the behavior and patterns of the development organization. This language-neutral approach lets you prioritize the parts of your system that benefit the most from improvements so that you can balance short- and long-term goals guided by data.
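To make this concrete, here is a minimal sketch of the kind of data the workshop builds on: ranking files by how often they change, mined straight from version control with git log. It assumes you run it inside a local git repository; the metric is a simplified stand-in for the full behavioral analyses, not CodeScene's actual implementation.

# Illustrative sketch: rank files by change frequency mined from git log.
# Assumes the script runs inside a local git repository.
import subprocess
from collections import Counter

def change_frequencies(repo_path="."):
    """Count how many commits touched each file in the repository."""
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    # The most frequently changed files are hotspot candidates:
    # good starting points when prioritizing technical debt.
    for path, commits in change_frequencies().most_common(10):
        print(f"{commits:5d}  {path}")

In practice, you would combine change frequency with a complexity proxy, such as lines of code, so that the hotspots you prioritize are the parts of the system that are both hard to understand and frequently modified.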

In this session, you’ll learn:

How to prioritize technical debt in large-scale systems
How to balance the trade-off between improving existing code and adding new features
How to visualize long-term trends in technical debt
How to take a data-driven approach to technical debt

During this workshop, you'll get access to CodeScene, a behavioral code analysis tool that automates the analyses, which we use for the practical exercises. We'll do the exercises on real-world codebases in Java, C#, JavaScript, and more to discover real issues.

Participants are also encouraged to take this opportunity to analyze their own codebase and leave with actionable insights.

As AI accelerates the pace of coding, organizations will have a hard time keeping up; acceleration isn't useful if it's driving our projects straight into a brick wall of technical debt. This presentation explores the consequences of AI-assisted coding, weighing its potential to improve productivity against the risks of deteriorating code quality.

Adam delivers a fact-based examination of the short- and long-term implications of using AI assistants in software development. Drawing from extensive research analyzing over 100,000 AI-driven refactorings in real-world codebases, we scrutinize the claims made by contemporary AI tools, demonstrating that increased coding speed does not necessarily equate to true productivity. We also look at the correctness of AI-generated code, a concern for many organizations today given the error-prone nature of current AI tools.

Finally, the talk offers strategies for succeeding with AI-assisted coding. This includes introducing a set of automated guardrails that act as feedback loops, ensuring your codebase remains maintainable even after adopting AI-assisted coding.

Key insights include:

Novel Quality Metrics: Introduction and application of innovative metrics designed to act as guardrails, ensuring that AI contributions maintain high standards of code quality.
Balancing Speed and Quality: Strategies to leverage AI for increased efficiency while avoiding the pitfalls of technical debt.
Real-World Data: Fact-based presentation from comprehensive research on real-world codebases.
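To illustrate what such a guardrail could look like, here is a hedged sketch of a CI-style check that fails when changed Python files contain overly long functions. The metric (function length), the limit, and the base branch are assumptions made for this example; they are not the specific code health rules presented in the talk.

# Illustrative guardrail: fail a CI step when changed Python files contain
# functions longer than a threshold. Metric, limit, and base branch are
# assumptions for the example only.
import ast
import os
import subprocess
import sys

MAX_FUNCTION_LINES = 60  # assumed project-specific limit

def long_functions(path):
    """Yield (name, length) for functions exceeding the line limit."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                yield node.name, length

def changed_python_files(base="origin/main"):
    """List Python files modified relative to the base branch."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [f for f in diff if f.endswith(".py") and os.path.exists(f)]

if __name__ == "__main__":
    violations = [(path, name, length)
                  for path in changed_python_files()
                  for name, length in long_functions(path)]
    for path, name, length in violations:
        print(f"{path}: {name}() is {length} lines (limit {MAX_FUNCTION_LINES})")
    sys.exit(1 if violations else 0)  # a non-zero exit fails the CI step

The value lies in the feedback loop: because the check runs on every change, quality regressions from AI-generated code surface immediately instead of quietly accumulating as technical debt.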
