
Brian Sletten

Forward Leaning Software Engineer @ Bosatsu Consulting

Brian Sletten is a liberal arts-educated software engineer with a focus on forward-leaning technologies. His experience has spanned many industries including retail, banking, online games, defense, finance, hospitality and health care. He has a B.S. in Computer Science from the College of William and Mary and lives in Auburn, CA. He focuses on web architecture, resource-oriented computing, social networking, the Semantic Web, AI/ML, data science, 3D graphics, visualization, scalable systems, security consulting and other technologies of the late 20th and early 21st centuries. He is also a rabid reader, devoted foodie and has excellent taste in music. If pressed, he might tell you about his International Pop Recording career.

Presentations

I personally believe that the success of an API initiative is
largely influenced by the selection and development of the team
that will build and maintain it.

Code-focused developers are not the right starting point. People who
think about information and long-term value capture are more likely
to produce better results. People who actually care about the API
beyond a thin layer above their code are a great start.

We often say that we are trying to “Build the Right Thing” and
“Build the Thing Right”. But how do you know you are building the
right thing? How are you validating the implementation and the
behavior of the system? How are your business analysts supposed to
verify the API is doing what it is supposed to if your testing
strategy is complicated and code-focused?

In this talk, I will discuss these ideas and more. I will tell you
about a team I put together with no API background that built
what became a $300 million revenue stream in its first few years.

It’s inescapable. The capabilities that ChatGPT and Large Language Models provide have become discussion topics on the news, in social gatherings, online, and at work. Things that would have seemed impossible a few years ago are now nearly pedestrian in their growing ubiquity. While they demo impressively, very few people actually understand what is going on, and worse, what is or isn’t possible.

How then should we evaluate these achievements as we make decisions on how to adopt and adapt to powerful new technologies? What will they mean for us as a society and as individual knowledge workers? In addition to a discussion specifically about ChatGPT and its peer technologies and what they portend, we will also discuss how to critically evaluate new technology as we make decisions in the future.

Learning Objectives:

After attending this talk, you will be able to:
  • Explain what Large Language Models (LLMs) are and how they are used
  • Understand intuitively how they are built and how they work
  • Understand where they fit into the overall history of natural language processing
  • Understand the use cases where they are effective and appropriate in modern enterprises
  • Understand the limitations of these models and how they can go wrong
  • Understand the moral, ethical, and legal complications that surround the development and use of these models
  • Understand the externalities of developing and operating these models, which are often not priced into the fancy demos

Do your software developers feel responsibility for the security of the systems they build? If so, are they designing security in? One of the reasons this is difficult is that they are incentivized to demonstrate that the system does what it is supposed to do.

How often do we make sure it doesn't do what it is not supposed to do?

By the way, what is security? Can you and they even define it? How will you know it when you see it? How will you know you have done enough?

How do we instill deep, meaningful, incremental improvements to an organization's security posture? How do we convince our executives to spend enough on security? By the way, what's enough?

In this talk I will give you a tangible set of steps to do just that.

Since the Scientific and Industrial Revolutions, there has been more to know every day. No individual can know it all and we have seen the entrenchment of the specialist for the past hundred or so years. When all of this tacit knowledge was locked in our heads, the specialist was rewarded for knowing details.

In our industry we have seen professionals gravitate to specific languages, specific tiers in the architecture (e.g. front end vs. back end), and specific libraries or frameworks. Sometimes they will even go so far as to list specific versions of specific technologies on their resumes.

All of this specialization can be beneficial when you need resources that are deep within narrow confines. But the ubiquitous glut of available information no longer requires us to know topics to this level of detail. Market realities are also such that nobody has the budget to employ only specialists anymore. Developers have needed to learn to become designers, testers, data experts, security-aware, AI-cognizant, and capable of communicating with various stakeholders.

When your industry epitomizes unfettered change, you need to rely on generalists, not specialists; synthesizers, not knowledge keepers. How can you attract, hire, and benefit from technologists who identify as problem-solving value adders rather than programmers of a specific language? How can you encourage their growth and measure success? Even more, how do you lead them yourself?

In this talk we will discuss the rise of the generalist knowledge worker who creates value even in the face of information overflow and AI.

In the last 30 years, our industry has been upended by advancements that unlock previously unimaginable capabilities. It still seems like there is far too much failure and not enough success in IT systems though. To be successful in the 21st Century, you will need to understand where we are and where we are going. It is a complex amalgamation of developments in hardware, computer languages, architectures and how we manage information. Very few people understand all of the pieces and how they connect.

In this talk we will cover how technology changes are enabling longer term capture of business value, modernization of legacy systems, resilience in the face of increased mobile user bases, IT sovereignty and distributed, layered, heterogeneous architectures.

Given the horrifying state of tech journalism and the rapid pace of technological advancement, it is not surprising that people feel a bit overwhelmed about the state of Artificial Intelligence. On the one hand, you do not want to miss out on the opportunity of increasing productivity and lowering costs. On the other, you don't want to wake up to some embarrassing incident splashed across the front pages.

Fortunately, there are some common trends and helpful guidance that can help you make decisions about how and when to engage with AI.

In this half-day workshop, I will present an unhyped account of where we are and where we are going, with an eye toward decision-making in this rapidly changing field. Where are you likely to get the biggest benefits, and how can you avoid the largest risks?

  • A brief history of AI
  • Generative AI
  • Large Language Models and RAG
  • Multi-modal Systems
  • Bias, Costs, and Environmental Impacts
  • AI Reality Check

In today’s environment, the rapidly accelerating pace of artificial intelligence (AI) development has left many tech leaders feeling overwhelmed by both the potential benefits and the lurking risks. As the media often fuels misconceptions and sensationalism, navigating the real-world impact of AI on business strategy becomes challenging. You’re likely balancing two key priorities: seizing the opportunity to boost productivity and reduce costs, while ensuring your company avoids the kinds of embarrassing missteps that could end up splashed across the headlines.

Fortunately, you don’t need to be swayed by hype or fear. This full-day workshop will provide practical insights into the current state of AI, helping you make informed, strategic decisions about when and how to engage with these powerful technologies.

Topics Covered:

  1. A Brief History of AI
    Understanding AI’s evolution helps to put current developments in context, making it easier to discern hype from genuine innovation. We’ll cover how AI has developed over time and what key milestones have shaped the technologies we see today.

  2. Generative AI
    Generative AI has the potential to transform industries with its ability to create new content, from text and images to software code. We’ll explore how companies are leveraging generative models to boost creativity and efficiency, as well as the potential risks around intellectual property and ethical concerns.

  3. Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG)
    Learn about the latest developments in LLMs like GPT and how they are reshaping everything from customer support to internal knowledge management. We’ll also discuss Retrieval-Augmented Generation (RAG), a technique that combines the power of LLMs with specialized data retrieval systems to provide more accurate and relevant results.

  4. Multi-Modal Systems
    As AI systems evolve to process and generate content across multiple formats (text, images, audio, and video), we will look at how multi-modal AI is pushing the boundaries of what's possible and the opportunities and challenges it presents.

  5. Bias, Costs, and Environmental Impacts
    No discussion of AI is complete without addressing its ethical dimensions. We'll talk about algorithmic bias, the hidden costs of AI development (including financial and resource use), and the growing concerns around AI's environmental footprint. Understanding these trade-offs is crucial for tech leaders who want to make sustainable, ethical choices.

  6. AI Reality Check: Separating Hype from Practical Applications
    Finally, we will take a step back and conduct an AI reality check. What are AI’s genuine capabilities versus what is being exaggerated in media or sales pitches? How do you sift through the noise to make strategic decisions that add true value to your business?
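Topic 3 above describes Retrieval-Augmented Generation: retrieve the documents most relevant to a question, then feed them to the model as context. As a rough illustration of that pattern, here is a minimal Python sketch. The corpus, the word-overlap scoring, and the prompt format are all illustrative assumptions; a real deployment would use embedding vectors and a similarity index, not word overlap.

```python
# Minimal sketch of the RAG pattern: retrieve relevant documents,
# then prepend them to the prompt a language model would receive.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words.
    (A real system would compare embedding vectors instead.)"""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user's question with retrieved context before
    handing the combined text to the language model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Illustrative in-memory corpus standing in for an enterprise knowledge base.
corpus = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Our headquarters is located in Auburn, California.",
]

prompt = build_prompt("What are the support hours", corpus)
```

The resulting `prompt` would then be sent to an LLM, which answers from the supplied context rather than from its (possibly stale or hallucinated) parametric memory.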

Everyone knows security is important. Very few organizations have a robust and comprehensive sense of whose responsibility it is, however. The consequence is that they have duct-taped systems and a Policy of Hope that there will be no issues. (Spoiler: there will be.)

We will review the various roles that most organizations need to fill and probably are not currently filling. We will also focus on how the roles overlap and what should and can be expected from each of them.

Come gain insight into how an organization can start with what it has and move toward a strengthened security posture with tangible and practical guidance. You will find both direction and means of measurement to make sure you neither overshoot nor undershoot what is required.

As a tech leader, how can you help your developers take ownership of security without slowing down innovation? Developers are incentivized to prove that systems work as expected, but how often do we ensure they don’t do what they shouldn’t? Security isn’t just a checklist; it’s a mindset. But can your team define what security truly means? How do you measure enough security? More importantly, how do you drive meaningful, incremental improvements in your organization’s security posture without overwhelming your developers?

This session will provide you with a practical, actionable framework to embed security into your development process. You’ll walk away with concrete strategies to help your teams proactively design security in—without sacrificing velocity.

The typical path to senior tech leadership involves learning the
tools, tips, tricks, and artistry of using technology to further an
organization's business goals while satisfying the needs of its
customers. The collective experience of a leadership team benefits the
entire organization by providing them the vision and capacity to make
decisions in the face of technical and business change.

AI has emerged like a rocketship of disruption across our industry and
around the world. It has undermined the foundations on which this
collective wisdom has been forged with both promises of inconceivable
productivity and the fears of wide-scale obsolescence. The problem is
exacerbated by the wholesale failure of tech journalism to hold AI
companies and their advocates accountable for the wild claims they
have put out into the world.

This day-long workshop will help technology leaders evolve their thinking
to absorb these new realities into their collective wisdom. With a grounded
position on the realities of both the promises and pitfalls, I believe I can help
shape this conversation by facilitating discussion around the following topics.

  • AI Reality Check
  • The 21st Century Corporation
  • How Your Vision of AI Shapes the Role It Will Take
  • Making the Case for Adoption via Business Alignment
  • Tech Trends Influenced By and Influencing AI
  • The Role of Chief AI Officer
  • The Path from Automation to Autonomy
  • Augmenting Employees vs Replacing Them
  • Extracting the Value of Institutional Memory
  • Adopting Agents with Agency
  • The Future of Work and Education
  • The Security of AI Systems
  • AI Legal Risks and Protections
  • Ethics of Adoption
  • Proactive AI Governance
  • Measuring Success
  • The Impact on Decision Making
  • Preparing for What Is Next

Come have a deep, rich, and valuable discussion about AI that isn’t couched in greed
and fear. We will give you the tools to evaluate and select AI strategies that are reasonable,
profitable, lower risk, and based in reality.

There are certain tech trends people at least know about, such as Moore's Law, even if they don't really understand them. But there are other forces at play in and around our industry that are unknown or ignored by the ever-diminishing tech journalism profession. They help explain and predict the pressures and influences we are seeing now or soon will see.

In this talk, I will identify a variety of trends that are happening at various paces in intertwined ways at the technological, scientific, cultural, biological, and geopolitical levels and why Tech Leaders should know about them. Being aware of the visible and invisible forces that surround you can help you work with them, rather than against them. You will also be more likely to make good choices and thrive rather than being buffeted uncontrollably.

Somewhere between the positions of “AI is going to change everything” and “AI is currently an overhyped means of propping up Silicon Valley unicorn valuations” lives a useful reality: AI research is producing tools that can be exploited safely, meaningfully, and responsibly. They can save you money, speed up delivery, and create new opportunities that might not otherwise exist. The trick is understanding what they can do well and what is a big, red flag.

In this talk I will lay out a framework for considering a range of technologies that fall under the umbrella of AI and highlight the costs, benefits, and risks to help you make better choices about what to pursue and what to avoid.
