<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.4">Jekyll</generator><link href="https://www.talkingquickly.co.uk/rss.xml" rel="self" type="application/atom+xml" /><link href="https://www.talkingquickly.co.uk/" rel="alternate" type="text/html" /><updated>2025-12-29T14:38:58+00:00</updated><id>https://www.talkingquickly.co.uk/rss.xml</id><title type="html">talkingquickly</title><subtitle>Ben Dixon, Co-founder of Sona. Writing about AI, Startups, Elixir and Small Steps Taken Quickly.</subtitle><author><name>Ben Dixon</name></author><entry><title type="html">AI and the Future of Software: Thoughts on LLMs and What’s Next</title><link href="https://www.talkingquickly.co.uk/ai-and-the-future-of-software/" rel="alternate" type="text/html" title="AI and the Future of Software: Thoughts on LLMs and What’s Next" /><published>2025-10-15T00:00:00+00:00</published><updated>2025-10-15T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/ai-and-the-future-of-software</id><content type="html" xml:base="https://www.talkingquickly.co.uk/ai-and-the-future-of-software/"><![CDATA[<p>There’s a lot of talk about whether or not this is a bubble, but unless you happen to be fundraising right now, it’s probably one of the less important questions in startups and AI.</p>

<p>This is in addition to <a href="/unreasonable-things-i-believe-about-llms/">unreasonable things I believe about large language models</a>.</p>

<p>My rough beliefs as of September 2025 are:</p>

<ul>
  <li>LLMs represent a fundamental shift in technology (i.e. long term this isn’t all hype)</li>
  <li>A medium term impact is that the market for software gets much bigger because software can now do things currently being done by people (see <a href="https://www.sequoiacap.com/article/10t-ai-revolution/">Sequoia video</a> or <a href="https://open.spotify.com/episode/4QCjZlEzijkVWEQY8L4sIL?si=07e7059da5674c1a">A16Z podcast</a>)</li>
  <li>The other facet of this is that a large number of software businesses that were already possible but not viable on a CAC:LTV basis before are now viable</li>
  <li>Long term impacts are probably much bigger and incredibly hard to reason about</li>
  <li>Everything only really got started in March of this year (2025) with models like Opus; everything before that was practice, so we haven’t really seen “real” AI products in the wild yet outside of developer tooling</li>
  <li>The “super fast to $100m ARR with 1 person” thing is a distraction; over time these companies will get more competition and so will have to invest in sales, marketing and more fundamental product areas, so burn multiples and fundamentals will generally return to the levels we expect</li>
  <li>Of course there’s a bubble; it’s a blip in the long term, but a large amount of capital will still be lost where there have been high valuations for fairly simple AI wrappers. It’s not that they’re bad products, it’s just that there’s a big economic difference between “growth is fast if you’re the first to do something useful and there’s nobody else in the market” and “there’s a persistent first mover advantage once there are competitors in the market”</li>
  <li>The next wave is how to actually apply this in the Enterprise to generate a real ROI; we’ve barely seen any of this yet. This means taking fundamental systems of record and core workflow systems and reimagining them completely</li>
  <li>Chat and voice will be an unreasonably large part of this; the death of chat as a UI is not just over-stated, it’s wrong</li>
  <li>One of the next big frontiers will be the availability of training data, e.g. in robotics there’s a lack of structured training data for “doing basic tasks” and even in Enterprise workflows there’s a lack of structured training data on “what does doing x workflow well look like”. So finding ways to create or access this will be as impactful as improvements in foundation models, or more so. Companies that have proprietary data of this kind will have an unfair advantage</li>
  <li>Over the next 12 months attention will shift from huge foundation models to fine tunes of smaller models</li>
  <li>There’s a huge amount of innovation coming down the pipe in terms of model size and inference efficiency</li>
  <li>Foundation models are in a precarious position given the trajectory of open models. There’s a risk of a “Docker” type situation here where, counterintuitively, the technology turns out to be too foundational for it to be provided by a company. For this to be true there would have to be a meaningful shift in the economics of training models, or a plateau in performance that leads to all focus being on fine tuning</li>
</ul>

<p>For me that all adds up to “we’ve barely scratched the surface of what LLMs can do and how much better they can make the world”.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[There’s lots of talk about a bubble vs not but unless you happen to be fundraising right now, it’s probably one of the less important questions in startups and AI. This is in addition to unreasonable things I believe about large language models. My rough beliefs as of September 2025 are: LLMs represent a fundamental shift in technology (e.g. long term this isn’t all hype) A medium term impact is that the market for software gets much bigger because software can now do things currently being done by people (see Sequoia video or A16Z podcast) The other facet of this is that a large number of software businesses that were already possible but not viable on a CAC:LTV basis before are now viable Long term impacts are probably much bigger and incredibly hard to reason about Everything only really got started in March of this year (2025) with models like Opus, everything before that was practice, so we haven’t really seen “real” AI products in the wild yet outside of developer tooling The “super fast to $100m ARR with 1 person” thing is a distraction, over time they will get more competition and so have to invest in sales, marketing and more fundamental product areas so burn multiples and fundamentals generally will return to the levels we expect Of course there’s a bubble, it’s a blip in the long term but still a large amount of capital will be lost where there have been high valuations for fairly simple AI wrappers. 
It’s not that they’re bad products, it’s just that there’s a big economic difference between “it’s fast if you’re the first to do something useful and there’s nobody else in the market” and “there’s a persistent first mover advantage once there are competitors in the market” The next wave is how to actually apply this in the Enterprise to generate a real ROI, we’ve barely seen any of this yet. This is taking fundamental systems of record and core workflow systems and imagining them completely Chat and voice will be an unreasonably large point of this, the death of chat as a UI is not just over-stated, it’s wrong One of the next big frontiers will be the availability of training data, e.g. in robotics there’s a lack of structured training data for “doing basic tasks” and even in Enterprise workflows there’s a lack of structured training data on “what does doing x workflow well look like”. So finding ways to create or access this will be as or more impactful as improvements in foundation models. Companies that have proprietary data of this kind will have an unfair advantage Over the next 12 months attention will shift from huge foundation models to fine tunes of smaller models There’s a huge amount of innovation coming down the pipe in terms of model size and inference efficiency Foundation models are in a precarious position given the trajectory of open models. There’s a risk of a “Docker” type situation here where un-intuitively the technology turns out to be too foundational for it to be provided by a company. 
For this to be true there would have to be a meaningful shift in the economics of training models or a plateau in performance that leads to all focus being on fine tuning For me that all adds up to “we’ve barely scratched the surface of what LLM’s can do and how much better they can make the world”.]]></summary></entry><entry><title type="html">Software is Eating Labor: How LLMs Are Transforming the Economics of Work</title><link href="https://www.talkingquickly.co.uk/software-is-eating-labor/" rel="alternate" type="text/html" title="Software is Eating Labor: How LLMs Are Transforming the Economics of Work" /><published>2025-10-15T00:00:00+00:00</published><updated>2025-10-15T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/software-is-eating-labor</id><content type="html" xml:base="https://www.talkingquickly.co.uk/software-is-eating-labor/"><![CDATA[<p>The “Software is eating labor” thesis is driving a lot of discussion and investment at the moment. For more context, see <a href="https://www.sequoiacap.com/article/10t-ai-revolution/">Sequoia’s analysis</a> or this <a href="https://open.spotify.com/episode/4QCjZlEzijkVWEQY8L4sIL?si=07e7059da5674c1a">A16Z podcast episode</a>.</p>

<p>The underlying idea is that LLMs mean that software can do a bunch of stuff that previously only people could do, and it can do it at a lower cost.</p>

<p>This means the market for software gets bigger because it becomes more economically sensible to have an AI do it than to have a person do it.</p>

<p>You can sort of think of what people do as falling into three categories:</p>

<ol>
  <li>Stuff which is basically words (e.g., talking and putting words into a computer)</li>
  <li>Stuff which is moving things around in the physical world (everything from waiting tables to brain surgery)</li>
  <li>Stuff which is forming relationships with other human beings</li>
</ol>

<p>Previously, software could do a little bit of stuff which is basically words and very little moving around of things in the real world or forming relationships with other human beings.</p>

<p>It could facilitate moving stuff around in the world (e.g., the admin and planning component) and make it easier for humans to form relationships (e.g., CRM), but did very little beyond that.</p>

<p>It could do a little bit of stuff which is basically words as long as it didn’t need to really understand those words. So examples of this include executing computer code that a human has written, managing support tickets that a human was dealing with, or storing HR data about people which a human created and interpreted.</p>

<p>In the medium term, the big shift large language models drive is that software can now understand words.</p>

<p>So where a task is heavily made up of understanding words and taking other actions which are basically manipulating words, software can do far more of the overall task.</p>

<p>Outside of software engineering, customer support is a good example. Whether it’s via email or phone, agents can now — in many situations — do as good a job as a person at conversing with someone asking for support, taking actions to solve their problem, and managing that back and forth.</p>

<p>Where they can’t take the actions or the action needs human oversight, they can delegate this to a person, so it’s not a full replacement, but maybe 90% of the work can be done by agents. That effectively means one human can handle 10x the load if they use an agent.</p>

<p>So one way to categorize work done by humans at the moment is “how much of what they do is made up of manipulating words.”</p>

<p>This will typically give you two components:</p>

<ol>
  <li>The job itself (answering support tickets, brain surgery, waiting tables)</li>
  <li>The admin that goes with the job (planning, coordinating, incident reports, TPS reports, etc.)</li>
</ol>

<p>For answering support tickets, you can imagine that 10% of your time is spent on admin and 90% on answering tickets. Then perhaps 90% of the admin and 90% of the tickets fall into the category of “an AI can do this by manipulating words,” which then means that both “admin” and “the job” can be reduced by 90%, so one support person can now handle about 10x the workload.</p>
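<p>A minimal sketch of that arithmetic, using the illustrative numbers above (nothing here is real data, it just makes the 10x claim concrete):</p>

```python
# Capacity arithmetic from the support-ticket example (illustrative numbers only).

def capacity_multiplier(automatable_fraction: float) -> float:
    """If a fraction f of the total work can be done by an agent, the human
    only does the remaining (1 - f), so one person covers 1 / (1 - f)
    times the original workload."""
    return 1.0 / (1.0 - automatable_fraction)

admin, tickets = 0.10, 0.90                    # 10% admin, 90% answering tickets
automatable = 0.9 * admin + 0.9 * tickets      # 90% of each is word manipulation
print(round(capacity_multiplier(automatable)))  # -> 10, i.e. roughly 10x the workload
```

<p>The same formula also shows why the effect is so non-linear: automating half the work only doubles capacity, while automating 90% gives 10x.</p>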

<p>Customer Support and Law are the two examples that get used a lot at the moment because they are somewhat outliers in that they are mainly about manipulating words and there are an awful lot of people doing them. So there’s a big opportunity to build LLM-powered software there.</p>

<p>These get noticed first primarily because they also have a very minimal component of “moving things around in the real world.” And without a big leap forward in robotics, AI still struggles to move stuff around in the real world.</p>

<p>But probably the more exciting opportunity is the much higher volume of jobs which have an essential component of moving things around in the real world but where a disproportionate amount of time is not spent doing this — it’s spent on either admin or sub-tasks that are essentially moving words around.</p>

<p>Nursing is a great example. Studies show that nurses spend upwards of 25% of their time on generic administrative tasks. In hospitality, roles which you’d typically consider primarily frontline can spend upwards of 33% of their time on such administrative tasks.</p>

<p>This is where I think we’ll see some of the most impactful and economically significant advances from LLMs over the next 2-4 years.</p>

<hr />

<p><em>If you found this interesting, you might also enjoy:</em></p>
<ul>
  <li><em><a href="/ai-and-the-future-of-software/">AI and the Future of Software: Thoughts on LLMs and What’s Next</a> — My broader thoughts on where AI is heading and what’s coming next</em></li>
  <li><em><a href="/unreasonable-things-i-believe-about-llms/">Unreasonable Things I Believe About LLMs</a> — Some contrarian takes on large language models</em></li>
</ul>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[The “Software is eating labor” thesis is driving a lot of discussion and investment at the moment. For more context, see Sequoia’s analysis or this A16Z podcast episode. The underlying idea is that LLMs mean that software can do a bunch of stuff that previously only people could do, and it can do it at a lower cost. This means the market for software gets bigger because it becomes more economically sensible to have an AI do it than having a person do it. You can sort of think of what people do as having three angles: Stuff which is basically words (e.g., talking and putting words into a computer) Stuff which is moving things around in the physical world (everything from waiting tables to brain surgery) Stuff which is forming relationships with other human beings Previously, software could do a little bit of stuff which is basically words and very little moving around of things in the real world or forming relationships with other human beings. It could facilitate moving stuff around in the world (e.g., the admin and planning component) and make it easier for humans to form relationships (e.g., CRM), but did very little beyond that. It could do a little bit of stuff which is basically words as long as it didn’t need to really understand those words. So examples of this include executing computer code that a human has written, managing support tickets that a human was dealing with, or storing HR data about people which a human created and interpreted. In the medium term, the big shift large language models drive is that software can now understand words. So where a task is heavily made up of understanding words and taking other actions which are basically manipulating words, software can do far more of the overall task. Outside of software engineering, customer support is a good example. 
Whether it’s via email or phone, agents can now — in many situations — do as good a job as a person at conversing with someone asking for support, taking actions to solve their problem, and managing that back and forth. Where they can’t take the actions or the action needs human oversight, they can delegate this to a person, so it’s not a full replacement, but maybe 90% of the work can be done by agents. Which effectively means one human can handle 10x the load if they use an agent. So one way to categorize work done by humans at the moment is “how much of what they do is made up of manipulating words.” This will typically give you two components: The job itself (answering support tickets, brain surgery, waiting tables) The admin that goes with the job (planning, coordinating, incident reports, TPS reports, etc.) For answering support tickets, you can imagine that 10% of your time is spent on admin and 90% on answering tickets. Then perhaps 90% of the admin and 90% of the tickets fall into the category of “an AI can do this by manipulating words,” which then means that both “admin” and “the job” can be reduced by 90%, so one support person can now handle about 10x the workload. Customer Support and Law are the two examples that get used a lot at the moment because they are somewhat outliers in that they are mainly about manipulating words and there are an awful lot of people doing them. So there’s a big opportunity to build LLM-powered software there. These get noticed first primarily because they also have a very minimal component of “moving things around in the real world.” And without a big leap forward in robotics, AI still struggles to move stuff around in the real world. 
But probably the more exciting opportunity is the much higher volume of jobs which have an essential component of moving things around in the real world but where a disproportionate amount of time is not spent doing this — it’s spent on either admin or sub-tasks that are essentially moving words around. Nursing is a great example. Studies show that nurses spend upwards of 25% of their time on generic administrative tasks. In hospitality, roles which you’d typically consider primarily frontline can spend upwards of 33% of their time on such administrative tasks. This is where I think we’ll see some of the most impactful and economically significant advances from LLMs over the next 2-4 years. If you found this interesting, you might also enjoy: AI and the Future of Software: Thoughts on LLMs and What’s Next — My broader thoughts on where AI is heading and what’s coming next Unreasonable Things I Believe About LLMs — Some contrarian takes on large language models]]></summary></entry><entry><title type="html">Some unreasonable things I believe about large language models</title><link href="https://www.talkingquickly.co.uk/unreasonable-things-i-believe-about-llms/" rel="alternate" type="text/html" title="Some unreasonable things I believe about large language models" /><published>2025-08-26T00:00:00+00:00</published><updated>2025-08-26T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/unreasonable-things-i-believe-about-llms</id><content type="html" xml:base="https://www.talkingquickly.co.uk/unreasonable-things-i-believe-about-llms/"><![CDATA[<p>The more I work with large language models, the more blown away I am by how fundamentally I think they’re going to change everything about how we build software.</p>

<p>Belief and intuition are words engineering and science often object to.</p>

<p>But startups are as much about belief and intuition as they are data; by the time it’s proven with data, the opportunity is gone.</p>

<p>So I can’t prove any of the below, but based on a year of being deeply immersed in building with AI, I’d happily bet large sums of money that these things turn out to be true:</p>

<ol>
  <li>Every time you say “models can’t / won’t be able to do this” you’ll be proven wrong within 12 months</li>
  <li>OK, not every time, just 99% of the time, but there’s no upside in being the person who says “this won’t work”, and lots of upside in being the person who proves that it can. So if you want to do interesting work that makes a difference in the world, be that person</li>
  <li>“What if I just let the model figure this out” should be the mantra of everyone building software today</li>
  <li>LLMs aren’t actually much less deterministic than regular software or at least regular software development, so a good answer to a lot of the “how do we make sure the model…” is just “you don’t” (but yes, you still need evals!)</li>
  <li>A disproportionate number of tasks we use traditional ML for will turn out to be replaced by LLMs</li>
  <li>The ML tasks that aren’t will be largely replaced by LLMs writing and maintaining their own ML sub-agents; this will happen bit by bit, then all at once</li>
  <li>Large language models are already “better” at writing code than people (quality, maintainability, understandability, convention following etc.)</li>
  <li>Pretty much every example of “models can’t write this type of code” is just an instance of “the user hasn’t learned how to use models for this yet”</li>
  <li>The actual speed-up available to developers using LLMs as they are today, with no further improvement, is closer to 10x than 2x, pretty much irrespective of task; the difference is purely down to the level of investment that’s been made in learning the tools</li>
  <li>The exception to that is that some codebases need re-engineering to be optimised to prioritise LLMs working on them over people. We should embrace and prioritise this optimisation. Often this will look like moving towards smaller standalone services earlier than we otherwise would have</li>
  <li>Chat + on demand UIs are going to replace an awful lot of special purpose software</li>
  <li>Weirdly that will lead to way more software and software engineering jobs not less</li>
  <li>A disproportionate number of “rules engines” will be replaced with plain text descriptions and LLM harnesses; and they’ll work far better than what they replaced</li>
</ol>
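<p>On the “you still need evals” point, the shape of a minimal eval harness is roughly this. The <code>call_model</code> function here is a hypothetical stub standing in for a real LLM call, not any actual API; the point is just the structure of cases, checks, and a pass rate:</p>

```python
# Minimal eval-harness sketch. `call_model` is a stand-in for whatever
# LLM call you actually make; it is stubbed here so the example runs.

def call_model(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API.
    canned = {"capital of France?": "Paris", "2 + 2 = ?": "4"}
    return canned.get(prompt, "I don't know")

def run_evals(cases: list[tuple[str, str]]) -> float:
    """Run each (prompt, expected) case and return the pass rate."""
    passed = sum(1 for prompt, expected in cases
                 if expected.lower() in call_model(prompt).lower())
    return passed / len(cases)

cases = [("capital of France?", "Paris"),
         ("2 + 2 = ?", "4"),
         ("capital of Spain?", "Madrid")]
print(run_evals(cases))  # 2 of 3 cases pass against the stub above
```

<p>Real eval suites are fancier (LLM-as-judge, semantic matching, per-category breakdowns), but the core loop of “fixed cases in, pass rate out” is exactly this.</p>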

<p>I’m not making any attempt to deny that the above is somewhat crazy, I just think it’s probably true.</p>

<p>If you believe equally crazy things, I’d love to chat, please get in touch.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[The more I work with large language models, the more blown away I am by how fundamentally I think they’re going to change everything about how we build software. Belief and intuition are words engineering and science often object to. But startups are as much about belief and intuition as they are data; by the time it’s proven with data, the opportunity is gone. So I can’t prove any of the below, but based on a year of being deeply immersed in building with AI, I’d happily bet large sums of money that these things turn out to be true: Every time you say “models can’t / won’t be able to do this” you’ll be proven wrong within 12 months OK, not every time, just 99% of the time, but there’s no upside in being the person who says “this won’t work”, and lots of upside in being the person who proves that it can. So if you want to do interesting work that makes a difference in the world, be that person “What if I just let the model figure this out” should be the mantra of everyone building software today LLMs aren’t actually much less deterministic than regular software or at least regular software development, so a good answer to a lot of the “how do we make sure the model…” is just “you don’t” (but yes, you still need evals!) A disproportionate number of tasks we use traditional ML for will turn out to be replaced by LLMs The ML tasks that aren’t will be largely replaced by LLMs writing and maintaining their own ML sub agents, this will happen bit by bit then all at once Large language models are already “better” at writing code than people (quality, maintainability, understandability, convention following etc.) 
Pretty much every example of “models can’t write this type of code” is just an instance of “the user hasn’t learned how to use models for this yet” The actual speed up available to developers using LLMs as they are today with no further improvement is closer to 10x than 2x, pretty much irrespective of task, the difference is purely down to the level of investment that’s been made in learning the tools The exception to that is that some codebases need re-engineering to be optimised to prioritise LLMs working on them over people. We should embrace and prioritise this optimisation. Often this will look like moving towards smaller standalone services earlier than we otherwise would have Chat + on demand UIs are going to replace an awful lot of special purpose software Weirdly that will lead to way more software and software engineering jobs not less A disproportionate number of “rules engines” will be replaced with plain text descriptions and LLM harnesses; and they’ll work far better than what they replaced I’m not making any attempt to refute that the above is somewhat crazy, I just think it’s probably true. 
If you believe equally crazy things, I’d love to chat, please get in touch.]]></summary></entry><entry><title type="html">Punchcards and why there are now only apprenticeships and management roles in software engineering</title><link href="https://www.talkingquickly.co.uk/punchcards-apprenticeships-and-management-in-software-engineering/" rel="alternate" type="text/html" title="Punchcards and why there are now only apprenticeships and management roles in software engineering" /><published>2025-08-25T00:00:00+00:00</published><updated>2025-08-25T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/punchcards-apprenticeships-and-management-in-software-engineering</id><content type="html" xml:base="https://www.talkingquickly.co.uk/punchcards-apprenticeships-and-management-in-software-engineering/"><![CDATA[<p>A job that used to exist is creating punch cards for computers to read. Then keyboards and magnetic storage became a thing and fairly rapidly there wasn’t really this job anymore.</p>

<p>Likewise, there used to be this job of writing code by hand. But given the level LLM-powered coding assistants have gotten to, now there sort of isn’t anymore.</p>

<p>The job that remains is—to me at least—10x more fun, and that’s managing LLM agents as they write code.</p>

<p>But management is famously hard.</p>

<p>There are many different failure modes when people start managing, but there’s one that gets basically everyone when they start out:</p>

<blockquote>
  <p>I’ll just do it myself, it’ll be faster</p>
</blockquote>

<p>New managers almost without fail make this mistake at least a few times. Rather than coaching others, they decide to do it themselves.</p>

<p>Each time they do this, a few things happen:</p>

<ol>
  <li>The would-be manager doesn’t learn more about managing</li>
  <li>The person they were managing doesn’t learn how to do something</li>
</ol>

<p>Eventually—if they avoid the disillusionment that goes with this phase—they realize this doesn’t scale.</p>

<p>Commonly they then move to micromanaging.</p>

<blockquote>
  <p>I won’t do it for you, but I will tell you exactly how I want it done to an incredible level of detail</p>
</blockquote>

<p>This is the most dangerous phase. They’re still doing most of the work, but it sort of feels like delegation. This is where new managers are at the greatest risk of burnout.</p>

<p>It’s the job of whoever is coaching and managing that new manager to guide them through these phases to the level where they’re really managing.</p>

<p>The same curve applies to managing LLM agents.</p>

<p>People start with autocomplete and move on to giving incredibly specific briefs where they’re still doing all of the thinking.</p>

<p>This feels sort of like using LLM coding agents. But it’s as far away from agentic coding as micromanagement is from true management.</p>

<p>It also comes with the same style of problems:</p>

<ul>
  <li>Every time someone does it, they sacrifice an opportunity to learn how to better collaborate with an agent to do it</li>
  <li>Every time someone does it, they’re less likely to iterate on their environment and tools for collaborating with agents, and they’re also less likely to optimize the codebase for agentic changes</li>
</ul>

<p>So the single most important thing if we want to maximize the quality and rate at which we can deliver software is to work out the playbook for ramping engineers who previously thought of themselves as individual contributors to the point where they’re effectively managing LLM coding agents.</p>

<p>We know they are there when their process of building software looks much more like managing and collaborating with a team than it does writing code in an editor.</p>

<p>But it’s a strange dynamic now because you’re effectively either an apprentice who’s learning to do this, or a manager who’s established a base layer of competence in doing this. The “individual contributor” layer that’s existed for so long sort of doesn’t anymore.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[A job that used to exist is creating punch cards for computers to read. Then keyboards and magnetic storage became a thing and fairly rapidly there wasn’t really this job anymore. Likewise, due to the level LLM-powered coding assistants have gotten to, there used to be this job of writing code by hand. But now there sort of isn’t anymore. The job that remains is—to me at least—10x more fun, and that’s managing LLM agents as they write code. But management is famously hard. There are many different failure modes when people start managing, but there’s one that gets basically everyone when they start out: I’ll just do it myself, it’ll be faster New managers almost without fail make this mistake at least a few times. Rather than coaching others, they decide to do it themselves. Each time they do this, a few things happen: The would-be manager doesn’t learn more about managing The person they were managing doesn’t learn how to do something Eventually—if they avoid the disillusionment that goes with this phase—they realize this doesn’t scale. Commonly they then move to micromanaging. I won’t do it for you, but I will tell you exactly how I want it done to an incredible level of detail This is the most dangerous phase. They’re still doing most of the work, but it sort of feels like delegation. This is where new managers are at the greatest risk of burnout. It’s the job of whoever is coaching and managing that new manager to guide them through these phases to the level where they’re really managing. The same curve applies to managing LLM agents. 
People start with autocomplete and move on to giving incredibly specific briefs where they’re still doing all of the thinking. This feels sort of like using LLM coding agents. But it’s as far away from agentic coding as micromanagement is from true management. It also comes with the same style of problems: Every time someone does it, they sacrifice an opportunity to learn how to better collaborate with an agent to do it Every time someone does it, they’re less likely to iterate on their environment and tools for collaborating with agents, they’re also less likely to optimize the codebase for agentic changes So the single most important thing if we want to maximize the quality and rate at which we can deliver software is to work out the playbook for ramping engineers who previously thought of themselves as individual contributors to the point where they’re effectively managing LLM coding agents. We know they are there when their process of building software looks much more like managing and collaborating with a team than it does writing code in an editor. But it’s a strange dynamic now because you’re effectively either an apprentice who’s learning to do this, or a manager who’s established a base layer of competence in doing this. 
The “individual contributor” layer that’s existed for so long sort of doesn’t anymore.]]></summary></entry><entry><title type="html">Why you should try vibe coding from your phone</title><link href="https://www.talkingquickly.co.uk/vibe-coding-from-your-phone/" rel="alternate" type="text/html" title="Why you should try vibe coding from your phone" /><published>2025-08-25T00:00:00+00:00</published><updated>2025-08-25T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/vibe-coding-from-your-phone</id><content type="html" xml:base="https://www.talkingquickly.co.uk/vibe-coding-from-your-phone/"><![CDATA[<p>My secondary coding environment is a Proxmox server I use to spin up LXC dev containers which are just Debian containers with tmux and Neovim and all my dev dependencies on them.</p>

<p>I can get to that using Termius on a phone or tablet and do stuff. My stack is already terminal based so the change vs a laptop is pretty minimal.</p>

<p>Up until recently this existed mainly as a backup: it meant I could handle emergencies without a laptop.</p>

<p>I’ve always imagined it “freeing” me from a laptop but it just hasn’t stuck. Don’t get me wrong, Vim works using an iPhone keyboard but only in the same way a bicycle “works” for off-roading.</p>

<p>But recently I added Claude Code to this configuration.</p>

<p>Increasingly I’ve found myself not bothering to get my laptop because whatever I need to do, I could just fire up a Claude instance inside tmux and ask it to do it for me.</p>

<p>Side projects I’ve been meaning to continue for years are coming along again, I’ve redesigned my blog, revamped my local LLM setup and—for reasons I’ll explain in another post—built a simulator of an LLM meditating.</p>

<p>I’ve created more and had more fun doing it than any time I can remember.</p>

<p>All from a phone.</p>

<p>In the same way I learned Vim by disabling the mouse and arrow keys on my computer, using a coding agent in an environment where using Vim is incredibly painful has forced me to get really good at that interaction model.</p>

<p>I never looked back from Vim and I can’t imagine ever looking back from this.</p>

<p>If you haven’t tried it, I thoroughly recommend having a go at banning yourself from your editor and working only through an agent for a few weeks.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[My secondary coding environment is a Proxmox server I use to spin up LXC dev containers which are just Debian containers with tmux and Neovim and all my dev dependencies on them. I can get to that using Termius on a phone or tablet and do stuff. My stack is already terminal based so the change vs a laptop is pretty minimal. Up until recently this has existed mainly as a backup, it means I can handle emergencies without a laptop. I’ve always imagined it “freeing” me from a laptop but it just hasn’t stuck. Don’t get me wrong, Vim works using an iPhone keyboard but only in the same way a bicycle “works” for off-roading. But recently I added Claude Code to this configuration. Increasingly I’ve found myself not bothering to get my laptop because whatever I need to do, I could just fire up a Claude instance inside tmux and ask it to do it for me. Side projects I’ve been meaning to continue for years are coming along again, I’ve redesigned my blog, revamped my local LLM setup and—for reasons I’ll explain in another post—built a simulator of an LLM meditating. I’ve created more and had more fun doing it than any time I can remember. All from a phone. In the same way I learned Vim by disabling the mouse and arrow keys on my computer; using a coding agent in an environment where using Vim is incredibly painful has forced me to get really good at that interaction model. I never looked back from Vim and I can’t imagine ever looking back from this. 
If you haven’t tried it, I thoroughly recommend having a go banning yourself from your editor and working only through an agent for a few weeks.]]></summary></entry><entry><title type="html">Conversations as Programming Primitives</title><link href="https://www.talkingquickly.co.uk/conversations-as-programming-primitives/" rel="alternate" type="text/html" title="Conversations as Programming Primitives" /><published>2025-08-24T00:00:00+00:00</published><updated>2025-08-24T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/conversations-as-programming-primitives</id><content type="html" xml:base="https://www.talkingquickly.co.uk/conversations-as-programming-primitives/"><![CDATA[<p>Over the last few years a reliable way to be wrong has been to predict “chat won’t work as an interface for that” and then wait a few weeks.</p>

<p>So it increasingly looks like conversations as a general purpose user interface are here to stay.</p>

<p>Conversations as a fundamental primitive of programming and data storage have been getting less attention.</p>

<p>An LLM-powered CRM agent which makes recommendations about deals could be built by giving an agent access to tools that query certain data, carefully constructing input prompts and then building checking logic which ensures the agent isn’t repeating itself.</p>

<p>This is the traditional model of programming: the LLM is a special-purpose tool (like any method), and we write methods which do other things and chain these together in highly structured ways to achieve (pseudo) predictable outputs.</p>

<p>Another way to build that CRM agent is to write a system prompt along the lines of:</p>

<blockquote>
  <p>You’re going to receive a stream of messages which are updates on a sales deal. Your job is to provide recommendations by calling the set_recommendations tool, don’t repeat yourself too often</p>
</blockquote>

<p>Here the <code class="language-plaintext highlighter-rouge">set_recommendations</code> tool shows the user a list of recommendations. This step is entirely optional; we could just ask for a bulleted list.</p>
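<p>As a concrete illustration, here’s a minimal sketch of that loop in Python. The <code class="language-plaintext highlighter-rouge">call_llm</code> function is a hypothetical stand-in for any chat-completion API, and the canned response it returns exists only so the sketch runs end to end. The point is that the only persistent state is the raw message list; structure appears only at the tool-call boundary.</p>

```python
# Sketch of "conversation as the data model": the only state we keep is the
# raw message list; structure appears only at the set_recommendations boundary.

SYSTEM_PROMPT = (
    "You're going to receive a stream of messages which are updates on a "
    "sales deal. Your job is to provide recommendations by calling the "
    "set_recommendations tool, don't repeat yourself too often."
)

current_recommendations: list[str] = []

def set_recommendations(recs: list[str]) -> None:
    """The single structured output; everything upstream is plain conversation."""
    current_recommendations[:] = recs

def call_llm(messages: list[dict]) -> dict:
    # Hypothetical stand-in for a real chat-completion call; returns a
    # canned tool-call-shaped response so the sketch is runnable.
    return {"tool": "set_recommendations",
            "args": {"recs": ["Follow up on the pricing concerns"]}}

def handle_update(conversation: list[dict], update: str) -> None:
    # Append the raw update to the conversation and let the model decide
    # whether (and how) to call the tool.
    conversation.append({"role": "user", "content": update})
    response = call_llm(conversation)
    if response.get("tool") == "set_recommendations":
        set_recommendations(response["args"]["recs"])

conversation = [{"role": "system", "content": SYSTEM_PROMPT}]
handle_update(conversation, "Prospect asked for a 20% discount.")
```

<p>Notice there’s no schema for deals or recommendations anywhere upstream of the tool call: the conversation list is, in effect, the database.</p>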

<p>This approach has some interesting properties:</p>

<ul>
  <li>There’s not a whole lot of programming going on, we just sort of let the LLM do it</li>
  <li>We defer the structuring of data until the tool call to turn it into something structured (which we could skip completely and just use bullets)</li>
  <li>As a result of deferred structuring, our data model is “just a bunch of text”</li>
</ul>

<p>Deferred structuring has some interesting implications for portability, both within a loose application boundary and across unrelated or competing applications.</p>

<p>Since the structuring is deferred until the point of output, putting the data through an alternative process or into another system is relatively easy as long as they can accept conversation text as an input.</p>

<p>If some standardisation emerges around a structured or semi-structured representation of a conversation, it becomes entirely trivial.</p>
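<p>The role/content message list used by most chat APIs is already a de facto semi-structured representation. A minimal sketch (the message contents here are invented for illustration):</p>

```python
import json

# A conversation with minimal structure: an ordered list of role/content
# messages. Because the data model is this simple, moving it to another
# process or system is a serialization step, not a migration.
conversation = [
    {"role": "system", "content": "You receive updates on a sales deal."},
    {"role": "user", "content": "Prospect asked for a 20% discount."},
    {"role": "assistant", "content": "Recommend holding price; offer annual billing."},
]

def export_conversation(messages: list[dict]) -> str:
    """Serialize for any system that accepts conversation text as input."""
    return json.dumps(messages, indent=2)

def import_conversation(payload: str) -> list[dict]:
    """Load a conversation exported by any system using the same shape."""
    return json.loads(payload)

# Round-tripping loses nothing because there was no app-specific schema.
round_tripped = import_conversation(export_conversation(conversation))
```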

<p>Which leaves me with two main takeaways:</p>

<ul>
  <li>Engineers should increasingly be asking themselves “do I really need anything more than a conversation to solve this problem”</li>
  <li>Ownership and portability of conversation data is going to become as important as ownership of structured data - probably more so</li>
</ul>

<p>“Let the model do the work” is increasingly becoming a very good rule of thumb.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[Over the last few years a reliable way to be wrong has been to predict “chat won’t work as an interface for that” and then wait a few weeks. So it increasingly looks like conversations as a general purpose user interface are here to stay. Conversations as a fundamental primitive of programming and data storage have been getting less attention. An LLM-powered CRM agent which makes recommendations about deals could be built by giving an agent access to tools that query certain data, carefully constructing input prompts and then building checking logic which ensures the agent isn’t repeating itself. This is the traditional model of programming, the LLM is a special purpose tool (like any method) and then we write methods which do other things and chain these together in highly structured ways to achieve (pseudo) predictable outputs. Another way to build that CRM agent is to write a system prompt along the lines of: You’re going to receive a stream of messages which are updates on a sales deal. Your job is to provide recommendations by calling the set_recommendations tool, don’t repeat yourself too often Where the set recommendations tool shows the user a list of recommendations. This step being entirely optional, we could just ask for a bulleted list. This approach has some interesting properties: There’s not a whole lot of programming going on, we just sort of let the LLM do it We defer the structuring of data until the tool call to turn it into something structured (which we could skip completely and just use bullets) As a result of deferred structuring, our data model is “just a bunch of text” Deferred structuring has some interesting implications for portability, both within a loose application boundary and across unrelated or competing applications. 
Since the structuring is deferred until the point of output, putting the data through an alternative process or into another system is relatively easy as long as they can accept conversation text as an input. If there is some standardization as to a structured or semi-structured representation of a conversation, it becomes entirely trivial. Which leaves me with two main takeaways: Engineers should increasingly be asking themselves “do I really need anything more than a conversation to solve this problem” Ownership and portability of conversation data is going to become as - probably more - important as ownership of structured data “Let the model do the work” is increasingly becoming a very good rule of thumb.]]></summary></entry><entry><title type="html">You won’t get people excited about AI by shouting at them</title><link href="https://www.talkingquickly.co.uk/you-wont-get-people-excited-about-ai-by-shouting-at-them/" rel="alternate" type="text/html" title="You won’t get people excited about AI by shouting at them" /><published>2025-08-05T00:00:00+00:00</published><updated>2025-08-05T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/you-wont-get-people-excited-about-ai-by-shouting-at-them</id><content type="html" xml:base="https://www.talkingquickly.co.uk/you-wont-get-people-excited-about-ai-by-shouting-at-them/"><![CDATA[<p>Over the last 12 months I’ve fallen back in love with software engineering.</p>

<p>LLM-assisted coding is everything I’ve always wanted from programming. The craft remains but I can create things at something closer to the rate I can conceive of them, rather than the 10x delta there’s always been between the two.</p>

<p>But when it comes to technology I enjoy shiny new things, I always have. And I like to learn new things by being thrown in the deep end without an instruction manual and left to “figure it out”.</p>

<p>For anyone who — like me — experiences nothing but excitement and optimism when confronted with AI, it’s easy to be bemused by people who react differently.</p>

<p>But for a lot of people, including people who are <em>also</em> excited by the change, it’s not all upside.</p>

<p>Individual jobs, entire companies and even countries are coming to terms, almost overnight, with the idea that over the next decade, success will be determined heavily by the level of mastery achieved with AI.</p>

<p>For many people — software engineers included — this means a fundamental shift in the skills needed to earn a living.</p>

<p>This is unsettling.</p>

<p>A dynamic which has unfolded across several corners of the internet is one of hostility, where those embracing AI shout at those who aren’t yet about how wrong they are, and are then surprised when this doesn’t change their minds!</p>

<p>There aren’t many rules that turn out to be consistently right (apart perhaps from “nothing good ever came from staying out after midnight”) but “shouting at people and telling them they’re stupid never made them agree with you” is one that has.</p>

<p>Different people adapt to change at different rates. Many of the concerns about AI are valid (if often overstated). It’s broadly beneficial in any group to have a mixture of approaches to change; it acts as a useful smoothing function.</p>

<p>The people who embrace change rapidly make sure we don’t miss out on opportunities; the people who adapt more slowly help us avoid whiplash and discarding valuable parts of what came before.</p>

<p>AI has the potential to make a lot of people’s lives and jobs a lot better.</p>

<p>But we’ll persuade people of that by bringing them on the journey, not by shouting at them and telling them they’re wrong.</p>

<p>And AI is way too much fun to waste time fighting about it.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[Over the last 12 months I’ve fallen back in love with software engineering. LLM-assisted coding is everything I’ve always wanted from programming. The craft remains but I can create things at something closer to the rate I can conceive of them, rather than the 10x delta there’s always been between the two. But when it comes to technology I enjoy shiny new things, I always have. And I like to learn new things by being thrown in the deep end without an instruction manual and left to “figure it out”. For anyone who — like me — experiences nothing but excitement and optimism when confronted with AI, it’s easy to be bemused by people who react differently. But for a lot of people, including people who are also excited by the change, it’s not all upside. Individual jobs, entire companies and even countries are overnight coming to terms with the idea that over the next decade, success will be determined heavily by the level of mastery achieved with AI. For many people — software engineers included — this means a fundamental shift in the skills needed to earn a living. This is unsettling. A dynamic which has unfolded across several corners of the internet is one of hostility. Where those embracing AI shout at those who aren’t yet about how wrong they are, and are then surprised when this doesn’t change their minds! There aren’t many rules that turn out to be consistently always right (apart perhaps from “nothing good ever came from staying out after midnight”) but “shouting at people and telling them they’re stupid never made them agree with you” is one that has. Different people adapt to change at different rates. Many of the concerns about AI are valid (if often overstated). It’s broadly beneficial in any group to have a mixture of approaches to change, it acts as a useful smoothing function. 
The people who embrace change rapidly make sure we don’t miss out on opportunities, the people who adapt more slowly help us avoid whiplash and discarding valuable parts of what came before. AI has the potential to make a lot of people’s lives and jobs a lot better. But we’ll persuade people of that by bringing them on the journey, not by shouting at them and telling them they’re wrong. And AI is way too much fun to waste time fighting about it.]]></summary></entry><entry><title type="html">Vibe coding is real, and that’s a good thing</title><link href="https://www.talkingquickly.co.uk/vibe-coding-is-real/" rel="alternate" type="text/html" title="Vibe coding is real, and that’s a good thing" /><published>2025-05-06T00:00:00+00:00</published><updated>2025-05-06T00:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/vibe-coding-is-real</id><content type="html" xml:base="https://www.talkingquickly.co.uk/vibe-coding-is-real/"><![CDATA[<p><strong>tldr;</strong> AI Coding (“vibe coding”) is real and it has fundamentally changed what it means to be a software engineer forever.</p>

<p>The single thing that will define which businesses and which engineers are successful over the coming years is how quickly they are able to adapt to this.</p>

<p>This post is a summary of the topics that are coming up again and again when discussing this with other engineers and engineering leaders.</p>

<!--more-->

<h2 id="what-is-ai-coding-its-not-auto-complete">What is AI Coding (it’s not auto-complete)</h2>

<p>By AI coding I’m talking about a dynamic where an engineer builds software by talking in natural language to some sort of software agent; asking it to perform actions that result in the modification of one or more files without specifying exactly what those modifications should be or how they should be made.</p>

<p>So I’m explicitly excluding:</p>

<ol>
  <li>AI powered auto complete, even if it’s completing whole files</li>
  <li>Stubbing function heads and having AI fill in the blanks</li>
</ol>

<p>From this definition of AI coding.</p>

<p>So prompting an AI:</p>

<blockquote>
  <p>“Could you modify this to spawn tasks for each item and keep track of the spawned tasks, I’ve attached a screenshot of what the UI should look like and given you access to both the UI and the underlying interface”</p>
</blockquote>

<p><strong>is</strong> AI coding.</p>

<p>But for the purposes of this post, defining a function head <code class="language-plaintext highlighter-rouge">do_thing_using_async_tasks</code>, writing comments that explain how it should work then having an AI “fill in the blanks” is <strong>not</strong>.</p>

<h2 id="the-mental-model-is-more-important-than-the-tool">The mental model is more important than the tool</h2>

<p>When evaluating AI coding, people are spending a lot of time talking about “which tool” and “which model”.</p>

<p>For all practical purposes the leading foundation models perform similarly and most of the tools use these foundation models in approximately similar ways. Different tool and model combinations eke out different advantages daily, but mastery of a model-tool combination is - excluding any as-yet-unannounced step changes - far more important than which combination you pick.</p>

<p>The mental model I’ve found to be most effective for AI coding is that each and every engineer is now pair programming with a new joiner.</p>

<p>This new joiner is incredibly smart, knows the language and libraries almost perfectly and can <strong>ship code at approximately 50x the speed of a regular engineer</strong>. Importantly we’re not talking about typing speed here - very few programmers are constrained by how fast they can type - we’re talking about the time taken to move from an abstract idea of wanting to instruct a computer to do something, to having written, iterated on and debugged the code to make it do that thing.</p>

<p>They are exceptionally good at following instructions, matching existing style and reasoning about complex pieces of existing code.</p>

<p>They have never seen your codebase before and don’t know the business domain.</p>

<p>When given new information, they assimilate it quickly, act on it rationally, and then <strong>forget it when they move onto the next task</strong>.</p>

<p>Think Groundhog Day meets the best engineer you’ve ever met.</p>

<h2 id="everything-is-architecture--context-management">Everything is architecture &amp; context management</h2>

<p>If you imagine working with this new joiner trapped in Groundhog Day, how would you get the most out of them?</p>

<p>What they bring to the table is that they can conceptualise and write code 50x faster than you, rarely have to Google anything and do not need to sleep.</p>

<p>What you - as an experienced engineer in that business with that codebase - bring to the table is that you know the codebase and the domain.</p>

<p>So you are positioned to reason about the big picture in the way that they can’t.</p>

<p>Specifically your job is:</p>

<ol>
  <li>Guide the new joiner towards which parts of the codebase are relevant and which aren’t for the current task (constrain context)</li>
  <li>Think through the current task in the context of the wider system architecture, domain and future plans and constrain options accordingly</li>
  <li>Provide all of the implicit context around the evolution of the codebase and business which is rarely - if ever - documented anywhere</li>
  <li>Help the new joiner to understand where existing documentation is and create new documentation where appropriate (build re-usable context)</li>
</ol>

<p>This is already what most senior engineers are doing when working on code themselves, it’s just that currently there’s a fifth step; “write and ship code”.</p>

<p>AI coding effectively reduces the time and cognitive load of that “write and ship code” step to zero or close to zero allowing for more iterations in a day.</p>
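<p>The “constrain context” part of that job can be made concrete. Here’s a toy sketch using an in-memory stand-in for a codebase (the file names and contents are invented): the engineer, not the tool, decides exactly which files the agent sees.</p>

```python
# Toy sketch of manual context constraint: the engineer hand-picks which
# parts of the codebase are relevant and the prompt contains exactly those.
# `codebase` and its file names are invented for illustration.
codebase = {
    "lib/orders/order_form.ex": "defmodule OrderForms do ... end",
    "lib/orders/order_sheet.ex": "defmodule OrderSheets do ... end",
    "lib/billing/invoice.ex": "defmodule Invoices do ... end",
}

def build_prompt(task: str, relevant_files: list[str]) -> str:
    """Assemble a prompt from only the files judged relevant to the task."""
    sections = [f"Task: {task}", ""]
    for name in relevant_files:
        sections.append(f"--- {name} ---")
        sections.append(codebase[name])
    return "\n".join(sections)

# Only the order-form file is included; the unrelated OrderSheets and
# Invoices contexts never enter the agent's working set.
prompt = build_prompt(
    "Add a discount field to order forms",
    ["lib/orders/order_form.ex"],
)
```

<p>Tools differ mainly in how much of this selection they try to automate; the discipline of deciding what belongs in the working set is the transferable skill.</p>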

<h2 id="some-tools-drive-the-mental-model-better-than-others">Some tools drive the mental model better than others</h2>

<p>In particular, tools like Cursor and Claude Code have tried to solve the “context discovery” problem as well as the shipping-code problem, aiming to provide something closer to the magic experience of “given a codebase, point me at it, tell me what you want to do and I’ll figure out the rest”.</p>

<p>This works brilliantly for extremely small codebases with minimal business context and “history driven complexity” encoded into them (and does truly feel like magic when it works).</p>

<p>But most of the complexity of developing good software in the medium term is accurate understanding &amp; communication of product objectives and managing the domain + business history driven complexity, not writing code.</p>

<blockquote>
  <p>“Order forms and order sheets are similar right?” Oh no you see actually in this business domain they’re completely unrelated concepts, so the <code class="language-plaintext highlighter-rouge">OrderForms</code> context and the <code class="language-plaintext highlighter-rouge">OrderSheets</code> contexts have nothing to do with each other. Except…</p>
</blockquote>

<p>How well would you expect a senior engineer to do if on day one you gave them access to a git repo, your company wiki and a big feature brief and said “see you in two weeks, don’t talk to anyone”?</p>

<p>Broadly, AI coding tools can’t currently do things human engineers can’t do. They can just do some of the things human engineers can do MUCH faster.</p>

<p>Because tools like Cursor and Claude Code TRY to solve this context discovery problem, people tend to try to use it (because it would be cool if it worked, right?).</p>

<p>When it doesn’t work, it generates frustration - because learning new tools IS frustrating - and so people are disproportionately likely to give up.</p>

<p>I <em>think</em> this has led to a lot of the “AI Coding doesn’t work on large codebases” myths.</p>

<p>Cursor is capable of - and extremely good at - allowing the engineer to manage context themselves; it just doesn’t make that the path of least resistance.</p>

<p>Tools which force you to manage context yourself (e.g. Aider) are therefore in my opinion much better for learning the mental model initially.</p>

<h2 id="shipping-faster-shipping-more-shipping-betterer">Shipping faster? Shipping more? Shipping Better(er)?</h2>

<p>AI coding will mean we build more and better software given the same amount of effort but I don’t think it’s clear yet the balance between:</p>

<ol>
  <li>Shipping the same things faster</li>
  <li>Shipping the same things at higher quality</li>
  <li>Shipping more complete things earlier</li>
  <li>Shipping more different things</li>
</ol>

<p>So it’s hard to make blanket statements about efficiency. My early intuition is that it’s probably more about items 2-4 above than it is item 1.</p>

<p>There’ll be some “shipping the same things faster” effect, e.g. maybe it on-average halves the time it currently takes to ship a given roadmap item.</p>

<p>But some large proportion of shipping stuff is thinking and this is also where 90% of the value of a senior engineer sits.</p>

<p>So if you spend two weeks mainly thinking and two weeks mainly building (obviously a gross over simplification), you save most of the two weeks building and the two weeks thinking remains largely intact.</p>

<p>But while you were thinking you probably didn’t just think through the first iteration; you thought through some of the first ten iterations, then scope hammered heavily to keep the building down to two weeks. You’d probably done enough thinking for ten weeks of building.</p>

<p>As the building time approaches zero, you “might as well” include some of iterations two to four, so polish that might otherwise be delayed, sometimes indefinitely, will now be included in version one.</p>

<p>Similarly the cost (time investment) of refactoring, adding complex detailed test coverage etc is reduced dramatically which will tend to drive up code quality and product quality for the same or lower engineering investment.</p>

<p>But there’s some danger that because it’s easy to talk about 3-5x productivity improvements - and I think those are achievable with just what’s available now - we equate that with “shipping 3-5x of what we currently ship”.</p>

<p>And those two things are not the same.</p>

<p>In practice we’ll create more (by some multiple) higher quality software.</p>

<p>Some of that will be by creating what we already create faster, but probably the majority of it will be by creating more things that we otherwise wouldn’t have done or by creating higher quality versions of these things.</p>

<h2 id="who-writes-code-and-code-as-a-communication-tool">Who writes code and code as a communication tool</h2>

<p>One of the fundamental (THE fundamental?) challenges of building software businesses is communicating a cohesive vision of a large objective in such a way that many people can work on it in parallel so that it can be achieved far faster than any one person could do it alone.</p>

<p>This is a problem shared across founders, product managers, engineers and solutions range from “talking to each other” to product requirements documents, clickable prototypes and a thousand other tools.</p>

<p>As a technical founder who’s spent the last fifteen years building technology companies, probably my single greatest frustration is having spent hundreds of hours with customers and prospects, being able to see in my head the full outline of the next version of the thing we’re trying to create, and knowing how woefully inadequate conversations and memos will be as tools to communicate this.</p>

<p>AI Coding is especially efficient at the POC stage, so a POC which might have taken 3 months in the past may well be achievable in a week.</p>

<p>This includes time spent iterating in realtime on a POC as “using” it helps your thinking to evolve.</p>

<p>This makes code as a communication tool far more viable.</p>

<p>So rather than weeks of meetings and memos, creating a POC - sometimes with the intention of throwing it away - will increasingly be the most efficient way for technical - and eventually product - leaders to communicate concepts.</p>

<p>This probably means technical leaders become “more technical” insofar as they return to being more involved with code.</p>

<h2 id="interlude-it-gets-more-speculative-from-here">Interlude; it gets more speculative from here</h2>

<p>Up until here the majority of the points I’m making are just observations, e.g. what I believe the current state based on what currently exists and is happening. The remainder of this piece is more speculative.</p>

<h2 id="pocs-as-the-new-communication-standard-generally">POCs as the new communication standard generally</h2>

<p>In the same way AI pair programmers make POCs a far more viable communication tool for engineering leadership, UI-based AI programming tools such as Windsurf, DataButton, Lovable etc. make this type of POC accessible to non-technical folks.</p>

<p>So it will probably become standard for product managers to create clickable prototypes in these tools first, iterate on them with designers and engineers, and test them with customers, replacing the more traditional memo-plus-designs model.</p>

<p>So far nobody (that I know of?) has successfully bridged the gap between these UI based tools and complex codebases (probably for the context reasons mentioned above) so there’s likely to remain separation in the tools used here until somebody makes progress on that.</p>

<h2 id="codebases-will-have-to-adapt-not-the-other-way-around">Codebases will have to adapt, not the other way around</h2>

<p>Realistically most companies - or at least startups - should expect that the proportion of their new code written by AI will cross 50% in the next 6 months and 80% in the next 12 months. Any company not on this trajectory risks being left behind by competitors who do manage to adapt.</p>

<p>So practically the primary user of the codebase becomes AI tooling with humans a secondary consumer.</p>

<p>This means that in situations where there is a tension between something that makes the codebase better for humans and better for AI, we should choose the thing that makes it better for AI. So the trend will be as much about patterns in codebases developing to support the tooling as it will the other way around.</p>

<p>The good news is that in the vast majority of cases, things that make codebases better for AI’s also make them better for humans so this conflict will be rare.</p>

<p>But it may accelerate certain optimisations. E.g. a small team of REALLY good engineers can often paper over a lot of technical debt just by virtue of being incredibly good at reasoning about something which is becoming hard to reason about.</p>

<p>The improved performance of AI once the technical debt is paid back, combined with it being faster to pay it back using AI, may make these types of projects viable sooner.</p>

<p>A specific example of that is clear separation of concerns and enforced boundaries. Separation of concerns and enforced boundaries are to an extent just a formalised way of constraining context, and AI performs far better when context is constrained (as do people).</p>
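<p>A toy sketch of what an enforced boundary can look like in practice: a dependency rule table checked against observed imports. The module names and rules here are invented, and a real project would wire this into a linter or CI check, but the principle - boundaries as formalised context constraints - is the same.</p>

```python
# Toy boundary check: which top-level modules may depend on which.
# Module names and rules are invented for illustration.
ALLOWED_DEPS = {
    "orders": {"shared"},
    "billing": {"shared", "orders"},
    "shared": set(),
}

def boundary_violations(imports: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (importer, imported) pairs that cross a forbidden boundary."""
    return [
        (src, dst)
        for src, dst in imports
        if src != dst and dst not in ALLOWED_DEPS.get(src, set())
    ]

observed = [("billing", "orders"), ("orders", "billing"), ("orders", "shared")]
violations = boundary_violations(observed)  # [("orders", "billing")]
```

<p>A rule table like this is cheap to maintain and gives both humans and agents a mechanical answer to “which parts of the codebase can this change touch?”.</p>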

<p>So we may well see startups prioritising refactoring, technical debt payback and putting in strict rules about code boundaries earlier than they otherwise and historically would.</p>

<h2 id="beware-intuition-over-data">Beware intuition over data</h2>

<p>AI coding is so new and so different that it breaks most of the existing mental models for what’s possible.</p>

<p>So if someone has spent less than 100-200 hours exclusively writing code by collaborating with an AI agent, most of their mental models of what will work and what won’t are simply wrong by virtue of lack of information.</p>

<p>A disproportionate quantity of the objections about “why AI coding won’t work” come from people in the 0-10 hour range.</p>

<p>So it’s worth building a culture of having people disclose their level of exposure early in conversations and discounting the “haven’t really tried it yet” group’s views heavily. Of course do so compassionately, but don’t confuse fear-based reactions with valid data.</p>

<p>Counterintuitively the main reason to do this is so that you DO hear about the valid objections. There are plenty of things that genuinely don’t work well yet and understanding and discussing those limitations is an important part of adoption. But often they are lost in the noise of broadly incorrect objections from people without sufficient data.</p>

<h2 id="push--pull-people-up-the-adoption-curve">Push &amp; Pull people up the adoption curve</h2>

<p>In practice the job of every engineer has now changed.</p>

<p>The part that was about writing code (and that <strong>IS</strong> only part of it) is now mainly about instructing AI tools to write code.</p>

<p>I don’t think it’s realistic that there are many engineering jobs two years from now where being an expert in instructing AI tools to write code is optional.</p>

<p>So assuming people want to continue to be software engineers - and I hope they do because coding with AI is SO MUCH more fun than coding without it - learning this skill isn’t really optional.</p>

<p>Some people will naturally dive into this head first and enthusiastically, some will need a nudge, some will completely refuse.</p>

<p>It’s important to create an environment where it’s easy for those who want to experiment to try things and share their experiences.</p>

<p>Having clear policies (and budget) around which tools and which models helps a lot. As does encouraging people to explain their experiences, both good and bad to wider teams.</p>

<p>For the people who need a nudge and those who completely refuse to engage, it’s important to be transparent about what’s at stake.</p>

<p>In the same way most people will be fairly reluctant to hire an engineer who wants to build a typical webapp purely in C, people will soon be reluctant to hire engineers who don’t know how to leverage these tools. So there’s a definite risk of being left behind.</p>

<h2 id="the-craft-lives-on">The craft lives on</h2>

<p>Whenever a new abstraction for creating software is invented, people lament the end of writing software as a craft.</p>

<p>As with every other iteration, it wasn’t true then and it isn’t true now.</p>

<p>I’ve always written code for fun as well as for work. I write slightly different code for fun than for work and these days spend more of my time at work collaborating with other engineers than I do shipping code myself.</p>

<p>But using AI tools to write code has made both worlds more fun.</p>

<p>Moving from Basic to Delphi to PHP to Ruby to Elixir over the last 25 years has, at each stage, allowed me to realise the visions for things I wanted to exist in the world more fully and often more quickly.</p>

<p>AI coding is just one more step on this journey and I’ve never been more excited.</p>

<p>Every engineer deserves to experience the moment when by collaborating with an AI, something that would have taken them a week, takes an hour.</p>

<p>It’s genuinely the closest thing to magic I’ve felt in decades.</p>]]></content><author><name>Ben Dixon</name></author><summary type="html"><![CDATA[tldr; AI Coding (“vibe coding”) is real and it has fundamentally changed what it means to be a software engineer forever. The single thing that will define which businesses and which engineers are successful over the coming years is how quickly they are able to adapt to this. This post is a summary of the topics that are coming up again and again when discussing this with other engineers and engineering leaders.]]></summary></entry><entry><title type="html">Non Fiction Books</title><link href="https://www.talkingquickly.co.uk/books" rel="alternate" type="text/html" title="Non Fiction Books" /><published>2024-12-15T01:00:00+00:00</published><updated>2024-12-15T01:00:00+00:00</updated><id>https://www.talkingquickly.co.uk/books</id><content type="html" xml:base="https://www.talkingquickly.co.uk/books"><![CDATA[<p>This is a page I update periodically with key takeaways from non-fiction books I’ve read. My current top 25 are:</p>

<ol>
  <li>Four thousand weeks</li>
  <li>Fooled by randomness</li>
  <li>Atomic Habits</li>
  <li>Make Time</li>
  <li>Thanks for the feedback</li>
  <li>Radical Candor</li>
  <li>High output management</li>
  <li>The 4 hour work week</li>
  <li>The 4 hour body</li>
  <li>The hard thing about hard things</li>
  <li>The lean startup</li>
  <li>Deep Work</li>
  <li>Digital Minimalism</li>
  <li>The power of now</li>
  <li>Why we sleep</li>
  <li>In defence of food</li>
  <li>The tipping point</li>
  <li>Breath</li>
  <li>Outlive</li>
  <li>Getting things done</li>
  <li>Algorithms to live by</li>
  <li>Thinking in systems</li>
  <li>Zen and the art of motorcycle maintenance</li>
  <li>The 48 laws of power</li>
  <li>Inspired</li>
  <li>The dip</li>
</ol>

<p>It’s mainly a tool for me to skim through and remind myself of what I’ve read as a way to jog my thinking. The summaries are a mixture of notes I wrote when reading them, reminders from Shortform summaries and what I remember. So they’ve been heavily filtered by my interpretation and what I was thinking about at the time I read them. They are almost certainly <em>not</em> accurate summaries of the books themselves!</p>

<!--more-->

<h2 id="four-thousand-weeks">Four thousand weeks</h2>

<p>This is the book that gave me a framework for carving out dedicated, non-negotiable time, usually early in the morning, for an ongoing passion project. For a long time I used the “make time” app for this.</p>

<p>Four thousand weeks focuses on the shortness of life and how we spend it. It drives home that the deciding factor in how we spend our life is where and how we direct our attention.</p>

<p>One of the quotes that has never left me is about how apt the idea of “spending” your time is, because once each moment is spent, you can never have it back:</p>

<blockquote>
  <p>Pay attention, because you are paying with your life</p>
</blockquote>

<p>It emphasises how time will pass no matter what, and so it is our decisions about where to direct our attention that govern the quality of our life.</p>

<p>The core thread of the book is around the extent to which you really control your time. In essence that we never have full control over our time, or at least we will never be able to do all of the things we want. There are probably a variety of reasons for this, not least that as we get more efficient, we come up with more things we want to do. So we will never get to the end of the list.</p>

<p>This resonates with one of the most important lessons I learned from my father. That many people imagine if they could just have a day to get to the end of their todo list, they’d be caught up and “on top of things”. But in reality this would be true for mere hours until the todo list started to grow again. So the real skill in life is doing the right things from the list and learning to live peacefully with the fact it will never end.</p>

<p>The book turns to some practical considerations for making the most of the little time we have:</p>

<ol>
  <li>Force time for the things that matter, never wait for time to “open up”. Because it never will.</li>
  <li>Limit your work in progress projects. Probably to 3-4.</li>
  <li>Become comfortable with and embrace discomfort. Especially the type of discomfort that might cause you to deviate from your project. The main approach suggested is noticing the distraction and discomfort and directing constantly increasing attention to it rather than shying away from it.</li>
  <li>Stop expecting the future to unfold in a particular way. In particular, reflect on how little of your life to date you really controlled, and stop expecting a far greater degree of control over the future.</li>
  <li>Develop patience for how long things really take. An interesting tactic here is time boxing heavily and refusing to allow yourself more than that amount of time to work on something. Whenever you run out of time, you’re forced to become a little more comfortable with the feeling of impatience.</li>
  <li>Align your free time with your friends. As someone who obsessively tries to control my schedule, this one came from left field. The goal being to align your schedule to maximise the chances you can spend time with those you love.</li>
</ol>

<p>The second part of the book delves more deeply into the idea of life being finite. This comes with the - to me at least - calming sentiment that you can never have all of the experiences you want, because every choice you make implicitly takes the space of something else. Because it is fundamentally finite, the thing that matters is choosing things that matter, not wasting time on todo lists.</p>

<p>The book suggests four tactics in particular:</p>

<ol>
  <li>Make and strongly commit to life choices. The essence of this is that since you’ll never be able to do everything, you’ll be happier if you commit to something and do it well. The underlying principle is that more happiness is created by committing and doing well than by keeping options open.</li>
  <li>Focus on the present not the future. This hit quite hard: it’s tempting to spend lots of time on actions with future payoffs, to control the future. But generally these payoffs are highly uncertain, while the payoffs now are much clearer. An example would be working on a marketing campaign for the future vs going outside to enjoy the good weather. The happiness from the good weather is certain, the payoff from the marketing campaign is very much not. I didn’t interpret this as not investing in the future, just that many people - me included - have a tendency to over-invest in the future, at the expense of realising a return on the present.</li>
  <li>Incorporate purposeless time. I loved this one. In a world where it’s fashionable to try and turn every hobby into a profit making “side hustle”, the emphasis here is on the importance of doing things purely for the sake of doing them, with no expectation of a return.</li>
  <li>Don’t live only for changing the world, because you probably won’t. Nobody has that much impact on the world in the grand scheme of things. Humanity’s impact will be insignificant on many timescales. Once you let go of a need to make a grand change to the world, you open yourself up to being able to make the little changes which are possible and actually matter.</li>
</ol>

<p>On a tangent, this is the book that led me to have a poster on my wall where I mark off the weeks, counting down to 4000. To some people this seems morbid. For me it aligns nicely with the parts of Stoicism I relate to, acting as a reminder to enjoy each moment.</p>

<h2 id="reading-list">Reading list</h2>

<ol>
  <li>Five Dysfunctions of a team</li>
</ol>]]></content><author><name>Ben Dixon</name></author><category term="reading" /><summary type="html"><![CDATA[This is a page I update periodically with key takeaways from non-fiction books I’ve read. My current top 25 are: Four thousand weeks Fooled by randomness Atomic Habits Make Time Thanks for the feedback Radical Candor High output management The 4 hour work week The 4 hour body The hard thing about hard things The lean startup Deep Work Digital Minimalism The power of now Why we sleep In defence of food The tipping point Breath Outlive Getting things done Algorithms to live by Thinking in systems Zen and the art of motorcycle maintenance The 48 laws of power Inspired The dip It’s mainly a tool for me to skim through and remind myself of what I’ve read as a way to jog my thinking. The summaries are a mixture of notes I wrote when reading them, reminders from Shortform summaries and what I remember. So they’ve been heavily filtered by my interpretation and what I was thinking about at the time I read them. They are almost certainly not accurate summaries of the books themselves!]]></summary></entry><entry><title type="html">Integrations are hard</title><link href="https://www.talkingquickly.co.uk/integrations" rel="alternate" type="text/html" title="Integrations are hard" /><published>2024-02-07T15:40:00+00:00</published><updated>2024-02-07T15:40:00+00:00</updated><id>https://www.talkingquickly.co.uk/integrations</id><content type="html" xml:base="https://www.talkingquickly.co.uk/integrations"><![CDATA[<p>Best in breed procurement, where many systems - the best in class for each system - are procured independently then integrated with one another has had a tricky decade.</p>

<p>The underlying principle was good. Take lots of vendors who do just one thing really well and connect them to one another to form one fully integrated super system.</p>

<p>But multiple studies suggest that 70-85% of the integration projects which are essential to make these systems work together fail to achieve their objectives.</p>

<p>This post looks at what’s necessary to consider to avoid these failures, especially in the workforce management space, and breaks out specific things to explore when considering an integration project.</p>

<!--more-->

<h2 id="which-system-owns-which-data-source-of-truth">Which system owns which data (“source of truth”)?</h2>
<p>If you’ve ever been part of a project where there was no central project tracking, where instead everyone kept their own todo lists and then met periodically to talk about what had been done and what needed to be done, then you’ve probably experienced the source of truth problem.</p>

<p>Different people have different ideas about the state of individual tasks; in between alignment sessions several people may try to work on the same task or change the same document; and it becomes incredibly difficult to work out what the true state of the project is at any given time.</p>

<p>Integrations can suffer from the same problem. If there are two places where someone can book holiday and they have different data about one person’s holiday bookings, which one is true?</p>

<p>So one of the most fundamental requirements for a successful integration strategy is being completely clear on which systems are the “source of truth” or owners of each piece or category of data.</p>

<p>Generally only the source of truth system stores this data, and it is responsible for defining the interface by which other systems can alter it and perform workflows against it.</p>
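<p>As a minimal sketch of what agreeing this can look like in practice (the system and category names below are hypothetical, not from any real deployment), a source of truth map is essentially a lookup that every integration consults before writing data:</p>

```python
# A minimal, hypothetical source-of-truth map: each data category has
# exactly one owning system, and all writes are routed to that owner.
SOURCE_OF_TRUTH = {
    "absence": "wfm_system",        # holiday and sickness owned together
    "rosters": "wfm_system",
    "pay_details": "payroll_system",
    "personal_details": "hr_system",
}

def owner_of(category: str) -> str:
    """Return the single system allowed to store and mutate this data."""
    if category not in SOURCE_OF_TRUTH:
        raise ValueError(f"no source of truth agreed for {category!r}")
    return SOURCE_OF_TRUTH[category]

def can_write(system: str, category: str) -> bool:
    """Other systems may read a copy, but only the owner may write."""
    return owner_of(category) == system
```

<p>The useful property is that “who owns absence data?” has exactly one answer, so an attempted write from a non-owning system can be rejected mechanically rather than debated per integration.</p>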

<h2 id="which-system-owns-which-workflows">Which system owns which workflows?</h2>
<p>In the same way that data being in multiple places can cause problems, workflows spanning multiple systems, or worse, being duplicated across systems, can cause substantial problems.</p>

<p>Imagine a system where people are asked to go to one place to build their roster and another to approve timesheets. To approve timesheets they need to open the roster in another system and manually compare timesheets to the roster.</p>

<p>In this example it’s not impossible, but it is painful. If your goal is to get company wide engagement with a technology programme, it’s friction like this that will stop it from happening.</p>

<p>If you take it one step further you can imagine a system where people are asked to put availability into one system, holiday into another and sickness into another.</p>

<p>People being people, they will forget which system they need to go to for what, especially for workflows they don’t have to do very often.</p>

<p>In the best case scenario here, this drives up queries to internal support desks. In the worst case scenario, it will drive down compliance and usage of the newly rolled out technology, directly preventing the technology programme from reaching and demonstrating its goals.</p>

<p>A good rule of thumb is that somebody shouldn’t need to change systems part way through a workflow and generally shouldn’t need to overtly access multiple systems to complete a single task.</p>

<h2 id="big-pieces-vs-small-pieces">Big pieces vs small pieces</h2>
<p>It can be tempting to read the above and think “great, so we’ll just create a 1000 line spreadsheet of all our data and all our workflows and decide on the owner, job done”. I’ve seen this approach multiple times at an impressive level of detail!</p>

<p>The problem with this is that data has dependencies. Yes, it’s theoretically possible to store a user’s national insurance number and address in two separate systems. But in practice the unit of data you’ll most often want to work with is “user data”, not “national insurance numbers”. And changing either the address or the national insurance number may have implications for the payroll system.</p>

<p>In general, when defining sources of truth for both data and workflows, try and work with big pieces not small pieces.</p>

<p>Another example is holiday &amp; sickness. It’s tempting to think of these as separate things which could sit in different systems. In practice sickness can impact holiday and vice versa so splitting them is generally a challenging process. It’s far better to think in terms of “absence” as an overall concept.</p>

<h2 id="user-interface-vs-data-integrations">User Interface vs Data Integrations</h2>
<p>So far we’ve primarily talked about data. About which systems own data and perform workflows against that data.</p>

<p>Another type of integration is a user interface integration, where you want to take an existing system and include the user interface of another system in it so that the user feels like they’re doing everything in one piece of software.</p>

<p>An important thing to be aware of is that this is very hard to do at the level of “parts” of a screen.</p>

<p>There are no standard or easy ways of modifying the user interface of a piece of software to include the user interface of another alongside it.</p>

<p>If a piece of software already has a box where some information is displayed and you want the information in that box to come from another system, that’s fine, that’s a data integration, the box is already there.</p>

<p>If you want to take the box from another piece of software, maybe including some buttons and “embed” it in the interface of another piece of software alongside their existing interface, expect this to be hard and expensive and for the results to feel somewhat clunky.</p>

<p>The main exception to this is where vendors have a deep partnership with one another and so typically one vendor has actually replicated the UI of the other product in their own. This is expensive for the vendor to do and so is rarely done for any one customer, instead it is done when those vendors have some form of ongoing strategic partnership.</p>

<p>There are two intermediate steps which are possible:</p>

<ol>
  <li>“Iframing” is a way to define a rectangular area of a webpage or web app and have another webpage or web app appear in that rectangular box. This can work as an approach where your goal is “I want someone to access a different system from within the first system without needing to change pages”. There’s no deep UI integration here, it just saves people going to a different page or screen.</li>
  <li>Single Sign On allows you to have users click on a link to another piece of software and be automatically signed in rather than having to login manually. This can have an extremely positive impact on engagement as people drop off surprisingly heavily when asked to login. If all systems are white labelled this can lead to a near seamless experience if the goal is simply to make moving between systems easier.</li>
</ol>

<p>Importantly, both iframing and SSO can be good solutions; it’s just important to be clear what you’re getting when discussing a UI integration because there’s far more variability in what this could mean than there is for data integrations.</p>
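<p>To make the SSO idea concrete, here is a toy sketch of a signed link: the first system signs the user’s identity so the second system can trust it without a separate login. Real deployments use standards like SAML or OpenID Connect rather than a hand-rolled scheme like this; the URL, the user id and the shared secret below are all invented for illustration:</p>

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Toy signed-link sketch only: real SSO uses SAML or OpenID Connect.
# The secret would be agreed between the two vendors out of band.
SHARED_SECRET = b"agreed-out-of-band"

def sso_link(base_url: str, user_id: str) -> str:
    """Build a link that carries the user's identity plus a signature."""
    sig = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{base_url}?{urlencode({'user': user_id, 'sig': sig})}"

def verify(user_id: str, sig: str) -> bool:
    """The receiving system checks the signature before trusting the user."""
    expected = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

<p>The point is simply that identity is transferred in the link itself, which is why clicking through feels seamless: there is no second login form to drop off at.</p>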

<p>It’s an unfortunate truth that deep UI integration between multiple systems remains a largely unsolved problem in software. It’s a technically hard problem and it’s not for lack of trying that it hasn’t been solved in the industry.</p>

<h2 id="read-vs-write-one-way-vs-two-way">Read vs write, one way vs two way</h2>
<p>A simple but important distinction is read vs write integrations.</p>

<p>In a read integration, one system needs to get data from another system for storage or display. It has no need to ever change this data and push those changes back into the original system.</p>

<p>It either needs to do this once (e.g. an Applicant Tracking  System integration) or repeatedly (e.g. when pulling daily forecasts in).</p>

<p>This is typically simpler and less error prone than an integration that needs to pull data in, manipulate it and then push data back out again (write / two way).</p>

<h2 id="data-warehouse-integrations">Data Warehouse Integrations</h2>
<p>An important sub-category of integration is the data warehouse integration.</p>

<p>A data warehouse is when an organisation has a single central location where they collate all of their data, typically for the purposes of reporting and analytics.</p>

<p>If an organisation has a data warehouse initiative, it’s common for it to be a requirement that all vendors can provide a way to get raw data out of the vendor’s system and into the warehouse.</p>

<p>Generally the onus is on the vendor to provide a standard method for accessing this data, typically via either direct access to a database or API.</p>

<p>How this gets from this standard interface to the customer data warehouse generally sits with the customer.</p>

<p>Typical options for this include:</p>

<ol>
  <li>Many vendors will build a further bridge between their standard interface and the customer data warehouse for a fee</li>
  <li>Many data warehouse vendors offer some form of integration service</li>
  <li>There are third parties which specialise entirely in building data warehouse connectors</li>
  <li>There are third parties who maintain huge libraries of data warehouse connectors for common vendors</li>
</ol>

<p>It is a huge red flag if a vendor refuses to provide data warehouse access to customers.</p>
<h2 id="push-vs-pull">Push vs Pull</h2>
<p>Push vs pull is terminology that gets used to describe whether one vendor “pushes” data in or the other vendor “pulls” it out. It’s helpful as a short-hand when combined with read vs write above but it tends to mix a few different concepts:</p>

<ol>
  <li>Who builds &amp; owns the integration</li>
  <li>Which company’s interfaces are used to build the integration</li>
  <li>How the data transfer is actually triggered</li>
</ol>

<p>We’ll look at each of these individually.</p>
<h2 id="who-builds-and-owns-the-integration">Who builds and owns the integration?</h2>
<p>A typical integration will involve somebody writing some “glue” code to link the two systems together; the options for who does this are typically:</p>

<ol>
  <li>The vendors already have a “deep” integration which they commit to supporting. It’s worth asking more about this relationship as generally one vendor will have assumed responsibility for the technical work of maintaining the integration and so be your point of contact if something goes wrong</li>
  <li>The integration is being built by one of the vendors specifically for this client. In this case it’s worth drilling into whether the other vendor has committed to providing the necessary resources and technical functionality, and worth being looped into this process, as most integrations require mutual co-operation and commitment of resource.</li>
  <li>The integration is being built by a third party company commissioned by the client. This gives the client more control but it’s essential to ensure costs and resource commitments from the vendors are agreed upfront because a third party building the integration does not mean no costs or resource requirements from the vendors.</li>
  <li>The integration is being built in-house. If an organisation has the capability to do this, this is extremely powerful subject to the maintenance point below.</li>
</ol>

<p>In all of these situations it’s essential to be clear on where the responsibility for maintaining the integration over time sits.</p>

<p>As with any software initiative, more of the lifetime cost will sit in maintenance than in implementation, so understanding how this will work is as important as understanding how it will get built to begin with.</p>

<h2 id="which-companys-interfaces-are-used-to-build-the-integration">Which company’s interfaces are used to build the integration?</h2>
<p>Companies generally talk about having APIs - Application Programming Interfaces - which are tools for software systems to communicate with each other. There are broadly two core models for an integration:</p>

<ol>
  <li>One vendor uses the other’s API</li>
  <li>The two vendors’ APIs are linked together with “glue” code</li>
</ol>

<p>Both are valid approaches, increasingly (2) is preferred because of the standardisation which this enables.</p>

<p>It’s important before commencing a project to understand if the vendors have the required interfaces for the integration request and where they don’t, to have commitments to build these.</p>

<h2 id="what-about-csvs">What about CSVs</h2>
<p>CSVs are one of the oldest data transfer methods still in use. A CSV is essentially a human readable text file of data. There can be some snobbery about CSVs along the lines of “but it’s not an API”.</p>

<p>CSVs are incredibly powerful: they’re an integration method that is disproportionately well supported across many systems and fairly easy to automate and debug.</p>

<p>So especially for simple one way integrations, CSVs should not be overlooked or excluded on the basis that APIs are in some way “better”.</p>
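<p>As an illustration of why this format remains so practical (the column names below are invented, not from any real vendor export), a one way CSV integration is often nothing more than one system exporting a file and another parsing it with a standard library:</p>

```python
import csv
import io

# Hedged sketch: a simple one-way CSV integration is often just an
# export from one system parsed by another. The columns are invented.
export = io.StringIO(
    "employee_id,date,hours\n"
    "101,2024-02-01,7.5\n"
    "102,2024-02-01,8.0\n"
)

# Every mainstream language ships a CSV parser, which is a large part
# of why CSV is so disproportionately well supported.
timesheets = list(csv.DictReader(export))
total_hours = sum(float(row["hours"]) for row in timesheets)
```

<p>In a real deployment the file would typically land via SFTP or a scheduled export rather than an in-memory string, but the parsing side is genuinely this simple to automate and debug.</p>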

<h2 id="how-is-the-data-transfer-actually-triggered-technical">How is the data transfer actually triggered (technical)</h2>
<p>There are three technical concepts which come up and cause confusion.</p>

<ol>
  <li><strong>API</strong>: This is the grouping for the endpoints and webhooks which make up a vendor’s interface for building integrations</li>
  <li><strong>Endpoints</strong>: API Endpoints are web addresses that a third party can request data from or send data to</li>
  <li><strong>Webhooks</strong>: These allow one system to “notify” another system when something happens instead of that system having to “ask”. Put the other way, this allows one system to “subscribe” to be told about changes from the other.</li>
</ol>

<p>A typical integration will use both endpoints and webhooks, and a Webhook from Vendor 1 may be configured to “call” an “Endpoint” from Vendor 2. The distinction is not important from the perspective of agreeing an integration and is only covered here because misuse of this terminology drives a surprising amount of confusion.</p>
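<p>As a toy sketch of the distinction (everything here is simulated in plain code; no real HTTP or vendor APIs are involved, and the names are illustrative only), an endpoint is something other systems must ask, while a webhook lets a system subscribe and be told:</p>

```python
# Simulated sketch of endpoints (pull) vs webhooks (push).
class Vendor1:
    def __init__(self):
        self.absences = []
        self.subscribers = []

    def get_absences(self):
        """'Endpoint': other systems must ask for the data."""
        return list(self.absences)

    def subscribe(self, callback):
        """'Webhook' registration: other systems ask to be notified."""
        self.subscribers.append(callback)

    def record_absence(self, absence):
        self.absences.append(absence)
        for notify in self.subscribers:
            notify(absence)  # push the change out to every subscriber

class Vendor2:
    def __init__(self):
        self.seen = []

    def on_absence(self, absence):
        """Plays the role of the endpoint that Vendor 1's webhook calls."""
        self.seen.append(absence)

v1, v2 = Vendor1(), Vendor2()
v1.subscribe(v2.on_absence)
v1.record_absence("2024-02-07: sickness")
```

<p>Here Vendor 2 subscribes once and is then told about every new absence without ever polling, while Vendor 1’s <code>get_absences</code> plays the role of an endpoint that must be asked.</p>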

<h2 id="conclusion">Conclusion</h2>
<p>In the end it’s up to software vendors to be both flexible and honest to facilitate successful integration programmes.</p>

<p>Deep cross vendor UI integration is still a largely unsolved problem and so we should exercise skepticism when anyone claims to have solved it.</p>

<p>Best of breed is by no means dead, but the pieces are going to be bigger and so the number of vendors smaller as we learn more and more about where it’s practical to draw integration boundaries and where it isn’t.</p>]]></content><author><name>Ben Dixon</name></author><category term="culture" /><summary type="html"><![CDATA[Best in breed procurement, where many systems - the best in class for each system - are procured independently then integrated with one another has had a tricky decade. The underlying principle was good. Take lots of vendors who do just one thing really well and connect them to one another to form one fully integrated super system. But multiple studies suggest that 70-85% of the integration projects which are essential to make these systems work together fail to achieve their objectives. This post looks at what it’s necessary to consider to avoid these failures, especially in the workforce management space and breaks out specific things to explore when considering an integration project.]]></summary></entry></feed>