The pitch was impressive. The portfolio looked strong. The senior team in the room asked intelligent questions about your business, and by the time you left the meeting, you were already mentally drafting the announcement email to your board.
Six weeks later, the senior team has been replaced by two developers you’ve never met, the scope document you were promised still doesn’t exist, and the question “how’s it going?” produces a response you could read three different ways. You are now in the most expensive position in software procurement: too far in to exit cleanly, and not far enough in to have anything worth keeping.
According to the 2025 UK Software Project Outcomes Report published by the British Computer Society, 58% of custom software projects in the UK experienced significant scope, timeline, or budget deviation in the first six months. The majority of those projects showed warning signals before the contract was signed. The buyers just didn’t know what the signals meant.
Red flags in software agency evaluation are not random. Each one is a symptom of a specific structural problem in the way the agency operates, the way they price engagements, or the way they handle the inevitable moments when a project stops going according to plan. Understanding what each red flag reveals about the agency’s operating model is more useful than a list of things to watch for. This article gives you both.
These are not equal concerns. Some are deal-breakers. Some are negotiating points. The distinction matters, and it’s addressed for each one.

Red Flag 1: They Skip the Business Questions and Jump Straight to Tech
The first meeting with a software agency tells you almost everything you need to know about how they solve problems. What they ask in the first thirty minutes reveals whether they build software that solves business problems or software that fulfils a technical specification.
An agency that opens the discovery conversation with technology preferences, team structure, and sprint methodology before asking about your users, your market, your competitive constraints, or your definition of success is demonstrating its operating model in real time. That model is: take the requirements as given, build them correctly, and deliver the product. The problem with that model is that requirements as given are almost never the requirements that produce the outcome the client actually needs.
The best development partners treat the first two meetings as diagnostic sessions rather than sales calls. They ask questions like: what problem are your users trying to solve that they can’t solve today? What does success look like for this product in twelve months, and what data will prove it? What have you already tried, and why didn’t it work? These questions are not pleasantries. They are the evidence that the agency builds software to produce business outcomes rather than to fulfil scopes.
Ask directly: “Walk me through how you’d approach our project in the first two weeks.” An agency that describes only sprints, standups, and technical onboarding is telling you the business context lives outside their process. An agency that describes structured discovery work, user journey mapping, and assumption validation before writing a line of code is telling you the business context lives inside it.
Verdict: Deal-breaker. An agency without genuine curiosity about your business in the sales process will not develop it during delivery.
Red Flag 2: A Fixed-Price Quote Without a Detailed Scope of Work Behind It

Fixed-price contracts feel like protection. They are the opposite of protection unless they are backed by a specification so detailed that two people reading it independently would build the same product.
Here is the structural problem: a fixed-price contract without a detailed scope of work creates misaligned incentives from the moment it is signed. The agency has committed to a price, which means every ambiguity in the specification becomes a cost they bear if interpreted in the client’s favour and a change order if interpreted in theirs. Commercially rational agencies in this position will interpret ambiguity in their favour, which means every undocumented feature becomes a change order, every assumption you made that they didn’t becomes billable, and the collaborative relationship you expected transforms into an adversarial negotiation about what “was in scope.”
The fixed-price contract risks in software development are well documented. The Standish Group’s longitudinal CHAOS research, tracking software project outcomes across more than 50,000 projects, has consistently found that fewer than 20% of software projects are delivered on time and on budget. The primary driver is scope change during development, which is not a sign that clients are difficult or that developers are incompetent. It is the structural reality of building something that didn’t exist before.
Consider the scenario that plays out consistently: a London SME signs a £65,000 fixed-price contract for a customer portal. Week two produces a change order for multi-role permissions that the client assumed were standard. Week four produces a conversation about the email notification system, which the agency quoted as basic and the client envisioned as configurable. By week ten, the change orders total £18,000 and the relationship has shifted from collaborative to transactional. The product that eventually ships is technically correct. It solves a narrower problem than the one the client hired the agency to solve.
The right model for any project where requirements will evolve, which is most projects, is time-and-materials with milestone checkpoints and a transparent change management process. If an agency insists on a fixed price for a project with genuine unknowns, ask why. The honest answer is almost always that it makes the sale easier. That is not a reason that benefits you.
Verdict: Negotiating point if backed by a detailed SOW. Deal-breaker if the price is fixed but the scope is vague.
Red Flag 3: No Documented Discovery Phase Before Development Begins
A software agency that moves directly from proposal to sprint planning without a structured discovery phase is building without a map. That is not a metaphor. It is a literal description of what happens when development begins before the architecture, data model, integration dependencies, and edge cases have been defined on paper.
Discovery is the structured work that transforms a product idea into a buildable specification. It surfaces the decisions that need to be made before the first line of code is written rather than the decisions that are discovered mid-sprint and have to be made under delivery pressure. The software agency discovery process, when done well, produces an artefact: a functional specification, a data model, an architectural decision record, or a detailed scope document that both parties sign before development begins.
Without that artefact, the project proceeds on shared assumptions rather than shared understanding. Shared assumptions look like alignment until they’re tested by a specific implementation decision, at which point two people discover they meant different things by the same words. That discovery usually happens at week six, when the cost of course correction is eight times what it was at week two.
Ask every agency on your shortlist: what do you deliver at the end of discovery, and who signs it? The answer tells you whether discovery is a genuine phase in their process or a sales talking point used to describe the first meeting.
Verdict: Deal-breaker. An agency that doesn’t produce a written, agreed specification before development is structurally positioned to dispute scope rather than deliver it.
Already working through your shortlist? Start a conversation with Empyreal Infotech here or keep reading to understand the remaining red flags that most buyers discover too late.
Red Flag 4: They Say Yes to Everything in the Sales Process
An agency that agrees with everything you say during the pitch is not being agreeable. It is demonstrating an absence of professional judgment, or the presence of commercial pressure that overrides it.
A competent software partner pushes back. When a founder describes fifteen features for an MVP, the right response is: “Which three of these directly test the assumption that determines whether this product succeeds?” When a client proposes a custom-built solution to a problem that three established platforms already solve well, the right response is: “Let me show you why building this from scratch costs more and delivers less flexibility than the alternatives.” When a timeline is unrealistic for the scope described, the right response is: “Here is what we can deliver in that window, and here is what that requires cutting.”
An agency that tells you what you want to hear rather than what the project requires has priced its commercial relationship above its technical judgment. That prioritisation does not reverse after the contract is signed.
Test this explicitly: describe a technically questionable decision or an implausibly optimistic timeline during the evaluation conversation. An agency with genuine product judgment will raise a concern. An agency optimising for the close will tell you it sounds great.
Verdict: Deal-breaker. An agency without the confidence to challenge your assumptions before you’re a client will not find that confidence when challenging assumptions costs them the relationship.
Red Flag 5: Ambiguous Intellectual Property Terms in the Contract
IP ownership in software development is not automatic, and the gap between “we’ll transfer the code” and a properly structured IP assignment clause costs clients leverage and operational freedom, and exposes them to legal risk, precisely when it matters most.
In the UK, work created by an independent contractor or agency belongs to the creator by default unless a written agreement explicitly assigns ownership to the client. Many agency contracts use language that is technically accurate but commercially problematic: IP transfers “upon final payment” (which means non-payment disputes freeze your ownership), IP assignment covers “deliverables” rather than the entire codebase (which may exclude infrastructure code, internal libraries, or components used across multiple client projects), or the contract grants a licence rather than ownership (which means you can use the software but the agency retains the right to reuse its components).
The moment when IP ambiguity becomes expensive is not during the project. It is when you want to bring in a new development partner, when you raise investment and lawyers conduct IP due diligence, or when you need to audit your own codebase for security purposes. An agency that created IP uncertainty in the contract is the agency you need to negotiate with at exactly those moments.
Ask the question plainly in the first contract conversation: does the agreement provide full IP assignment to us upon project completion, with no conditions beyond payment? Are there any third-party components, open-source libraries, or internal tools embedded in the deliverables that we cannot own outright? The agency that answers this question clearly and immediately has thought through IP as a standard client concern. The agency that creates ambiguity or requires negotiation on something this foundational is revealing something about how it protects its own commercial interests relative to yours.
Verdict: Deal-breaker if IP assignment is absent or conditional. Negotiating point if the structure is non-standard but transparently explained.
Red Flag 6: Developers Testing Their Own Code with No Independent QA
Quality assurance is not a luxury. It is the difference between a production system that fails reliably before it reaches users and one that fails unpredictably after it does.
An agency whose testing process consists of developers reviewing their own code is not doing QA. It is doing a self-review with a different name. The cognitive limitation is structural, not personal: a developer who knows what a piece of code is supposed to do will test it for the behaviour they designed rather than the behaviour that produces failures in production. Independent testing exists precisely because the person who built the system is the least reliable person to validate it.
Ask specifically how the agency structures its dedicated QA testing process: are QA engineers separate from the development team? At what stage in the sprint cycle does QA involvement begin? Does the agency maintain automated test suites, and who is responsible for keeping them current? What is the process for regression testing before a release?
The financial case for independent QA is clear and consistent: bugs identified during development cost approximately £100 to £300 to fix. The same bug identified in production, after real users have encountered it, costs £1,500 to £4,000 in diagnosis, prioritisation, fixing, redeployment, and client communication. An agency that has saved £5,000 by skipping a proper QA process has potentially exposed you to £25,000 in post-launch remediation costs.
Verdict: Deal-breaker for any product handling sensitive data, financial transactions, or regulated information. Serious concern for any production product.
Red Flag 7: No Formal Process for Managing Scope Change
Every software project changes during development. Not because the client is disorganised or the agency is careless, but because building a product that didn’t exist before involves discovering things you couldn’t have known before the build began. A professional agency has a structured process for managing these changes. An agency without one is structurally positioned to handle scope evolution badly.
The scope creep and change order failure mode has two versions, both predictable. The first: the agency absorbs undocumented scope changes silently, building what the client asks for without recording or pricing it, until the original budget is exhausted and the project is incomplete. The second: the agency escalates every undocumented feature, however minor, as a formal change order, and the project grinds to a halt over billing disputes about two-hour additions to the backlog.
Both versions destroy the working relationship. Both are symptoms of the same structural absence: a documented change request process that defines what constitutes a scope change, how changes are estimated and approved, who has authority to authorise additional budget, and how changes affect timeline.
Ask to see the change request template before you sign. If the agency doesn’t have a documented template, they haven’t systematised the management of the thing that breaks most client relationships.
Verdict: Serious concern. A missing change management process is a direct predictor of budget disputes.
Evaluating agencies against these criteria right now? Talk to Empyreal Infotech and we’ll walk you through how we handle every one of these questions before you commit to anything.
Red Flag 8: Communication Quality Drops Between the Sales Meeting and Project Kickoff
The sales process is when an agency is at its most attentive. Senior people are involved. Responses are fast. Preparation is visible. Watch what happens immediately after the contract is signed and before the project formally kicks off.
This is the transition moment that reveals the most about the agency’s actual operating culture. The senior team that ran the pitch hands the engagement to the delivery team. Your day-to-day contact becomes a project coordinator rather than a technical director. Response times extend. The specificity of answers decreases. The energy that characterised the sales process is, suddenly, somewhere else.
Not every agency experiences this transition poorly. The agencies that don’t are the ones that have built communication infrastructure rather than relying on relationship management: defined sprint reporting formats, committed response time SLAs, direct access to the technical lead throughout the engagement, and escalation paths that don’t require you to wait for the account manager.
When you’re evaluating whether this red flag applies, ask: after the contract is signed, who specifically is my primary contact? Will I have direct access to the lead developer, or will communication pass through a project coordinator layer? What is the committed response time for non-urgent questions? For production issues? And, critically, ask for a sample of the sprint reports they’ve sent to current clients. What you see in that report is what you’ll receive throughout the engagement.
Verdict: Serious concern. If communication is already lagging during pre-contract due diligence, the trajectory after signature is well-established.
Red Flag 9: Portfolio Depth Without Measurable Outcomes
A portfolio page with screenshots, client logos, and case study summaries is the minimum investment any agency will make in their sales infrastructure. It is not evidence of outcome quality. It is evidence of marketing capability.
Ask the question that the portfolio page is designed to prevent you from asking: what specifically did the software achieve after it shipped? What business metric improved, by how much, and within what timeframe? A customer portal that reduced support call volume by 34% within ninety days of launch is evidence. A customer portal described as “improving the client’s digital customer experience” is a description that could apply to any software that worked at all.
The agency that has measured outcomes has clients whose outcomes were worth measuring. The agency that cannot produce outcome data for their most successful project either didn’t measure or didn’t produce results worth documenting. Both are informative.
Ask for three specific case studies with measurable results. Then ask whether you can contact the client directly. An agency that deflects live client references to written testimonials is managing the evidence rather than presenting it. The best agencies make that introduction without hesitation.
Verdict: Serious concern if the entire portfolio is description-only. Negotiating point if some projects have NDA constraints but the agency can demonstrate a methodology for measuring outcomes.
Red Flag 10: Post-Launch Support Is an Afterthought or an Upsell
How an agency talks about post-launch support in the sales process reveals how they’ve structured their business. An agency built around project delivery generates revenue from starting new engagements. Maintaining products that have already shipped is overhead in that model. An agency built around ongoing partnership generates revenue from the continued value of the product it created. Post-launch maintenance and support is core to that model, not a premium line item on a renewal proposal.
The failure mode that results from a project-delivery model is consistent: the agency ships the product, completes the engagement, and moves on to the next client. Three months later, a dependency update creates a vulnerability in your production system, a third-party API that the product relies on changes its authentication model, or real user behaviour produces an edge case that nobody tested for. You now need the team that built the product. They’re deep in another client engagement, and you’re being priced at emergency rates or told to book into a support queue with a two-week lead time.
A product is not infrastructure at deployment. It becomes infrastructure the moment users depend on it. Infrastructure requires ongoing maintenance, security monitoring, and responsive support. Any agency that treats launch as the end of their obligation has fundamentally misunderstood what custom software is for.
Ask directly: what is your standard post-launch support model, what SLA is associated with critical production issues, who handles support requests and what is their seniority, and is the team that built the product the team that maintains it? An agency that has thought through post-launch as a genuine service rather than an optional add-on will answer these questions specifically. An agency that redirects the conversation toward the build will answer them vaguely.
Verdict: Deal-breaker for any product that becomes operationally critical. Serious concern for all others.
The Honest Case: When Some of These Signals Are Acceptable
Not every red flag ends the conversation. Intellectual honesty requires distinguishing between conditions that make an agency unsuitable and conditions that require negotiation before commitment.
A startup with a clearly bounded, fixed scope that can be fully specified before development begins can work productively under a fixed-price contract. An agency whose post-launch support model is limited but whose build quality is demonstrably high may be the right choice for a client who has internal technical resources to manage ongoing maintenance. A new agency with limited public case studies but verifiable client references and strong technical credentials may be a better fit for a founder’s budget than an established agency with premium rates.
The risk calculation is not whether red flags exist. It is whether the risks they represent are ones your specific situation can absorb. A seed-stage startup with six months of runway has a lower tolerance for scope disputes and post-launch abandonment than an established SME with an internal technical team. Calibrate accordingly.
The red flags that cannot be negotiated away regardless of circumstance are the ones rooted in misaligned incentives rather than process gaps: an agency that says yes to everything in the sales process will not develop the judgment to say no when it costs them the relationship. An agency with ambiguous IP terms did not create that ambiguity by accident. An agency with no independent QA process has priced quality testing as optional.
Process gaps can be filled with the right contract language and the right conversations before signing. Incentive misalignment requires finding a different partner.
FAQ: Hiring a Custom Software Company in the UK
What Are the Most Important Questions to Ask Before Hiring a Software Agency?
Ask these five questions before signing any contract: (1) How is your discovery phase structured and what does it produce? (2) What is the change request process and can I see the template? (3) Who specifically will work on my project and at what allocation? (4) Who owns the IP and what are the exact terms? (5) What does post-launch support include as a standard service? The answers to these five questions reveal the agency’s operating model, commercial incentives, and communication culture more directly than any portfolio review. For a more complete framework, the full guide on questions to ask before hiring a software agency covers each dimension with evaluation criteria.
How Do I Identify a Bad Software Agency Before the Contract Is Signed?
The signs of a bad software agency that London founders should watch for include: technology discussions that precede business context questions, fixed-price quotes without a supporting specification, an inability to describe a past project failure, reluctance to provide direct client references, and communication response times that are already slow during pre-contract due diligence. The sales process is when an agency is at its most responsive. If they’re slow or vague before they have your commitment, they have established the baseline for how they’ll operate after it.
What Are the Risks of a Fixed-Price Contract in Software Development?
Fixed-price contract risks in software development emerge specifically when the specification is not detailed enough to prevent interpretation disputes. An agency committed to a fixed price has a commercial incentive to interpret every ambiguity in their favour, which transforms every undocumented assumption into a potential change order. For projects with evolving requirements, which describes most software builds, time-and-materials with milestone checkpoints produces fewer disputes and better alignment than a fixed price backed by an incomplete specification.
How Do I Find a Reliable Software Development Partner in the UK?
Finding a reliable software development partner in the UK requires evaluating operating model rather than portfolio presentation. Look for: a structured discovery phase with documented deliverables, independent QA with a named process, explicit IP assignment terms, transparent post-launch support, and direct client references whose projects are comparable in complexity to yours. Review platforms like Clutch and GoodFirms provide verified starting points, but the evaluation questions that reveal operating culture can only be answered in direct conversation.
What Should I Look for in a Software Agency’s Post-Launch Support Model?
A credible post-launch maintenance and support model includes: a defined SLA for critical production issues (typically four-hour response time for severity-one issues), a named support contact with technical seniority rather than a ticketing queue, confirmation that the team that built the product is the team maintaining it, and a clear distinction between what is included in standard support versus what is billed additionally. Agencies that discuss post-launch support only when asked, rather than proactively, are structurally oriented toward project delivery rather than ongoing partnership.
What Is Bespoke Software Development Vetting and Why Does It Matter for UK Businesses?
Bespoke software development vetting is the structured process of evaluating a custom software agency’s operating model, delivery track record, and contractual terms before committing to an engagement. It matters because bespoke software, unlike off-the-shelf products, is built specifically for your business requirements and architectural constraints. The decisions made in the first sprint of a bespoke build constrain what is possible in month eighteen. The quality of the vetting process determines whether those foundational decisions are made by a partner who understands your business or by a team executing a scope document without that context.
What the Red Flags Are Really Telling You
Each red flag in this list is a symptom of something structural. An agency that skips business questions has built its delivery model around specification execution rather than outcome creation. An agency with ambiguous IP terms has structured its contracts to protect its own commercial interests rather than yours. An agency with no post-launch support model has built a revenue model around project starts rather than ongoing partnership.
The signals are not random. They are consistent expressions of the operating model, commercial structure, and incentive alignment of the agency you’re evaluating. Understanding what each signal reveals rather than just what it looks like gives you a more useful evaluation framework than a checklist.
The agencies that pass this evaluation are not the ones that have never encountered a difficult project. They are the ones whose process, commercial model, and client relationships are structured to handle difficult projects honestly rather than defensively.
When you’re looking at the top software development companies in London on your shortlist, the question is not whether they can build your product. Most credible agencies can. The question is whether their operating model is aligned with what your project requires and what your risk tolerance allows. That question can only be answered through direct evaluation.
The red flags tell you which agencies to remove from the shortlist. The questions that follow tell you which agency to choose.
If you’re building custom software for a startup, SME, or growth-stage business and want a partner whose process, IP terms, and post-launch commitment hold up to direct scrutiny, book a free 30-minute discovery call with Empyreal Infotech. No pitch deck. No pressure. Just a direct conversation about whether your project is a fit.