Junior Cloud & AI Engineer

Location: Bedford, UK (hybrid) · Type: Full-time · Level: Graduate / Early career


About us

We're a UK-based technology consulting firm that's expanding into building and operating its own products — roughly half consulting, half in-house technology and engineering. The consulting side has a long track record of helping clients solve real technology problems. The product and engineering side, where this role sits, is where we build and operate the modern web applications, AI-powered tools, and cloud infrastructure that increasingly support both our customers and ourselves.

We're owner-led with hands-on, technically literate leadership. Decisions are made quickly, the work is visible, and every engineer's contribution counts. The engineering practice is being grown deliberately as a peer to the consulting side over the coming years — joining now means joining at the formative stage.


About the role

We're a fast-moving company running a modern technology estate — public-cloud infrastructure, self-hosted open-source platforms, custom-built business applications, and AI-powered tooling. As we grow, we're hiring a graduate or early-career engineer who genuinely wants to understand how all of this works under the hood.

This is an exceptional learning opportunity. You will see and contribute to every layer of a real production environment — from networking and Linux servers, through container orchestration and identity providers, to CI/CD pipelines and AI/LLM-integrated applications. You'll be mentored hands-on by an experienced engineering lead and develop genuine, transferable engineering skills that few graduate programmes give full visibility into.

Our mission for this hire: help build, automate, and harden the systems that the wider business depends on — with a focus on automation, cost efficiency, and resilient design.


What you'll do

You'll work across the full stack of modern infrastructure and applications. In a typical month you might:

Cloud infrastructure
  • Operate and improve workloads on a major public cloud platform — compute, networking, managed databases, object storage, identity and access management.
  • Manage Linux servers running containerised workloads.
  • Configure reverse proxies, automate TLS certificates, set up VPNs.

Application platform
  • Use code repositories with built-in CI/CD to deploy applications.
  • Operate a self-hosted platform-as-a-service that makes deployments routine.
  • Use container-management and workflow-automation tools to orchestrate environments.

Identity, secrets & security
  • Operate a secret-management system that holds the credentials the business depends on.
  • Maintain a single sign-on identity provider that federates corporate identities to internal applications.
  • Configure cloud and SaaS app registrations for SSO and bot integrations.

Custom business applications
  • Help operate and improve the low-code business applications that serve our paying customers.
  • Help operate the CRMs, content sites, and internal tools the wider team uses every day.

AI / LLM integration
  • Use commercial AI APIs from inside developer environments to accelerate everyday work.
  • Help operate a self-hosted AI chat front-end that lets staff use AI privately.
  • Help build the AI-assistant capability we're integrating into our internal tooling — a chatbot that performs real actions against production applications, generates marketing content, and acts as a productivity layer for the team.

Documentation & operational discipline
  • Keep infrastructure source-controlled — configuration in code, runbooks for incident recovery.
  • Contribute to disaster-recovery design and testing.
  • Take part in maintenance windows and restore drills.

This is not a "fix the printer" role. From early on you will own real production work and see how decisions actually get made.


About you

Mindset (most important)

  • Genuine curiosity about how things work. When you read about TLS, you want to know what the handshake is actually doing. When you use containers, you want to understand the kernel features they sit on. When you talk to an LLM, you want to understand attention and tokenisation, not just prompt-engineer.
  • A bias to learning fast and shipping. You can pick up a new tool over a weekend if it solves a real problem. You're comfortable being wrong on Monday and right by Friday.
  • Comfort with ambiguity. Real infrastructure is messy. You won't always have a perfect spec. You can read documentation, prototype, and confirm what works.
  • Care about doing things properly. You appreciate that "I forgot to write it down" is what causes outages at 3 AM.

Technical foundations we expect (or that you can learn fast)

  • Computer fundamentals: how an operating system works, what a process is, how memory is laid out, why disk I/O matters.
  • Networking: TCP/IP, DNS, TLS, HTTP — not necessarily mastery, but enough to debug a "site isn't loading" problem methodically.
  • Linux command line: comfortable in a shell. Can read a container build file, edit a config file, follow a system log.
  • At least one programming language well enough to be productive — Python, JavaScript / TypeScript, Go, or similar.
  • Git fluency — branches, merge requests, rebases.
  • Curiosity about how Large Language Models actually work — not memorising buzzwords; an interest in tokenisers, embeddings, transformer attention, why temperature matters, what retrieval-augmented generation is and why it's useful.

Bonus skills (great if you have them, learnable on the job)

  • Cloud platform experience — any major provider.
  • Containerisation experience.
  • Database fundamentals (SQL, schemas, transactions).
  • Reverse proxies / web servers.
  • A side project or contribution to open source you can talk about.
  • Anything you've built that uses an LLM API in a non-trivial way.

What we're not looking for

  • Five years of experience. This is genuinely a graduate / early-career role.
  • A specific degree title — Computer Science is helpful, but we'll seriously consider any STEM background, and self-taught engineers with a strong portfolio.
  • Vendor certifications. Useful, but we'll teach you whatever specific vendor knowledge the role needs.
  • Encyclopaedic memorisation of frameworks. We care about how you think.

What we offer

  • Direct mentorship from an experienced engineering lead across the full stack — from cloud billing dashboards to application internals to security recovery procedures. The kind of cross-layer exposure that's hard to find in larger organisations.
  • Real ownership. You'll own meaningful pieces of work early. Within months you'll be the primary owner of one or two of our internal services and running improvement projects yourself.
  • A chance to shape the modernisation roadmap. We have a concrete plan to upgrade the estate from "running" to "production-grade", and you will deliver a meaningful chunk of it.
  • Exposure to commercial conversations — paying customer relationships, supplier contracts, the operational economics of self-hosting vs. SaaS. You'll understand the business of what we build, not just the code.
  • Modern tools — including hands-on use of commercial AI APIs as part of your everyday work.
  • A realistic on-call structure. We're putting on-call discipline in place now, and you'll help design it.

Practicalities

  • Location: Bedford, UK — hybrid working. Some days in Bedford for in-person collaboration; the rest remote. UK-based candidates only (visa sponsorship not available at this point).
  • Reporting line: the engineering lead, who is also working towards a broader engineering-leadership remit. Indirect exposure to company leadership.
  • Working hours: flexible within reason. Maintenance windows are scheduled out-of-hours; everyone takes their share over time, with notice.
  • Probation: standard three-month probation period, with structured monthly review conversations so you know exactly how you're tracking.
  • Compensation: competitive for a UK graduate / early-career role, with regular reviews. Discussed at offer stage.

How to apply

Send us:

  1. A short note (max 500 words) about a project you've worked on — paid, academic, or hobby — that involved infrastructure, automation, or AI/LLMs. Tell us what you built, what was hard about it, and what you'd do differently.
  2. A CV — short and honest is better than long and embellished.
  3. (Optional but appreciated) Links to a personal site, portfolio, or anything you've built that we can look at.

We don't expect a slick portfolio. We do expect you to have something — even a rough one — that you can talk about thoughtfully.


Our interview process

  1. 30-minute conversation with the engineering lead — about what you've built, what you want to learn, and the role. This is genuinely two-way; if it's not a fit, we'll tell you, and we encourage you to ask hard questions.
  2. Practical exercise (take-home, 2-3 hours) — a small, real-world infrastructure problem. We respect your time; it won't be a large unpaid project.
  3. Technical conversation (60 minutes) — walking through your take-home and discussing how you think about infrastructure, automation, and AI integration. We won't surprise-quiz you on syntax.
  4. A meeting with company leadership to confirm fit and discuss commercial context.

Total elapsed time from first conversation to offer is typically 2–3 weeks.


A note on what working here is actually like

We're honest about the state of things. Our infrastructure is a mix of well-engineered foundations and known operational debt — there is a documented register of issues, an active improvement plan, and a track record of fixing things rather than ignoring them. You'll see how a small company actually operates technology, including the parts that aren't yet pretty. That visibility is a rare and valuable thing early in a career.

If that sounds like a place where you can learn fast, build serious skills, and have real impact — we'd love to hear from you.