
SDET vs QA Engineer:
Changing the Pace of Production

The more stages software has to pass through, the more time delivery takes. But in a competitive market, everyone needs to move faster and perform better. That is one of the reasons the SDET role became so valuable.



Inside the SDET Role: An Interview on Quality and Release Speed

Softwarium

An SDET is not a QA engineer. An SDET is a team player with a fuller view of the software. When the role is treated as its own discipline, it gives teams a real efficiency boost and changes the pace of production.

In this interview, a Softwarium engineer talks about the shift from Manual QA to SDET, and why the role matters more and more for teams that need to move fast.

From Manual QA to SDET

What was your path? Did you start as a developer, a tester, or something else entirely?


Ruslan: I came into testing as a Manual QA. Later, I started getting automation-related tasks on projects: writing scripts, automating parts of the workflow, taking on more technical responsibilities. From there, I moved into more of a General QA role, where you handle both manual testing and automation.

At some point I realized that if I wanted to work more closely with developers, I needed to understand their language better. Our stack was .NET, so I studied C# more seriously and started using it as my main language for test automation. Later I saw roles labeled as SDET — positions with heavier automation, less manual testing, and much tighter collaboration with development. That was the natural next step.


Did you ever feel caught between two worlds — developers seeing you mainly as a tester, and QA people seeing you as someone who writes too much code?


Ruslan: Inside the team, not really. For developers, I still usually remain a QA engineer, because my focus is still on tests. And in practice, if you work with testing, people often just call you QA. It is a broader term. It includes quality assurance more generally — not only tests, but also things around releases, integrations, CI/CD processes, and work that is closer to the code.

But when you apply for more manual-heavy roles, sometimes people do look at your background and say: you have done so much programming and automation, and not enough manual testing. Being advanced can work against you too. (smiles)

What an SDET actually does

If you had to explain your work to a non-technical founder, what would you say?


Ruslan: I usually explain it simply: I am a programmer, but I do not write product code. I write the code that tests the product.

I do not build the product itself. I build the tests that validate the product. But the real work starts before I write a single test. First I need to understand the product, what needs coverage, where the risk is, and what should be prioritized first. Then I write the automated tests, and then I integrate them into the CI/CD pipeline so they become part of release decision-making.

If those tests fail, deployment may stop. That is where the impact becomes visible to everyone. A failing suite can mean there is a real product issue, or that the test itself is unstable. And if your tests are unreliable, the whole system starts to lose credibility very quickly.

So a big part of the role is not just test automation. It is making that automation trustworthy. That takes technical skill, but also communication. You need developers and managers to trust the signals your tests produce.

“I do not write product code. I write the code that decides whether product code is safe to ship.”

That is probably the simplest way to explain the job.

SDET vs QA: the real difference

What is the most common misunderstanding people have about what you do versus what a traditional QA tester does?


Ruslan: If we look at how these roles developed, then QA automation grew out of manual testing. Testers wanted to make their work easier, speed up regression testing, and automate repetitive parts of the process. So over time, testers learned tools and became automation engineers. Their strongest side is still the testing foundation. They think like testers first. They think about the product, about how it can break, about what can go wrong.

“The SDET role did not grow out of testing. It grew out of development.”

Developers were already writing code and testing their own code — first unit tests, then integration tests, system tests, and so on. That is where the software development engineer in test idea comes from: a developer who also writes tests.

At some point, both roles may be writing automated tests. But they come to that work from different backgrounds. One starts with testing, the other starts with programming.


So is SDET just a more advanced version of QA?


Ruslan: I would not put it that way. A tester or automation engineer usually has not only a testing background, but also a testing mindset. You think at a higher level about the product. You do not focus so much on implementation. For you, the product is closer to a black box. You care about the result, about how the product behaves for the user.

An SDET is closer to white-box testing. You know more about implementation. You understand where problems may come from in the code. That gives you advantages, but it also changes your perspective. When you think more about implementation, you naturally spend less time thinking at that higher product level.

So there is a difference, but I would not describe one role as simply “above” the other.

Where the role changes outcomes

Can you walk me through a case where having an SDET on the team changed the outcome?


Ruslan: Yes. One important case is where the tests live.

In some teams, the product code and the test automation framework live separately. Developers own the product code. Automation engineers own the test code. That has one advantage: they are independent. If the test framework breaks, it does not affect the product directly, and vice versa.

But when you work as an SDET, your autotests often live in the same repository and sometimes in the same solution as the product code. They may even reference product types directly. That means the connection is much tighter.

The advantage is that I can write more low-level tests, for example integration tests, and I can also catch issues earlier. My tests get reviewed by developers, and I can review developers’ pull requests too. That means I can sometimes spot bugs already at the code review stage.
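What "tests living next to the product" looks like can be sketched in a few lines. This is a hypothetical illustration, not code from the project: `Cart` stands in for a product type that the test references directly, instead of driving the product through a UI or a public API.

```python
# --- product code (in the same repository or solution as the tests) ---
class Cart:
    """A stand-in product type; a real project would import this
    from the product's own module or assembly."""

    def __init__(self):
        self._items = {}

    def add(self, sku: str, qty: int = 1) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())


# --- integration-style test referencing the product type directly ---
def test_cart_accumulates_quantities():
    cart = Cart()
    cart.add("sku-1", 2)
    cart.add("sku-1", 3)
    assert cart.total_items() == 5


test_cart_accumulates_quantities()
```

Because the test compiles and runs against product types directly, a breaking change in the product surfaces at build or review time, not days later in a separate automation repository.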

In one case, there was no environment available to check a specific scenario, and setting one up would have taken a long time. But the manager needed an answer quickly. I found a similar case, looked through the code, checked my assumption with developers, and gave a reliable answer the same day.

“By looking through the code and confirming the logic with developers, I answered in one day what could have taken a week.”

That is one of the strengths of the role. You are close enough to the code to answer some questions much faster, without waiting for a full test setup.

I also integrated my autotests directly into shared CI/CD pipelines myself. I did not need to hand that off. That also changes how quickly the team gets feedback and how much ownership sits inside the role.

On flaky tests and trust


What is a flaky test, and why is it such a problem?


Ruslan: A flaky test is one that behaves inconsistently. It passes, then fails, then passes again, without a real change in the product. And the biggest problem with flaky tests is trust.

“My threshold for flaky tests is zero. And I’d rather have fewer autotests that are always stable.”

This is true for both SDET and QA automation engineers. It is a shared problem. If your tests are flaky and they start blocking releases, developers may investigate the first few times. But after that, they begin stepping over the failures. Then nobody really looks at the results anymore. That is when test debt starts growing, and the suite slowly becomes useless.

So my approach is simple: it is better to keep a smaller scope and fix instability fast than to keep adding more and more tests on top of a weak foundation.

Another important thing is shared responsibility. The goal is not only that the team notices when tests fail, but that they are able to react to it. Failing tests should not become something everybody ignores until the automation engineer comes back from vacation. The team should understand that this is not normal and know how to deal with it.


What are the quick fixes people use for flaky tests?


Ruslan: The easiest one is a retry. Sometimes you really do need a quick fix, and a retry can buy you time. Sometimes you disable a test for a while and come back to it later. But those are temporary solutions.
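A retry wrapper is easy to sketch. This is a minimal, hypothetical helper (not from any specific framework; most test runners offer retries as a built-in or plugin) that shows why it only buys time: the flaky behavior is masked, not fixed.

```python
import time


def with_retries(test_fn, attempts=3, delay=0.1):
    """Run a test function, retrying on assertion failure.

    A retry is a temporary mitigation for flaky tests: it keeps the
    pipeline green for now, but the underlying instability remains
    and still needs a real fix.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return test_fn()
        except AssertionError as error:
            last_error = error
            time.sleep(delay)  # brief pause before the next attempt
    raise last_error
```

If a test only passes thanks to this wrapper, that is a signal to investigate, not a reason to add more retries.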

A common problem is hardcoded waits. Instead of checking properly whether a page or state is ready, someone just adds a fixed delay. A smarter solution is to build logic that checks whether the page is actually loaded and ready. That makes the test much more reliable.
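The difference between a hardcoded wait and a readiness check can be shown with a small polling helper. This is a generic sketch (modern frameworks such as Playwright build this kind of auto-waiting in, so you rarely write it by hand there):

```python
import time


def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Poll an explicit readiness condition instead of sleeping a fixed amount.

    A hardcoded time.sleep(5) either wastes time (the page was ready
    sooner) or fails anyway (the page took longer). Polling until the
    condition holds, with a timeout, is both faster and more reliable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False
```

A caller passes the actual readiness check, for example `wait_until(lambda: page_is_loaded())`, and the test proceeds the moment the state is ready rather than after an arbitrary delay.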

Frameworks, migration, and technical taste

What framework do you use now, and why?


Ruslan: Right now we use Playwright. That is not just a personal preference. It is also where the broader QA community is moving. For us, Playwright has been replacing Selenium because it is more stable, and because it is backed by Microsoft.

In general, I like tools that are supported by a large company with long-term investment behind them. That matters. You are less likely to end up alone with a tool that stops evolving.

I also worked with NUnit, xUnit, MSTest, and with tools like Cypress and Katalon Studio. And that is where another difference shows up. Some tools are easier to start with if you do not know programming that well. But at some point, you hit their limits. You need something more specific, more scalable, faster, or more flexible, and the tool starts restricting you.

For an SDET, starting is harder because you actually have to write code. But once you do, you have many fewer limitations. You can extend the framework, speed up execution, parallelize tests, and shape the solution around the product instead of the other way around.

Audit before automate


When you join a new project, what do you look at first?


Ruslan: First, I look at the product itself. What is it for? Who is it for? What problem does it solve? You need to understand the business logic first.

“Before you automate anything, you need to understand the product and the business logic behind it.”

Then I look at the development process. How is work organized? Who is responsible for what? Is testing a bottleneck? Or is it naturally built into delivery? What is the current test coverage? What is the automation-to-manual ratio? Where is the actual friction?

That is why the basic rule is: audit before automate.

Sometimes the real issue is not tooling. Sometimes it is a process issue. Sometimes it is capacity. And sometimes the problem is that developers hand over work that is too raw, and QA spends time on issues that should have been caught earlier.

One thing I try to do is build a more shared model. I can design the tests, prioritize them, and write new automation. Developers can review them, understand how they work, and help maintain them. That way, autotests do not become a bottleneck owned by one person. The team gets a more practical, more collaborative model.

Good CI/CD and quality gates

What does a good CI/CD pipeline look like from a quality standpoint?


Ruslan: The biggest difference is speed. Bad pipelines are slow. If tests take too long, nobody can react quickly. Good pipelines give fast feedback.

The exact timing depends on the project. On one heavier platform, twenty minutes was acceptable after optimization. On a smaller product, even that would feel too long. So there is no universal number. What matters is whether the pipeline gives feedback quickly enough for the team to act.

As for quality gates, the most obvious one is the autotest suite itself. If the tests fail, you stop and decide whether the release can continue. There can also be other gates — unit test thresholds, code quality tools, or canary testing before wider rollout.
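As a rough illustration, such gates might look like this in a GitHub Actions-style pipeline. This is a sketch, not a real project configuration: the job names, commands, and test filters are assumptions (the `dotnet` commands reflect the .NET stack mentioned earlier).

```yaml
# Illustrative pipeline sketch; names and commands are hypothetical.
name: release-gate
on: [push]

jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Unit tests (fastest gate runs first)
        run: dotnet test --filter Category=Unit
      - name: Code quality check
        run: dotnet format --verify-no-changes
      - name: Automated test suite
        run: dotnet test --filter Category=Integration

  deploy:
    needs: quality-gates   # deployment stops if any gate fails
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to a canary environment before wider rollout"
```

The key structural point is the `needs:` dependency: a failing suite blocks the deploy job, which is exactly the "tests fail, deployment may stop" behavior described above.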

That is where CI/CD-integrated testing becomes important. It is not just about automation existing. It is about automation being part of delivery.

And this is also where shift-left testing matters. An SDET can often catch issues earlier — not only in test runs, but even by reviewing code before it gets further down the pipeline.

Security, permissions, and test data


Many teams still treat security as a separate track. What does it look like when security testing is built into the process?


Ruslan: In my experience, one of the most practical areas is permissions and access. We do not only test administrator scenarios. We also test users with no rights, partial rights, and different permission levels. We check whether APIs and interfaces are accessible to users who should not have access.
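Permission testing like this is naturally expressed as a matrix of roles against actions. The sketch below is hypothetical: `can_access` stands in for a real authorization check or an HTTP call that returns allowed/forbidden, and the roles and actions are invented for illustration.

```python
# Illustrative role -> allowed-actions mapping; in a real system this
# would be the product's authorization layer, not a dict in the test.
PERMISSIONS = {
    "admin":   {"view_report", "edit_user", "delete_group"},
    "editor":  {"view_report", "edit_user"},
    "viewer":  {"view_report"},
    "no_role": set(),
}


def can_access(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())


# Expected access matrix: the test fails if any role can do more
# (or less) than it should -- including the "no rights" user.
EXPECTED = [
    ("no_role", "view_report",  False),
    ("viewer",  "view_report",  True),
    ("viewer",  "edit_user",    False),
    ("editor",  "delete_group", False),
    ("admin",   "delete_group", True),
]


def test_permission_matrix():
    for role, action, allowed in EXPECTED:
        assert can_access(role, action) == allowed, (role, action)


test_permission_matrix()
```

The value of the matrix form is exhaustiveness: the dangerous bugs are usually in the rows nobody thinks to test, like a user with no rights reaching an admin-only action.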

That kind of security testing can uncover serious issues. I once found a case where a user without admin rights could create or join a group in a way that would let them get administrator permissions. That was a critical bug.

With test data, ideally you want realistic data, but you cannot use real customer records directly. So you work with anonymized or synthetic test data. You mask names, passwords, and other sensitive fields so the data stays realistic enough for testing, but does not expose real information.
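Masking can be sketched as a small transformation over a record. This is a minimal, hypothetical example (field names are invented): it replaces sensitive values deterministically, so the same input always produces the same mask and relationships between records survive, while real values never reach the test environment.

```python
import hashlib


def mask_record(record: dict, sensitive_fields=("name", "email", "password")) -> dict:
    """Return a copy of a record with sensitive fields replaced.

    The mask is derived from a hash of the original value, so it is
    deterministic (useful for joins and repeat runs) but does not
    expose the real data.
    """
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = f"{field}_{digest}"
    return masked
```

Non-sensitive fields pass through untouched, so the data stays realistic enough for testing: `mask_record({"name": "Alice", "plan": "pro"})` keeps `"plan": "pro"` while replacing the name.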

That is not perfect. Sometimes a test passes on synthetic data and still fails in production. That does happen. But if you work in environments where real data cannot be used, that is part of responsible test data management.

The human side — and the future of QA

What do you enjoy most about the work?


Ruslan: What I like is that the IT industry is always ahead — it is always about advanced technologies. Microsoft has been driving that for a long time, and many other companies are driving the IT world as well. And you just have to keep up with that wave of new technologies.

Of course, it would probably be even cooler to create something yourself, or even better, to contribute to open source, support an open-source product, build something unique, be on the front edge so that the world is following you. I don’t have that kind of experience, though. So for me, working with the newest technologies and building advanced products — that in itself is already very cool.


So you’re like a surfer who catches the wave first?


Ruslan: (laughs) You could say that.

“I’m not the wave — but I’m the surfer who catches it first.”

And if a QA tester told you, “I want to become an SDET” — what would you say?


Ruslan: First, you need patience. It does not happen quickly. It can take a long time, because the knowledge requirements keep growing.

Second, you need both sides of the foundation: testing theory and programming. Without understanding testing, and without knowing at least one programming language well enough, you will not become either an automation engineer or an SDET.

And the biggest gap that usually holds people back is simple: programming.


Last one. Is the traditional QA tester role disappearing?


Ruslan: No. It is not going anywhere.

That role is built on a different mindset. A good tester thinks critically, thinks about how to break the product, how relevant it is for users, and how usable it is. Even with AI in QA, not everything can be automated. There is still a lot that has to be understood, checked, questioned, and explored by a human.

So no — the profession is still relevant. It is just evolving, like it always has.

Quality is strongest when it is built into the work.

That is what a strong SDET brings: better judgment about what to automate, how to integrate it, and how to make it trustworthy over time. If testing feels like a delivery bottleneck, it may be time for a QA Assessment. And if your team wants quality to sit closer to development, an SDET can help make that work.