Cost Engineering

Cost Engineering Today: Balancing People, Machines, and Data


When the world's largest motorcycle OEM publicly declared that "European manufacturing is dead," Jürgen Gumpinger, Vice President Strategic Supply Chain Management at KTM, took it as a signal, not a verdict.

With 80% of KTM's cost of goods sold tied to purchased parts, the margin for inaccuracy in product costing was simply too small to ignore. His response was to build one of the most data-intensive, automated cost engineering organizations in the two-wheeler industry.

At the Tset Summit 2025 in Munich, Jürgen Gumpinger reflected on seven years of firsthand experience building a cost engineering function from the ground up. Below are the key takeaways from his talk.

Start Where You Can Show Results

Seven years ago, KTM's CEO gave Jürgen Gumpinger a single directive: build a cost engineering department. The team started where most successful transformations start: with should cost analysis.

Without should costing, you can't do anything. It's the basement. And on the other side, you can immediately show savings. That's the only reason.

Should costing gave the purchasing team a concrete foundation. It moved procurement from reactive negotiation to strategic thinking on a commodity level, and it generated the internal credibility the function needed to grow.

KTM: Building the foundation. Process landscape meets roadmap.

Four Pillars That Made Scaling Possible

Gumpinger structured KTM's evolution around four interdependent pillars: tools, integration, change, and data. Dedicated costing tools formed the calculation backbone, connected to existing ERP, PLM, and PDM systems through a central data lake. Alongside the traditional commodity cost engineers, new roles emerged to support this infrastructure: data analysts, automation engineers, and vehicle costing specialists.

From 3 Calculations a Week to 500 a Day

The introduction of target costing at the start of each new vehicle project changed the scale of the work entirely. Engineers began working weekly with the cost engineering team to meet part-level cost targets, and the number of required calculations increased sharply. What was once 2 to 3 calculations per week became a minimum of 500 per day.

By having this pressure, we need to optimize. That's the fun of it. I love to call it enforced innovation.

To handle this volume, KTM introduced automation through similarity search, random forest models for standard parts, and NLP-based parameter prediction. The system identifies a similar historical part, extracts its dimensions, material, and weight, and feeds those parameters directly into a calculation in Tset or Siemens. This works regardless of whether 3D data is available.
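The talk did not go into implementation detail, but the similarity-search step can be sketched roughly like this: represent each part by a few numeric features, find the closest historical part, and reuse its costing parameters. All field names and values below are illustrative, not KTM's actual schema.

```python
import math

def parameter_vector(part):
    """Represent a part by simple numeric features (names are illustrative)."""
    return [part["length_mm"], part["width_mm"], part["weight_g"]]

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(new_part, historical_parts):
    """Find the closest historical part and reuse its costing parameters."""
    target = parameter_vector(new_part)
    best = max(historical_parts, key=lambda p: similarity(target, parameter_vector(p)))
    return {"material": best["material"], "weight_g": best["weight_g"]}

history = [
    {"length_mm": 120, "width_mm": 40, "weight_g": 310, "material": "AlSi9Cu3"},
    {"length_mm": 60, "width_mm": 60, "weight_g": 95, "material": "PA6-GF30"},
]
params = most_similar({"length_mm": 118, "width_mm": 42, "weight_g": 300}, history)
```

In practice this would run over thousands of parts with a proper vector index, while the random forest and NLP models fill in parameters that similarity alone cannot recover.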

Predicting Spend to Within 2.5%

One of the session's most concrete demonstrations of value was KTM's predictive analytics approach. Using a spend cube that connects production plans, bill of materials (BOM) data, SAP material information, and customs data, the team runs sensitivity analyses across approximately 260 should cost calculations per commodity. The model factors in changes to raw material prices, labor costs, energy, and supplier activity.

We had on the material cost a variation from predicted to real of two and a half percent. That was really sharp.

Controlling now uses cost engineering data directly to build the midterm budget forecast, a sign that the function has moved well beyond its original scope.
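As a rough sketch of the sensitivity mechanism described above: apply assumed input-price changes to each should cost breakdown, then aggregate to a commodity-level forecast. All drivers, deltas, and figures here are invented for illustration, not KTM's actual model.

```python
def apply_sensitivity(calculation, deltas):
    """Re-price one should cost breakdown under assumed input-price changes.

    `calculation` maps cost drivers to current EUR values per part;
    `deltas` maps drivers to relative changes (e.g. 0.05 = +5%).
    """
    return {
        driver: value * (1 + deltas.get(driver, 0.0))
        for driver, value in calculation.items()
    }

def predicted_commodity_spend(calculations, deltas):
    """Aggregate the adjusted calculations into a commodity-level forecast."""
    return sum(sum(apply_sensitivity(c, deltas).values()) for c in calculations)

# Two simplified should cost breakdowns (EUR per part)
calcs = [
    {"raw_material": 4.20, "labor": 1.10, "energy": 0.30},
    {"raw_material": 2.50, "labor": 0.80, "energy": 0.15},
]
# Scenario: raw material +8%, labor +3%, energy -5%
forecast = predicted_commodity_spend(
    calcs, {"raw_material": 0.08, "labor": 0.03, "energy": -0.05}
)
```

A real spend cube would multiply each adjusted part cost by planned production volumes and run this across roughly 260 calculations per commodity.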

What KTM Learned Along the Way

Alongside the wins, Jürgen Gumpinger shared an honest account of what remains unsolved. Explainability of results is still an open challenge: when an automated whole-vehicle calculation shifts between weeks, tracing the cause across 1,500 parts is far from straightforward. The quality of results also depends heavily on having the right assumptions in place, particularly when comparing costs across geographies with different processes and machine lifecycles. One early ambition had to be reconsidered along the way: finding a single tool to handle everything proved unrealistic, and the team moved toward a more open, integrable architecture instead.

We need open interfaces. Integration must be done by the OEM. There will not be a perfect solution that fits all.

Looking ahead, Jürgen Gumpinger outlined six focus points he believes will define the next stage of cost engineering:

  • Embrace change by demonstrating value
  • Invest in the right parameter sets
  • Open up data access across the organization
  • Build explainable comparisons to earn trust
  • Select tools with open APIs to support process integration
  • Work toward cost awareness across the entire development and production cycle

Want to watch the full session?

The complete recording of Jürgen Gumpinger's talk from Tset Summit 2025 is available on our website.

Watch now

Interested in attending Tset Summit 2026? We will be announcing the next edition soon. Follow Tset on LinkedIn and subscribe to our newsletter to be among the first to receive your invitation.

What is should cost analysis, and why does it matter in product costing?

Should cost analysis is the process of calculating what a part or product should cost to manufacture, based on materials, labor, overhead, and process assumptions, rather than accepting a supplier's quoted price at face value. It gives procurement and engineering teams an objective cost baseline, which makes supplier negotiations more fact-based and targeted. In product costing, should cost analysis is typically the starting point: it generates immediate, demonstrable savings and builds the internal credibility needed to expand cost engineering across the organization.
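A deliberately minimal should cost model makes the bottom-up idea concrete. The rates and factors below are illustrative; real models add machine rates, scrap, tooling amortization, and supplier margin.

```python
def should_cost(material_kg, material_price_per_kg, cycle_time_s,
                labor_rate_per_h, overhead_factor):
    """Minimal should cost sketch: material + labor, marked up for overhead."""
    material = material_kg * material_price_per_kg        # EUR material content
    labor = (cycle_time_s / 3600) * labor_rate_per_h      # EUR direct labor
    return (material + labor) * (1 + overhead_factor)

# A 0.4 kg aluminum part: 3.10 EUR/kg material, 90 s cycle,
# 45 EUR/h labor rate, 20% overhead markup
cost = should_cost(0.4, 3.10, 90, 45.0, 0.20)
```

The value of the exercise is less the single number than the transparent breakdown, which gives buyers a fact base to challenge each element of a supplier's quote.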

How does automation change the scale of product cost analysis?

Manual product cost analysis limits teams to a small number of calculations per week. With automation, similarity search, and machine learning models handling parameter extraction and calculation inputs, organizations can scale to hundreds of calculations per day. This volume shift is what makes whole-vehicle costing, early-phase BOM costing, and continuous target costing tracking feasible. The key enabler is connecting costing tools to existing systems such as PLM, PDM, and ERP, so data flows without manual re-entry at each step.

What is the difference between should costing and target costing?

Should costing answers the question: what does this part cost to make? Target costing answers a different question: what does this part need to cost for the product to be profitable? Target costing starts from the market price, works backward through the required margin, and translates that into part-level cost targets that engineers and purchasers must then meet. The two methods work together: should costing provides the data foundation, and target costing turns that data into actionable cost reduction goals throughout the product development process.
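The backward calculation can be made concrete with a small sketch. The price, margin, and cost shares are invented for illustration; in practice the allowable cost is allocated across hundreds of parts using should cost data.

```python
def part_cost_targets(market_price, margin_rate, cost_shares):
    """Work backward from market price to part-level cost targets.

    `cost_shares` maps each part (or module) to its share of the
    allowable cost; shares are assumed to sum to 1.
    """
    allowable_cost = market_price * (1 - margin_rate)
    return {part: round(allowable_cost * share, 2)
            for part, share in cost_shares.items()}

# A product sold at 10,000 EUR with a required 25% margin
targets = part_cost_targets(
    10_000, 0.25, {"frame": 0.18, "engine": 0.55, "electronics": 0.27}
)
```

Each target then becomes the number engineers and purchasers track against the should cost of the corresponding part during development.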

What skills does a modern cost engineering team need?

A cost engineering function today requires more than traditional commodity cost engineers who run calculations and support supplier negotiations. As the volume and complexity of product costing work grows, teams also need data analysts who can process and interpret large datasets, automation specialists who can build and maintain integrations between costing tools and enterprise systems, and vehicle or product-level cost engineers who work directly with R&D to support design-to-cost decisions. The balance between these roles depends on the organization's maturity level and the degree of automation already in place.
