When the world's largest motorcycle OEM publicly declared that "European manufacturing is dead," Jürgen Gumpinger, Vice President Strategic Supply Chain Management at KTM, took it as a signal, not a verdict.
With 80% of KTM's cost of goods sold tied to purchased parts, there was simply no room for inaccuracy in product costing. His response was to build one of the most data-intensive, automated cost engineering organizations in the two-wheeler industry.
At the Tset Summit 2025 in Munich, Jürgen Gumpinger reflected on seven years of firsthand experience building a cost engineering function from the ground up. Below are the key takeaways from his talk.
Seven years ago, KTM's CEO gave Jürgen Gumpinger a single directive: build a cost engineering department. The team started where most successful transformations start: with should cost analysis.
Should costing gave the purchasing team a concrete foundation. It moved procurement from reactive negotiation to strategic thinking on a commodity level, and it generated the internal credibility the function needed to grow.
Gumpinger structured KTM's evolution around four interdependent pillars: tools, integration, change, and data. Dedicated costing tools formed the calculation backbone, connected to existing ERP, PLM, and PDM systems through a central data lake. Alongside the traditional commodity cost engineers, new roles emerged to support this infrastructure: data analysts, automation engineers, and vehicle costing specialists.
The introduction of target costing at the start of each new vehicle project changed the scale of the work entirely. Engineers began working weekly with the cost engineering team to meet part-level cost targets, and the number of required calculations increased sharply. What was once 2 to 3 calculations per week became a minimum of 500 per day.
To handle this volume, KTM introduced automation through similarity search, random forest models for standard parts, and NLP-based parameter prediction. The system identifies a similar historical part, extracts its dimensions, material, and weight, and feeds those parameters directly into a calculation in Tset or Siemens. This works regardless of whether 3D data is available.
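The core of this approach, finding the closest historical part and reusing its parameters to pre-fill a new calculation, can be illustrated with a minimal sketch. The part names, feature vectors, and costing parameters below are invented for illustration; a production system would match on far richer data and hand the result to a costing tool rather than return a dictionary.

```python
import math

# Hypothetical historical parts database: feature vector is
# (length_mm, diameter_mm, weight_g); costing parameters ride along.
HISTORICAL_PARTS = {
    "bolt-M8x40":  {"features": (40.0, 8.0, 18.0), "material": "steel 8.8", "cycle_time_s": 4.2},
    "bolt-M10x60": {"features": (60.0, 10.0, 38.0), "material": "steel 8.8", "cycle_time_s": 5.1},
    "bushing-12":  {"features": (25.0, 12.0, 9.0),  "material": "bronze",    "cycle_time_s": 7.8},
}

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar_part(query_features):
    """Return (part_id, record) for the historical part closest to the query."""
    return min(HISTORICAL_PARTS.items(),
               key=lambda kv: euclidean(kv[1]["features"], query_features))

def prefill_calculation(query_features):
    """Seed a new should-cost calculation with the closest match's parameters."""
    part_id, record = most_similar_part(query_features)
    return {"based_on": part_id,
            "material": record["material"],
            "cycle_time_s": record["cycle_time_s"]}

calc = prefill_calculation((42.0, 8.0, 19.0))  # a new bolt-like part
print(calc["based_on"])  # → bolt-M8x40
```

Because the match runs on extracted attributes rather than geometry, the same lookup works whether or not a 3D model exists, which mirrors the point above about 3D data availability.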
One of the session's most concrete demonstrations of value was KTM's predictive analytics approach. Using a spend cube that connects production plans, bill of materials (BOM) data, SAP material information, and customs data, the team runs sensitivity analyses across approximately 260 should cost calculations per commodity. The model factors in changes to raw material prices, labor costs, energy, and supplier activity.
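The mechanics of such a sensitivity analysis can be sketched in a few lines: break each should-cost calculation into its cost drivers, apply a percentage shock to one driver, and re-aggregate across the commodity. The part names, numbers, and driver split below are illustrative assumptions, not KTM's actual model.

```python
# Per-part should-cost breakdowns in EUR (illustrative values).
CALCULATIONS = [
    {"part": "frame-tube", "raw_material": 4.80, "labor": 2.10, "energy": 0.35},
    {"part": "swingarm",   "raw_material": 9.20, "labor": 3.40, "energy": 0.60},
    {"part": "footpeg",    "raw_material": 1.10, "labor": 0.80, "energy": 0.10},
]

def total_cost(calc):
    return calc["raw_material"] + calc["labor"] + calc["energy"]

def shocked_total(driver, pct):
    """Commodity total after applying a percentage change to one cost driver."""
    return sum(total_cost(c) + c[driver] * pct for c in CALCULATIONS)

baseline = sum(total_cost(c) for c in CALCULATIONS)
steel_up = shocked_total("raw_material", 0.10)  # +10% raw material prices
print(f"baseline: {baseline:.2f} EUR, +10% raw material: {steel_up:.2f} EUR")
# → baseline: 22.45 EUR, +10% raw material: 23.96 EUR
```

Scaled up to roughly 260 calculations per commodity, the same loop answers "what happens to this commodity if steel, labor, or energy moves" in one pass, which is what makes the output usable for budget forecasting.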
Controlling now uses cost engineering data directly to build the midterm budget forecast, a sign that the function has moved well beyond its original scope.
Alongside the wins, Jürgen Gumpinger shared an honest account of what remains unsolved. Explainability of results remains an open challenge: when an automated whole-vehicle calculation shifts between weeks, tracing the cause across 1,500 parts is far from straightforward. The quality of results also depends heavily on having the right assumptions in place, particularly when comparing costs across geographies with different processes and machine lifecycles. One early ambition had to be reconsidered along the way: finding a single tool to handle everything proved unrealistic, and the team moved toward a more open, integrable architecture instead.
Looking ahead, Jürgen Gumpinger outlined six focus points that he believes will define the next stage of cost engineering.
Interested in attending Tset Summit 2026? We will be announcing the next edition soon. Follow Tset on LinkedIn and subscribe to our newsletter to be among the first to receive your invitation.