The Future of Cost Engineering: Trends, Risks & Actions for the Next Industrial Decade


Cost engineering stands at a critical turning point. After years of working with cost engineers, procurement teams, and manufacturing leaders across the automotive and machinery sectors, we see a clear pattern: the bottlenecks holding back today's teams can be solved by unleashing the power of data. 

What Is Cost Engineering, Really? 

When you strip away the complexity, cost engineering always follows the same four-step loop, no matter where it's applied: 

  1. Build a Model: Create your calculation methodology

  2. Analyze Results: Extract insights from the data

  3. Define Measures: Determine actions based on findings

  4. Support Realization: Help teams implement improvements 

This applies everywhere. Whether you're doing should-cost analysis for sourcing, target setting, design-to-cost, or quoting, it's the same four steps with different inputs and outputs.

In our view, cost engineering is not one step in a process. It's a loop that runs in the background all the time.

 

Three Bottlenecks Blocking Progress 

Each step in this loop faces critical barriers that prevent cost engineering teams from reaching their full potential. 

Bottleneck #1: Model Creation 

Model creation is so time-consuming that it blocks entire applications before they even start. 

Model creation (not calculation) is the first major roadblock. Building a model for how something should be calculated requires significant expert time. Assumptions live in heads and scattered files. New models get created by copy-pasting variants of old ones, creating consistency issues across the organization. 

The time-consuming nature of model creation blocks other applications. Companies universally want to apply cost engineering throughout the entire product lifecycle, especially in early stages. But in reality? Most organizations end up doing 90% sourcing-focused work because model creation takes too long to scale across other use cases. 

Bottleneck #2: Insights Versus Data 

You have all the data, but you can't turn it into answers fast enough. 

Imagine you've completed a detailed, bottom-up calculation: the structure is perfect and the results are accurate. Now you need to generate insights, define measures, and answer questions from different stakeholders. 

The problem you face is accessibility. Data needs to be sliceable and interpretable not only by the cost engineering expert, but by everyone who consumes it: purchasing, engineering, controlling, sales, and management. 

How do you take a very detailed calculation and answer, for example, a management question? Today, it requires too much Excel work, mapping exercises, and manual manipulation to bridge that gap. 
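The gap between a detailed calculation and a stakeholder question can be sketched in a few lines. This is a minimal illustration, not Tset's implementation: it assumes the calculation is available as a flat list of line items with hypothetical field names (`part`, `category`, `cost`).

```python
from collections import defaultdict

# Hypothetical line items from a detailed bottom-up calculation.
line_items = [
    {"part": "housing", "category": "material",  "cost": 4.20},
    {"part": "housing", "category": "machining", "cost": 2.10},
    {"part": "cover",   "category": "material",  "cost": 1.30},
    {"part": "cover",   "category": "machining", "cost": 0.80},
    {"part": "cover",   "category": "overhead",  "cost": 0.40},
]

def cost_by(items, key):
    """Roll detailed line items up to the level a stakeholder asks about."""
    totals = defaultdict(float)
    for item in items:
        totals[item[key]] += item["cost"]
    return dict(totals)

# A management-level question: where does the money go, by cost category?
print(cost_by(line_items, "category"))
# A purchasing question: which part drives the cost?
print(cost_by(line_items, "part"))
```

When the data is accessible in a structured form, answering a new stakeholder question is one aggregation, not a fresh Excel mapping exercise.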

Bottleneck #3: Manual Overhead 

People think this is a software problem, but it's actually a process problem. 

All these steps have inputs and outputs, and every single one involves manual work that creates friction. BOMs arrive missing critical information for calculations. Economic parameters aren't clearly defined or documented. Data lives scattered across multiple systems with no unified access point. The cumulative overhead is substantial and eats into the time cost engineers should spend on analysis. 

Here's the critical insight: "People think of this as a software problem, but that software problem is intrinsically a process topic," explains Sasan Hashemi. "If you want to be cross-departmental, your system has to connect cross-departmentally. It's as simple as that!" The technical challenge is rooted in an organizational reality. 

The Path Forward: One Platform, Strategic Openness 

The solution starts with one central platform: a single source of truth. But not everything should be standardized, and not everything should be customized. 

You need to decide what is standardized (what the software provider should handle) and what requires freedom and openness for innovation (what you need to control internally). 

Two extreme approaches dominate the market, and both fail. Some companies demand fully standardized solutions where the software vendor handles everything from calculations to analytics to reporting. The appeal is obvious: outsource the complexity. The reality is less appealing: roadmaps that take years to deliver, and features that serve everyone adequately but no one perfectly. 

The opposite extreme is equally problematic. Companies build custom systems from scratch, maintaining complete control but also maintaining everything else. They end up dedicating resources to building and updating capabilities that already exist as mature, tested solutions elsewhere. 

The answer sits in the middle: a standard product cost management software for core functionality, surrounded by tools that give you autonomy to build applications specific to your business needs without heavy IT involvement. 

Three Applications Reshaping Cost Engineering 

Application #1: Reusability of Data 

The problem with Excel isn't Excel itself. Don’t get us wrong; you can create great, detailed calculations in Excel. The problem is that the data is not accessible. The value you created sits trapped in that spreadsheet, unable to be reused throughout the company. 

Openness and accessibility of data is the key. Everything should be accessible at the API level. Master data should be presented in a way that is easily consumable by other systems and use cases.

The fundamental principle: the data belongs to the customer who created it.

The customer created that data, and there's a lot of value in it. The system needs to give the data back to the customer, so they can get the maximum value out of it. Data must be usable from anywhere, not restricted to the costing software that generated it.

 

This openness unlocks new possibilities: BI analytics, AI model inputs, shared master data across systems, and the ability to combine costing data with information from other sources. 
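What "accessible at the API level" buys you can be shown with a small sketch. The nested JSON shape below is an assumption, not an actual Tset API response; the point is that a structured breakdown can be flattened once and then consumed by BI tools, AI pipelines, or any other system.

```python
import json

# Hypothetical JSON as a costing API might return it: a nested
# cost breakdown (the field names here are illustrative assumptions).
response = json.loads("""
{
  "name": "gearbox", "cost": 7.0,
  "children": [
    {"name": "housing", "cost": 4.5, "children": []},
    {"name": "shaft",   "cost": 2.5, "children": []}
  ]
}
""")

def flatten(node, path=""):
    """Turn the nested breakdown into flat rows that BI dashboards
    or downstream systems can consume directly."""
    here = f"{path}/{node['name']}"
    rows = [{"path": here, "cost": node["cost"]}]
    for child in node.get("children", []):
        rows.extend(flatten(child, here))
    return rows

for row in flatten(response):
    print(row["path"], row["cost"])
```

The same calculation that would be trapped in a spreadsheet becomes a reusable dataset the moment it is exposed as structured data.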

Application #2: Custom Calculation Logic 

Not all algorithms should be built from scratch. Industry standards exist for good reason. 

Generic overhead calculations are used by almost everyone. The model itself is well understood. What varies are the specific cost types, overhead bases, and rates. These configurations can and should be handled by standard software capabilities. Customers shouldn't have to build these foundational elements. Software providers like Tset should deliver them ready to configure.

This is where services like Tset's new Master Data Service demonstrate the necessary flexibility while maintaining standardization. 
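To make the standard-versus-configuration split concrete, here is a minimal sketch of a generic surcharge-based overhead model. The model structure is the industry-standard part; the cost types, overhead bases, and rates (all numbers below are invented for illustration) are the configuration a company supplies.

```python
# Direct costs per unit (illustrative values).
direct = {"material": 10.00, "labor": 4.00}

# Overhead rates: each cost type is a percentage surcharge on a
# configurable base. These names and rates are assumptions.
overhead_rates = {
    "material_overhead":   ("material", 0.08),  # 8% on material
    "production_overhead": ("labor",    0.35),  # 35% on labor
}

def total_cost(direct, overhead_rates):
    """Standard model: total = direct costs + rate * base per overhead type."""
    overhead = {
        name: direct[base] * rate
        for name, (base, rate) in overhead_rates.items()
    }
    return sum(direct.values()) + sum(overhead.values()), overhead

total, overhead = total_cost(direct, overhead_rates)
print(round(total, 2))  # direct 14.00 + overhead 2.20
```

The function never needs to change; only the rate table does, which is exactly the kind of element standard software should deliver ready to configure.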

But there's another category of algorithms: those that are very opinionated or highly specific to producing certain part commodities. This is where custom logic becomes necessary. 

The vision is clear: use what is standard where there are standards. Where there are no standards, or where your methodology represents competitive advantage, you should be able to implement your own logic. The software should be extendable to meet specific domain needs without forcing you to choose between "build everything myself" or "stay restricted to the standard."

Application #3: Integration Across Systems 

Cost engineering exists within a connected landscape. Many systems contain information that makes sense to use in a costing system. You want to connect to PLM, ERP, and purchasing systems to consume their data. 

The flow works both ways. Costing systems provide substantial data that can be consumed by other applications: reporting tools, analytics platforms, workflow systems. Costing data becomes more valuable when combined with information from other sources. 
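A small sketch shows why combining sources matters. The data below is invented: should-cost results from a costing system joined with volumes and quoted prices that would typically live in a purchasing system. Neither source alone can answer the question "where is the largest annual savings potential?"

```python
# Costing system output: should-cost per part (illustrative values).
should_cost = {"P-100": 3.80, "P-200": 1.25}

# Data from another system (e.g. purchasing): annual volumes and
# current quoted prices. Part numbers and fields are assumptions.
purchasing = {
    "P-100": {"volume": 50_000,  "quoted_price": 4.10},
    "P-200": {"volume": 120_000, "quoted_price": 1.20},
}

# Joining the two sources: annual savings potential per part.
# A negative value means the quote is already below should-cost.
potential = {
    part: (purchasing[part]["quoted_price"] - cost) * purchasing[part]["volume"]
    for part, cost in should_cost.items()
    if part in purchasing
}
for part, saving in sorted(potential.items(), key=lambda kv: -kv[1]):
    print(part, round(saving, 2))
```

The insight (negotiate P-100 first) only emerges once costing data flows out of the costing system and meets data from elsewhere.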

If you're thinking about modular services, this principle extends even further. You can decompose costing systems from monoliths into individual services that deliver standalone value, which customers can use independently. 

The summary is straightforward: it's all about thinking in openness. Open systems, not closed. Open data, not trapped. Open algorithms, not locked. 

Free Guide: Modern Cost Engineering in 2025

Looking to modernize your cost engineering process but unsure where AI fits in? Read our new guide to see how to leverage AI strategically while maintaining control over critical calculations.  

Download now

 

What This Means for Cost Engineering Teams 

The future of cost engineering isn't about replacing expertise with technology. It's about removing the bottlenecks that prevent cost engineers from applying their expertise where it matters most. 

The shift toward openness in data, algorithms, and systems represents more than technical architecture decisions. It reflects a fundamental understanding: cost engineering creates its greatest value when insights flow freely across departments, when models can be built rapidly without sacrificing quality, and when the overhead of moving information between systems disappears. 

The question facing cost engineering teams isn't whether to adopt these principles. It's how quickly they can implement them before the competitive gap widens. 

Watch the Full Session

Sasan Hashemi, CEO & Co-Founder of Tset, and Gerd Sauermann, CPTO at Tset, presented these insights at the Tset Summit 2025 in Munich. Watch the complete recording to hear their full discussion on "The Future of Cost Engineering: Trends, Risks & Actions for the Next Industrial Decade."

Watch now

 

What is the main bottleneck preventing cost engineering teams from scaling across the product lifecycle?

Model creation is the primary bottleneck. Building calculation models requires significant expert time, with assumptions scattered across files and heads. This time-consuming process blocks organizations from expanding cost engineering beyond sourcing applications into early-stage product development, design-to-cost, and other high-value use cases.

How does data accessibility differ from data availability in cost engineering?

Having data available isn't enough. Data accessibility means cost engineering results can be easily sliced, interpreted, and consumed by different stakeholders across purchasing, engineering, controlling, sales, and management. Without proper accessibility, teams waste time on Excel manipulation and manual mapping instead of generating actionable insights.

What does "openness" mean in the context of modern cost engineering software?

Openness refers to three key areas: open data (accessible via APIs and reusable across systems), open algorithms (ability to implement custom calculation logic alongside standard methods), and open systems (seamless integration with PLM, ERP, and other enterprise tools). This approach balances standardization with the flexibility needed for competitive advantage.

Should companies build custom cost engineering systems or buy standardized software?

Neither extreme works well. Fully standardized solutions lock you into slow vendor roadmaps that serve everyone adequately but no one perfectly. Custom-built systems require maintaining capabilities that already exist as mature solutions. The optimal approach combines standard product cost management software for core functionality with tools that provide autonomy for business-specific applications.

How can cost engineering data be used beyond traditional costing applications?

When data is open and accessible, it unlocks multiple applications: business intelligence analytics, AI model inputs, shared master data across enterprise systems, and combination with information from other sources. This reusability transforms cost engineering from an isolated function into a strategic data source that drives decisions across the organization.

 
