When you strip away the complexity, cost engineering always follows the same four-step loop, no matter where it's applied:
Build a Model: Create your calculation methodology
Analyze Results: Extract insights from the data
Define Measures: Determine actions based on findings
Support Realization: Help teams implement improvements
This applies everywhere. Whether you're doing should-cost analysis for sourcing, target setting, design-to-cost, or quoting, it's the same four steps with different inputs and outputs.
Each step in this loop faces critical barriers that prevent cost engineering teams from reaching their full potential.
Model creation is so time-consuming that it blocks entire applications before they even start.
Model creation (not calculation) is the first major roadblock. Building a model for how something should be calculated requires significant expert time. Assumptions live in heads and scattered files. New models get created by copy-pasting variants of old ones, creating consistency issues across the organization.
The time-consuming nature of model creation blocks other applications. Companies universally want to apply cost engineering throughout the entire product lifecycle, especially in early stages. But in reality? Most organizations end up doing 90% sourcing-focused work because model creation takes too long to scale across other use cases.
You have all the data, but you can't turn it into answers fast enough.
Imagine you've completed a detailed, bottom-up calculation. The structure is perfect and the results are accurate. Now you need to generate insights, define measures, and answer questions from different stakeholders.
The problem that you face is accessibility. Data needs to be sliceable and interpretable not only by the cost engineering expert, but by everyone who consumes it: purchasing, engineering, controlling, sales, management.
How do you take a very detailed calculation and answer, for example, a management question? Today, it requires too much Excel work, mapping exercises, and manual manipulation to bridge that gap.
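To make that concrete, here is a minimal sketch of the kind of roll-up a cost engineer does by hand in Excel today: collapsing a detailed breakdown into the view a management question actually needs. The column names (part_family, cost_type, plant, cost_eur) are illustrative assumptions, not a real export format.

```python
# A minimal sketch of rolling a detailed cost breakdown up to a management view.
# Column and family/plant names are illustrative assumptions only.
import pandas as pd

breakdown = pd.DataFrame([
    {"part_family": "Housings", "cost_type": "Material",  "plant": "Graz",      "cost_eur": 4.12},
    {"part_family": "Housings", "cost_type": "Machining", "plant": "Graz",      "cost_eur": 2.35},
    {"part_family": "Housings", "cost_type": "Overhead",  "plant": "Graz",      "cost_eur": 1.08},
    {"part_family": "Brackets", "cost_type": "Material",  "plant": "Monterrey", "cost_eur": 1.74},
    {"part_family": "Brackets", "cost_type": "Stamping",  "plant": "Monterrey", "cost_eur": 0.66},
])

# Management question: "Where does the cost sit, by family and by plant?"
by_family = breakdown.groupby("part_family")["cost_eur"].sum()
by_plant = breakdown.groupby(["plant", "cost_type"])["cost_eur"].sum().unstack(fill_value=0)

print(by_family)
print(by_plant)
```

The point isn't the few lines of grouping logic; it's that this kind of slicing should be available to every stakeholder without a manual export-and-mapping exercise.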
People think this is a software problem, but it's actually a process problem.
All these steps have inputs and outputs, and every single one involves manual work that creates friction. BOMs arrive missing critical information for calculations. Economic parameters aren't clearly defined or documented. Data lives scattered across multiple systems with no unified access point. The cumulative overhead is substantial and eats into the time cost engineers should spend on analysis.
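As a small, hypothetical example of that friction, consider the check a cost engineer effectively performs on every incoming BOM before a calculation can even start. The field names below are assumptions for illustration, not a specific BOM format.

```python
# A minimal sketch of the manual gatekeeping that happens today:
# does an incoming BOM line carry everything a calculation needs?
REQUIRED_FIELDS = ["part_number", "material", "net_weight_kg", "annual_volume"]

def missing_fields(bom_line: dict) -> list[str]:
    """Return the calculation-relevant fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if bom_line.get(f) in (None, "")]

bom = [
    {"part_number": "A-100", "material": "AlSi9Cu3", "net_weight_kg": 0.42, "annual_volume": 80000},
    {"part_number": "A-101", "material": "", "net_weight_kg": None, "annual_volume": 80000},
]

for line in bom:
    gaps = missing_fields(line)
    if gaps:
        print(f"{line['part_number']}: cannot be calculated, missing {gaps}")
```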
Here's the critical insight: "People think of this as a software problem, but that software problem is intrinsically a process topic," explains Sasan Hashemi. "If you want to be cross-departmental, your system has to connect cross-departmentally. It's as simple as that!" The technical challenge is rooted in an organizational reality.
The solution starts with one central platform - one single source of truth. But not everything should be standardized, and not everything should be customized.
You need to decide what is standardized (what the software provider should handle) and what requires freedom and openness for innovation (what you need to control internally).
Two extreme approaches dominate the market, and both fail. Some companies demand fully standardized solutions where the software vendor handles everything from calculations to analytics to reporting. The appeal is obvious: outsource the complexity. The reality is less appealing: roadmaps that take years to deliver, and features that serve everyone adequately but no one perfectly.
The opposite extreme is equally problematic. Companies build custom systems from scratch, maintaining complete control but also maintaining everything else. They end up dedicating resources to building and updating capabilities that already exist as mature, tested solutions elsewhere.
The answer sits in the middle: a standard product cost management software for core functionality, surrounded by tools that give you autonomy to build applications specific to your business needs without heavy IT involvement.
The problem with Excel isn't Excel itself. Don’t get us wrong; you can create great, detailed calculations in Excel. The problem is that the data is not accessible. The value you created sits trapped in that spreadsheet, unable to be reused throughout the company.
Openness and accessibility of data is the key. Everything should be accessible at the API level. Master data should be presented in a way that makes it easily consumable by other systems and use cases.
The fundamental principle: the data belongs to the customer who created it.
This openness unlocks new possibilities: BI analytics, AI model inputs, shared master data across systems, and the ability to combine costing data with information from other sources.
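What API-level access might look like in practice is sketched below. The endpoint, token, and field names are hypothetical stand-ins, not the platform's actual interface.

```python
# A minimal sketch of consuming master data over an API instead of a spreadsheet.
# The base URL, authentication, and response fields are assumptions for illustration.
import requests

BASE_URL = "https://costing.example.com/api/v1"  # hypothetical endpoint
headers = {"Authorization": "Bearer <token>"}

resp = requests.get(f"{BASE_URL}/master-data/labor-rates", headers=headers, timeout=30)
resp.raise_for_status()

# Any downstream consumer (BI tool, ERP sync job, AI pipeline) can now reuse
# the same rates the cost engineers calculate with.
for rate in resp.json():
    print(rate["country"], rate["labor_rate_eur_per_h"])
```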
Not all algorithms should be built from scratch. Industry standards exist for good reason.
Generic overhead calculations are used by almost everyone. The model itself is well understood. What varies are the specific cost types, overhead bases, and rates. These configurations can and should be handled by standard software capabilities. Customers shouldn't have to build these foundational elements. Software providers like Tset should deliver them ready to configure.
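A simplified sketch of what "standard logic, customer-specific configuration" means for overheads follows. The cost types, bases, and rates are illustrative assumptions, not Tset's actual data model.

```python
# A minimal sketch of a generic overhead calculation: the logic is standard,
# only the configuration (cost types, bases, rates) is customer-specific.
OVERHEAD_RATES = {
    # overhead type: (base cost type, rate)
    "material_overhead":      ("material_cost", 0.05),
    "manufacturing_overhead": ("manufacturing_cost", 0.12),
    "sga":                    ("cost_of_goods_manufactured", 0.08),
}

def apply_overheads(base_costs: dict) -> dict:
    """Apply each configured percentage overhead to its respective base."""
    result = dict(base_costs)
    for name, (base, rate) in OVERHEAD_RATES.items():
        result[name] = result.get(base, 0.0) * rate
    return result

costs = apply_overheads({
    "material_cost": 4.12,
    "manufacturing_cost": 3.43,
    "cost_of_goods_manufactured": 7.96,
})
print(costs)
```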
This is where services like Tset's new Master Data Service demonstrate the necessary flexibility while maintaining standardization.
But there's another category of algorithms: those that are highly opinionated or specific to how certain part commodities are produced. This is where custom logic becomes necessary.
The vision is clear: use what is standard where there are standards. Where there are no standards, or where your methodology represents competitive advantage, you should be able to implement your own logic. The software should be extendable to meet specific domain needs without forcing you to choose between "build everything myself" or "stay restricted to the standard."
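One way such extendability can look in code is sketched here, with hypothetical names throughout: a generic model ships as the default, and a company registers its own logic only for the commodities where its methodology is a differentiator.

```python
# A minimal sketch of "standard where standards exist, custom where it matters":
# a registry where a generic model is the default and company-specific logic
# overrides it per commodity. All names and formulas are illustrative.
from typing import Callable

CostModel = Callable[[dict], float]
_models: dict[str, CostModel] = {}

def register_model(commodity: str):
    """Register a cost model for one commodity, overriding the default."""
    def wrap(fn: CostModel) -> CostModel:
        _models[commodity] = fn
        return fn
    return wrap

def generic_model(part: dict) -> float:
    """Standard logic shipped with the software."""
    return part["net_weight_kg"] * part["material_price_eur_per_kg"] * 1.1

@register_model("aluminum_die_casting")
def die_casting_model(part: dict) -> float:
    """Company-specific logic that encodes proprietary know-how."""
    return generic_model(part) + part["cavities"] * 0.03

def calculate(part: dict) -> float:
    return _models.get(part["commodity"], generic_model)(part)

print(calculate({"commodity": "aluminum_die_casting", "net_weight_kg": 0.42,
                 "material_price_eur_per_kg": 2.6, "cavities": 4}))
```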
Cost engineering exists within a connected landscape. Many systems contain information that makes sense to use in a costing system. You want to connect to PLM, ERP, and purchasing systems to consume their data.
The flow works both ways. Costing systems provide substantial data that can be consumed by other applications: reporting tools, analytics platforms, workflow systems. Costing data becomes more valuable when combined with information from other sources.
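A small illustration of that combined value, using made-up numbers: should-cost results from the costing system joined with actual purchase prices from the ERP to surface negotiation potential.

```python
# A minimal sketch of combining costing output with data from another system.
# The part numbers and prices are fabricated examples for illustration only.
should_cost = {"A-100": 7.96, "A-101": 3.10, "A-102": 12.40}     # from costing system
purchase_price = {"A-100": 9.20, "A-101": 3.05, "A-102": 14.90}  # from ERP / purchasing

for part, target in should_cost.items():
    actual = purchase_price.get(part)
    if actual is None:
        continue
    gap = actual - target
    if gap > 0:
        print(f"{part}: paying {gap:.2f} EUR above should-cost ({gap / target:.0%})")
```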
If you're thinking about modular services, this principle extends even further. You can decompose costing systems from monoliths into individual services that deliver standalone value, which customers can use independently.
The summary is straightforward: it's all about thinking in openness. Open systems, not closed. Open data, not trapped. Open algorithms, not locked.
The future of cost engineering isn't about replacing expertise with technology. It's about removing the bottlenecks that prevent cost engineers from applying their expertise where it matters most.
The shift toward openness in data, algorithms, and systems represents more than technical architecture decisions. It reflects a fundamental understanding: cost engineering creates its greatest value when insights flow freely across departments, when models can be built rapidly without sacrificing quality, and when the overhead of moving information between systems disappears.
The question facing cost engineering teams isn't whether to adopt these principles. It's how quickly they can implement them before the competitive gap widens.