Could Someone Guide Me with Optimizing TM1 Performance for Large Data Sets?

Hello there,

I have been working with TM1 for a few years now, and I am facing a challenge with optimizing performance when dealing with large data sets. I have noticed significant slowdowns during data loads and processing, especially when performing complex calculations and consolidations.

So far, I have tried the following:

- Minimizing the use of rules and feeders where possible, though some business requirements necessitate them.
- Keeping dimensions as slim as possible and consolidating elements where feasible.
- Splitting data loads into smaller chunks and running them in parallel to reduce load times.
- Using Performance Monitor and Operations Console to identify bottlenecks and optimize accordingly.
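For context, the chunked parallel load I mentioned follows roughly this pattern (a minimal Python sketch; `load_chunk` and the chunk sizes are hypothetical placeholders for whatever actually pushes each slice into TM1, e.g. running a TI process per region or per month):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(records, size):
    """Split the source records into fixed-size batches."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def load_chunk(batch):
    """Placeholder: in the real setup this would invoke a TI process
    (or a REST bulk write) for one slice of the data."""
    return len(batch)  # report how many records this slice held

def parallel_load(records, chunk_size=1000, workers=4):
    batches = chunk(records, chunk_size)
    # Run each slice on its own worker so the loads overlap
    # instead of queueing one after another.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        loaded = list(pool.map(load_chunk, batches))
    return sum(loaded)

total = parallel_load(list(range(10_000)), chunk_size=2_500, workers=4)
```

Of course, how much this actually helps depends on TM1's locking and parallel-interaction behaviour when several processes write to the same cube, which is part of what I am unsure about.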

Despite these efforts, I am still encountering performance issues, particularly during peak processing times. I am seeking advice on advanced optimization techniques or best practices that could help improve performance.

Are there specific settings or configurations within TM1 that can help better manage memory usage, especially for large data sets? :thinking:

Any tips on writing more efficient TurboIntegrator processes for data loading and transformation?
Would upgrading the server hardware significantly improve performance, and if so, what specifications should I aim for?

Also, I have already gone through this post, which definitely helped me out a lot: https://forum.cubewise.com/t/pulse-consuming-cpu-performance-ccsp/

Are there effective caching strategies within TM1 that can help speed up data retrieval and calculations? :thinking:

Thanks in advance for any help. :innocent:

This is not the correct forum. This category is for posting questions about Apliqo UX.

Your question belongs in the Planning Analytics / TM1 category. If you don't have access to that forum category, you need to request it.