Report Performance Monitoring & Optimization
Why Performance Matters
Slow reports frustrate users, reduce adoption, and strain database resources.
Performance Expectations
- Small Report: < 1,000 rows → < 5 seconds
- Medium Report: 1,000-10,000 rows → < 30 seconds
- Large Report: 10,000-100,000 rows → < 2 minutes
- Massive Report: > 100,000 rows → Consider data warehouse (not real-time Siebel)
Impact of Slow Reports
- User Frustration: Manager waits 5 minutes → Closes tab → Never runs report again
- Database Load: 50 users run slow report simultaneously → Database CPU spikes to 100% → Entire CRM slows down
- Timeouts: Query runs 10 minutes → Application server timeout (300 sec) → Error message, no results
Performance Monitoring Tools
Report Execution Log
Navigate to Administration → Analytics → Report Performance
| Report Name | Avg Time | Max Time | Runs (30d) | Status |
|---|---|---|---|---|
| Daily Activity | 2.3 sec | 5.1 sec | 847 | ✓ Good |
| Pipeline Analysis | 28.5 sec | 67.2 sec | 234 | ⚠ Slow |
| Historical Trends | 4m 23s | 8m 15s | 12 | ✗ Critical |
Action: "Historical Trends" report needs optimization (4+ min average)
SQL Execution Plan
View the actual database query generated by the report.
EXPLAIN PLAN FOR
SELECT s.NAME, s.REVENUE, s.CLOSE_DT, a.NAME as ACCOUNT
FROM S_OPTY s
LEFT JOIN S_ORG_EXT a ON s.BU_ID = a.ROW_ID
WHERE s.CLOSE_DT >= '2024-01-01'
AND s.SALES_STAGE != 'Closed Lost'
ORDER BY s.REVENUE DESC;
Execution Plan:
1. TABLE ACCESS FULL on S_OPTY (Cost: 1500) ⚠ Full table scan
2. INDEX RANGE SCAN on S_ORG_EXT_IDX (Cost: 3) ✓ Using index
3. HASH JOIN (Cost: 1750)
4. SORT ORDER BY (Cost: 200)
Total Cost: 3453
Issue: Full table scan on S_OPTY (millions of rows) → Need index on CLOSE_DT
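The same plan-inspection workflow can be tried on any engine; the exact output format differs by database. A minimal sketch using SQLite's `EXPLAIN QUERY PLAN` (the lowercase `s_opty` table and its columns are illustrative stand-ins, not the real Siebel schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_opty (row_id INTEGER PRIMARY KEY, "
             "name TEXT, revenue REAL, close_dt TEXT)")

# No index on close_dt yet, so the planner falls back to a full table scan.
rows = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT name FROM s_opty WHERE close_dt >= '2024-01-01'"
).fetchall()
detail = rows[0][3]   # column 3 is the human-readable plan step
print(detail)         # reports a SCAN of s_opty
```

Spotting the word SCAN (rather than SEARCH ... USING INDEX) in the plan is the quickest signal that a filter column is missing an index.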
Common Performance Bottlenecks
1. Missing Indexes
- Problem: Query filters on CLOSE_DT but no index exists → Database scans entire table (2M rows)
- Solution: Create index: CREATE INDEX IDX_OPTY_CLOSE_DT ON S_OPTY(CLOSE_DT)
- Result: Query time drops from 45 sec to 3 sec (15x faster)
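The before/after effect of the index can be seen directly in the query plan. A hedged sketch in SQLite (index and table names are illustrative; the Oracle syntax above is what you would run in production):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_opty (row_id INTEGER PRIMARY KEY, "
             "revenue REAL, close_dt TEXT)")

def plan_for(sql):
    # First row of EXPLAIN QUERY PLAN; column 3 is the plan description.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT revenue FROM s_opty WHERE close_dt >= '2024-01-01'"
before = plan_for(query)   # full table scan
conn.execute("CREATE INDEX idx_opty_close_dt ON s_opty(close_dt)")
after = plan_for(query)    # range search via the new index
print(before)
print(after)
```

The plan flips from a SCAN to a SEARCH using `idx_opty_close_dt`, which is exactly the change that turns a 45-second query into a 3-second one on a large table.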
2. Too Many Joins
- Problem: Report joins 8 tables (Opportunity → Account → Contact → Product → Territory → Rep → Manager → Department)
- Solution:
- Reduce to necessary joins only (do you really need Department?)
- OR create denormalized view (pre-join tables overnight, query view in report)
3. Calculated Fields
- Problem: Weighted Revenue = Revenue × Probability / 100 (calculated for each row at query time)
- Solution: Store calculated value in database (update trigger recalculates when Revenue or Probability changes)
- Trade-off: More storage (new column) but faster queries
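The store-and-trigger approach can be sketched as follows; this is a minimal SQLite illustration (table, column, and trigger names are hypothetical), not the production Siebel trigger:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s_opty (
    row_id INTEGER PRIMARY KEY,
    revenue REAL,
    probability REAL,
    weighted_revenue REAL    -- stored once, not recomputed per query
);
CREATE TRIGGER trg_wr_ins AFTER INSERT ON s_opty
BEGIN
    UPDATE s_opty SET weighted_revenue = NEW.revenue * NEW.probability / 100
    WHERE row_id = NEW.row_id;
END;
CREATE TRIGGER trg_wr_upd AFTER UPDATE OF revenue, probability ON s_opty
BEGIN
    UPDATE s_opty SET weighted_revenue = NEW.revenue * NEW.probability / 100
    WHERE row_id = NEW.row_id;
END;
""")
conn.execute("INSERT INTO s_opty (revenue, probability) VALUES (200000, 75)")
conn.execute("UPDATE s_opty SET probability = 50 WHERE row_id = 1")
wr = conn.execute(
    "SELECT weighted_revenue FROM s_opty WHERE row_id = 1").fetchone()[0]
print(wr)  # 100000.0
```

Because the update trigger fires only on changes to `revenue` or `probability`, writing `weighted_revenue` does not re-fire it, so there is no recursion.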
4. Large Result Sets
- Problem: User runs "All Opportunities" → Returns 500,000 rows → Browser crashes
- Solution:
- Add default filter (Close Date within last 12 months) → Reduces to 50,000 rows
- Pagination (show 1,000 rows per page, load more on demand)
- Summary first (show totals, drill-down loads details)
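Pagination itself has a performance trap: `OFFSET`-based paging gets slower the deeper the user pages. One common alternative, sketched here under illustrative table names, is keyset (seek) pagination, where each page seeks past the last key seen:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_opty (row_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO s_opty (name) VALUES (?)",
                 [(f"Deal {i}",) for i in range(1, 26)])

PAGE_SIZE = 10

def fetch_page(last_row_id=0):
    # Keyset pagination: seek past the last key instead of using OFFSET,
    # so page N costs the same as page 1.
    return conn.execute(
        "SELECT row_id, name FROM s_opty WHERE row_id > ? "
        "ORDER BY row_id LIMIT ?", (last_row_id, PAGE_SIZE)).fetchall()

page1 = fetch_page()
page2 = fetch_page(last_row_id=page1[-1][0])
print(len(page1), page2[0][1])  # 10 Deal 11
```

Keyset paging requires a stable sort key (here the primary key); for user-chosen sort orders, a composite key of (sort column, row id) serves the same role.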
5. No Query Limits
- Problem: Query has no LIMIT clause → Database processes millions of rows unnecessarily
- Solution: Add LIMIT 10000 (or user-specified max rows)
- Example: "Top 100 Deals" should have LIMIT 100 (not retrieve 1M then show 100)
Optimization Techniques
Caching
- Report-Level Cache: Store query results for 15 minutes (subsequent runs read from cache)
- Use Case: "Daily Activity" report run by 100 users at 9 AM → First user triggers query (10 sec), next 99 users get cached result (instant)
- Invalidation: Cache expires after 15 min OR when underlying data changes
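The report-level cache described above amounts to a TTL cache keyed by report. A minimal in-process sketch (function and key names are hypothetical; a real deployment would use a shared cache so all 100 users hit the same entry):

```python
import time

CACHE_TTL_SECONDS = 15 * 60   # 15-minute report-level cache
_cache = {}                   # report key -> (expires_at, result)

def run_report(key, query_fn):
    """Return a cached result if fresh; otherwise run the query and cache it."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1], True                       # (result, served_from_cache)
    result = query_fn()
    _cache[key] = (now + CACHE_TTL_SECONDS, result)
    return result, False

def invalidate(key):
    # Call when the underlying data changes to force a fresh query.
    _cache.pop(key, None)

calls = []
def slow_query():
    calls.append(1)           # stands in for the 10-second database query
    return [("Daily Activity", 847)]

first, cached1 = run_report("daily_activity", slow_query)
second, cached2 = run_report("daily_activity", slow_query)
print(cached1, cached2, len(calls))  # False True 1
```

The second caller gets the cached rows without touching the database, which is the "first user pays 10 seconds, the next 99 get it instantly" behavior.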
Pre-Aggregation
- Scenario: "Monthly Revenue by Territory" report sums 1M opportunities every time
- Solution: Nightly job pre-calculates monthly totals → Stores in summary table (12 months × 4 territories = 48 rows)
- Report: Queries summary table (48 rows) instead of opportunities (1M rows) → Sub-second response
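The nightly pre-aggregation job boils down to a `GROUP BY` that rebuilds a small summary table. A sketch in SQLite (the `revenue_summary` table and sample rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_opty (territory TEXT, close_month TEXT, revenue REAL)")
conn.executemany("INSERT INTO s_opty VALUES (?, ?, ?)", [
    ("Northeast", "2024-01", 100.0),
    ("Northeast", "2024-01", 50.0),
    ("West",      "2024-01", 75.0),
    ("Northeast", "2024-02", 25.0),
])

# Nightly job: rebuild the summary (one row per month/territory).
conn.executescript("""
DROP TABLE IF EXISTS revenue_summary;
CREATE TABLE revenue_summary AS
SELECT close_month, territory, SUM(revenue) AS total_revenue
FROM s_opty
GROUP BY close_month, territory;
""")

# The report now reads a handful of summary rows, not every opportunity.
total = conn.execute(
    "SELECT total_revenue FROM revenue_summary "
    "WHERE close_month = '2024-01' AND territory = 'Northeast'").fetchone()[0]
print(total)  # 150.0
```

The trade-off is freshness: the summary is only as current as the last refresh, which is why this pattern suits monthly-trend reports rather than real-time dashboards.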
Materialized Views
- Definition: Pre-computed query result stored as table (database manages refresh)
- Example: MV_PIPELINE_SUMMARY (query joins Opportunity + Account + Product, refreshed nightly)
- Benefit: Complex joins done once (overnight), reports query simple view (fast)
Partitioning
- Scenario: S_OPTY table has 5M rows (10 years of data)
- Solution: Partition by year (2015 partition, 2016 partition, ... 2025 partition)
- Query: "Show 2024 opportunities" → Database scans only 2024 partition (500K rows) instead of entire table (5M rows)
- Result: 10x faster queries (fewer rows scanned)
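Engines with native partitioning (Oracle, PostgreSQL) do the partition routing automatically; the idea can be illustrated manually with one physical table per year and a helper that queries only the relevant one. All table names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Manual "partitioning": one table per year. A native partitioned table
# would prune to the right partition automatically from the WHERE clause.
for year in (2023, 2024):
    conn.execute(f"CREATE TABLE s_opty_{year} "
                 "(row_id INTEGER PRIMARY KEY, revenue REAL)")
conn.executemany("INSERT INTO s_opty_2023 (revenue) VALUES (?)", [(10.0,)] * 3)
conn.executemany("INSERT INTO s_opty_2024 (revenue) VALUES (?)", [(20.0,)] * 2)

def revenue_for_year(year):
    # The query touches only that year's rows, never the other years'.
    return conn.execute(f"SELECT SUM(revenue) FROM s_opty_{year}").fetchone()[0]

print(revenue_for_year(2024))  # 40.0
```

Either way, the win is the same: a "Show 2024 opportunities" query scans one year's rows instead of the whole history.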
Query Rewriting
Before: Inefficient subquery
SELECT * FROM S_OPTY
WHERE OWNER_ID IN (
SELECT ROW_ID FROM S_USER WHERE TERRITORY = 'Northeast'
)
-- Slow on some optimizers: the IN subquery may be re-evaluated per outer row
After: Efficient join
SELECT o.* FROM S_OPTY o
INNER JOIN S_USER u ON o.OWNER_ID = u.ROW_ID
WHERE u.TERRITORY = 'Northeast'
-- Fast: Single join operation
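Before shipping a rewrite like this, verify the two forms return the same rows; IN-to-JOIN is only safe when the join key is unique (otherwise the join can duplicate rows). A small equivalence check in SQLite with illustrative data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s_user (row_id INTEGER PRIMARY KEY, territory TEXT);
CREATE TABLE s_opty (row_id INTEGER PRIMARY KEY, owner_id INTEGER, name TEXT);
INSERT INTO s_user VALUES (1, 'Northeast'), (2, 'West');
INSERT INTO s_opty VALUES (10, 1, 'Deal A'), (11, 2, 'Deal B'), (12, 1, 'Deal C');
""")

subquery_rows = conn.execute("""
    SELECT name FROM s_opty
    WHERE owner_id IN (SELECT row_id FROM s_user WHERE territory = 'Northeast')
    ORDER BY name""").fetchall()

join_rows = conn.execute("""
    SELECT o.name FROM s_opty o
    JOIN s_user u ON o.owner_id = u.row_id
    WHERE u.territory = 'Northeast'
    ORDER BY o.name""").fetchall()

print(subquery_rows == join_rows, [r[0] for r in join_rows])
```

Here `s_user.row_id` is the primary key, so each opportunity matches at most one user and the two queries are row-for-row identical.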
Monitoring Dashboards
Database Performance
- CPU Usage: < 70% average (spike to 90% during peak hours is OK)
- Memory: < 80% utilized (avoid swapping to disk)
- Disk I/O: < 50 MB/s (high I/O = slow queries, need more RAM for cache)
- Active Connections: < 200 (1,000 users, but not all query simultaneously)
Report Server Metrics
- Concurrent Reports: Max 50 (throttle to prevent overload)
- Queue Depth: If 10 reports waiting → Add more report servers (load balancing)
- Memory per Report: < 500 MB (large reports consume too much memory)
Alerting
- Slow Report Alert: If report > 60 sec → Email to admin (investigate)
- Timeout Alert: If 5 reports timeout in 1 hour → Critical alert (system issue)
- Usage Spike: If report runs 10x more than usual → Investigate (is someone bulk exporting?)
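The first two alert rules can be expressed as a simple check over the last hour of the execution log. A sketch with hypothetical thresholds and an illustrative log format (the real Siebel log schema will differ):

```python
# Thresholds mirror the alerting rules above; values are assumptions.
SLOW_THRESHOLD_SEC = 60
TIMEOUTS_PER_HOUR_CRITICAL = 5

def check_alerts(log_rows):
    """log_rows: (report_name, duration_sec, timed_out) tuples for the last hour."""
    alerts = []
    for name, duration, timed_out in log_rows:
        if duration > SLOW_THRESHOLD_SEC:
            alerts.append(("SLOW", name))            # email the admin
    timeouts = sum(1 for _, _, t in log_rows if t)
    if timeouts >= TIMEOUTS_PER_HOUR_CRITICAL:
        alerts.append(("CRITICAL", f"{timeouts} timeouts in the last hour"))
    return alerts

log = [("Daily Activity", 2.3, False),
       ("Historical Trends", 263.0, False),
       ("Pipeline Analysis", 28.5, False)]
alerts = check_alerts(log)
print(alerts)  # [('SLOW', 'Historical Trends')]
```

In practice this would run on a schedule against the Report Performance view and hand its findings to whatever notification channel the team already uses.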
Best Practices
- Design for Performance: Consider performance from the start (not after users complain)
- Limit Data: Default to "Last 90 Days" not "All Time" (users can override if needed)
- Pagination: Show 100 rows per page (not all 50,000 at once)
- Summary First: Show high-level totals (drill-down for details on demand)
- Index Strategy: Index all filter columns (CLOSE_DT, STATUS, OWNER_ID, TERRITORY)
- Regular Maintenance: Weekly analyze tables, rebuild indexes (keeps stats current)
- Archive Old Data: Move 5+ year old data to archive database (reduces active table size)
- Monitor Trends: Track query time over months (if gradually slowing, data growth issue)
- User Education: Teach users to use filters (don't run "All Opportunities" unnecessarily)
- Off-Peak Scheduling: Heavy reports run at night (don't compete with daytime users)
Case Study: Optimizing "Historical Trends" Report
Initial State:
- Performance: 4 min 23 sec average (unacceptable)
- Query: Sums revenue for every month, every territory, every product (10 years × 12 months × 4 territories × 20 products = 9,600 calculations)
- Data: Queries 5M opportunities directly
Optimization Steps:
- Create Summary Table: REVENUE_SUMMARY_MV (pre-aggregated by month/territory/product, nightly refresh) → 9,600 rows instead of 5M
- Add Indexes: Index on (YEAR, MONTH, TERRITORY, PRODUCT) → Fast lookups
- Rewrite Query: Query summary table instead of S_OPTY
- Add Cache: Cache results for 1 hour (data refreshes nightly, no need for real-time)
Results:
- New Performance: 2.1 seconds (roughly 125x faster)
- Database Load: CPU usage during report run: 90% → 5% (minimal impact)
- User Adoption: Report runs increased from 12/month to 200/month (users actually use it now)
Conclusion: Reporting & Analytics Mastery
You've now completed the comprehensive Reporting & Analytics module. You've learned:
- ✓ Standard reports and how to run them effectively
- ✓ Creating custom reports with advanced calculations
- ✓ BI Publisher for enterprise-grade document generation
- ✓ Building interactive dashboards with KPIs
- ✓ Real-time analytics for immediate decision-making
- ✓ Data visualization best practices
- ✓ Scheduled reports for automated delivery
- ✓ Security and compliance requirements
- ✓ Data export strategies for system integration
- ✓ Performance monitoring and optimization
Next Steps: Apply these skills to create reports that drive business decisions and provide actionable insights to your organization.