We have one Informatica job that takes 6 to 7 hours to process a total of 80,000 records. We noticed that it took just 12 minutes to process the first 20,000 records, and after that the throughput slowly drops.
Severity Timestamp Node Thread Message Code Message
INFO 27/11/2012 6:48:21 PM node1_******** READER_1_1_1 RR_4049 RR_4049 SQL Query issued to database: (Wed Nov 28 00:18:21 2012)
INFO 27/11/2012 6:48:31 PM node1_******** READER_1_1_1 RR_4050 RR_4050 First row returned from database to reader: (Wed Nov 28 00:18:31 2012)
Query response time is very quick: within 30 seconds the source query starts returning output. The session begins with a read throughput of 89 rows/sec, and the throughput slowly falls to a single-digit number.
We tried the following options:
-Created indexes on the source system and re-ran the jobs. The behavior is still the same.
Please suggest any other session property or setting that could resolve this issue. I can provide any other info required.
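For reference, the index attempt mentioned above would look something like this. The table and column names here are placeholders, not from the original post; the idea is to index the columns used in the Source Qualifier's filter and refresh statistics so the optimizer can use the index:

```sql
-- Hypothetical example: index the column used in the reader query's filter
-- so the source query can avoid repeated full table scans.
CREATE INDEX idx_po_order_date
    ON purchase_orders (order_date);

-- Gather fresh statistics so the optimizer actually considers the new index.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PURCHASE_ORDERS');
```

Note that an index only helps the reader side; if the slowdown is in the transformations, as discussed later in this thread, indexing the source will not change the throughput curve.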
Check with your DBA where your DB is mounted. If it is mounted on a single node, it can give you problems accessing the table data; it should be mounted on a node having multiple partitions. I fixed a similar issue recently.
Hi Venkat,
From which source DB, and how many columns, are you pulling from the source?
Try removing the filter conditions in the SQ query and check it once.
If the source DB is Oracle, use Oracle hints for parallelization.
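A parallel hint in the SQ query override would look roughly like this. The table name, alias, and columns are placeholders, and the degree of parallelism (4 here) should be agreed with your DBA:

```sql
-- Hypothetical SQ override: the PARALLEL hint asks Oracle to spread the
-- scan of the source table across 4 parallel execution servers.
SELECT /*+ PARALLEL(po, 4) */
       po.order_id,
       po.order_date,
       po.amount
  FROM purchase_orders po
 WHERE po.order_date >= DATE '2010-01-01';
```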
Please post the session log and, if possible, the data flow in your mapping. Then it will be helpful to look through the issue.
It seems that we had the same issue.
Try to find out on which SQL query the workflow hangs and create a SQL Profile (profiling).
I had a workflow which took 11 hours, but afterwards it takes only 18 seconds!
Interesting that you reduced the time from 11 hours to 18 seconds. Just so I can learn, could you please tell me what steps and tips you applied to solve the issue?
Take a look into your session log and identify the SQL statement which is issued to the database by Informatica.
Then create an execution plan:
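In Oracle this can be done with `EXPLAIN PLAN` and `DBMS_XPLAN`; the statement below is a placeholder, and you would paste the exact query copied from the session log instead:

```sql
-- Paste the exact statement from the session log after EXPLAIN PLAN FOR;
-- this query is only a placeholder.
EXPLAIN PLAN FOR
SELECT *
  FROM purchase_orders
 WHERE order_date >= DATE '2010-01-01';

-- Display the optimizer's chosen plan (watch for TABLE ACCESS FULL steps).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```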
And here is some additional info about SQL Plan Management / SQL Plan Baselining:
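As a sketch of what baselining a good plan looks like in Oracle: once the fast plan is in the cursor cache, it can be loaded as an accepted baseline with `DBMS_SPM`. The `SQL_ID` below is a placeholder; find the real one in `V$SQL`:

```sql
-- Load the tuned plan from the cursor cache as an accepted SQL plan baseline.
-- 'abcd1234efgh' is a placeholder SQL_ID, not a real value.
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id => 'abcd1234efgh');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/
```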
Take care that your SQL statement stays the same if you plan to change the Informatica mapping in the future. If you don't, the performance will be as bad as before!
Check with your DBA where your DB is mounted. If it is mounted on a single node, it can give you problems accessing the table data; it should be mounted on a node having multiple partitions. I fixed a similar issue recently.
I can confirm that it is mounted on multiple database nodes. Give me some time to post the session log.
Basically, the job processes purchase order info for the last three years, 2010 to 2012 (a total of 85,000 records, 100 columns in each record), and it was taking around 5 hours. We tried duplicating the same job 3 times, kept a filter in each session for one year each, and ran the three jobs in sequence, one after the other. The overall time was just 58 minutes for the three jobs, and all 85,000 records were processed. The time decreased from 5 hours to 58 minutes.
We are still not sure about the root cause of the issue or the fix. We use the standard Informatica version.
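The split described above amounts to giving each duplicated session its own one-year filter in the Source Qualifier, along these lines (table and column names are illustrative, not from the original mapping):

```sql
-- Session 1 of 3: only 2010 data. Sessions 2 and 3 repeat the query
-- with 2011 and 2012 date ranges respectively.
SELECT po.*
  FROM purchase_orders po
 WHERE po.order_date >= DATE '2010-01-01'
   AND po.order_date <  DATE '2011-01-01';
```

Half-open date ranges like this avoid gaps or overlaps between the three sessions' result sets.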
From the information you posted in your first post, it's very clear that the DB is fine, as the time taken to fetch the first row was just 10 secs! I feel the only issue is with your transformations' busy time.
Please check your session statistics for the busy time and idle time of each transformation; from those you can confirm which transformation is the bottleneck.
So it's the same as you mentioned in the first post: the processing time for 20,000 records is comparatively fast. When you divided the data into 3 sets (one set per year), each job obviously processes somewhere around 20k+ records in less time; that's the reason it's faster when processing less data.
I just came across such a situation, where processing less data was fast but a little more data was not. When I checked with the DBA, it was confirmed. That was in my case...
Let us know if you find the issue, if it's not the DB.