Search Results

Search found 9542 results on 382 pages for 'row'.

Page 310 of 382

  • systemstate dump

    - by JaneZhang
    When a database hangs, Oracle Support will usually ask for a systemstate dump to diagnose the root cause. A systemstate dump records the state of every process and the resources each one holds or is waiting for, so it is the key piece of evidence when sessions are stuck, for example when the alert log reports "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK". Note that taking the dump takes time and the resulting trace files can be large (many megabytes), so allow for the wait and the disk space. It must be taken while the hang is actually happening.
    1. Connect as sysdba and take the dump three times, one to two minutes apart:
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba   <== use -prelim if the instance is so hung that a normal connection is impossible
    SQL>oradebug setmypid
    SQL>oradebug unlimit;
    SQL>oradebug dump systemstate 266;
    wait 1-2 minutes
    SQL>oradebug dump systemstate 266;
    wait 1-2 minutes
    SQL>oradebug dump systemstate 266;
    SQL>oradebug tracefile_name;   ==> shows the location of the trace file
    2. Besides the systemstate dump, also collect a hang analyze dump so the wait chains can be analyzed:
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba   <== use -prelim if the instance is hung
    SQL>oradebug setmypid
    SQL>oradebug unlimit;
    SQL>oradebug dump hanganalyze 3
    wait 1-2 minutes
    SQL>oradebug dump hanganalyze 3
    wait 1-2 minutes
    SQL>oradebug dump hanganalyze 3
    SQL>oradebug tracefile_name;   ==> shows the location of the trace file
    On RAC, take the systemstate dump on all instances at the same time (it can be issued from one node):
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba   <== use -prelim if the instance is hung
    SQL>oradebug setmypid
    SQL>oradebug unlimit
    SQL>oradebug -g all dump systemstate 266   <== -g all makes every instance dump
    wait 1-2 minutes
    SQL>oradebug -g all dump systemstate 266
    wait 1-2 minutes
    SQL>oradebug -g all dump systemstate 266
    RAC hang analyze:
    SQL>oradebug setmypid
    SQL>oradebug unlimit
    SQL>oradebug -g all hanganalyze 3
    wait 1-2 minutes
    SQL>oradebug -g all hanganalyze 3
    wait 1-2 minutes
    SQL>oradebug -g all hanganalyze 3
    The trace files are written to background_dump_dest (or the diag trace directory). Taking the dump several times while the hang is in progress makes it possible to see whether the blocking is static or moving. Commonly used systemstate dump levels:
    10:  dump
    11:  dump + global cache of RAC
    256: short stack (call stack of each process)
    258: dump (without lock element) + short stack
    266: 256+10 --> short stack + dump
    267: 256+11 --> short stack + dump + global cache of RAC
    Levels 11 and 267 also dump the global cache, which makes the trace files very large, so only use them when that information is really needed; in most cases level 266 is enough because it contains both the process dump and the short stacks. Collecting short stacks takes time when there are many processes (with around 2000 processes it can take some 30 minutes), so in that situation consider level 10 or level 258; level 258 is level 10 plus short stacks, but it contains less lock element data than level 10. The following trace file sizes, from an instance with 37 processes, show roughly what each level costs:
    -rw-r----- 1 oracle oinstall    72721 Aug 31 21:50 rac10g2_ora_31092.trc ==> 256 (short stack, about 2K per process)
    -rw-r----- 1 oracle oinstall  2724863 Aug 31 21:52 rac10g2_ora_31654.trc ==> 10  (dump, about 72K per process)
    -rw-r----- 1 oracle oinstall  2731935 Aug 31 21:53 rac10g2_ora_32214.trc ==> 266 (dump + short stack, about 72K per process)
    RAC:
    -rw-r----- 1 oracle oinstall 55873057 Aug 31 21:49 rac10g2_ora_30658.trc ==> 11  (dump + global cache, about 1.4M per process)
    -rw-r----- 1 oracle oinstall 55879249 Aug 31 21:48 rac10g2_ora_28615.trc ==> 267 (dump + global cache + short stack, about 1.4M per process)
    As the sizes show, dumping the global cache (level 11 or 267) is expensive and should only be done when it is specifically requested; otherwise an ordinary systemstate dump is enough.

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification? Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose? My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more, individual "working units" in my database. For an instance with 4 GB of buffer cache, an 8k block will provide 524,288 blocks of cache. The following SQL*Plus script returns the average, median, min, and max rows per block:
    column "AVG(CNT)" format 999.99
    set verify off
    select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
    from (
      select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
           , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
           , count(*) cnt
      from &tab
      group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
             , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
    )
    Running this for the TASK table, I get this result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.
    Enter value for tab: task
    AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
    -------- ----------- ---------- ---------- ----------
       18.72          19          3         28     415917
    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.
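
    As a quick sanity check on those numbers, it can help to confirm what the instance is actually configured with. A minimal SQL*Plus sketch (not from the original post; it assumes access to the standard V$BUFFER_POOL view) that shows the configured block size and roughly how many of those "working units" the default buffer pool holds:

    show parameter db_block_size

    -- one row per configured pool; BUFFERS is the number of blocks cached
    select name, block_size, buffers
      from v$buffer_pool;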

    Read the article

  • What does this SAP error mean? recv104 NiIReadrecv nixxi.cp5087

    - by Techboy
    I have an SAP BI Portal system and an SAP BW system. In the Visual Administrator of the BI Portal, section 'JCo RFC Provider' I have created some RFC listeners. In the SAP BW system (in transaction SM59), I have created, tested and activated the relevant RFC connections. When I start a JCo RFC Provider in the BI Portal the BW system that it communicates with produces these errors (displayed from SAP transaction SM21): Operating system call recv failed (error no. 104 ) Module nam Line Error text Caller.... Reason/cal nixxi.cp 5087 recv104 NiIRead recv Documentation for system log message Q0 I : The specified operating system call was returned with an error. For communication calls (receive, send, etc) often the cause of errors are network problems. It could also be a configuration problem at operating system level. (file cannot be opened, no space in the file system etc.). Additional specifications for error number 104 Name for errno number E_UNKNOWN_NO The meaning of the value stored in 'errno' is platform-dependent. The value which occurred here is unknown to the SysLog system. Either there is an incorrect error number in the SysLog message, or the tables TSLE2 or TSLE3 are not completely maintained. Technical details File Offset RecFm System log type Grp N variable message data 6 410220 m Error (Function,Module,Row) Q0 I recv104 NiIReadrecv nixxi.cp5087 There is plenty of space left in the file system, the servers can ping each other and there is no firewall between the servers. The tables (TSLE2 or TSLE3) mentioned in the error message do not provide any additional information. Please can you tell me if this error message refers to something specific, or if it is generic: recv104 NiIReadrecv nixxi.cp5087

    Read the article

  • WMI permissions: Select CommandLine, ProcessId FROM Win32_Process returns no data for CommandLine

    - by user57935
    Hi all, I am gathering performance data via WMI and would like to avoid having to use an account in the Administrators group for this purpose. The target machine is running Windows Server 2003 with the latest SP/updates. I've done what I believe to be the appropriate configuration to allow our user access to WMI (similar to what is described here: http://msdn.microsoft.com/en-us/library/aa393266.aspx). Here are the specific steps that were followed: Open Administrative Tools - Computer Management: Under Computer Management (Local) Expand Services and Applications, right click WMI Control and select properties. In the Security tab, expand Root, highlight CIMV2, click Security (near bottom of window); add Performance Monitor Users and enable the options : Enable Account and Remote Enable. ­Open Administrative Tools - Component Services: Under Console Root go to Component Services- Computers - Right click My Computer and select properties, select the COM security tab, in “Access Permissions” click "Edit Default" select(or add then select) “Performance Monitor Users” group and allow local access and remote access and click ok. In “Launch and Activation Permissions” click “Edit Default” select(or add then select) “Performance Monitor Users” group and allow Local and Remote Launch and Activation Permissions. ­Open Administrative Tools - Component Services: Under Console Root go to Component Services- Computers - My Computer - DCOM Config - highlight “Windows Management and Instrumentation” right click and select properties, Select the Security tab, Under “Launch and Activation Permissions” select Customize, then click edit, add the “Performance Users Group” and allow local and remote Remote Launch and Remote Activation privileges. I am able to connect remotely via WMI Explorer but when I perform this query: Select CommandLine, ProcessId FROM Win32_Process I get a valid result but every row has an empty CommandLine. If I add the user to the Administrators group and re-run the query, the CommandLine column contains the expected data. It seems there is a permission I am missing somewhere but I am not having much luck tracking it down. Many thanks in advance.

    Read the article

  • Timeline chart in Excel

    - by Axarydax
    Hi, I'd like to see a timeline of events from a database in a "timeline chart" that should look like this: http://i46.tinypic.com/sw3dj5.png - I've made a small C# program that paints this onto a Bitmap, but that isn't the way to go. I have input data that has 3 fields:
    StartX  EndX   Y
    2596    15008  1
    5438    6783   2
    5450    5453   4
    5456    5459   4
    5462    5466   4
    5470    5474   4
    5477    5657   5
    5662    5665   4
    5668    5671   4
    As the picture shows, for each row I'd like to have a line from StartX to EndX with a Y value of Y. A stacked bar chart almost solves my problem, but I don't want a new line on the chart for every row; I have thousands of rows, and I'd like to have the X axis as the time axis and view which events (Y is the type of the event) happened simultaneously. The image (http://i46.tinypic.com/sw3dj5.png) I've generated with a simple C# program shows that the event SYSTEM was active all the time, and the events TECH and BREAK were almost exclusive, but had some overlaps. I'd like to at least know the correct direction I should take; I'm lost in the multitude of Excel chart types. Thanks.

    Read the article

  • MacBook Pro battery capacity 65K mAh

    - by Alexander Gladysh
    I have a 15" MacBook Pro 3.1 (that is the Late 2007 model, AFAIR). I bought it new a couple of years ago. Recently its on-battery lifespan became very short (30 down to 10 minutes). When my notebook turns itself off due to "low battery" and I press the small button on the battery itself, all the LED lights come on, indicating a full charge. When I plug in the power adapter, my Mac reports that the "battery is fully charged, finishing charging process" (I have a Russian OS X 10.5.7, so that is a rough translation), but the LEDs on the battery itself show the (seemingly accurate) status that one or two LEDs are still not lit. My battery has as few as 37 recharge cycles (yes, I've neglected calibration over the time I've used it). Battery info programs like iBatt2 report a battery capacity of 65,337 mAh (against a by-design capacity of 5,600 mAh). I take it that something has gone wrong with the battery electronics. I've tried resetting my Mac's PRAM and SMC; it did not change anything. Now I'm trying to recalibrate the battery, but it looks like that does not help either. I will try to recalibrate it several times in a row. I'd buy a new battery if I knew it is the battery's fault, not the notebook's. Any suggestions? Update: After recalibration, my battery status now displays a battery capacity of 1,500 mAh. But with every recalibration (or simply when I use the notebook without the power adapter plugged in) this number changes, ranging from 200 mAh to 1,700 mAh. The LEDs on the battery are now in sync with what the notebook thinks the charge level is. Also, I've noticed that the cycle count changes rather slowly: it is now 39, it was 37 when I started recalibrating, and I have been through the process at least ten times... So, the main question is: does it look like replacing the battery would help me (or does this look like the notebook's problem)? I guess I should try replacing the battery.

    Read the article

  • SQL Server instance shuts down

    - by user42119
    I'm currently developing a new ASP.NET project hosted on Windows Server 2008 R2 with an SQL Server 2008 Express database. I have three SQL Server instances (for different purposes) running, each of which currently contains a single database. For apparently no reason, these instances tend to shut down after some days. There might be low or no traffic to these instances, because there can be several days in a row where I can't develop. It has now happened several times that one or two of these three instances just shut down, so that I can't access the database without manually starting the instance. I can't seem to find an event log entry for the shutdown, which is most likely because I only just enabled logging (why is the default setting off?). So the questions are:
    * Why does an SQL Server instance shut down? (Is there such a thing as "shut down instance after 3 days of inactivity"?)
    * How can I ensure that the instances are running 24/7?
    Edit: I solved this problem by writing my own application that checks the status of the SQL Server services. My program is started via a batch file that gets called by the Windows Task Scheduler every 5 minutes.

    Read the article

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am showing the following: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:11:49 AM User: COMPANY\ThisUser Computer: SHAREPOINT Description: Object Open: Object Server: Security Account Manager Object Type: SAM_ALIAS Object Name: DOMAINS\Account\Aliases\00000404 Handle ID: - Operation ID: {0,1719489} Process ID: 416 Image File Name: C:\WINDOWS\system32\lsass.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: ThisUser Client Domain: PRINTRON Client Logon ID: (0x0,0x1A3BC2) Accesses: AddMember RemoveMember ListMembers ReadInformation Privileges: - Restricted Sid Count: 0 Access Mask: 0xF Then, four of these in a row: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:12:08 AM User: NT AUTHORITY\NETWORK SERVICE Computer: SHAREPOINT Description: Object Open: Object Server: SC Manager Object Type: SERVICE OBJECT Object Name: WinHttpAutoProxySvc Handle ID: - Operation ID: {0,1727132} Process ID: 404 Image File Name: C:\WINDOWS\system32\services.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: NETWORK SERVICE Client Domain: NT AUTHORITY Client Logon ID: (0x0,0x3E4) Accesses: Query status of service Start the service Query information from service Privileges: - Restricted Sid Count: 0 Access Mask: 0x94 Any ideas what permissions I need to grant to the user to get them access to SharePoint?

    Read the article

  • How to bypass resume from hibernate

    - by Daniel Trebbien
    I am attempting to resume a Windows Vista laptop from hibernate, but the resume process seems to be stuck in an endless loop in which Windows is repeatedly trying to read from the optical drive. When I press the Power On button on the laptop, the screen is black (not even the backlight turns on) and the following occurs in a loop: Five seconds pass and I hear the optical drive being accessed. (There's no disk in the drive, so it sounds like a short buzzing noise.) Two seconds pass and I hear the optical drive being accessed. Two seconds pass and I hear the optical drive being accessed. So it's three short buzzing noises in a row, over and over again. Eventually I have to abruptly power off the machine. I have tried inserting a data CD into the drive as well as a bootable CD (a live Linux distro boot disk). For both, the optical drive spins up for a bit, but stops after Windows decides that the disk is not what it is looking for. I have since lost the Windows Vista recovery DVD, but I don't know if inserting the recovery disk into the optical drive would have a different effect than the bootable CD. I have tried pressing F8 immediately after pressing the Power On button (hoping to enter System Restore), but that did not have an effect. Is there a special key sequence that will cause Windows to bypass resuming from hibernate, effectively ignoring hiberfil.sys?

    Read the article

  • find the next due date after today within a group in an Excel PivotTable

    - by Dennis George
    I have got a table set up in one sheet with "transactions". Each row contains a name of a vendor, the amount owed or paid depending on transaction type, and the due date/transaction date. Here is some simplified sample data:
    Vendor      Date    Invoice   Payment
    Vendor A    6/30    $200
    Vendor A    6/30              ($200)
    Vendor B    7/5     $500
    Vendor B    7/5               ($500)
    Vendor C    10/28   $50
    Vendor A    10/30   $100
    Vendor C    11/15   $50
    I have already built a PivotTable from that table to group these transactions by vendor and sum the remainder owed. What I'm trying to figure out is how to, for each vendor, get the next due date (min date of the group, excluding dates < Today()), or if there is no next due date then I want to see the max date for that group. Here is what my PivotTable looks like, plus the date column I'd like to add (assuming Today() = 10/23):
    Vendor      Date    Owed
    Vendor B    7/5     -
    Vendor C    10/28   $100
    Vendor A    10/30   $100
    I know calling it next due date might not be so accurate if I end up with the date of a payment in that column, but I'm ok with that. tl;dr : I want to find the next earliest date within each group, or the last date. How do I do this?

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading Recently we upgraded our Windows 2008 R2 database server from an X5470 to a X5560. The theory is both CPUs have very similar performance, if anything the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so and CPU usage has been pretty high. Page life expectancy is massive, we are getting almost 100% cache hit for the pages, so memory is not a problem. When I ran: SELECT * FROM sys.dm_os_wait_stats order by signal_wait_time_ms desc I got: wait_type waiting_tasks_count wait_time_ms max_wait_time_ms signal_wait_time_ms ------------------------------------------------------------ -------------------- -------------------- -------------------- -------------------- XE_TIMER_EVENT 115166 2799125790 30165 2799125065 REQUEST_FOR_DEADLOCK_SEARCH 559393 2799053973 5180 2799053973 SOS_SCHEDULER_YIELD 152289883 189948844 960 189756877 CXPACKET 234638389 2383701040 141334 118796827 SLEEP_TASK 170743505 1525669557 1406 76485386 LATCH_EX 97301008 810738519 1107 55093884 LOGMGR_QUEUE 16525384 2798527632 20751319 4083713 WRITELOG 16850119 18328365 1193 2367880 PAGELATCH_EX 13254618 8524515 11263 1670113 ASYNC_NETWORK_IO 23954146 6981220 7110 1475699 (10 row(s) affected) I also ran -- Isolate top waits for server instance since last restart or statistics clear WITH Waits AS ( SELECT wait_type, wait_time_ms / 1000. AS [wait_time_s], 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct], ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn] FROM sys.dm_os_wait_stats WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE', 'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE', 'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH', 'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN')) SELECT W1.wait_type, CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s, CAST(W1.pct AS DECIMAL(12, 2)) AS pct, CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct FROM Waits AS W1 INNER JOIN Waits AS W2 ON W2.rn <= W1.rn GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold And got wait_type wait_time_s pct running_pct CXPACKET 554821.66 65.82 65.82 LATCH_EX 184123.16 21.84 87.66 SOS_SCHEDULER_YIELD 37541.17 4.45 92.11 PAGEIOLATCH_SH 19018.53 2.26 94.37 FT_IFTSHC_MUTEX 14306.05 1.70 96.07 That shows huge amounts of time synchronizing queries involving parallelism (high CXPACKET). Additionally, anecdotally many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code) The server has not been under load for more than a day or so. We are experiencing a large amount of variance with query executions, typically many queries appear to be slower that they were on our previous DB server and CPU is really high. Will disabling Hyperthreading help at reducing our CPU usage and increase throughput?
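
    Not asked above, but given that the waits are dominated by CXPACKET and there are no MAXDOP hints in the code, one hedged experiment (independent of the hyperthreading question) is to cap parallelism at the instance level and see whether the variance disappears. A sketch using the standard sp_configure workflow; the value 4 is only an assumed starting point, not a recommendation from this post:

    -- 'max degree of parallelism' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- cap each query at 4 schedulers (pick a value that suits the hardware)
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;
    -- verify the running value
    EXEC sp_configure 'max degree of parallelism';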

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple, test PostgreSQL 9.0 database, as per the documentation. In postgres.conf I've set:
    wal_level = archive
    archive_mode = on
    archive_command = 'touch /home/myusername/backup/testtouch'
    archive_timeout = 30s
    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:
    postgres=# SELECT pg_start_backup('touchtest');
     pg_start_backup
    -----------------
     0/14000020
    (1 row)
    postgres=# SELECT pg_stop_backup();
    NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
    WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
    HINT: Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.
    What would cause this? How can I troubleshoot it? Additional info: Running on CentOS 5.4. PostgreSQL 9.0.2 installed as root.
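
    One way to narrow this down (not part of the original question; it assumes PostgreSQL 9.0 function names and a superuser connection) is to confirm which settings the running server actually loaded and then force a WAL segment switch, which should invoke archive_command within a few seconds if archiving is really active:

    -- confirm what the server picked up after the restart
    SHOW archive_mode;
    SHOW archive_command;
    -- close the current WAL segment and hand it to the archiver
    SELECT pg_switch_xlog();

    Note that a plain touch proves little on its own; a realistic archive_command copies the segment, e.g. 'cp %p /home/myusername/backup/%f', where %p and %f are the placeholders PostgreSQL substitutes.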

    Read the article

  • Sporadic crash of master-slave MySQL replication process

    - by obarshay
    Hello, I was wondering if someone has experienced this and can perhaps provide some insight into this issue. We have a plan-vanilla MySQL master-slave replication set up. The tables are MyISAM and the master can get quite read/write active. We use the slave instance to perform full daily backups in order to avoid bringing down the master server. The backup process does the following: STOP SLAVE SQL_THREAD mysqlhotcopy all tables START SLAVE SQL_THREAD Every once in a while (once a month or so) the replication breaks with varying error messages indicating a corrupt query or log file. Here's one that happened last night: mysql> show slave status \G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: server8.propreports.com Master_User: nexus8 Master_Port: 3306 Connect_Retry: 60 Master_Log_File: bin.000045 Read_Master_Log_Pos: 581644327 Relay_Log_File: relay.000086 Relay_Log_Pos: 94131 Relay_Master_Log_File: bin.000045 Slave_IO_Running: Yes Slave_SQL_Running: No Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 1064 Last_Error: Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '138070603'£' at line 1' on query. Default database: 'wtsdb'. Query: 'UPDATE fill SET clearing_fee='0.0E id='138070603'£' Skip_Counter: 0 Exec_Master_Log_Pos: 4164743 Relay_Log_Space: 577574251 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL I follow the following procedure to recover from above error and resume replication: stop slave; change master to MASTER_LOG_POS = 4164743, MASTER_LOG_FILE = 'bin.000045'; start slave; We have multiple servers set up this way and they all sporadically stop replicating with a similar error. Any advice on how to resolve this would be greatly appreciated.
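
    For completeness, the usual alternative to repositioning with CHANGE MASTER is to skip only the single corrupted event, which keeps the rest of the relay log intact. A sketch using standard MySQL replication commands; it is only safe once you have confirmed that the statement being skipped really is garbled rather than a transaction you need:

    STOP SLAVE;
    -- skip exactly one event from the relay log, then resume
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;
    SHOW SLAVE STATUS\G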

    Read the article

  • Sorting/grouping when there are multiple values in one cell

    - by ngm
    I have an Excel 2007 spreadsheet, where each row of the dataset describes a feature of a piece of software. One of the columns in the spreadsheet is Relevant Users, which describes which users of the software the feature is of interest to. There may be a couple of different users interested in a feature, in which case I've been filling in the cell with the two user types separated by a colon, e.g. 'Usertype A; Usertype D'. Occassionally, I'd like to sort my data by the Relevant Users column. However, the way I'm populating the column means the sorting isn't very smart. If I have a feature where 'Relevant Users' is 'Usertype A; Usertype D', and then I sort by Relevant Users, that feature will be grouped at the end of all the other features of relevant to Usertype A, as it's just sorting alphabetically. But I want it to be listed in the two separate groups of Usertype A and Usertype D. Or, if I have a pivot table that groups the features together under the heading of Relevant User, I'll get all the features for 'Usertype A', then 'Usertype B', then 'Usertype C', then 'Usertype D', then 'Usertype A; Usertype D', etc. Whereas I really want a feature with Relevant Users as 'Usertype A; Usertype D' to show up in both the Usertype A group and the Usertype D group. I guess if this information was in a database I might have a many-to-many table linking Relevant Users to features. But is there a way to go about having this kind of many-to-many relationship in Excel?

    Read the article

  • Sql Server 2005 database lost, How to recover all records. MDF/LDF size is same as it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option so he could take backups regularly, but he never took any. Today he called me saying "I am not able to see any of my records." I visited my client's system and saw that no records were present in the tables; there was not even a single row in any of them. Then I checked whether he had any backup file, which turned out to be absent. I asked him what the possible cause could be; he said it might be due to a virus. After this I checked the size of the MDF and LDF files and found them to be what they should be: when I created his database the MDF/LDF files held about 2 MB of data, and now they are 83 MB and 193 MB respectively. This suggests the data is still present in the files but is not being displayed. What could be the possible cause, and how can I restore all the data back to my tables?
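
    A quick way to test the "data is still there but not displayed" theory (not from the original post; it assumes you can connect to the database and uses the SQL Server 2005 catalog views, with a hypothetical database name) is to ask the catalog for per-table row counts:

    USE ClientDatabase;  -- hypothetical name, substitute the client's database
    SELECT OBJECT_NAME(p.object_id) AS table_name,
           SUM(p.rows)              AS approximate_rows
    FROM   sys.partitions AS p
    WHERE  p.index_id IN (0, 1)      -- heap or clustered index only
    GROUP  BY p.object_id
    ORDER  BY approximate_rows DESC;

    If this also reports zero rows everywhere, the rows really are gone, and the larger MDF/LDF files simply reflect space that has not been released.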

    Read the article

  • How to read an Excel file, get and set the information using POI

    - by user1399713
    I'm using Java to read a form that is in an Excel spreadsheet which the user fills in with information about a geometric shape. Ex: Shape :_________ Color :_________ Area: _________ Perimeter:________ With the code I have so far I can read what I want in the form and print out the values of Shape, Color, Area, Perimeter.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    import org.apache.poi.hssf.usermodel.HSSFWorkbook;
    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.Name;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.util.AreaReference;
    import org.apache.poi.ss.util.CellReference;

    public class RangeSetter {

        /**
         * @param args
         * @throws IOException
         */
        public static void main(String[] args) throws IOException {
            FileInputStream file = new FileInputStream(new File("test2.xls")); // C:\Users\Yo\Documents

            // Setup code
            String cname = "Shape";
            HSSFWorkbook wb = new HSSFWorkbook(file); // retrieve workbook

            // Retrieve the named range
            // Will be something like "$C$10,$D$12:$D$14";
            int namedCellIdx = wb.getNameIndex(cname);
            Name aNamedCell = wb.getNameAt(namedCellIdx);

            // Retrieve the cell at the named range and test its contents
            // Will get back one AreaReference for C10, and
            // another for D12 to D14
            AreaReference[] arefs = AreaReference.generateContiguous(aNamedCell.getRefersToFormula());
            for (int i = 0; i < arefs.length; i++) {
                // Only get the corners of the Area
                // (use arefs[i].getAllReferencedCells() to get all cells)
                CellReference[] crefs = arefs[i].getAllReferencedCells();
                for (int j = 0; j < crefs.length; j++) {
                    // Check it turns into real stuff
                    Sheet s = wb.getSheet(crefs[j].getSheetName());
                    Row r = s.getRow(crefs[j].getRow());
                    Cell c = r.getCell(crefs[j].getCol());
                    if (c != null) {
                        switch (c.getCellType()) {
                            case Cell.CELL_TYPE_STRING:
                                System.out.println(c.getStringCellValue());
                        }
                    }
                }
            }
        }
    }

    What I want to do is to create a method that gets that information and another that sets it. So far I can only print to the console.

    Read the article

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2, and I have two guest machines. One is an SME server, and I had several crashes during a backup session with rsync to another PC; normal activities run regularly. The other guest has been up without problems for 25 days. I found a lot of rows like these in the log:
    Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
    Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
    Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
    Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
    Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)
    When the virtual machine crashes, the log reports the following (I paste only part of it here to limit the length of the message):
    Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
    Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
    Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
    Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
    Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
    Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
    Dec 27 01:48:05.762: Worker#2|SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
    Dec 27 01:48:05.779: Worker#2|Msg_Post: Error
    Dec 27 01:48:05.780: Worker#2|http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
    Dec 27 01:48:05.780: Worker#2|Unexpected signal: 6.
    I have no idea how to solve the problem with this installation; I am thinking of downgrading the host to a version more compatible with VMware Server 2. I have read a lot of posts about installation difficulties, and I think the compilation problems during installation could be related to my problem. Excuse me if the post isn't very clear, it's my first post here. Any help or suggestion will be appreciated. Thanks

    Read the article

  • How can I make the Windows 7 taskbar behave like a cross between the old Quick Launch and new Superbar?

    - by frumious
    I really like the taskbar in Windows 7; I think combining buttons to launch apps and the icons that show your running apps is groovy. However, because I like having as much space as possible, I've got small icons enabled and shrunk the bar down to one row. I've also told it not to group the running apps unless there's no space left (to save me having to work harder to find the particular window I want), which also means that they have captions, and are thus quite wide. The (admittedly small) problem this gives me is that I can pin all my favourite apps to the bar, which looks much like the old Quick Launch bar, but when I launch them the running apps become much wider, and the unlaunched apps get lost amongst them. I can manually change the order to fix this, but next time I'll launch a different app and I'll be back to square one. What I'd prefer is for small unlaunched icons to be kept on the left, and wider running apps to move over to the right, which for me would be the best of both worlds. Is there any way I can organise that? I'm aware that one can use the traditional quick launch bar in Windows 7, but that's not what I'm after; I generally prefer the Windows 7 way.

    Read the article

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    Boss asked me to check to see if I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIs Mailbox Store, category General, Event 9690. The message said (edited to make generalized): Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately. Hmm...happened at five in the morning, and I'm thinking this is a pretty good hint that this leads to the culprit. Thing is I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", standard, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.

    Read the article

  • PC freezes after repeated clicking noises

    - by Péter Török
    I have an oldish PC (Athlon XP 2200, WinXP). Every time I switch it on, I hear a loud "click" first of all, and when it is shut down, again a click is the last sound I hear. Lately it started to behave erratically: it started to click loudly in the middle of a session. First only once in a while, then repeatedly for several seconds in a row, and finally it froze completely. We were not doing anything particular during these times, usually just browsing the web. This has already happened twice in a few week's time period. We typically use it only in the evenings, so when the freeze happened, I just decided it is time for the bed. When it was started up the next day, everything looked fine. Any hints on what could be the culprit? Could it be caused by an ageing heat fan, or dust that has accumulated inside the case? We are backing up all the data stored on it, and then will open the case to look inside, but I thought it would be good to get some background info first of all.

    Read the article

  • Ubuntu Postfix email account with forward

    - by Mika
    I have an Ubuntu 12.04 server with Postfix installed. In Postfix installation I used this guide https://help.ubuntu.com/community/Postfix. I didn't go through all of that, just the sudo dpkg-reconfigure postfix part. I have created user accounts to my server and the users home directories contain a .forward file which have only one row the email address to forward to. I have defined dns A records for the names www.mydomain.com and mydomain.com But if I send an email to [email protected] it doesn't get forwarded. Actually I can't see any sign about any email ever visiting my server. My firewall is defined to allow incoming traffic for ports 80, 443 and 22. For outgoing traffic it allows ports 587 and 22. The exact definitions are below. Should I allow also outgoing http (port 80)? or maybe port 25? # Allow ssh in iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT # Allow incoming HTTP iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # Allow incoming HTTPS iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT # Allow outgoing SSH iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT # Allow outgoing emails iptables -A OUTPUT -o eth0 -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --sport 587 -m state --state ESTABLISHED -j ACCEPT Edits: I found lines from my syslog telling me that there were incoming traffic for port 25 which was blocked. The sender ip's for those packages were trustworthy, so I opened also port 25. Now I can see some Postfix logging in my syslog. It looks like it is at least trying to forward emails. I haven't yet received any forwarder emails into my gmail mail box.

    Read the article

  • mysql INNODB inserts very slow

    - by 133794m3r
    The database's schema is as follows. CREATE TABLE `items` ( `id` mediumint( 8 ) unsigned NOT NULL AUTO_INCREMENT , `name` varchar( 45 ) NOT NULL , `main_type` tinyint( 4 ) NOT NULL , `rarity` tinyint( 4 ) NOT NULL , `stack_size` smallint( 6 ) NOT NULL , `sub_type` tinyint( 4 ) NOT NULL , `cost` mediumint( 8 ) unsigned NOT NULL , `ilvl` smallint( 6 ) unsigned NOT NULL DEFAULT '0', `flavor_text` varchar( 250 ) NOT NULL , `rlvl` tinyint( 3 ) unsigned NOT NULL , `final` tinyint( 4 ) NOT NULL DEFAULT '0', PRIMARY KEY ( `id` ) ) ENGINE = InnoDB DEFAULT CHARSET = ascii; Now, doing an insert on this table takes 0.22 seconds. I don't know why it's taking so long to do a single row insert. Reads are really really fast something like 0.005 seconds. With using the example configuration from here dev mysql innodb it averages ~0.002 to ~0.005 seconds. Why it takes more than 100x more time to do a single insert makes no sense to me. My computer is as follows. OS:Debian Sid x86-x64, Mysql 5.1, RAM:4GB ddr2, cpu 2.0Ghz dual core, HDD 7200RPM 32MB cache 640GB. Why it's taking almost 100x as much time for a SELECT * FROM items; vs INSERT INTO items ...; will never make any sense to me. It's still a small table at only 70 rows, and took that long even when it had 0 rows.
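
    Not part of the question, but the most common first check for this symptom (around 0.2 s per single-row INSERT on InnoDB) is whether every commit is forcing a log flush to disk. The sketch below only shows how to inspect the relevant setting and how batching inserts into one transaction amortises the per-commit flush; the inserted values are placeholders:

    -- 1 (the default) flushes and syncs the log at every commit; 2 or 0 relax that at some durability cost
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

    -- many single-row inserts inside one transaction pay for only one flush at COMMIT
    START TRANSACTION;
    INSERT INTO items (name, main_type, rarity, stack_size, sub_type, cost, ilvl, flavor_text, rlvl)
    VALUES ('test item', 1, 1, 1, 1, 10, 0, 'placeholder', 1);
    -- ... more INSERTs ...
    COMMIT;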

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
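
    One hedged way to check mysqltuner's main recommendation above (an 8 MB InnoDB buffer pool against roughly 500 MB of InnoDB data on production) is to compare logical read requests with the reads that actually had to hit disk; a poor hit ratio on production but not on development would line up with the long "Sending data" phase. Standard MySQL 5.1 counters:

    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    -- physical reads that missed the buffer pool
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
    -- total logical read requests
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
    -- hit ratio ~ 1 - (reads / read_requests); on 5.1 raising innodb_buffer_pool_size needs a my.cnf change and a restart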

    Read the article
