Search Results

Search found 11531 results on 462 pages for 'cpu cache'.

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speed (1 MB/s) but much slower than USB 2.0 speed (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should be < 3 min). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at the expected speeds. What I see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:
    [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
    [64059.526419] scsi8 : usb-storage 2-1.2:1.0
    [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
    [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
    [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
    [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
    [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
    [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
    [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
    [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
    [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
    [64060.541290] sdd: sdd1
    [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
    [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
    [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
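    The progress bar jumping to 90% and then stalling is consistent with writes landing in the Linux page cache first and only later being flushed to the slow stick. A minimal sketch for measuring the real device throughput with the cache taken out of the picture (the mount point /media/usb is an assumption; substitute your own):

        # Time a 512 MB write, forcing it to the device before dd exits,
        # so the page cache cannot flatter the result.
        sync
        time dd if=/dev/zero of=/media/usb/testfile bs=1M count=512 conv=fsync
        rm /media/usb/testfile

    If this reports roughly the speed implied by the whole 1.8 GB copy, the stick itself is the bottleneck rather than the copy tool.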

  • slow virtualbox guest

    - by ecoologic
    I run an Ubuntu 12.04 guest on an Ubuntu 12.04 host with VirtualBox, and the guest is much, much slower than the host (ALT+TAB takes 4-5 seconds). I had a look around and found contradicting opinions on VirtualBox vs. VMware (free), so I thought I'd keep the former. Both systems are updated, I installed the guest additions, and I evenly split memory and video memory (64 MB) between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for machine virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help. EDIT: Idling with the system monitor open, the (single) guest CPU never gets below 55% and can rise to 80-90% just from moving the mouse around; opening Firefox pushes the guest monitor to 100%, while the host shows both CPUs evenly working at around 60%. My CPU is an Intel® Core™2 Duo CPU T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?
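    One thing worth checking first, offered as a hedged suggestion: whether the host CPU exposes hardware virtualization at all. Intel's published specifications list the Core 2 Duo T5450 as lacking VT-x, and without it VirtualBox falls back to much slower software virtualization, which would match these symptoms:

        # On the host: count CPUs advertising VT-x (vmx) or AMD-V (svm).
        # Zero means no hardware virtualization support is available.
        egrep -c '(vmx|svm)' /proc/cpuinfo

    If the count is zero, no BIOS option or VirtualBox setting can add it; the machine is simply limited for full virtualization.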

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010. Question: I opened VS2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME." Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent. From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?) Technorati Tags: TFS 2010, TFS Workspaces, Team System, Team Foundation Server 2010
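    For reference, a sketch of the cache-clearing sequence the answer describes (the server URLs are placeholders). The first command only clears the local cache, it does not delete any workspaces; the second re-reads the cache from the upgraded server:

        tf workspaces /remove:* /server:http://oldtfs:8080
        tf workspaces /server:http://newtfs:8080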

  • Information about rendering, batches, the graphics card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background information, I'm clueless. I'm using XNA, so it'd be nice if theory could be related to that. What I actually want to know is: what happens where (CPU/GPU) when you perform specific draw actions? What is a batch? What influence do effects, projections, etc. have? Is data persisted on the graphics card, or is it transferred over at every step? When there's talk about bandwidth, are you talking about graphics card internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process happens - that's the GPU's business - I'm interested in all the overhead that precedes that. I'd like to understand what's going on when I do action X, to adapt my architectures and practices to that. Any articles (possibly with code examples), information, links, or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

  • Cannot boot Ubuntu 13.10 from my USB. Can I change the kernel on my laptop to run it?

    - by Carlos Dunick
    Currently I am running 12.04 and looking to upgrade to 13.10. I first tried a bootable 64-bit USB and failed, with the message "This kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." I then tried 32-bit and the same message came up. Is this due to my laptop simply being too slow? Or can/should I change the kernel somehow? Acer Aspire 5710z, Intel Pentium dual-core processor, 1.73 GHz, 533 MHz FSB, 1 MB L2 cache, 2 GB DDR2. lspci output:
    00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02)
    00:1c.2 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 3 (rev 02)
    00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02)
    00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02)
    00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02)
    00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
    00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
    00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
    00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
    00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02)
    04:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
    05:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
    06:00.0 FLASH memory: ENE Technology Inc ENE PCI Memory Stick Card Reader Controller
    06:00.1 SD Host controller: ENE Technology Inc ENE PCI SmartMedia / xD Card Reader Controller
    06:00.2 FLASH memory: ENE Technology Inc Memory Stick Card Reader Controller
    06:00.3 FLASH memory: ENE Technology Inc ENE PCI Secure Digital / MMC Card Reader Controller
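    A hedged first step before changing anything: check from the running 12.04 install whether the CPU is 64-bit capable at all. The Pentium T-series parts shipped in the Aspire 5710z are generally 32-bit only, which would explain the 64-bit image failing; the 32-bit image failing with the same message more likely points at a bad download or a badly written USB stick:

        # "lm" (long mode) in the flags means the CPU can run 64-bit kernels;
        # no output means it is 32-bit only and needs the i386 image.
        grep -o -w 'lm' /proc/cpuinfo | sort -u
        uname -m

    If there is no lm flag, re-download the 32-bit (i386) 13.10 image, verify its checksum, and rewrite the USB stick.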

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the login options is to sit down at the machine and start a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience, as well as putting strain on the NFS server, which causes problems for other users. Ubuntu has the handy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to move the default locations of .cache and .config from the home folder to somewhere else. There are several places to set them, but none of them is optimal:
    - /etc/environment. Pros: works across all shells. Cons: cannot use variables like $USER, so every user's new .cache and .config location would be the same directory.
    - /etc/bash.bashrc. Pros: $USER works, so you can place users' files in different folders. Cons: only gets run for bash-compatible shells.
    - ~/.pam_environment. Pros: works regardless of shell. Cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user.
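    One possible compromise, offered as an assumption rather than something from the post: a snippet in /etc/profile.d can expand $USER, applies to login shells and to most display-manager sessions that source the profile, and keeps the hot directories on local disk (all paths here are placeholders):

        # /etc/profile.d/xdg-local.sh -- a sketch; adjust paths to taste.
        # Redirect cache/config writes away from the NFS-mounted home.
        export XDG_CACHE_HOME="/var/local/cache/$USER"
        export XDG_CONFIG_HOME="/var/local/config/$USER"
        mkdir -p "$XDG_CACHE_HOME" "$XDG_CONFIG_HOME"

    The caveat remains that non-login and non-POSIX shells may not source it, so it trades away some of the coverage of ~/.pam_environment.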

  • GillSans font doesn't show up in any applications

    - by roboknight
    I am running Ubuntu 10.04 (Lucid Lynx) and I'm trying to add some fonts for use with LibreOffice. In particular, I've placed a GillSans.ttc font file into /usr/share/fonts/truetype by copying it there with: sudo cp GillSans.ttc /usr/share/fonts/truetype and then attempting to "install" it by updating the font cache: sudo fc-cache -fv That appears to complete successfully; however, the font never seems to be available to either LibreOffice or Fontmatrix. In addition, I tried adding the font to my .fonts directory, thinking that I was somehow seeing a cache of just user-supplied fonts, but that isn't the case. The file is a .ttc file; are there some other tools I need to use to extract each font from the .ttc, or is there something else I must do to use a .ttc? Nothing appears to have any issue with the file, but it could be silently failing to do anything. I've looked for the answer in other places, but the only time a similar question was asked, it did not appear to be answered, so I figured I would ask the question differently and see if anyone else might be able to clue me in on how to use a .ttc font file, if that is possible. A related question: are there any tools to see what font families/font names are contained within a .ttc font file?
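    A hedged way to check whether fontconfig can actually parse the collection, and which families it exposes (fc-scan and fc-list ship with the fontconfig package):

        # Print the faces fontconfig finds inside the TrueType collection.
        fc-scan /usr/share/fonts/truetype/GillSans.ttc | grep -iE 'family|style'
        # After fc-cache -fv, confirm the family made it into the cache.
        fc-list | grep -i gill

    If fc-scan prints nothing, the .ttc itself is the problem (corrupt, or in a flavor fontconfig cannot read) rather than the cache step.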

  • How Do Computers Work? [closed]

    - by Rob P.
    This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. But I don't know how computers work! Please, bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.ReadLine() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an electronics question and not a 'computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know.)

  • PARTNER News: Tips and Guidelines from Avago (formerly LSI)

    - by Zeynep Koch
    In this blog write-up we would like to focus our attention on one of our IHV partners, Avago (formerly LSI). Avago and Oracle have been collaborating at many levels for many years. At the lowest level, Avago and Oracle engineer solutions to inbox advanced features in our I/O device drivers. We collaborate to test, verify and optimize these drivers in Oracle Linux with the Unbreakable Enterprise Kernel. Both LSI Nytro and Sun F-Series PCIe flash devices are supported inbox in Oracle Linux with the Unbreakable Enterprise Kernel. By collaborating early in the engineering design cycle we can find and resolve issues sooner and deliver to the end customer a fully optimized platform for I/O efficiency and data protection. Hear more about the partnership and its benefits in this podcast: LSI and Oracle Partnership. Avago has also been working on a technical whitepaper and a video whiteboard to explain some of the optimizations you can achieve by using smart flash cache with Oracle Linux. Technical paper: Improve Database Performance Using Sun Flash Accelerator Card, Database Smart Flash Cache and Oracle Linux. Video: Improving DB Performance with Database Smart Flash Cache. If you want more information about the partnership and product benefits, you can visit the LSI Oracle alliance page.

  • Series On Embedded Development (Part 1)

    - by user12612705
    This is the first in a series of entries on developing applications for the embedded environment. Most of this information is relevant to any type of embedded development (and even to desktop and server too), not just Java. This information is based on a talk Hinkmond Wong and I gave at JavaOne 2012 entitled Reducing Dynamic Memory in Java Embedded Applications. One thing to remember when developing embedded applications is that memory matters. Yes, memory matters in desktop and server environments as well, but there's just plain less of it in embedded devices. So I'm going to be talking about saving this precious resource as well as another precious resource, CPU cycles... and a bit about power too. CPU matters too, and again, in embedded devices, there's just plain less of it. What you'll find, no surprise, is that there's a trade-off between performance and memory. To get better performance, you need to use more memory, and to save more memory, you need to use more CPU cycles. I'll be discussing three Memory Reduction Categories:
    - Optionality, both build-time and runtime. Optionality is about providing options so you can get rid of the stuff you don't need and include the stuff you do need.
    - Tunability, which is about providing options so you can tune your application by trading performance for size, and vice versa.
    - Efficiency, which is about balancing size savings with performance.

  • [Japanese-language post; title lost to encoding damage]

    - by Tatsuya Sugi
    [The Japanese-language body of this post was lost to encoding damage. The surviving fragments indicate a discussion of migrating legacy (COBOL-era) enterprise systems to Java versus .NET, Java in high-performance computing (HPC), including the prevalence of x86 CPUs in the TOP500 list, JVM tuning, and Oracle Coherence as an in-memory data grid.]

  • Club Platinum 2010 [remainder of Japanese title lost to encoding damage]

    - by Urakawa
    [The Japanese-language body of this post was lost to encoding damage. The surviving fragments describe a "Club Platinum" event for ORACLE MASTER Platinum holders held on May 19, with sessions covering Oracle Database 11g R2 (ASM, the ASM Cluster File System (ACFS), RAC One Node), Oracle Exadata (Exadata Smart Flash Cache, data loading, Exadata Smart Scan, Exadata Hybrid Columnar Compression), and Oracle Database 11g Real-Time SQL Monitoring with the SQL Monitor active report. It references the blogs "Inside the Oracle Optimizer" and "Kevin Closson's Oracle Blog", an Oracle Database 11g R2 PDF handout, and ORACLE MASTER certification updates.]

  • GoldenGate Replicat and the HANDLECOLLISIONS parameter [Chinese title partially lost to encoding damage]

    - by Liu Maclean
    [The original post is in Chinese; most of the Chinese prose was lost to encoding damage. What follows is an English reconstruction of the recoverable content.]
    HANDLECOLLISIONS is a GoldenGate REPLICAT parameter for resolving collisions between replicated operations and existing target data, typically used during initial load or resynchronization. With HANDLECOLLISIONS in effect:
    - A delete arriving for a row that is missing on the target (a missing delete) is written to the discard file.
    - An update arriving for a row that is missing on the target (a missing update): if the primary key itself was updated, the operation is converted into an INSERT, but columns not carried in the trail record may end up NULL; otherwise the record is written to the discard file.
    - An insert arriving for a row that already exists on the target (a duplicate insert) is converted into an UPDATE of the existing row.
    The post demonstrates each case on an Oracle Database 11.2.0.3 source/target pair (schemas SENDER and RECEIVER, test table HANDLEC with primary key column T1). Without HANDLECOLLISIONS, a missing delete or missing update abends the replicat with OCI Error ORA-01403 (no data found) and OGG-01296 ("Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC"). The abended replicat can be started past the offending transaction with:
        GGSCI> start rep2 skiptransaction
    To avoid NULL columns when a primary-key update is converted into an INSERT, add FETCHOPTIONS FETCHPKUPDATECOLS to the EXTRACT parameter file so the full row image is fetched and written to the trail; the converted INSERT on the target then carries all column values. The replicat parameter file used in the tests:
        replicat rep2
        userid receiver, password oracle
        ASSUMETARGETDEFS
        HANDLECOLLISIONS
        map sender.*, target receiver.*;
    HANDLECOLLISIONS can also be switched off at runtime without editing the parameter file:
        GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS
        Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ...
        REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries

  • Doctrine 2 cannot find entities

    - by Flyn San
    I'm using Kohana 3 and have a /doctrine/Entities folder with my entities inside. When executing the code $product = Doctrine::em()->find('Entities\Product', 1); in my controller, I get the error: class_parents(): Class Entities\Product does not exist and could not be loaded. Below is the controller (classes/controller/welcome.php):
    <?php
    class Controller_Welcome extends Controller
    {
        public function action_index()
        {
            $prod = Doctrine::em()->find('Entities\Product', 1);
        }
    }
    Below is the entity (/doctrine/Entities/Product.php):
    <?php
    /**
     * @Entity
     * @Table{name="products"}
     */
    class Product
    {
        /** @Id @Column{type="integer"} */
        private $id;
        /** @Column(type="string", length="255") */
        private $name;
        public function getId() { return $this->id; }
        public function setId($id) { $this->id = intval($id); }
        public function getName() { return $this->name; }
        public function setName($name) { $this->name = $name; }
    }
    Below is the Doctrine module bootstrap file (/modules/doctrine/init.php):
    class Doctrine
    {
        private static $_instance = null;
        private $_application_mode = 'development';
        private $_em = null;
        public static function em()
        {
            if ( self::$_instance === null )
                self::$_instance = new Doctrine();
            return self::$_instance->_em;
        }
        public function __construct()
        {
            require __DIR__.'/classes/doctrine/Doctrine/Common/ClassLoader.php';
            $classLoader = new \Doctrine\Common\ClassLoader('Doctrine', __DIR__.'/classes/doctrine');
            $classLoader->register();
            $classLoader = new \Doctrine\Common\ClassLoader('Symfony', __DIR__.'/classes/doctrine/Doctrine');
            $classLoader->register();
            $classLoader = new \Doctrine\Common\ClassLoader('Entities', APPPATH.'doctrine');
            $classLoader->register();
            // Set up caching method
            $cache = $this->_application_mode == 'development'
                ? new \Doctrine\Common\Cache\ArrayCache
                : new \Doctrine\Common\Cache\ApcCache;
            $config = new Configuration;
            $config->setMetadataCacheImpl( $cache );
            $driver = $config->newDefaultAnnotationDriver( APPPATH.'doctrine/Entities' );
            $config->setMetadataDriverImpl( $driver );
            $config->setQueryCacheImpl( $cache );
            $config->setProxyDir( APPPATH.'doctrine/Proxies' );
            $config->setProxyNamespace('Proxies');
            $config->setAutoGenerateProxyClasses( $this->_application_mode == 'development' );
            $dbconf = Kohana::config('database');
            $dbconf = reset($dbconf); // Use the first database specified in the config
            $this->_em = EntityManager::create(array(
                'dbname'   => $dbconf['connection']['database'],
                'user'     => $dbconf['connection']['username'],
                'password' => $dbconf['connection']['password'],
                'host'     => $dbconf['connection']['hostname'],
                'driver'   => 'pdo_mysql',
            ), $config);
        }
    }
    Any ideas what I've done wrong?

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer. Why start now? The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance. Why wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it? Limitations: our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

  • HTTP caching confusion

    - by Keith
    I'm not sure whether this is a server issue, or whether I'm failing to understand how HTTP caching really works. I have an ASP MVC application running on IIS7. There's a lot of static content as part of the site including lots of CSS, Javascript and image files. For these files I want the browser to cache them for at least a day - our .css, .js, .gif and .png files rarely change. My web.config goes like this: <system.webServer> <staticContent> <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" /> </staticContent> </system.webServer> The problem I'm getting is that the browser (tested Chrome, IE8 and FX) doesn't seem to be caching the files as I'd expect. I've got the default settings (check for newer pages automatically in IE). On first visit the content downloads as expected HTTP/1.1 200 OK Cache-Control: max-age=86400 Content-Type: image/gif Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT Accept-Ranges: bytes ETag: "3efeb2294517ca1:0" Server: Microsoft-IIS/7.0 X-Powered-By: ASP.NET Date: Mon, 07 Jun 2010 14:29:16 GMT Content-Length: 918 <content> I think that the Cache-Control: max-age=86400 should tell the browser not to request the page again for a day. Ok, so now the page is reloaded and the browser requests the image again. This time it gets an empty response with these headers: HTTP/1.1 304 Not Modified Cache-Control: max-age=86400 Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT Accept-Ranges: bytes ETag: "3efeb2294517ca1:0" Server: Microsoft-IIS/7.0 X-Powered-By: ASP.NET Date: Mon, 07 Jun 2010 14:30:32 GMT So it looks like the browser has sent the ETag back (as a unique id for the resource), and the server's come back with a 304 Not Modified - telling the browser that it can use the previously downloaded file. It seems to me that would be correct for many caching situations, but here I don't want the extra round trip. I don't care if the image gets out of date when the file on the server changes. There are a lot of these files (even with sprite-maps and the like) and many of our clients have very slow networks. Each round trip to ping for that 304 status is taking about a 10th to a 5th of a second. Many also have IE6 which only has 2 HTTP connections at a time. The net result is that our application appears to be very slow for these clients with every page taking an extra couple of seconds to check that the static content hasn't changed. What response header am I missing that would cause the browser to aggressively cache the files? How would I set this in a .Net web.config for IIS7? Am I misunderstanding how HTTP caching works in the first place?

  • Should I HttpCombine Google's hosted jQuery file?

    - by chobo2
    Hi, I am using something called HttpCombiner: http://code.msdn.microsoft.com/HttpCombiner. It is an HTTP handler that combines multiple CSS, Javascript or URL references into one response for faster page load. It can combine, compress and cache the response, which results in faster page load and better scalability of the web application. It's a good practice to use many small Javascript and CSS files instead of one large Javascript/CSS file for better code maintainability, but this is bad in terms of website performance. Although you should write your Javascript code in small files and break large CSS files into small chunks, when the browser requests those javascript and css files, it makes one HTTP request per file. Every HTTP request results in a network roundtrip from your browser to the server, and the delay in reaching the server and coming back to the browser is called latency. So, if you have four javascripts and three css files loaded by a page, you are wasting time in seven network roundtrips. Within the USA, latency averages 70 ms, so you waste 7x70 = 490 ms, about half a second of delay. Outside the USA, average latency is around 200 ms, which means 1400 ms of waiting. The browser cannot show the page properly until CSS and Javascripts are fully loaded, so the more latency you have, the slower the page loads. You can reduce the wait time by using a CDN (read my previous blog post about using a CDN). However, a better solution is to deliver multiple files over one request using an HttpHandler that combines several files and delivers them as one output. So, instead of putting many <script> or <link> tags, you just put one <script> and one <link> tag, and point them to the HttpHandler. You tell the handler which files to combine and it delivers those files in one response. This saves the browser from making many requests and eliminates the latency. This HTTP handler reads the file names defined in a configuration, combines all those files and delivers them as one response. It delivers the response gzip-compressed to save bandwidth, and it generates proper cache headers to cache the response in the browser cache, so that the browser does not request it again on a future visit. Now I am wondering: since it can handle adding links, should I put the jQuery file in it? The reason I am not sure is that if it gets combined with my other files, I think I might lose the advantages of it being hosted on Google's servers, such as caching (my thinking is that if it gets combined it will look different, so even if a user already has it in their cache, I am not sure whether the cached copy will be used or not). So should I combine it, or only the files that I am hosting locally?
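    A hedged way to see what is at stake (the jQuery version in the URL is an assumption): Google's hosted, versioned jQuery is served with a long cache lifetime, so a visitor who already has it cached from any other site never downloads it again, and folding it into a local combined bundle gives that up:

        # Inspect the cache headers Google serves for the hosted file.
        curl -sI http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js | grep -iE 'cache-control|expires'

    Comparing that output against the combined handler's headers makes the trade-off concrete; the usual advice is to keep the CDN-hosted jQuery as its own <script> tag and combine only the local files.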

  • Ajax gets nothing back from the PHP

    - by ShaMun
    In jQuery I never get the alert, and in Firefox I see nothing in the response. The code was working before, and the database query successfully returns records. What am I missing? The jQuery AJAX call:
    $.ajax({
        type     : "POST",
        url      : "include/add_edit_del.php?model=teksten_display",
        data     : "oper=search&ids=" + _id,
        dataType : "json",
        success  : function(msg) {
            alert(msg);
        }
    });
    The PHP:
    case 'teksten_display':
        $id  = $_REQUEST['ids'];
        $res = $_dclass->_query_sql( "select a,b,id,wat,c,d from tb1 where id='" . $id . "'" );
        $_rows = array();
        while ( $rows = mysql_fetch_array($res) ) {
            $_rows = $rows;
        }
        //header('Cache-Control: no-cache, must-revalidate');
        //header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
        header('Content-type: application/json');
        echo utf8_encode( json_encode($_rows) );
        //echo json_encode($_rows);
        //var_dump($_rows);
        //print_r ($res);
        break;
    Firefox response/request headers:
    Date: Sat, 24 Apr 2010 22:34:55 GMT
    Server: Apache/2.2.3 (CentOS)
    X-Powered-By: PHP/5.1.6
    Expires: Thu, 19 Nov 1981 08:52:00 GMT
    Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
    Pragma: no-cache
    Content-Length: 0
    Connection: close
    Content-Type: application/json
    Host: www.xxxx.be
    User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-2.fc12 Firefox/3.5.9
    Accept: application/json, text/javascript, */*
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    X-Requested-With: XMLHttpRequest
    Referer: http://www.xxxx.be/xxxxx
    Content-Length: 17
    Cookie: csdb=2; codb=5; csdbb=1; codca=1.4; csdca=3; PHPSESSID=benunvkpecqh3pmd8oep5b55t7; CAKEPHP=3t7hrlc89emvg1hfsc45gs2bl2

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a Django website I host with lighttpd + FastCGI. It works great, but it seems that the first request always takes up to 3 seconds; subsequent requests are much faster (< 1 s). I activated access logs in lighttpd to track the issue, but I'm kind of stuck. Here are logs where I 'lose' 4 s (from 10:04:17 to 10:04:21):
    2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi
    2012-12-01 10:04:17: (response.c.470) -- before doc_root
    2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www
    2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi
    2012-12-01 10:04:17: (response.c.473) Path :
    2012-12-01 10:04:17: (response.c.521) -- after doc_root
    2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www
    2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi
    2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi
    2012-12-01 10:04:17: (response.c.541) -- logical -> physical
    2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www
    2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi
    2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi
    2012-12-01 10:04:21: (response.c.128) Response-Header:
    HTTP/1.1 200 OK
    Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT
    Expires: Sat, 01 Dec 2012 09:14:21 GMT
    Content-Type: text/html; charset=utf-8
    Cache-Control: max-age=600
    Transfer-Encoding: chunked
    Date: Sat, 01 Dec 2012 09:04:21 GMT
    Server: lighttpd/1.4.28
    I guess that if there is a problem, it's with my configuration. Here is the way I launch my Django app:
    python manage.py runfcgi method=threaded host=127.0.0.1 port=3033
    And here is my lighttpd conf:
    server.modules = (
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
        "mod_rewrite",
        "mod_fastcgi",
        "mod_accesslog",
    )
    server.document-root = "/var/www"
    server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
    server.errorlog = "/var/log/lighttpd/error.log"
    server.pid-file = "/var/run/lighttpd.pid"
    server.username = "www-data"
    server.groupname = "www-data"
    accesslog.filename = "/var/log/lighttpd/access.log"
    debug.log-request-header = "enable"
    debug.log-response-header = "enable"
    debug.log-file-not-found = "enable"
    debug.log-request-handling = "enable"
    debug.log-timeouts = "enable"
    debug.log-ssl-noise = "enable"
    debug.log-condition-cache-handling = "enable"
    debug.log-condition-handling = "enable"
    fastcgi.server = (
        "/finderauto.fcgi" => (
            "main" => (
                # Use host / port instead of socket for TCP fastcgi
                "host" => "127.0.0.1",
                "port" => 3033,
                #"socket" => "/home/finderadmin/finderauto.sock",
                "check-local" => "disable",
                "fix-root-scriptname" => "enable",
            )
        ),
    )
    alias.url = (
        "/media" => "/home/user/django/contrib/admin/media/",
    )
    url.rewrite-once = (
        "^(/media.*)$" => "$1",
        "^/favicon\.ico$" => "/media/favicon.ico",
        "^(/.*)$" => "/finderauto.fcgi$1",
    )
    index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
    url.access-deny = ( "~", ".inc" )
    static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
    ## Use ipv6 if available
    #include_shell "/usr/share/lighttpd/use-ipv6.pl"
    dir-listing.encoding = "utf-8"
    server.dir-listing = "enable"
    compress.cache-dir = "/var/cache/lighttpd/compress/"
    compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
    include_shell "/usr/share/lighttpd/create-mime.assign.pl"
    include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
    If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!
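    A hedged hypothesis, not from the post: with method=threaded, the first request after the FastCGI backend has been idle or restarted pays Django's import and initialization cost. One sketch is to pre-fork a small pool of workers and warm the application right after starting (the extra runfcgi flag names are from the old Django FastCGI documentation; treat the values as assumptions):

        # Keep pre-forked workers alive so no visitor pays startup cost.
        python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 \
            minspare=2 maxspare=4 maxchildren=8
        # Warm the application immediately after a (re)start.
        curl -s -o /dev/null http://localhost/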

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage on my application server, characterized by the CPUs suddenly going from close to 90% busy to almost completely idle for a few seconds. Here is an example of a drop out, as shown by a snippet of vmstat data taken while the application server was under a heavy workload:
    # vmstat 1
     kthr      memory            page            disk          faults      cpu
     r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
     1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9
     12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9
     11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7
     8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27
     0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97
     0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95
     0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92
     14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8
     9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11
    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:
    - Unexpected JMeter behavior
    - Network contention
    - Java garbage collection
    - Application server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention
    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average service time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:
                        extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
        0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
        0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
        0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
        0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
        0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
        0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0
    Here is a little more information about my database configuration:
    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log files were on different physical disk drives.
    - Reliable low-latency I/O is provided by battery-backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
    - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.
    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the web for a while, I decided to try using a separate log device:
    # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0
    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:
    # zpool status ZFS-db-41
      pool: ZFS-db-41
     state: ONLINE
      scan: none requested
    config:
            NAME                   STATE
            ZFS-db-41              ONLINE
              c3t60080E5...F4F6d0  ONLINE
            logs
              c3t60080E5...F8A7d0  ONLINE
    Now, the I/O spikes look like this:
                        extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
        0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
        0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
        0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
        0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
        0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
        0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
        0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes; the spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understood the problem, I found that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log; Solaris ZFS, Synchronous Writes and the ZIL Explained; ZFS Evil Tuning Guide: Cache Flushes; ZFS Evil Tuning Guide: Tuning ZFS for Database Performance.
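    In the setup above, the log device is mirrored inside the storage array. As a hedged alternative sketch, the intent log can also be mirrored at the ZFS level, so the pool itself survives a log-device failure (the device names are placeholders):

        # Attach a mirrored separate intent log (slog) to the pool.
        zpool add ZFS-db-41 log mirror c3t...AAAA...d0 c3t...BBBB...d0
        zpool status ZFS-db-41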

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit, or not. Before we continue to the resolution, let us understand what CXPACKET wait stats are. The official definition suggests that a CXPACKET wait occurs when trying to synchronize the query processor exchange iterator; you may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem (from BOL). In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: threads which finished first have to wait for the slower thread to finish, and the wait recorded by a completed thread is the CXPACKET wait. Note that the CXPACKET wait is accumulated by the completed threads, not the unfinished ones. Also note that not all CXPACKET waits are bad: there are cases where they make total sense and cases where they are unavoidable, and if you remove this wait type for a query, that query may run slower because its parallel operations are disabled. Now let us see the best practices to reduce CXPACKET wait stats. Much of the advice you will find online tells you to set 'maximum degree of parallelism' to 1. I agree that this reduces the wait stat, but I do not think it is the final resolution: as soon as you force every query to run on a single CPU, you will get very bad performance from the queries that actually benefit from parallelism. The better approach is to set 'maximum degree of parallelism' to a lower number or 1 (be very careful with this - it can create more problems), but tune the queries that can benefit from multiple CPUs, using the query hint OPTION (MAXDOP 0) to let those queries use parallelism. Here are two quick scripts which help to resolve these issues.
    Change MAXDOP at the server level:
    EXEC sys.sp_configure N'max degree of parallelism', N'1'
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    Run a query with all the CPUs (using parallelism):
    USE AdventureWorks
    GO
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY ProductID
    OPTION (MAXDOP 0)
    GO
    Below is the blog post which will help you to find all the parallel queries on your server: SQL SERVER - Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance and is not recommended at all; in fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • High Load mysql on Debian server

    - by Oleg Abrazhaev
I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at maximum, with only 500 MB free. Most of the memory goes to MySQL; Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and the server also runs a multithreaded daemon that makes many inserts into these tables, which is why I use InnoDB. Here is my mysql config:

[mysqld]
#
# * Basic Settings
#
default-time-zone = "+04:00"
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/english
skip-external-locking
default-time-zone='Europe/Moscow'
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#
# * Fine Tuning
#
#low_priority_updates = 1
concurrent_insert = ALWAYS
wait_timeout = 600
interactive_timeout = 600
#normal
key_buffer_size = 2024M
#key_buffer_size = 1512M
#70% hot cache
key_cache_division_limit = 70
#16-32
max_allowed_packet = 32M
#1-16M
thread_stack = 8M
#40-50
thread_cache_size = 50
#orderby groupby sort
sort_buffer_size = 64M
#same
myisam_sort_buffer_size = 400M
#temp table creates when group_by
tmp_table_size = 3000M
#tables in memory
max_heap_table_size = 3000M
#on disk
open_files_limit = 10000
table_cache = 10000
join_buffer_size = 5M
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#myisam_use_mmap = 1
max_connections = 200
thread_concurrency = 8
#
# * Query Cache Configuration
#
#more ignored
query_cache_limit = 50M
query_cache_size = 210M
#on query cache
query_cache_type = 1
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
#log = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
server-id = 1
log-bin = /var/lib/mysql/mysql-bin
#replicate-do-db = gate
log-bin-index = /var/lib/mysql/mysql-bin.index
log-error = /var/lib/mysql/mysql-bin.err
relay-log = /var/lib/mysql/relay-bin
relay-log-info-file = /var/lib/mysql/relay-bin.info
relay-log-index = /var/lib/mysql/relay-bin.index
binlog_do_db = 24avia
expire_logs_days = 10
max_binlog_size = 100M
read_buffer_size = 4024288
innodb_buffer_pool_size = 5000M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
table_definition_cache = 2000
group_concat_max_len = 16M
#binlog_do_db = gate
#binlog_ignore_db = include_database_name
#
# * BerkeleyDB
#
# Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
#skip-bdb
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
#skip-innodb
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet = 500M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer = 32M
key_buffer_size = 512M

#
# * NDB Cluster
#
# See /usr/share/doc/mysql-server-*/README.Debian for more information.
#
# The following configuration is read by the NDB Data Nodes (ndbd processes)
# not from the NDB Management Nodes (ndb_mgmd processes).
#
# [MYSQL_CLUSTER]
# ndb-connectstring=127.0.0.1
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

Please help me make it stable. Memory usage:

/etc/mysql# free
             total       used       free     shared    buffers     cached
Mem:      32930800   32766424     164376          0     139208   23829196
-/+ buffers/cache:    8798020   24132780
Swap:     33553328      44660   33508668

Maybe my problem is not memory at all, but MySQL still stops every day. As you can see, about 24 GB of that memory is cache, so it is effectively free. Thanks to Michael Hampton for the correction. The load average on the server is 3.5. Maybe it is the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB data?
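One quick sanity check for a configuration like this is to estimate the theoretical worst-case memory footprint: the global buffers plus the per-connection buffers multiplied by max_connections. A rough sketch, runnable in the mysql client (this is an estimate only – MySQL does not allocate every per-connection buffer up front, and the formula ignores per-temporary-table allocations):

-- Rough worst-case footprint in GB: global buffers
-- plus per-connection buffers times max_connections
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@query_cache_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) )
       / 1024 / 1024 / 1024 AS worst_case_gb;

With the values posted above this already works out to roughly 23 GB, and every implicit in-memory temporary table can take up to tmp_table_size (3000M here) on top of that, which is one plausible way for the server to run out of memory under load.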

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow, and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads.

Netapp's Fundamental Problem

The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system: every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has demonstrated this lack of large-block performance in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECSFS, both recently; and in 2011 they purchased Engenio to try to fill this gap in their portfolio.

Block Size Matters

So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics, Microsoft SQL, and Hadoop HDFS, which is even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it to disk it has to split it into 16 different 4k writes, which means 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp again has to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MSSQL database on a ZS3, you can set the specific LUNs for this database to 64k, and an application read/write then requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays have some fancy prefetch algorithm and some nice cache, maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly see why it matters. Here are a couple of READ examples using SAS and MSSQL; assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks – notice the number of drives on the Netapp! Here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU way before you get to these drive numbers, so you would be buying many more controllers. So, with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.

ZS3 World Record Price/Performance in the SPC-2 Benchmark

ZS3-2 is #1 in price/performance at $12.08. ZS3-2 is #3 in overall performance at 16,212 MBPS. Note: the number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price/performance of $29.79. A customer could purchase two ZS3-2 systems from the benchmark with relatively the same performance and walk away with $600,000 in their pocket.
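If the example tables above do not render for you, the underlying arithmetic is simple enough to reproduce. A quick sketch with purely illustrative numbers (10,000 application read IOPS is an assumption, not a benchmark figure):

$ app_iops=10000 app_kb=128 wafl_kb=4
$ echo "backend 4k IOPS on WAFL: $((app_iops * app_kb / wafl_kb))"
backend 4k IOPS on WAFL: 320000

The same 10,000 application IOPS on an array whose LUN block size matches the 128k application block stays at 10,000 backend IOPS – a 32x difference in required disk work before any cache effects.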

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
TechEd India 2012 is just around the corner and I will be presenting two different sessions there. SQL Server performance tuning is a very challenging subject that requires expertise in both database administration and database development. I have always enjoyed talking about SQL Server performance tuning. Just like doctors, I like to call every attempt of mine to improve the performance of SQL Server queries and database servers a practice. I have been working with SQL Server for more than 8 years and I believe that I have mastered many of the performance tuning concepts. However, performance tuning is not a simple subject. There are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see it as a learning opportunity. I have been presenting at TechEd India for the last three years; this is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something on which I have spent years of learning. This time, I am going to present about parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases when limiting the CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working on various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.

Session Details
Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
Abstract: "More CPU More Performance" – a very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of CXPACKET wait types, MAXDOP, the parallelism threshold and various other concepts.
Date and Time: March 23, 2012, 12:15 to 13:15
Location: Hotel Lalit Ashok – Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now – the SQL Server Interview Questions and Answers book. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • A problem with conky in Gnome 3.4 [closed]

    - by Pranit Bauva
Possible Duplicate: Conky not working in Gnome 3.4

My conky in GNOME 3.4 is not working. When I run a conky script, nothing appears, but the process is running. Please also see the debug output:

pungi-man@pungi-man:~$ sh conky_startup.sh
Conky: forked to background, pid is 3157
Conky: desktop window (c00023) is subwindow of root window (aa)
Conky: window type - override
Conky: drawing to created window (0x2200001)
Conky: drawing to double buffer

My conky script is:

background yes
update_interval 1
cpu_avg_samples 2
net_avg_samples 2
temperature_unit celsius
double_buffer yes
no_buffers yes
text_buffer_size 2048
gap_x 10
gap_y 30
minimum_size 190 450
maximum_width 190
own_window yes
own_window_type override
own_window_transparent yes
own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below
border_inner_margin 0
border_outer_margin 0
alignment tr
draw_shades no
draw_outline no
draw_borders no
draw_graph_borders no
override_utf8_locale yes
use_xft yes
xftfont caviar dreams:size=8
xftalpha 0.5
uppercase no
default_color FFFFFF
color1 DDDDDD
color2 AAAAAA
color3 888888
color4 666666
lua_load /home/pungi-man/.conky/conky_grey.lua
lua_draw_hook_post main

TEXT
${voffset 35}
${goto 95}${color4}${font ubuntu:size=22}${time %e}${color1}${offset -50}${font ubuntu:size=10}${time %A}
${goto 85}${color2}${voffset -2}${font ubuntu:size=9}${time %b}${voffset -2} ${color3}${font ubuntu:size=12}${time %Y}${font}
${voffset 80}
${goto 90}${font Ubuntu:size=7,weight:bold}${color}CPU
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${top name 1}${alignr}${top cpu 1}%
${goto 90}${font Ubuntu:size=7,weight:normal}${color2}${top name 2}${alignr}${top cpu 2}%
${goto 90}${font Ubuntu:size=7,weight:normal}${color3}${top name 3}${alignr}${top cpu 3}%
${goto 90}${cpugraph 10,100 666666 666666}
${goto 90}${voffset -10}${font Ubuntu:size=7,weight:normal}${color}${threads} process
${voffset 20}
${goto 90}${font Ubuntu:size=7,weight:bold}${color}MEM
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${top_mem name 1} ${alignr}${top_mem mem 1}%
${goto 90}${font Ubuntu:size=7,weight:normal}${color2}${top_mem name 2} ${alignr}${top_mem mem 2}%
${goto 90}${font Ubuntu:size=7,weight:normal}${color3}${top_mem name 3} ${alignr}${top_mem mem 3}%
${voffset 15}
${goto 90}${font Ubuntu:size=7,weight:bold}${color}DISKS
${goto 90}${diskiograph 30,100 666666 666666}${voffset -30}
${goto 90}${font Ubuntu:size=7,weight:normal}${color}used: ${fs_used /home} /home
${goto 90}${font Ubuntu:size=7,weight:normal}${color}used: ${fs_used /} /
${voffset 10}
${goto 70}${font Ubuntu:size=18,weight:bold}${color3}NET${alignr}${color2}${font Ubuntu:size=7,weight:bold}${color1}${if_up eth0}eth ${addr eth0} ${endif}${if_up wlan0}wifi ${addr wlan0}${endif}
${goto 90}${font Ubuntu:size=7,weight:bold}${color}open ports: ${alignr}${color2}${tcp_portmon 1 65535 count}
${goto 90}${font Ubuntu:size=7,weight:bold}${color}${offset 10}IP${alignr}DPORT
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 0}${alignr 1}${tcp_portmon 1 65535 rport 0}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 1}${alignr 1}${tcp_portmon 1 65535 rport 1}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 2}${alignr 1}${tcp_portmon 1 65535 rport 2}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 3}${alignr 1}${tcp_portmon 1 65535 rport 3}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 4}${alignr 1}${tcp_portmon 1 65535 rport 4}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 5}${alignr 1}${tcp_portmon 1 65535 rport 5}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 6}${alignr 1}${tcp_portmon 1 65535 rport 6}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 7}${alignr 1}${tcp_portmon 1 65535 rport 7}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 8}${alignr 1}${tcp_portmon 1 65535 rport 8}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 9}${alignr 1}${tcp_portmon 1 65535 rport 9}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 10}${alignr 1}${tcp_portmon 1 65535 rport 10}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 11}${alignr 1}${tcp_portmon 1 65535 rport 11}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 12}${alignr 1}${tcp_portmon 1 65535 rport 12}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 13}${alignr 1}${tcp_portmon 1 65535 rport 13}
${goto 90}${font Ubuntu:size=7,weight:normal}${color1}${tcp_portmon 1 65535 rip 14}${alignr 1}${tcp_portmon 1 65535 rport 14}

This script works fine with Unity but faces problems in GNOME 3.4. Can anyone please sort it out?
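Not a guaranteed fix, but a workaround often reported for conky under GNOME 3.x compositing is to stop using the override window type (which gnome-shell tends not to display) and draw into a real ARGB window instead. A sketch of settings to try in place of the own_window block above – own_window_argb_visual and own_window_argb_value require conky 1.8.1 or later built with ARGB support:

# replace the own_window block with something like this
own_window yes
own_window_type normal
own_window_transparent no
own_window_argb_visual yes
own_window_argb_value 0
own_window_hints undecorated,sticky,skip_taskbar,skip_pager,below

If the window still does not appear, own_window_type desktop is also worth trying.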

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >