Search Results

Search found 6598 results on 264 pages for 'opcode cache'.


  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
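
    The log shows DRIScreenInit failing because the mach64 DRM kernel module cannot be loaded, so a first check is whether that module exists at all on the CentOS 5 kernel. A rough diagnostic sketch (run as root; device paths taken from the Xorg log above):

      lsmod | grep -i mach64                         # is the DRM module already loaded?
      modprobe mach64                                # try to load it; it may simply not ship with this kernel
      find /lib/modules/$(uname -r) -name 'mach64*'  # does a mach64.ko exist anywhere?
      ls -l /dev/dri/                                # the node the server says it cannot open
      dmesg | grep -iE 'drm|mach64'                  # kernel-side DRM messages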


  • Firefox is very slow when establish SSL sessions

    - by yanglei
    Using Wireshark, I discovered that Firefox v3.0 gets stuck every time before the "client key exchange, change cipher spec" stage when establishing an SSL session. Specifically, it takes 0.8~1.8 seconds before Firefox sends the "Client Key Exchange" request. This is unacceptable since our application is HTTPS only. I tested this on IE6 and IE8; both work well. Any clues? [Update] Finally, I found the reason for the 1~2 second stall by displaying all captured packets in Wireshark. After the "server hello" stage, Firefox makes a request to ocsp.verisign.com, combined with an additional DNS lookup for that domain. Firefox must wait for the revocation status from OCSP before entering the next stage of SSL. Depending on whether the DNS cache is in effect, this process takes 1~2 seconds. An interesting observation is that the IP packet containing the "client key exchange" has a high chance of getting lost, so a TCP retransmission is necessary. When this happens, the process can take 3 seconds at worst. I'm not sure if this is a coincidence or a bug. Anyway, here is the result from Wireshark: (delta-time) 0.369296 src-ip dst-ip TCP [ACK] Seq=161 Ack=2741 Win=65340 Len=0 2.538835 src-ip dst-ip TLSv1 Client Key Exchange, Change Cipher Spec, Finished 2.987034 src-ip dst-ip TLSv1 [TCP Retransmission] Client Key Exchange, Change Cipher Spec, Finished The difference between Firefox and IE is this: Firefox 3 enables OCSP checking by default, whereas IE only supports it. So, there is no problem with either IE6 or IE8. This is indeed a certificate revocation problem. Thanks
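
    For anyone wanting to confirm that the OCSP round trip is the whole delay, Firefox 3's check can be turned off from about:config; a sketch of the equivalent user.js line (turning this off trades away revocation checking, so treat it as a diagnostic only):

      // security.OCSP.enabled: 0 = off, 1 = on (Firefox 3 default), 2 = required
      user_pref("security.OCSP.enabled", 0);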


  • Android failure to boot on LG [migrated]

    - by Ukavi
    I need to recover data from my AT&T LG Thrill Android phone. Background: My AT&T LG Thrill phone's battery died a couple of days ago because I forgot to charge it. When I charged the phone and tried to turn it on, it showed the LG logo followed by the dropping balls and the AT&T "Rethink Possible" screen. I then get a message that the application Google Services Framework has crashed, and the phone goes into a loop with the dropping balls showing again followed by the "Rethink Possible" screen. This sequence repeats itself over and over and the phone does not get out of this loop. I have been able to go into the recovery screen (both Safe Mode and the Android Recovery Service) and have cleared the cache, etc. However, I DO NOT want to wipe user data and restore to factory settings as this will wipe all of my data (pictures, application data, etc). Solution Needed: I need a suggestion for a way of accessing my data so that I can back it up onto an SD card/computer. I DO NOT want to root the phone as this may void the warranty. What I'm looking for is a way of perhaps putting the original flash image on the micro SD card and then having the phone read that image. Or some other similar solution that will get the phone out of this loop and allow me to get to the data.


  • django : Serving static files through nginx

    - by PlanetUnknown
    I'm using apache+mod_wsgi for django. All css/js/images are served through nginx. For some odd reason, when others/friends/colleagues try accessing the site, jquery/css is not getting loaded for them, hence the page looks jumbled up. My HTML files use code like this - <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/> <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script> My nginx configuration in sites-available is like this - server { listen 8000; server_name localhost; access_log /var/log/nginx/aa8000.access.log; error_log /var/log/nginx/aa8000.error.log; location / { index index.html index.htm; } location /static/ { autoindex on; root /opt/aa/webroot/; } } There is a directory /opt/aa/webroot/static/ which has corresponding css & js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache, etc., but the page loads fine for me from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
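
    It may be worth fetching one of the assets with curl from a colleague's machine; that separates "nginx on port 8000 is unreachable from their network" from "nginx is reachable but returns the wrong thing". A sketch, with the IP placeholder and paths taken from the snippets above:

      curl -I http://x.x.x.x:8000/static/css/custom.css   # the path the /static/ location actually serves
      curl -I http://x.x.x.x:8000/css/custom.css          # the path the HTML references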


  • Apache / PHP Begins to Deny SQL Requests after about 2000

    - by Daniel Stern
    We have a web page on our server that we use to run administrative scripts. For example, we might run the script "unenrolStudents()", which runs 5,000 SQL SET commands one after another and sets 5,000 student entries in an SQL database to unenrolled. However, we are finding that after running a few thousand queries (it is not totally consistent) we will be "locked out" by our server. Symptoms of locking out:
    - unable to connect to the server with WinSCP
    - opening PuTTY with that connection shows a blank screen (no login / pass)
    - clearing cookies / cache in Chrome does NOT fix the lockout
    - other computers in the office ALSO become locked out
    - the lockout can be triggered by a high frequency of requests (10000 in 1 second) or by fewer over a longer period (10000 in 500 seconds - this will still cause a lockout even though the frequency is much lower)
    We believe this is a security feature of our own Apache. I know we are using Suhosin, but I didn't configure it so I don't know. How can I disable this locking effect so that I can confidently run all my SQL requests and they will go through? Has anyone else dealt with this and found workarounds? Thanks, DS
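
    Since SSH and WinSCP get blocked as well, the lockout looks more like a host-level or firewall rate limit than anything inside Apache or Suhosin; a few hedged checks (log and PHP config paths vary by distribution):

      iptables -L -n -v | grep -iE 'limit|recent'          # any connection-rate firewall rules?
      tail -f /var/log/syslog /var/log/apache2/error.log   # watch while re-running the script
      php -i | grep -i suhosin                             # confirm whether Suhosin is even loaded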


  • Apt pin and self hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages where we want to control the version. Crucially this includes puppet, which can be sensitive to versions being different. I want our desktops to only get puppet from our repository, but also for people to be able to add their own PPAs, enable backports etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list: deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse deb http://deb.example.org/apt/ubuntu/lucid/ binary/ And in /etc/apt/preferences.d/puppet: Package: puppet puppet-common Pin: release a=binary Pin-Priority: 800 Package: puppet puppet-common Pin: release a=lucid-backports Pin-Priority: -10 Currently policy says: $ sudo apt-cache policy puppet puppet: Installed: (none) Candidate: (none) Package pin: 2.7.1-1ubuntu3.6~lucid1 Version table: 2.7.1-1ubuntu3.6~lucid1 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages 100 /var/lib/dpkg/status 2.6.14-1puppetlabs1 -10 500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages 0.25.4-2ubuntu6.8 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages 500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages 0.25.4-2ubuntu6 -10 500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages If I use n= instead of a= then I get Package pin: (not found) I'm just plain confused at this point as to what I should use. Any help appreciated.
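
    One alternative worth trying is pinning by the origin host instead of by archive, since the internal repository's Release file may not set the Archive field that the a= match expects; a hedged sketch for /etc/apt/preferences.d/puppet, assuming deb.example.org is the host from the sources.list line above:

      Package: puppet puppet-common
      Pin: origin "deb.example.org"
      Pin-Priority: 1001

      Package: puppet puppet-common
      Pin: release a=lucid-backports
      Pin-Priority: -10

    If the origin string matches, apt-cache policy puppet should then report the 2.6.14-1puppetlabs1 build as the candidate; a priority above 1000 also allows downgrades from an already-installed newer version.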


  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in cygwin on a windows 7 machine I have git unpacked and the directory git-1.7.1.1 when i run make install from within that directory, I get CC fast-import.o In file included from builtin.h:4, from fast-import.c:147: git-compat-util.h:136:19: iconv.h: No such file or directory git-compat-util.h:140:25: openssl/ssl.h: No such file or directory git-compat-util.h:141:25: openssl/err.h: No such file or directory In file included from builtin.h:6, from fast-import.c:147: cache.h:9:21: openssl/sha.h: No such file or directory In file included from fast-import.c:156: csum-file.h:10: error: parse error before "SHA_CTX" csum-file.h:10: warning: no semicolon at end of struct or union csum-file.h:15: error: 'crc32' redeclared as different kind of symbol /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here csum-file.h:15: error: 'crc32' redeclared as different kind of symbol /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here csum-file.h:17: error: parse error before '}' token fast-import.c: In function `store_object': fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function) fast-import.c:995: error: (Each undeclared identifier is reported only once fast-import.c:995: error: for each function it appears in.) fast-import.c:995: error: parse error before "c" fast-import.c:1000: warning: implicit declaration of function `SHA1_Init' fast-import.c:1000: error: `c' undeclared (first use in this function) fast-import.c:1001: warning: implicit declaration of function `SHA1_Update' fast-import.c:1003: warning: implicit declaration of function `SHA1_Final' fast-import.c: At top level: fast-import.c:1118: error: parse error before "SHA_CTX" fast-import.c: In function `truncate_pack': fast-import.c:1120: error: `to' undeclared (first use in this function) fast-import.c:1126: error: dereferencing pointer to incomplete type fast-import.c:1127: error: dereferencing pointer to incomplete type fast-import.c:1128: error: dereferencing pointer to incomplete type fast-import.c:1128: error: `ctx' undeclared (first use in this function) fast-import.c: In function `stream_blob': fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function) fast-import.c:1140: error: parse error before "c" fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this functio n) fast-import.c:1154: error: dereferencing pointer to incomplete type fast-import.c:1160: error: `c' undeclared (first use in this function) make: *** [fast-import.o] Error 1 I'm guessing that most of these errors are due to the iconv.h and openssl files which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
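
    The missing iconv.h and openssl/*.h headers normally come from Cygwin's development packages, and git's Makefile can also be told to build without those optional dependencies; a hedged sketch (package names assumed from Cygwin's usual naming and worth verifying in setup.exe):

      # in the Cygwin installer, add: libiconv-devel, openssl-devel, zlib-devel
      # or build git without the optional dependencies:
      make NO_OPENSSL=1 NO_ICONV=1 prefix=/usr/local install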


  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers have an executable which opens, reads and writes the database files from the server share. The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of 2500 MBit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here. The suspects: LDAP and the smb.conf [global] section configuration. The question: what can I do? I've googled a lot, but still have no answer. Slow smb.conf WITH LDAP: [global] workgroup = zmartsoft.local passdb backend = ldapsam:ldap://127.0.0.1 printing = cups printcap name = cups printcap cache time = 750 cups options = raw map to guest = Bad User logon path = \\%L\profiles\.msprofile logon home = \\%L\%U\.9xprofile logon drive = P: usershare allow guests = Yes add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$ domain logons = Yes domain master = Yes local master = Yes netbios name = server os level = 65 preferred master = Yes security = user wins support = Yes idmap backend = ldap:ldap://127.0.0.1 ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local ldap group suffix = ou=Groups ldap idmap suffix = ou=Idmap ldap machine suffix = ou=Machines ldap passwd sync = Yes ldap ssl = Off ldap suffix = dc=zmartsoft,dc=local ldap user suffix = ou=Users


  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty. The secrets.tdb file is only 20K, whereas the "pdbedit -L | wc -l" command gives me 16970 lines. On my Samba 3 box /var/lib/samba is 1.5M. After I migrated the database (following the instructions on http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody. So I tried to create a VM with a Samba 3. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the Samba 3 server are in a sort of cache. I already migrated the /etc/{passwd,shadow,group} files and I can see the users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty? The global part of my smb.conf on the Samba 3 server: [global] workgroup = INTERNET netbios name = PDC-SMB3 server string = %h server interfaces = eth0 obey pam restrictions = Yes passdb backend = smbpasswd passwd program = /usr/bin/passwd %u passwd chat = *new* %n\n *Re* %n\n *pa* username map = /etc/samba/smbusers unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%U max log size = 1000 socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192 add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users delete user script = /usr/sbin/userdel -r '%u' add group script = /usr/sbin/groupadd '%g' delete group script = /usr/sbin/groupdel '%g' add user to group script = /usr/sbin/usermod -G '%g' '%u' add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines logon script = logon.cmd logon home = \\$L\%U domain logons = Yes os level = 255 preferred master = Yes local master = Yes domain master = Yes dns proxy = No ldap ssl = no panic action = /usr/share/samba/panic-action %d invalid users = root admin users = admin, root, administrateur log level = 2
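
    Since the old server uses passdb backend = smbpasswd, the accounts live in the flat smbpasswd file rather than in a tdb, which may be why the migration tooling sees an almost empty database. A hedged sketch for exporting them into a portable tdbsam file first (paths assumed, adjust to your layout):

      # on the Samba 3 server: convert the smbpasswd backend into a tdbsam file
      pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/root/passdb-export.tdb
      # sanity check: the count should match the ~16970 entries seen with pdbedit -L
      pdbedit -b tdbsam:/root/passdb-export.tdb -L | wc -l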


  • Online resizing of kvm guest root filesystem?

    - by Bittrance
    I have a Linux guest that uses an LVM volume directly as root file system (that is, there is no partition table). libvirt config looks thus: <os> <type arch='x86_64' machine='rhel6.4.0'>hvm</type> <kernel>/boot/vmlinuz-X.Y.Z.el6.x86_64</kernel> <initrd>/boot/initramfs-X.Y.Z.el6.x86_64.img</initrd> <cmdline>console=ttyS0 root=/dev/vda</cmdline> <boot dev='hd'/> </os> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='none' io='native'/> <source dev='/dev/vg/guest'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> From inside the guest: $ mount /dev/vda on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Is it possible to resize the guest's root partition without rebooting the guest? Just doing lvextend on the host and resize2fs from the guest does not seem to be enough.
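
    For what it's worth, the usual sequence for this setup is to grow the LV on the host, notify QEMU of the new size with virsh blockresize, and then grow ext4 online from inside the guest; a sketch, assuming the libvirt domain is called guest and the sizes are placeholders:

      lvextend -L +10G /dev/vg/guest     # host: grow the backing logical volume
      virsh blockresize guest vda 20G    # host: tell qemu/virtio about the new size
      resize2fs /dev/vda                 # guest: ext4 can be grown while mounted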


  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and a very low CPU usage (98% idle). I'm wondering if these wait states are coming as part of an NFS filesystem connection. Here is what I see in VMStat procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 2 1 0 1298784 0 0 0 0 16 5 0 9 1 1 97 2 0 0 1 0 1308016 0 0 0 0 0 0 0 3882 4 3 80 13 0 0 1 0 1307960 0 0 0 0 120 0 0 2960 0 0 88 12 0 0 1 0 1295868 0 0 0 0 4 0 0 4235 1 2 84 13 0 6 0 0 1292740 0 0 0 0 0 0 0 5003 1 1 98 0 0 4 0 0 1300860 0 0 0 0 0 120 0 11194 4 3 93 0 0 4 1 0 1304576 0 0 0 0 240 0 0 11259 4 3 88 6 0 3 1 0 1298952 0 0 0 0 0 0 0 9268 7 5 70 19 0 3 1 0 1303740 0 0 0 0 88 8 0 8088 4 3 81 13 0 5 0 0 1304052 0 0 0 0 0 0 0 6348 4 4 93 0 0 0 0 0 1307952 0 0 0 0 0 0 0 7366 5 4 91 0 0 0 0 0 1307744 0 0 0 0 0 0 0 3201 0 0 100 0 0 4 0 0 1294644 0 0 0 0 0 0 0 5514 1 2 97 0 0 3 0 0 1301272 0 0 0 0 0 0 0 11508 4 3 93 0 0 3 0 0 1307788 0 0 0 0 0 0 0 11822 5 3 92 0 0 From what I can tell when the IO goes up the waits go up. Could NFS be the cause here or should I be worried about something else? This is a VPS box on a fiber channel SAN. I'd think the bottleneck wouldn't be the SAN. Comments?
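
    Per-device and per-mount statistics are usually more telling than vmstat for separating NFS waits from SAN waits; a rough sketch using tools from the sysstat and nfs-utils packages:

      iostat -x 5             # await and %util per block device
      nfsstat -c              # client-side RPC counters (retransmissions, timeouts)
      nfsiostat 5             # per-NFS-mount latency, if your nfs-utils version ships it
      grep nfs /proc/mounts   # confirm which paths are actually NFS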


  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible (considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork)). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern? I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox) which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit it's purposes): Apache: Server version: Apache/2.2.14 (Ubuntu) Server built: Apr 13 2010 20:22:19 Server's Module Magic Number: 20051115:23 Server loaded: APR 1.3.8, APR-Util 1.3.9 Compiled using: APR 1.3.8, APR-Util 1.3.9 Architecture: 64-bit Server MPM: Worker threaded: yes (fixed thread count) forked: yes (variable process count) Worker: <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 400 MaxRequestsPerChild 2000 </IfModule> PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2): --enable-bcmath \ --enable-calendar \ --enable-exif \ --enable-ftp \ --enable-mbstring \ --enable-pcntl \ --enable-soap \ --enable-sockets \ --enable-sqlite-utf8 \ --enable-wddx \ --enable-zip \ --enable-fastcgi \ --with-zlib \ --with-gettext \ Apache php-fastcgi-setup.conf FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2 FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13 FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9 ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/


  • Why can't I get rid of default index.html even if I disable the default virtual host in Apache2?

    - by Emre Sevinç
    I have created a virtual host settings file and I disabled the default settings by using a2dissite default (this is a pretty standard Ubuntu 10.04 installation). But no matter what I try, my Apache2 server simply keeps on displaying the default index.html page instead of the index.php page that I set up in the virtual host file. Can someone help me figure out what I'm missing? Details follow: No default settings: ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 51 May 5 13:32 webmin.1273066327.conf -> /etc/apache2/sites-available/webmin.1273066327.conf lrwxrwxrwx 1 root root 34 May 30 11:03 www.accontax.be -> ../sites-available/www.accontax.be Contents of the relevant virtual host: cat /etc/apache2/sites-enabled/www.accontax.be <VirtualHost *> ServerName www.accontax.be ServerAlias accontax.be DirectoryIndex index.php DocumentRoot /var/www/drupal/ <Directory /var/www/drupal/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Contents of httpd.conf: cat /etc/apache2/httpd.conf Listen 80 NameVirtualHost * I also have these relevant lines in my apache2.conf: # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ When I visit http://www.accontax.be I expect the apache2 server to go to the /var/www/drupal subdirectory and start serving index.php, but it simply keeps on serving index.html from the /var/www directory. I have reloaded the configuration, restarted the server, and deleted my browser cache. Nothing changed. Probably I'm missing a simple yet crucial step, but I just could not find it.
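
    apache2ctl can print exactly which virtual hosts were parsed and which one is the default for *:80, which usually makes this kind of "the default page keeps winning" issue obvious; a quick sketch:

      apache2ctl -S                        # parsed vhosts and the default server for each port
      apache2ctl configtest                # any syntax problem stopping the vhost from loading?
      grep -r DocumentRoot /etc/apache2/   # anything else still pointing at /var/www?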


  • 100% iowait + drive faults in dmesg

    - by w00t
    Hi, I have a server on which resides a fairly visited web app. It has a raid1 of 2 HDDs, 64MB Buffer, 7200 RPM. Today it started throwing out errors like: kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen kernel: ata2.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout) kernel: ata2.00: status: { DRDY } kernel: ata2: hard resetting link kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300) kernel: ata2.00: max_sectors limited to 256 for NCQ kernel: ata2.00: max_sectors limited to 256 for NCQ kernel: ata2.00: configured for UDMA/133 kernel: sd 1:0:0:0: timing out command, waited 7s kernel: ata2: EH complete kernel: SCSI device sda: 976773168 512-byte hdwr sectors (500108 MB) kernel: sda: Write Protect is off kernel: SCSI device sda: drive cache: write back All day long it has been in load higher than 10-15. I am monitoring it with atop and it gives some bizarre readings: DSK | sda | busy 100% | read 2 | write 208 | KiB/r 16 | KiB/w 32 | MBr/s 0.00 | MBw/s 0.65 | avq 86.17 | avio 47.6 ms | DSK | sdb | busy 1% | read 10 | write 117 | KiB/r 17 | KiB/w 5 | MBr/s 0.02 | MBw/s 0.07 | avq 4.86 | avio 1.04 ms | I frankly don't understand why only sda is taking all the hit. I do have one process that is constantly writing with 1-2megs but what the hell.. 100% iowait?
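
    Given the ata2 timeouts and link resets in the kernel log, it may be worth ruling out a failing disk or cable before tuning the workload; a hedged sketch using smartmontools and the md status (assuming the RAID1 is Linux software RAID):

      smartctl -a /dev/sda        # look at reallocated/pending sectors and CRC error counts
      smartctl -t short /dev/sda  # short self-test; results appear in the -a output later
      cat /proc/mdstat            # is the array degraded or resyncing onto sda?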


  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do? EDIT I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file, my hope being that it would get cached and the next time there would be no need to go for it. This morning the page loaded successfully but was not cached, because the next request failed in the same way. Here's a video showing the problem:
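
    Timing the same download with and without the proxy from the command line is a simple way to confirm the proxy is the bottleneck; a sketch, with proxyhost:3128 standing in for the real proxy address:

      curl -o /dev/null -w 'no proxy: %{time_total}s, %{size_download} bytes\n' \
           http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js
      curl -x proxyhost:3128 -o /dev/null -w 'via proxy: %{time_total}s, %{size_download} bytes\n' \
           http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js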


  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl: foreach ( $urls as $url ) { // Setup headers - I used the same headers from Firefox version 2.0.0.6 $header[ ] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[ ] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[ ] = "Cache-Control: max-age=0"; $header[ ] = "Connection: keep-alive"; $header[ ] = "Keep-Alive: 300"; $header[ ] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[ ] = "Accept-Language: en-us,en;q=0.5"; $header[ ] = "Pragma: "; // browsers keep this blank. curl_setopt( $ch, CURLOPT_URL, $url ); curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)'); curl_setopt( $ch, CURLOPT_HTTPHEADER, $header); curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com'); curl_setopt( $ch, CURLOPT_HEADER, true ); curl_setopt( $ch, CURLOPT_NOBODY, true ); curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ); curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true ); curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY ); curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); //timeout 10 seconds } Sometimes I receive 200 OK, which is good, other times 301, 302, or 307, which I consider good as well, but other times I receive weird statuses such as 406, 500, or 504, which would suggest an invalid URL, yet when I open those URLs in the browser they are fine. For example, the script returns http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable and wget returns wget http://www.awe.co.uk/ --2011-06-23 15:26:26-- http://www.awe.co.uk/ Resolving www.awe.co.uk... 77.73.123.140 Connecting to www.awe.co.uk|77.73.123.140|:80... connected. HTTP request sent, awaiting response... 200 OK Does anyone know which request header I am missing or adding in excess?
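
    The 406 may simply be the remote site (or a security filter in front of it) rejecting the Googlebot user agent when the request does not come from a Google address; comparing the same request with different user agents from the command line makes that easy to test. A sketch:

      curl -I http://www.awe.co.uk/                                                          # curl's default user agent
      curl -I -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' http://www.awe.co.uk/     # what the script sends
      curl -I -A 'Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.0' http://www.awe.co.uk/   # a browser-like UA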


  • VMWare Workstation Linux Host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I feel very stupid: I lost a great blog post link which I found last month (and I'm not able to find it again), so I'm asking here if anyone can help me. This is my host (laptop): 16GB DDR3 RAM, hybrid 750GB 7200 RPM HDD (8GB SSD cache), Mint 15 x64, kernel 3.9.7, swappiness set to 10. The above are the important things about the host. My need is the ability to run 2 or 3 VMs at the same time. The lack of performance is about the disk. Following that blog post I lost, I set up /tmp to be mounted as a memory partition, and in my previous installation that worked well; now I'm not able to find a good solution to tweak things. I think with 16GB of RAM there will be no problem running multiple VMs, but when they start to swap or use /tmp things go bad (guest cursor going too fast after a freeze, guest freezes and so on). Can anyone help me find good host tweaks and configuration to get better performance? Thanks in advance
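
    The /tmp-in-RAM trick from the lost blog post is normally just a tmpfs mount; a hedged sketch of the /etc/fstab line, with the size picked arbitrarily here and worth adjusting against the 16GB of RAM and the VMs' needs:

      tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=4G  0  0

    After adding the line, "sudo mount -a" (or a reboot) activates it; note that anything the VMs spill into /tmp will then compete with them for RAM.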


  • Some Can reach bidmail.com others can't.

    - by user69426
    On a Windows 7 Professional machine, in Chrome, one of our estimating assistants can't get to www.bidmail.com; however, the other 3 can. On his machine I did an nslookup for bidmail.com and it fails to find it. I then went to a machine that could reach bidmail and did an nslookup. It can't find it either. I was skeptical and thought maybe it was a cached page, so I cleared the cache, went back to bidmail.com, and was able to get to the page, log in, look up a newly posted bid, and download the file. Yet I cannot look it up through nslookup, I can't ping www.bidmail.com, and I can't trace it. I remoted into our other warehouse, which is set up as a workgroup, and attempted to nslookup bidmail, and that nslookup fails... and that machine, which has never been to bidmail before, was able to connect to the website! I am totally confused: if I can't ping it and I can't use nslookup to get there, how in the hell is Chrome getting to the page, and how do I get this guy back on? Also, while typing this I took a new laptop out of the box, plugged it in with no updates, and it can get to bidmail! omg!
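
    Since one machine resolves the name and another does not, comparing what each resolver actually returns, including against an outside DNS server, usually narrows it down; a sketch of the checks from a Windows command prompt:

      :: any cached (possibly negative) entry?
      ipconfig /displaydns | findstr /i bidmail
      ipconfig /flushdns
      :: against the configured DNS server, then against an external resolver
      nslookup www.bidmail.com
      nslookup www.bidmail.com 8.8.8.8
      :: any stale hosts-file entry?
      type C:\Windows\System32\drivers\etc\hosts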


  • Adobe premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. I get an error that started showing up on June 16th, 2010. There are no Windows updates on the same date as the error, other than Windows Defender. It seems to happen when working with image overlays. ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close." We opened the side of the case in case there is an overheating problem. NVIDIA driver ver 8.16.11.9175 (nVidia Quadro FX 1700). I am running: Windows 7 x64, Adobe Premiere CS5 Production, nVidia Quadro FX 1700 (MRGA14L), 4 GB RAM, RAID 10 with 2 750GB drives, dual core 3.0 GHz with 6MB L2 cache. At least three other people have come across this error: NVidia Forum EVGA Forum NVidia Forum UPDATE: Having the case open did not help. I also installed new NVIDIA drivers and now I get a different error: ERROR: "Your hardware configuration does not meet minimum specifications needed to run the application. The application must close." I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Beyond this I am out of options.


  • Adding new SPNs to existing service ids

    - by jmh
    We have a tomcat server using spring-security kerberos to authenticate users to the webpage against active directory. There are around 25 domain controllers. The site has two CNAME based DNS aliases. The site currently has one Service ID with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache kerberos tickets: http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device. During testing, you should purge tickets before each authentication request. Description of "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx So if each of the clients (users running windows) who connect to my web server have kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
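
    For reference, both the SPN changes and the client-side ticket purge are scriptable, which makes a staged rollout easier to rehearse; a hedged sketch with svc-web standing in for the actual service account and alias.example.com for one of the CNAMEs:

      :: list the SPNs currently registered on the account
      setspn -L svc-web
      :: add an SPN for one of the CNAME aliases (-S refuses duplicates)
      setspn -S HTTP/alias.example.com svc-web
      :: on a client (Windows 7 and later): drop cached tickets so new SPNs take effect
      klist purge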


  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache which creates static HTML files from all your posts, pages and categories. It creates the following directory structure inside wp-content: wp_fast_cache example.com pagename index.html categoryname postname index.html - basically just a nested directory structure with a final index.html for each item. But the .htaccess edits it makes are crazy: #start_wp_fast_cache - do not remove this comment <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_METHOD} ^(GET) RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia) RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC] RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L] RewriteCond %{REQUEST_METHOD} ^(GET) RewriteCond %{QUERY_STRING} ^$ RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia) RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC] RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L] </IfModule> #end_wp_fast_cache No matter how I try to work this out I get a 404 Not Found, and not the WordPress 404 but a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. files or directories) to: wp-content/wp_fast_cache/hostname/request_uri/ So for example: Page: example.com/about-us/ => wp-content/wp_page_cache/example.com/about-us/index.html Post: example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html Category: example.com/news/ => wp-content/wp_fast_cache/example.com/news/index.html Any help is appreciated.
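
    One common cause of exactly this 404 is that the RewriteRule target is an absolute filesystem path, which .htaccess context will not map; rewriting to a path relative to the document root (the pattern other WordPress page-cache plugins use) may behave better. A hedged sketch, not verified against WP Fast Cache itself:

      RewriteCond %{REQUEST_METHOD} ^GET$
      RewriteCond %{QUERY_STRING} ^$
      RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
      RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
      RewriteRule ^ /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]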


  • Cannot Resolve Host Or Access Website Through Router

    - by Boris_yo
    This is weird. I am on Windows XP with an Edimax BR 6204Wg. I have 3 devices - 2 laptops and 1 smartphone. The 1st laptop and the smartphone are connected to the router through WiFi, and the 2nd laptop is connected to the router through LAN. Before the firmware upgrade I did not try to access the website, but after the firmware upgrade to the latest version: http://www.edimax.eu/en/support_detail.php?pd_id=11&pl1_id=3#02 I had problems resolving the host, pinging, tracerting and accessing the website. Sometimes ping and tracert work but I cannot access the website, and sometimes I can access the website but ping and tracert do not work. Weird? I downgraded to the previous version and nothing changed. When I can no longer access the website in Internet Explorer, I can still access it in Firefox. I tried deleting cookies and clearing the cache, and that seems to make no difference. Switching the LAN port did not make a difference. When I disconnect the router and connect the laptop through LAN to the internet modem, everything is normal. I tried resetting the router and resetting it to factory default settings, and none of that helped. At the moment I can access the website on the laptop connected through LAN from Firefox and Internet Explorer, but on my smartphone I can access the website only with Opera and not with the built-in browser or Skyfire. UPDATE: Just now I could only access it with Internet Explorer but not with other browsers on my PC. Minutes later I could access it with all browsers. But on the smartphone I could only access it with Opera and not with other browsers. I am confused. I also determined that sometimes I can access it and sometimes I can't. What is also weird is that when ping and tracert cannot resolve the host, I am still able to access the website.


  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed-up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups to my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) a group of WALs are backed up, and remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed-up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
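
    Short of parsing CrashPlan's cache format, a crude scriptable approximation is to compare a WAL file's mtime against the newest mention of it in backup_files.log.0; a sketch, under the assumption that each log line starts with a parseable timestamp (the actual log format should be checked before relying on this):

      f=/path/to/archive/some_wal_file    # placeholder path for one archived WAL segment
      ts=$(grep -F "$(basename "$f")" /usr/local/crashplan/log/backup_files.log.0 | tail -n1 | awk '{print $1" "$2}')
      if [ -n "$ts" ] && [ "$(date -d "$ts" +%s)" -ge "$(stat -c %Y "$f")" ]; then
          echo "backed up since last change"
      else
          echo "not confirmed"
      fi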


  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1's in servers running Ubuntu Server 12.04, but all are non-functional, no link lights, no packets sent or received, and no connection on ip4 or ip6 whether set up as dhcp or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The ethernet controller is being reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1 so I believe that might be intended. Does anyone have any experience with this that could assist? ifconfig eth2 eth2 Link encap:Ethernet HWaddr a0:36:9f:14:5f:ea inet addr:192.168.101.1 Bcast:192.168.101.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ethtool -i eth2 driver: ixgbe version: 3.11.33 firmware-version: 0x8000037c bus-info: 0000:08:00.0 supports-statistics: yes supports-test: yes supports-eeprom-access: yes supports-register-dump: yes lspci -vvnns 08:00.0 08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01) Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002] Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M] Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K] [virtual] Expansion ROM at e8280000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: ixgbe Kernel modules: ixgbe
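
    A few checks that help separate "driver loaded but no link" from "interface never brought up"; nothing here is specific to the X540 beyond the ixgbe driver name:

      ip link show eth2               # does the kernel report NO-CARRIER or LOWER_UP?
      ethtool eth2                    # "Link detected:" plus supported/advertised link modes
      dmesg | grep -iE 'ixgbe|eth2'   # PHY, firmware or link messages from the driver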


  • BIND9 server types

    - by aGr
    I was configuring DNS on my server using BIND9; everything seems to work, but I have a question regarding my config file. I've ended up with this configuration in /etc/bind/named.conf.local: zone "example.com" { type master; file "/etc/bind/db.example.com"; allow-transfer { 192.168.1.1; }; }; zone "1.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; allow-transfer { 192.168.1.1; }; }; forwarders { 10.253.22.140; 10.253.22.141; }; I've read about the different types of DNS servers, like primary master etc. The first two parts (the two zone statements) correspond to the primary DNS server configuration. The first is for "classic" (forward) lookups, the second one for reverse lookups. The last part (forwarders) is the caching-server configuration and contains the IPs of the ISP's DNS servers. So all names resolved through this server will be cached. Simple question: am I right? Does my description make sense? Or can one server only be either a master or a caching server?
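
    named-checkconf plus a couple of dig queries will confirm that both the authoritative zones and the forwarding path behave as described; a sketch (note that in stock BIND the forwarders statement normally lives inside the options { } block, so it may be worth double-checking where yours ended up):

      named-checkconf                                        # syntax-check named.conf and its includes
      named-checkzone example.com /etc/bind/db.example.com   # validate the forward zone file
      dig @127.0.0.1 example.com SOA                         # expect an authoritative (aa) answer
      dig @127.0.0.1 www.google.com                          # recursive answer via the forwarders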

