Search Results

Search found 17716 results on 709 pages for 'bad pool header'.

Page 43/709 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain.

    The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1 MB). cryptsetup luksDump shows a payload offset of 4096, so I think the alignment is correct and fits the RAID chunk size.

    The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varying, but above 100 MB/s).

    Summary:

        Disks + RAID5: good
        Disks + RAID5 + ext4: good
        Disks + RAID5 + encryption: bad
        SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID?

    [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M).

    Edit: If this helps: I'm using Ubuntu 11.04 64-bit with Linux 2.6.38.

    Edit 2: The performance stays approximately the same if I pass a block size of 4 KB, 1 MB or 10 MB to dd.
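
    A layer-by-layer benchmark can isolate exactly where the throughput drops. A minimal sketch, assuming /dev/md0 is the array and "crypt" is the dm-crypt mapping (destructive, test data only):

        # raw array, bypassing the page cache
        dd if=/dev/zero of=/dev/md0 bs=1M count=10240 oflag=direct
        # the same write through the dm-crypt layer
        cryptsetup luksOpen /dev/md0 crypt
        dd if=/dev/zero of=/dev/mapper/crypt bs=1M count=10240 oflag=direct
        # watch per-disk utilisation and request sizes during both runs
        iostat -x 5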

    Read the article

  • Cisco 7206 error trying to copy running-config (Bad file number)

    - by jasondewitt
    I have a Cisco 7206 that terminates a bunch of PPPoA sessions for DSL users. Today I noticed that if I try to "show run" nothing happens: it doesn't show anything and just sends me right back to the command prompt. I decided I should probably try to back up the config, and that is where I'm stuck. Any time I try to copy the running-config to tftp or to a PCMCIA card that I know is not full, I get the following error:

        %Error opening system:/running-config (Bad file number)

    I get this error when I try to do anything with the running config. I've been googling around, but I haven't found anything else that talks about this error. I've seen people say to erase the NVRAM and then try "copy run start", but I don't want to erase the NVRAM until I can pull off a copy of the running-config. I would try to reboot it, but the startup-config that is on the NVRAM looks to be woefully out of date (good job me!). Any ideas what might be wrong, or how I can get the running config off the router?
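
    A few non-destructive diagnostics worth running before touching NVRAM; a sketch using standard IOS commands, on the assumption that exhausted processor memory or a corrupted config file system is the culprit (the TFTP address is a placeholder, and output redirection needs a reasonably recent IOS):

        show memory summary    ! low/fragmented processor memory can break config generation
        dir system:            ! the running-config pseudo-file lives in this file system
        show tech-support | redirect tftp://192.0.2.10/showtech.txt
                               ! capture overall state off-box while the router is still up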

    Read the article

  • USB drive errors after airport scan

    - by bobobobo
    Well, I just got a new PNY USB drive and it passed through an airport scanner yesterday. For some reason, after I wrote to it and then tried to read from it today, it reported corruption! chkdsk reports errors like:

        Bad links in lost chain at cluster 1179 corrected.
        Lost chain cross-linked at cluster 1200. Orphan truncated.
        Lost chain cross-linked at cluster 1228. Orphan truncated.
        Lost chain cross-linked at cluster 1236. Orphan truncated.
        Lost chain cross-linked at cluster 1237. Orphan truncated.
        Lost chain cross-linked at cluster 1244. Orphan truncated.
        Lost chain cross-linked at cluster 1250. Orphan truncated.
        Lost chain cross-linked at cluster 1266. Orphan truncated.
        Lost chain cross-linked at cluster 1278. Orphan truncated.
        etc.

    What is this from? Could it possibly be from the airport scanner, or is it likely a defective USB chip? How can I check the chip to see if I should just return/throw it away or continue to use it?
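
    One way to exercise the whole chip with built-in tools is to repair the file system and then force a full surface scan; a sketch, assuming E: is the drive letter:

        rem repair the FAT structures first
        chkdsk E: /f
        rem then read every cluster to locate bad sectors (slow)
        chkdsk E: /r

    If /r keeps turning up new bad sectors on successive runs, the flash itself is failing and the drive is a candidate for return.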

    Read the article

  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page that I try to load has about 10 seconds latency. Apache logs are clean and the other websites on the same machine are running well. At first glance I thought it was a memory problem, since the VPS has only 512M, but from the Linode dashboard CPU and disk I/O are normal. Anyway, here is the RAM status:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           487        463         23          0          2         55
        -/+ buffers/cache:        404         82
        Swap:          255        155        100

    Only 23M free, but if it were a memory problem, why are the other websites running as usual? I took a live capture with Wireshark, and there are some duplicate SYN ACK packets just before the 10-second gap. I'm out of ideas, looking for some clues. (Wireshark live capture screenshot.) As you can see from the image, the gap is after the last bad TCP packet. Thank you in advance.

    UPDATE: I've checked the Apache2 logs at debug error level, and this is where something is happening:

        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php'
        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php'

    As you can see, there is a gap of 14 seconds after the pass-through of index.php. Any suggestions? I'm out of ideas again.
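
    To correlate the 14-second gap in the Apache log with the wire, a capture filtered to the one client is much easier to read than a full live capture; a sketch (interface name assumed, client IP taken from the log above):

        tcpdump -i eth0 -s 0 -w slow-request.pcap host 151.97.156.191 and port 80
        # then open slow-request.pcap in Wireshark and "Follow TCP Stream"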

    Read the article

  • Strange issue in header location redirect

    - by hd01
    I have three websites hosted (example1.com, example2.com, example3.com) on a server. There is a page (test.php) on example1.com with just the code below inside it:

        <?php header('Location: http://example2.com/a.php'); ?>

    When I browse test.php it goes to http://example1.com/a.php. It doesn't understand that it is another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of http://example2.com/a.php it works correctly. I'm really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.)

    PS: The server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php.

    Response headers:

        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Request headers:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate
        Accept-Language: en-us,en;q=0.5
        Connection: keep-alive
        Cookie: mycookie
        Host: example1.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
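
    Since the Location the browser sees has already been rewritten by the time the response leaves the proxy, Pound's Location-header rewriting is a plausible culprit: by default it rewrites Location headers that point at hosts it proxies for. A sketch of the relevant listener setting (RewriteLocation is a real Pound directive; the addresses and ports here are assumptions):

        # /etc/pound/pound.cfg
        ListenHTTP
            Address 0.0.0.0
            Port    80
            # 0 = pass Location: headers through untouched
            # (the default, 1, rewrites them to the requesting host)
            RewriteLocation 0
            Service
                BackEnd
                    Address 127.0.0.1
                    Port    8080
                End
            End
        End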

    Read the article

  • Why is a ZFS pool not persisting over server restart?

    - by Chance
    I have a ZFS pool with 4 drives. It also has a 3 GB ZIL and a 20 GB L2ARC that are each partitions on an SSD that doubles as my Linux Mint (ver. 13) boot drive. The pool is mounted at /data. The problem I am running into is that when I restart the server, the pool/directory is completely wiped despite having data in it prior. I'm afraid I'm doing something wrong in the setup, which leads me to the following questions: What would cause this? Is there any way to get the data back? How do I stop it from happening in the future? Thank you in advance!

          pool: data
         state: ONLINE
          scan: none requested
        config:

            NAME        STATE     READ WRITE CKSUM
            data        ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                sda1    ONLINE       0     0     0
                sdb1    ONLINE       0     0     0
                sdc1    ONLINE       0     0     0
                sdd1    ONLINE       0     0     0
            logs
              sde4      ONLINE       0     0     0
            cache
              sde3      ONLINE       0     0     0

        errors: No known data errors
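
    On ZFS-on-Linux setups of this vintage, the usual cause is that the pool is never re-imported at boot, so an empty /data directory is all that's visible (or gets mounted over); the data is still on the disks. A sketch of the checks, using the pool name from the output above:

        zpool import              # is "data" listed as importable after a reboot?
        zpool import data         # re-import; /data should repopulate
        # record the pool in the cache file the boot scripts read
        zpool set cachefile=/etc/zfs/zpool.cache data
        zfs get mountpoint data   # confirm it really mounts at /data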

    Read the article

  • 401 - Unauthorized: Access is denied error from web app running in IIS 7.5 using App Pool Identity

    - by Eric Gatesman
    I have an ASP.NET app on a Windows 2008 server, IIS 7.5. When I try to access the web site, I get a login popup. If I click "Cancel" I get a 401 - Unauthorized: Access is denied due to invalid credentials. The app is using Windows authentication (anonymous is disabled). The app has its own app pool, running under the App Pool Identity. If I change the app pool to run under the NetworkService account, my website functions just fine. I'm guessing that this is just a permissions issue, but I can't figure out what permissions I need to change. I gave the App Pool Identity permissions on the physical directory of the app, but that didn't solve the problem.
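
    For the file-ACL part specifically, the grant has to name the pool's virtual identity rather than a user account; a sketch with placeholder pool and path names:

        rem "IIS AppPool\MyAppPool" is the virtual identity of an app pool
        rem named MyAppPool; (OI)(CI) makes the grant inherit to children
        icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS AppPool\MyAppPool":(OI)(CI)RX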

    Read the article

  • tomcat5 HTTP 400 Bad Request

    - by Oneiroi
    OS is CentOS 5.5 x64; RPMs are as follows:

        tomcat5-jsp-2.0-api-5.5.23-0jpp.9.el5_5
        tomcat5-common-lib-5.5.23-0jpp.9.el5_5
        tomcat5-servlet-2.4-api-5.5.23-0jpp.9.el5_5
        tomcat5-server-lib-5.5.23-0jpp.9.el5_5
        tomcat5-5.5.23-0jpp.9.el5_5
        tomcat5-jasper-5.5.23-0jpp.9.el5_5

    Requesting the root page by hand:

        telnet localhost 8080
        Trying 127.0.0.1...
        Connected to localhost.localdomain (127.0.0.1).
        Escape character is '^]'.
        GET / HTTP/1.0
        Host: localhost

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Date: Thu, 16 Sep 2010 15:06:21 GMT
        Connection: close

    alternatives --display java output:

        alternatives --display java
        java - status is manual.
        link currently points to /usr/lib/jvm/jre1.6.0_21/bin/java
        /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java - priority 16000
         slave keytool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/keytool
         slave orbd: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/orbd
         slave pack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/pack200
         slave rmid: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmid
         slave rmiregistry: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmiregistry
         slave servertool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/servertool
         slave tnameserv: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/tnameserv
         slave unpack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/unpack200
         slave jre_exports: /usr/lib/jvm-exports/jre-1.6.0-openjdk.x86_64
         slave jre: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64
         slave java.1.gz: /usr/share/man/man1/java-java-1.6.0-openjdk.1.gz
         slave keytool.1.gz: /usr/share/man/man1/keytool-java-1.6.0-openjdk.1.gz
         slave orbd.1.gz: /usr/share/man/man1/orbd-java-1.6.0-openjdk.1.gz
         slave pack200.1.gz: /usr/share/man/man1/pack200-java-1.6.0-openjdk.1.gz
         slave rmid.1.gz: /usr/share/man/man1/rmid-java-1.6.0-openjdk.1.gz
         slave rmiregistry.1.gz: /usr/share/man/man1/rmiregistry-java-1.6.0-openjdk.1.gz
         slave servertool.1.gz: /usr/share/man/man1/servertool-java-1.6.0-openjdk.1.gz
         slave tnameserv.1.gz: /usr/share/man/man1/tnameserv-java-1.6.0-openjdk.1.gz
         slave unpack200.1.gz: /usr/share/man/man1/unpack200-java-1.6.0-openjdk.1.gz
        /usr/lib/jvm/jre-1.4.2-gcj/bin/java - priority 1420
         slave keytool: /usr/lib/jvm/jre-1.4.2-gcj/bin/keytool
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: /usr/lib/jvm/jre-1.4.2-gcj/bin/rmiregistry
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: /usr/lib/jvm-exports/jre-1.4.2-gcj
         slave jre: /usr/lib/jvm/jre-1.4.2-gcj
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        /usr/lib/jvm/jre1.6.0_21/bin/java - priority 2
         slave keytool: (null)
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: (null)
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: (null)
         slave jre: (null)
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        Current `best' version is /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java.

    The same occurs trying HTTP/1.1, and I am at a complete loss as to why.
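
    Since Coyote itself is answering 400, the HTTP connector configuration is the first thing to compare against a stock install. For reference, a sketch of the default tomcat5 connector (values assumed; a stray proxyName/proxyPort pair or a tiny maxHttpHeaderSize here can make Coyote reject otherwise valid requests):

        <!-- /etc/tomcat5/server.xml -->
        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" acceptCount="100"
                   connectionTimeout="20000" disableUploadTimeout="true" />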

    Read the article

  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and am looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution for this requirement is to include a file containing a zone master declaration for each blocked domain in named.conf. Subsequently, each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. ...basically this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems.

    The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as management of the domain list configuration file itself... if any errors creep into this file, the BIND server will fail to start, thereby making automation of the process a little frightening. So I'm looking for something more efficient and potentially less error prone.

    named.conf entry:

        include "blackholes.conf";

    blackholes.conf entry example:

        zone "bad-domain.com" IN {
            type master;
            file "/var/named/blackhole.zone";
            allow-query { any; };
            notify no;
        };

    blackhole.zone entries:

        $INCLUDE std.soa
        @   NS  ns1.ourdomain.com.
        @   NS  ns2.ourdomain.com.
        @   NS  ns3.ourdomain.com.
            IN  A   192.168.0.99
        *   IN  A   192.168.0.99
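
    BIND's response-policy zones were designed for exactly this workload: one ordinary zone holds all the blocked names, can be reloaded without restarting named, and can be validated with named-checkzone before it goes live. RPZ needs BIND 9.8+, which is newer than what RHEL 5.4 ships, so treat this as a sketch of where to head rather than a drop-in config:

        // named.conf
        options {
            response-policy { zone "rpz.local"; };
        };
        zone "rpz.local" {
            type master;
            file "/var/named/rpz.local.zone";
            allow-query { none; };
        };

        ; rpz.local.zone: one pair of records per blocked domain
        $TTL 300
        @                   IN SOA ns1.ourdomain.com. admin.ourdomain.com. (
                                   1 3600 600 86400 300 )
                            IN NS  ns1.ourdomain.com.
        bad-domain.com      IN A   192.168.0.99
        *.bad-domain.com    IN A   192.168.0.99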

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on Stack Overflow, but it may be better suited for Server Fault... I'm having a problem running Apache + Subversion with SSL behind an nginx proxy and I'm hoping someone might have the answer. I've scoured Google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using Subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and Apache config files in question:

    Nginx:

        upstream subversion_hosts {
            server 127.0.0.1:80;
        }

        server {
            listen x.x.x.x:80;
            server_name hostname;

            access_log /srv/log/nginx/http.access_log main;
            error_log /srv/log/nginx/http.error_log info;

            # redirect all requests to https
            rewrite ^/(.*)$ https://hostname/$1 redirect;
        }

        # HTTPS server
        server {
            listen x.x.x.x:443;
            server_name hostname;

            passenger_enabled on;
            root /path/to/rails/root;

            access_log /srv/log/nginx/ssl.access_log main;
            error_log /srv/log/nginx/ssl.error_log info;

            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;

            add_header Front-End-Https on;

            location /svn {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                set $fixed_destination $http_destination;
                if ( $http_destination ~* ^https(.*)$ ) {
                    set $fixed_destination http$1;
                }
                proxy_set_header Destination $fixed_destination;

                proxy_pass http://subversion_hosts;
            }
        }

    Apache:

        Listen 127.0.0.1:80

        <VirtualHost *:80>
            # in order to support COPY and MOVE, etc - over https (443),
            # ServerName _must_ be the same as the nginx servername
            # http://trac.edgewall.org/wiki/TracNginxRecipe
            ServerName hostname
            UseCanonicalName on

            <Location /svn>
                DAV svn
                SVNParentPath "/srv/svn"

                Order deny,allow
                Deny from all
                Satisfy any

                # Some config omitted ...
            </Location>

            ErrorLog /var/log/apache2/subversion_error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/subversion_access.log combined
        </VirtualHost>

    From what I could tell while researching this problem, the server name has to match on both the Apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.
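
    To take the svn client out of the loop, the failing WebDAV verb can be reproduced directly; a sketch (hostname and repository paths are placeholders):

        # a COPY with an https:// Destination header, like the one svn sends
        curl -k -i -X COPY \
             -H "Destination: https://hostname/svn/repo/tags/rel-1" \
             https://hostname/svn/repo/trunk

    Comparing this with the same request sent straight to Apache on 127.0.0.1:80 (with an http:// Destination) shows whether the Destination rewrite in the nginx location block is actually reaching the backend.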

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

          pool: tank
         state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

            NAME       STATE     READ WRITE CKSUM
            tank       UNAVAIL      0     0     0  insufficient replicas
              c10t0d0  ONLINE       0     0     0
              c8t0d0   UNAVAIL      0     0     0  cannot open
              c8t1d0   ONLINE       0     0     0
              c10t1d0  ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?
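
    With a striped pool missing a whole top-level vdev there may be nothing recoverable, but two hedged things are worth trying before giving up (device and pool names taken from the output above):

        # re-seat cabling/controller first, then see if the device comes back
        zpool online tank c8t0d0
        # zdb can read pool metadata straight off the devices;
        # -e works on unimported pools, -d lists datasets without mounting
        zdb -e -d tank

    zdb enumerates datasets and objects from on-disk metadata, but with a quarter of the stripe gone, a complete file listing is unlikely.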

    Read the article

  • 40k Event Log Errors an hour: Unknown Username or bad password

    - by ErocM
    I am getting about 200k of these an hour:

        An account failed to log on.

        Subject:
            Security ID:        SYSTEM
            Account Name:       TGSERVER$
            Account Domain:     WORKGROUP
            Logon ID:           0x3e7

        Logon Type:             4

        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       administrator
            Account Domain:     TGSERVER

        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064

        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe

        Network Information:
            Workstation Name:   TGSERVER
            Source Network Address: -
            Source Port:        -

        Detailed Authentication Information:
            Logon Process:      Advapi
            Authentication Package: Negotiate
            Transited Services: -
            Package Name (NTLM only): -
            Key Length:         0

    This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols. Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

    On my server I changed my administrative username to something else, and since then I've been inundated with these messages. I found on http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that the 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention," which really doesn't shed any light on it for me. I checked the services and they are all logging on as Local System or Network Service, nothing as administrator. Anyone have any idea how I can tell where these are coming from? I would assume this is a program that is crapping out... Thanks in advance!
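
    Logon type 4 plus a SYSTEM subject usually points at a scheduled task (or task-like process) still using stored credentials for the old account name, so Task Scheduler (taskschd.msc) is the first place to look. A hedged PowerShell sketch to see which process is generating the events:

        # pull recent 4625s and count them per calling process
        Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4625 } -MaxEvents 1000 |
          ForEach-Object {
            ([xml]$_.ToXml()).Event.EventData.Data |
              Where-Object { $_.Name -eq 'ProcessName' } |
              Select-Object -ExpandProperty '#text'
          } |
          Group-Object | Sort-Object Count -Descending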

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server: Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and the other from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece, better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent, good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non-responding client, this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. {
                return 404;
            }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. {
                return 404;
            }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
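
    Notably, the file pasted above as the php-fpm pool config repeats the nginx vhost rather than defining a pool, and fpm's "unable to read what child say: Bad file descriptor" classically means nginx and fpm disagree on the socket or its permissions. A minimal sketch of a pool that matches the upstream socket in the vhost (only the socket path and account names come from the question; every other value is an assumption):

        ; /etc/php-fpm.d/www-data.conf
        [www-data]
        user = www-data
        group = www-data
        ; must match fastcgi_pass / the upstream in the vhost
        listen = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 20
        pm.start_servers = 4
        pm.min_spare_servers = 2
        pm.max_spare_servers = 6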

    Read the article

  • How much memory do I need for innodb buffer pool?

    - by Shirko
    I read here that you need a buffer pool a bit (say 10%) larger than your data (the total size of the InnoDB tablespaces). On the other hand, I've read elsewhere that innodb_buffer_pool_size may be up to 80% of memory. So I'm really confused about how to choose the best size for the pool. My database size is about 6GB and my total memory is 64GB. Also, I'm wondering: if I increase the buffer pool size, should I shrink the maximum number of connections to make room for the extra buffer, or are these parameters independent? Thanks
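
    The two rules of thumb target different situations: "10% over data size" caps the pool when the data set is small (as here), while "80% of RAM" caps it on a dedicated server whose data exceeds RAM. A my.cnf sketch for the numbers above (the values are assumptions for illustration, not tuning advice):

        [mysqld]
        # ~6 GB of data fits entirely, with headroom for growth
        innodb_buffer_pool_size = 8G
        # per-connection memory comes from separate buffers
        # (sort_buffer_size, read_buffer_size, ...), so max_connections
        # is tuned independently as long as total RAM allows for both
        max_connections = 200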

    Read the article

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" app pool (one without an active worker process) in IIS. When using a built-in account, the app pool starts in sub-second time. When using a custom local account, the app pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the app pool?
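
    The wpad lookup is automatic proxy discovery in the System.Net stack, and proxy settings are per user profile, which would explain why a fresh custom account probes while built-in accounts don't. A hedged web.config sketch that turns auto-discovery off for the app's own outbound requests:

        <configuration>
          <system.net>
            <!-- stop automatic proxy detection (the WPAD probe) -->
            <defaultProxy enabled="false" />
          </system.net>
        </configuration>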

    Read the article

  • Is there any reason for an object pool to not be treated as a singleton?

    - by Chris Charabaruk
    I don't necessarily mean implemented using the singleton pattern, but rather only having and using one instance of a pool. I don't like the idea of having just one pool (or one per pooled type). However, I can't really come up with any concrete situations where there's an advantage to multiple pools for mutable types, at least none where a single pool couldn't function just as well. What advantages are there to having multiple pools over a singleton pool?
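
    One concrete case is two call sites pooling the same type under different sizing and retention policies. A sketch using a hypothetical ObjectPool<T> (not a library type) to make the contrast visible:

        // large, scarce buffers for rendering; small, plentiful ones for I/O.
        // a single shared pool would have to satisfy both policies at once.
        var renderBuffers  = new ObjectPool<byte[]>(() => new byte[1 << 20], maxSize: 8);
        var networkBuffers = new ObjectPool<byte[]>(() => new byte[4096],   maxSize: 256);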

    Read the article

  • Why is DKIM reporting a bad signature?

    - by Ashish
    Hi, I have set up a Postfix mail receiving server. On it I am using the DKIM milter to verify incoming mail DKIM signatures. From some email clients I am getting the following 'bad signature data' error:

        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing domain match for `gmail.com'
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing subdomain match for `gmail.com'
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing keylist match for `[email protected]'
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) not internal
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) not authenticated
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) mode select: verifying
        Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: BA6E210015D: bad signature data
        Jun 7 02:10:10 ip-10-194-99-63 postfix/cleanup[30131]: BA6E210015D: milter-reject: END-OF-MESSAGE from mail-pv0-f176.google.com[74.125.83.176]: 5.7.0 bad DKIM signature data; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<mail-pv0-f176.google.com>

    Can anybody give me a clue as to why I am getting the above error in my Postfix logs, and what remedies I can put in place to rectify it, work around it, or warn the sender? Thanks in advance, Ashish Sharma
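
    While debugging, it may be safer to verify without rejecting, so a signature problem downgrades to a logged result instead of a 5.7.0 reject at SMTP time. A sketch of the relevant knobs (the Postfix parameters are standard; the milter socket, and whether your dkim-filter build supports On-BadSignature as its successor OpenDKIM does, are assumptions):

        # /etc/postfix/main.cf
        smtpd_milters = inet:localhost:8891
        non_smtpd_milters = inet:localhost:8891
        # accept mail even when the milter itself errors out
        milter_default_action = accept

        # dkim-filter.conf (path varies by packaging)
        On-BadSignature accept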

    Read the article

  • StyleCop XML Documentation Header - Using 3 /// instead of 2 //

    - by Adam Jenkin
    I am using XML documentation headers on my C# files to pass the StyleCop rule SA1633. Currently, I have to use the two-slash commenting style for StyleCop to recognize the header. For example:

        // <copyright file="abc.ascx.cs" company="MyCompany.com">
        // MyCompany.com. All rights reserved.
        // </copyright>
        // <author>Me</author>

    This works fine for StyleCop. However, I would like to use the three-slash commenting style so that Visual Studio understands the comments as XML and provides the XML functionality (highlighting, auto-indenting, etc.):

        /// <copyright file="abc.ascx.cs" company="MyCompany.com">
        /// MyCompany.com. All rights reserved.
        /// </copyright>
        /// <author>Me</author>

    The problem is that when using three slashes, StyleCop no longer sees the header and throws the SA1633 warning. Is there any way to configure StyleCop to understand a header written with three slashes? Thanks, Adam
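
    If no setting makes SA1633 accept triple-slash headers, the fallback is disabling the rule in Settings.StyleCop. A sketch (SA1633's internal name is FileMustHaveHeader; verify the exact schema against your StyleCop version):

        <StyleCopSettings Version="105">
          <Analyzers>
            <Analyzer AnalyzerId="StyleCop.CSharp.DocumentationRules">
              <Rules>
                <Rule Name="FileMustHaveHeader">
                  <RuleSettings>
                    <BooleanProperty Name="Enabled">False</BooleanProperty>
                  </RuleSettings>
                </Rule>
              </Rules>
            </Analyzer>
          </Analyzers>
        </StyleCopSettings>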

    Read the article

  • WPF ListView get GridViewColumn from control inside DataTemplate

    - by ragi
    Example:

        <ListView Name="lvAnyList" ItemsSource="{Binding}">
            <ListView.View>
                <GridView>
                    <GridViewColumn Header="xx" DisplayMemberBinding="{Binding XX}"
                                    CellTemplate="{DynamicResource MyDataTemplate}"/>
                    <GridViewColumn Header="yy">
                        <GridViewColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding YY}" />
                            </DataTemplate>
                        </GridViewColumn.CellTemplate>
                    </GridViewColumn>
                </GridView>
            </ListView.View>
        </ListView>

    If I access the TextBlock inside the DataTemplate, I have access to information about the bound path. But starting from the TextBlock, I don't know how to find its containing GridViewColumn (which is in the GridViewRowPresenter columns list) and compare it to the grid's GridViewColumn (from the GridViewHeaderRowPresenter column list) to get the header's name. At the end I should have the pairs: xx-XX, yy-YY.

    I can find a list of all TextBlocks inside the ListView by using this:

        GridViewRowPresenter gvrp = ControlAux.GetVisualDescendants<GridViewRowPresenter>(element).FirstOrDefault();
        IEnumerable<TextBlock> entb = GetVisualDescendants<TextBlock>(gvrp);

    I can find a list of all GridViewColumnHeaders:

        GridViewHeaderRowPresenter gvhrp = ControlAux.GetVisualDescendants<GridViewHeaderRowPresenter>(element).FirstOrDefault();

    But I cannot find the connection between the TextBlocks and the GridViewColumns... I hope this is understandable.
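
    One hedged approach: climb the visual tree from the cell element to the child directly under the GridViewRowPresenter and use that child's index, on the assumption that the presenter creates its cell children in the same order as its Columns collection:

        // sketch: maps a cell element (e.g. a TextBlock) to its GridViewColumn
        static GridViewColumn FindColumn(GridViewRowPresenter presenter, DependencyObject cell)
        {
            // walk up until the parent is the row presenter itself
            DependencyObject current = cell;
            while (current != null && VisualTreeHelper.GetParent(current) != presenter)
                current = VisualTreeHelper.GetParent(current);
            if (current == null)
                return null;

            // the direct child's index is assumed to track the column order
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(presenter); i++)
                if (ReferenceEquals(VisualTreeHelper.GetChild(presenter, i), current))
                    return i < presenter.Columns.Count ? presenter.Columns[i] : null;
            return null;
        }

    Pairing each column's Header with the bound path from the cell's TextBlock binding would then yield the xx-XX, yy-YY pairs.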

    Read the article

  • Add a custom variable to an email header already within a gmail inbox

    - by Ali
    Hi guys - this may seem odd, but I was wondering if it is possible to add custom header details to emails already in an inbox. Let's say I wish to add to the header of an email something like myvariable = myvalue and then be able to query it somehow. I'm looking at code from IlohaMail, and most of the details like subject, from, received, etc. are in the headers, and you can search through them. So is it possible to add my own custom variable to an email header and query it in the same way? How can it be done using PHP?

    EDIT: Thanks. I know how to modify the headers of sent messages and how to query for custom variables in message headers; in this case, however, I want to know if it is possible to add a custom variable to a received message already in my inbox. Actually, let me define the situation here. I'm working on a Google Apps solution which requires maintaining references to emails. Basically, when an email comes in, we create an order from that email and wish to maintain a reference to that EXACT email by some kind of identifier which would enable us to identify that email. We don't want to download the emails into a database and maintain a separate store, as we want to keep all the emailing on Gmail. We just need a way to 'link' to a specific email permanently; the UID is just a sequence number and not very reliable. We couldn't find any property of emails that could function as a unique ID or primary key, and so we thought we could instead generate a key on our end and store it in a custom variable on the email itself. However, it seems that unfortunately there isn't a way to manipulate the headers of an already existing email. :( Is there any solution to this problem I could use? Any idea!
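
    IMAP (which is how Gmail exposes the mailbox) has no operation for editing a stored message's headers, but every message already carries a Message-ID header set by the sender, which can serve as a stable key. A PHP sketch using the standard imap extension (server string and message range are assumptions):

        <?php
        // connect to the Gmail inbox over IMAP
        $inbox = imap_open('{imap.gmail.com:993/imap/ssl}INBOX', $user, $pass);

        // fetch lightweight envelope data for the first ten messages
        foreach (imap_fetch_overview($inbox, '1:10') as $msg) {
            // message_id never changes, unlike the per-mailbox
            // sequence numbers and UIDs
            $key = $msg->message_id;
            // store $key => order reference in your own lookup table
        }
        imap_close($inbox);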

    Read the article

  • How to transfer objects through the header in WCF

    - by Michael
    I'm trying to transfer some user information in the header of the message through message inspectors. I have created a behavior which adds the inspector to the service (both client and server). But when I try to communicate with the service I get the following error:

        XmlException: Name cannot begin with the '<' character, hexadecimal value 0x3C.

    I have also gotten exceptions telling me that DataContracts were unexpected:

        Type 'System.DelegateSerializationHolder+DelegateEntry' with data contract name 'DelegateSerializationHolder.DelegateEntry:http://schemas.datacontract.org/2004/07/System' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.

    The thing is that my object contains other objects which are marked as DataContract, and I'm not interested in adding the KnownType attribute to those types. Another problem might be that my object to serialize is very restricted in form (internal class, internal properties, etc.). Can anyone guide me in the right direction? What am I doing wrong? Some code:

        public virtual object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            var header = MessageHeader.CreateHeader("<name>", "<namespace>", object);
            request.Headers.Add(header);
            return Guid.NewGuid();
        }
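
    For reference, a minimal working shape of the same inspector, with placeholder names: the header name must be a valid XML element name (no angle brackets), and the payload should be a plain DataContract type with no delegates or events, or the serializer trips over DelegateSerializationHolder exactly as in the second error.

        [DataContract]
        public class UserInfo
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string Role { get; set; }
        }

        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            var header = MessageHeader.CreateHeader(
                "UserInfo",                        // valid XML name, no '<' or '>'
                "http://example.com/soap/headers", // any URI works as the namespace
                new UserInfo { Name = "me", Role = "admin" });
            request.Headers.Add(header);
            return Guid.NewGuid();
        }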

    Read the article

  • controlling which project header file Xcode will include

    - by jdmuys
    My Xcode project builds two variations of the same product using two targets. The difference between the two is only in which version of an included library is used. For the .c source files it's easy to assign the correct version to the correct target using the target checkbox. However, including the header file always includes the same one. This is correct for one target but wrong for the other. Is there a way to control which header file is included by each target? Here is my project file hierarchy (which is replicated in Xcode):

        MyProject
            TheirOldLib
                theirLib.h
                theirLib.cpp
            TheirNewLib
                theirLib.h
                theirLib.cpp
            myCode.cpp

    and myCode.cpp does things such as:

        #include "theirLib.h"
        ...
        somecode()
        {
        #if OLDVERSION
            theirOldLibCall(...);
        #else
            theirNewLibCall(...);
        #endif
        }

    And of course, I define OLDVERSION for one target and not for the other.

    So is there a way to tell Xcode which theirLib.h to include per target? Constraints: the two header files have the same name. As a last resort, I could rename one of them, but I'd rather avoid that as this will lead to major hair pulling on the other platforms. Otherwise, I'm free to tweak my project as I see fit. Thanks for any help.
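
    Since #include "theirLib.h" is resolved through the header search paths, pointing each target at a different directory should work without renaming anything. A sketch of the per-target build settings (target names are placeholders; USER_HEADER_SEARCH_PATHS applies to quoted includes):

        // Target "MyProductOld", Build Settings:
        USER_HEADER_SEARCH_PATHS = $(SRCROOT)/TheirOldLib
        GCC_PREPROCESSOR_DEFINITIONS = OLDVERSION=1

        // Target "MyProductNew", Build Settings:
        USER_HEADER_SEARCH_PATHS = $(SRCROOT)/TheirNewLib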

    Read the article

  • Add row to GridView after header row

    - by Jan-Frederik Carl
    I have a GridView with a header and some rows, and I want to add another row just below the header using jQuery.

        <form id="form1" runat="server">
            <div>
                <asp:GridView ID="GridView1" ShowHeader="true" runat="server">
                    <Columns>
                        <asp:TemplateField HeaderText="Activity Name">
                            <ItemTemplate>
                                <asp:Label runat="server" Text='<%# DataBinder.Eval(Container.DataItem, "Name") %>'></asp:Label>
                            </ItemTemplate>
                        </asp:TemplateField>
                    </Columns>
                </asp:GridView>
                <asp:Button Text="Add Activity" runat="server" OnClientClick="addActivity(); return false;" />
            </div>
        </form>

    My tries were:

        $('#GridView1 tbody').prepend('<tr><td>new activity</td></tr>');

    which puts a new row above the header, and:

        $('#GridView1 table tr:first').after('<tr><td>new activity</td></tr>');

    which does nothing (at least nothing visible; the same goes for any other tr element).
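
    A GridView renders as the <table> element itself, not as a wrapper around one, so selectors that look for a nested table match nothing. Targeting the control's first row directly should land the new row under the header; a sketch (the ClientID expression is an assumption, needed when naming containers mangle the ID):

        function addActivity() {
            // the header is the first <tr> of the rendered table
            $('#<%= GridView1.ClientID %> tr:first')
                .after('<tr><td>new activity</td></tr>');
        }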

    Read the article

  • Interface Builder caching bad data (voodoo)

    - by Ryan Townshend
    Sometimes IB will hold onto old or bad references, and I cannot seem to remove or edit them.

    EDIT: I have made this a wiki question with the intention of gathering more data on the phenomenon. Answers involving situations where other coders have encountered this are welcome.

    This happened to me again last night with a table controller. When I created a spike project to try to reproduce the error, the system worked the way I anticipated. Then, back in the actual project, the bad behavior continued even after I removed the xib file and all controllers involved. Creating a whole new project with none of the original (problematic) xib and nib files worked correctly.

    This question is not about the specifics of this incident but about this type of incident in IB. Does anyone know more about this type of bad IB behaviour, and possibly a more stylish way to eliminate it than nuking the project? Note: removing the offending IB files and recreating them in the same project has not solved this for me in the past; only whole new projects have. Answers regarding examples of when/how this glitch has been observed/created are welcome as well.

    Read the article

  • C headers: compiler specific vs library specific?

    - by leonbloy
    Is there some clear-cut distinction between standard C *.h header files that are provided by the C compiler, as opposed to those which are provided by a standard C library? Is there some list, or some standard locations?

    Motivation: in this answer I got a while ago, regarding a missing unistd.h in the latest TinyC compiler, the author argued that unistd.h (contrary to sys/unistd.h) should not be provided by the compiler but by your C library. I could not make much sense of that response (for one thing, shouldn't that also apply to, say, stdio.h?), but I'm still wondering about it. Is that correct? Where is some authoritative reference for this?

    Looking at other compilers, I see that other "self-contained" POSIX C compilers that are hosted on Windows (like the GCC toolchain that comes with MinGW, in several incarnations, or the Digital Mars compiler) include all header files. And in a standard Linux distribution (say, CentOS 5.10) I see that the gcc package provides a few header files (e.g., stdbool.h, syslimits.h) in /usr/lib/gcc/i386-redhat-linux/4.1.1/include/, and the glibc-headers package provides the majority of the headers in /usr/include/ (including stdio.h, /usr/include/unistd.h and /usr/include/sys/unistd.h). So in neither case do I see support for the above claim.
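
    On an RPM-based system the split is easy to inspect directly; a sketch (paths taken from the question; the package names in the comments are what I'd expect, not verified on that exact release):

        # who owns a libc header vs. a compiler-provided header?
        rpm -qf /usr/include/stdio.h                                      # glibc-headers
        rpm -qf /usr/lib/gcc/i386-redhat-linux/4.1.1/include/stdbool.h   # gcc
        # which file does the compiler actually pull in for <unistd.h>?
        echo '#include <unistd.h>' | gcc -x c -E - | grep unistd.h | head -3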

    Read the article
