Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.


  • Dynamic IP on NGINX geo module without restart

    - by joaorvmaia
    I want to create a task in my Capistrano deploy that puts my public IP into the geo module configuration of my NGINX server without restarting NGINX. Is that possible? For example, my /etc/nginx/nginx.conf:

        geo $geo {
            default no;
            include /home/deploy_user/appname/shared/ip_list;
        }

    The file /home/deploy_user/appname/shared/ip_list will be provided during deploy. I need this because my public IP can change many times. Regards, João
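    nginx can re-read its configuration (and included files) with a reload rather than a restart, so a deploy task only needs to rewrite the include file and send the reload signal. A minimal sketch in Capistrano 2 style; my_public_ip, the init-script path, and the sudo rights are assumptions, not taken from the question:

        # hypothetical Capistrano 2 task; adjust roles and paths to your deploy
        task :update_geo_ip_list, :roles => :web do
          # geo include files contain "address value;" lines
          put("#{my_public_ip} yes;\n", "/home/deploy_user/appname/shared/ip_list")
          # reload re-reads nginx.conf and its includes without dropping connections
          run "#{sudo} /etc/init.d/nginx reload"
        end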


  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Tue, 02 Feb 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The problem is that any changes made to the file will not get served; the old (i.e. February of last year) version keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Tue, 02 Feb 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:
    - renamed the file
    - deleted the file
    - restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\), and this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will:
    - get the current file (file there)
    - get a 404 (file renamed)
    - get a 404 (file deleted)

    But if I change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    (note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js), then I do get the proper file (or the 404 I expect if the file has been renamed or deleted). But any subsequent changes to the file will not show up until I change the case of the file I'm asking for. The IIS logs all indicate that the (internet) requests are coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, I conclude that there is a proxy:
    - the requests serviced from this proxy are not logged in the IIS logs
    - the requests for new files are logged, and from the client IP, not a proxy IP
    - the proxy is case-sensitive

    This does not sound like something Microsoft, or IIS, would do:
    - a transparent proxy
    - case-sensitive
    - unlogged
    - surviving restarts of IIS
    - surviving in a cache for hours

    I can't believe that our customer's IIS is doing these things, so I'm assuming there is some other transparent proxy in front of IIS. Or does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)


  • Valid certificate issued by certificate authority

    - by Null
    Using the configuration below: internal domain company.corp, Server 2008 DC and CA. I've set up RADIUS/NPS for WPA2-Enterprise authentication, but the mobile clients are getting certificate warnings because the PEAP certificate is self-signed by the CA. How can I fix the warning? Do I need to get a signed certificate for the company.corp domain?


  • Oracle physical standby database received redo has not been applied

    - by Arthur Aoife
    Hi, I followed the steps in the Oracle documentation on creating a physical standby database. The link to the configuration steps: http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/create_ps.htm#i63561. When I perform Step 4, "Verify that received redo has been applied", my query result is not as expected:

        SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

        SEQUENCE# APP
                5 NO
                6 NO
                7 NO
                8 NO

        4 rows selected.

    I'd appreciate any advice on how to proceed, thanks.
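    APPLIED = NO on every archived log usually means the managed recovery process (MRP) is not running on the standby. A hedged check-and-start sketch, assuming 11g as in the linked documentation (this is the standard syntax, but verify it against your release):

        SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;  -- look for an MRP0 row

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
             USING CURRENT LOGFILE DISCONNECT FROM SESSION;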


  • How many domains can you configure on a Sun M5000 system?

    - by Andre Miller
    We have a few Sun M5000 servers with the following configuration: each system has 2 system boards, each containing 2 x 2.5 GHz quad-core processors; each system board has 16 GB of RAM; each system has 4 x 300 GB disks. I would like to know how many hardware domains I can configure per system. Do I need one system board per domain (implying a total of 2 domains), or can I create 4 domains, each with one CPU?


  • IRC server for Windows?

    - by Noah
    I would like to run an IRC server for users of my LAN, and my best option would be to do so on a Windows XP box. Is there a decent Windows/Cygwin IRC server that you have used and would recommend? If so, any configuration pointers would also be greatly appreciated.


  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid-state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure, as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive.

    Wait. What if a giant hedgehog meteor attacks my house from space travelling at Mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier.

    Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four?

    There's no questioning the need for backups. None. However, there are three questions I'd really like addressed:
    1. To what level should one back up? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines, thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question.
    2. What should I be backing up remotely, and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story (a minimal sketch follows below). Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too.
    3. The cloud is a long way away from here; I can run backups over CAT-6 locally and see 100 MB/s easily, but I'm afraid I'm only going to see 2 MB/s at best when transferring up to the cloud.
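    On question 2, a minimal off-site sketch (hostname and paths are placeholders; assumes SSH access to the remote machine):

        # mirror the irreplaceable directories; -aAX preserves permissions, ACLs and xattrs
        rsync -aAX --delete /data/music /data/pictures /data/videos \
            backupuser@offsite.example.com:/backups/workstation/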


  • Log in to a Windows 7 computer with a disabled Administrator account

    - by sirlancelot
    I just built a Windows 7 image using sysprep and a custom answer file that's supposed to join our domain. However, it did not join the domain, so I now have a computer that was set up automatically via unattend.xml and is sitting at a login screen with no active users (the Administrator account says it is disabled). Is there a way I can activate the Administrator account and rescue this machine, or do I have to rebuild it from scratch?
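    The Administrator account can be re-enabled with the standard net user command; the catch is getting a command prompt at the logon screen. One common (if hacky) route, offered here as a sketch rather than a guaranteed fix, is the well-known trick of swapping an accessibility executable for cmd.exe from installation/recovery media, then running at the logon screen:

        rem re-enable the built-in Administrator account
        net user Administrator /active:yes
        rem optionally set a known (placeholder) password before logging in
        net user Administrator NewP@ssw0rd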


  • Some rules are not listed when I open my GPO rules editor

    - by Copeleto
    Hi, I need to enable a rule using a GPO. The rule is "Security Zones: Do not allow users to add/delete sites", located under Computer Configuration\Administrative Templates\Windows Components\Internet Explorer. I need to configure that rule on Windows Server 2003 SP2, but it does not appear there; if I look on Windows Server 2008, the rule is there. So the question is: how can I update my rule list on Windows Server 2003? Thanks


  • Cannot right-click the desktop in Windows XP

    - by Robert Harvey
    This occurred after a Trojan incident. We managed to get the Trojan cleaned off the computer, but now we can't right-click the desktop. We have tried changing HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoViewContextMenu in the registry, and the group policy User Configuration\Administrative Templates\Windows Explorer\"Remove Windows Explorer's default context menu", but neither worked. How do we re-enable the right-click menu for the desktop? (It works everywhere else.)
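    For completeness, a sketch of forcing the value back with reg.exe for both the per-user and per-machine policy keys (the standard policy locations), then restarting Explorer:

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoViewContextMenu /t REG_DWORD /d 0 /f
        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoViewContextMenu /t REG_DWORD /d 0 /f
        taskkill /f /im explorer.exe & start explorer.exe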


  • Solved: Passenger (mod_rails) fails to start Puppet master under Nginx

    - by Anadi Misra
    On the server:

        [root@bangvmpllDA02 logs]# ruby -v
        ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
        [root@bangvmpllDA02 logs]# puppet --version
        3.0.1

    and:

        [root@bangvmpllDA02 logs]# service nginx configtest
        nginx: the configuration file /apps/nginx/nginx.conf syntax is ok
        nginx: configuration file /apps/nginx/nginx.conf test is successful
        [root@bangvmpllDA02 logs]# service nginx status
        nginx (pid 25923 25921 25920 25917 25908) is running...

    However, none of my agents are able to connect to the master; they all fail with errors like so:

        [amisr1@blramisr195602 ~]$ puppet agent --test --verbose --server bangvmpllda02.XXX.com
        Info: Creating a new SSL certificate request for blramisr195602.XXX.com
        Info: Certificate Request fingerprint (SHA256): 26:EB:08:1F:82:32:E4:03:7A:64:8E:30:A3:99:93:26:E6:66:B9:B0:49:B6:08:F9:67:CA:1B:0C:00:B9:1D:41
        Error: Could not request certificate: Error 405 on SERVER: <html>
        <head><title>405 Not Allowed</title></head>
        <body bgcolor="white">
        <center><h1>405 Not Allowed</h1></center>
        <hr><center>nginx</center>
        </body>
        </html>
        Exiting; failed to retrieve certificate and waitforcert is disabled

    When I check the logs on the Puppet master:

        [root@bangvmpllDA02 logs]# tail puppet_access.log
        [05/Dec/2012:17:45:18 +0530] "GET /production/certificate/ca? HTTP/1.1" 404 162 "-" "Ruby"
        [05/Dec/2012:18:32:23 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"

    and the error logs show that nginx is not really able to process the requests:

        2012/12/05 18:33:33 [error] 25920#0: *23 open() "/etc/puppet/rack/public/production/certificate/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:33:33 [error] 25920#0: *24 open() "/etc/puppet/rack/public/production/certificate_request/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:47:56 [error] 25923#0: *27 open() "/etc/puppet/rack/public/production/certificate/ca" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate/ca? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:47:56 [error] 25923#0: *28 open() "/etc/puppet/rack/public/production/certificate_request/blramisr195602.XXX.com" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate_request/blramisr195602.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"

    Passenger does not show any application groups either:

        [root@bangvmpllDA02 nginx]# passenger-status
        ----------- General information -----------
        max      = 15
        count    = 0
        active   = 0
        inactive = 0
        Waiting on global queue: 0

        ----------- Application groups -----------
        [root@bangvmpllDA02 nginx]#

    Here's my nginx configuration:

        [root@bangvmpllDA02 logs]# cat ../nginx.conf
        user puppet;
        worker_processes 4;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        error_log logs/error.log info;
        #pid logs/nginx.pid;

        events {
            use epoll;
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            #keepalive_timeout 0;
            keepalive_timeout 120;

            gzip on;
            gzip_http_version 1.1;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_min_length 1100;
            gzip_buffers 64 8k;
            gzip_comp_level 3;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml;

            server {
                listen 80;
                server_name bangvmpllda02.XXXX.com;
                charset utf-8;
                #access_log logs/http.access.log main;

                location / {
                    root html;
                    index index.html index.htm index.php;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    root html;
                    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                    include fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                location ~ /\.ht {
                    access_log off;
                    log_not_found off;
                    deny all;
                }

                location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
                    access_log off;
                    log_not_found off;
                    expires 2d;
                }
            }

            # Passenger needed for puppet
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18;
            passenger_ruby /usr/bin/ruby;
            passenger_max_pool_size 15;

            server {
                ssl on;
                listen 8140 default ssl;
                server_name bangvmpllda02.XXXX.com;
                passenger_enabled on;
                passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
                passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
                passenger_min_instances 5;
                access_log logs/puppet_access.log;
                error_log logs/puppet_error.log;
                root /etc/puppet/rack/public;
                ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXX.com.pem;
                ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXX.com.pem;
                ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
                ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
                ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
                ssl_prefer_server_ciphers on;
                ssl_verify_client optional;
                ssl_verify_depth 1;
                ssl_session_cache shared:SSL:128m;
                ssl_session_timeout 5m;
            }
        }

    and the puppet.conf:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        dns_alt_names = devops.XXXX.com,devops
        confdir = /etc/puppet
        vardir = /var/lib/puppet
        storeconfigs = true
        storeconfigs_backend = puppetdb
        thin_storeconfigs = false
        async_storeconfigs = false
        ssl_client_header = SSL_CLIENT_S_D
        ssl_client_verify_header = SSL_CLIENT_VERIFY

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl

    Any ideas where I am going wrong? I checked the directory permissions; /usr/share/puppet, /etc/puppet and /var/lib/puppet (and the files inside them) are owned by the puppet user.

    Solved: the simple solution to my complicated problem was that I had placed config.ru in the wrong place. It was in /etc/puppet/rack/public; I moved it to /etc/puppet/rack. Well!!! :-/
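    For anyone hitting the same wall: the file that must sit in /etc/puppet/rack (not in rack/public) is the Rack entry point. From memory, the Puppet 3-era config.ru shipped in the source tree (ext/rack/config.ru) looks roughly like this; treat it as a sketch, not a verbatim copy:

        # /etc/puppet/rack/config.ru -- approximate Puppet 3 version
        $0 = "master"
        ARGV << "--rack"
        ARGV << "--confdir" << "/etc/puppet"
        ARGV << "--vardir"  << "/var/lib/puppet"
        require 'puppet/util/command_line'
        run Puppet::Util::CommandLine.new.execute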


  • Where are the TweetDeck settings files located in (Ubuntu) Linux?

    - by Philipp Andre
    Hi everybody, I'm running Windows as well as Ubuntu and would like to sync both TweetDeck installations via Dropbox. Therefore I need to locate two files: td_26_[username].db and preferences_[username].xml. I found them on Windows under the folder C:\Users\[account]\AppData\Roaming\TweetDeckFast.[random string]\Local Store\, but I can't find them on my Ubuntu installation. Does anyone know where these files are located? Best regards, Philipp


  • Developer Notebook i5 or i7

    - by Cas Sakal
    I cannot decide which of the configurations below gives better performance for development work on a notebook (running VS.NET and SQL Server, no gaming): (A) i5-540M + SSD (Intel), or (B) i7-720M + 7200 RPM HDD (Western Digital). Since these two configurations cost nearly the same for me, I would like to buy the faster one for my work environment. Please don't comment just "buy this" or "buy that"; if you can give the reasoning behind your choice, I would appreciate it. Thank you, cas


  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution for this requirement is to include a file containing a zone master declaration for each blocked domain in named.conf. Each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. Basically, this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems.

    The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as management of the domain-list configuration file itself: if any errors creep into this file, the BIND server will fail to start, thereby making automation of the process a little frightening. So I'm looking for something more efficient and potentially less error-prone.

    named.conf entry:

        include "blackholes.conf";

    blackholes.conf entry example:

        zone "bad-domain.com" IN {
            type master;
            file "/var/named/blackhole.zone";
            allow-query { any; };
            notify no;
        };

    blackhole.zone entries:

        $INCLUDE std.soa
        @   NS   ns1.ourdomain.com.
        @   NS   ns2.ourdomain.com.
        @   NS   ns3.ourdomain.com.
            IN   A    192.168.0.99
        *   IN   A    192.168.0.99


  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter that Hamachi created when making the server the gateway node (called "Hamachi bridge", I believe), then rebooting the server.

    This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (I could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network because LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IP's/DNS settings you prefer and at that point AD would be able (all things being equal) communicate to the client node when it attaches. We do not have any ActiveDirectory configuration info as that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.


  • Nginx alongside Apache in Plesk 9.3

    - by Saif Bechan
    Does anyone know how to set up Nginx alongside Apache in Plesk 9.3? I want to serve my dynamic content from Apache and the static content from Nginx. I read that there is a new configuration setting in Plesk 9.3 where you can do that, but I can't find an explanation of how to do so. I run CentOS 5. Thank you guys


  • OpenERP with Ubuntu 9.04

    - by agbrand
    I had Ubuntu 8.10 and upgraded it to 9.04. I have OpenERP 5.0 (server/client/web). It worked on 8.10 but not with 9.04. Now, when I try to launch my server using ./openerp-server.py, I get this error:

        ERROR: Import xpath module
        ERROR: Try to install the old python-xml package

    It seems that this version of OpenERP doesn't work with Python 2.6. Can I redirect OpenERP to use an old version of Python?
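    One workaround sketch, assuming Python 2.5 is still installable on 9.04 (it was packaged alongside 2.6 for that release, as far as I recall): run the server explicitly under the older interpreter instead of the default python:

        sudo apt-get install python2.5
        python2.5 ./openerp-server.py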


  • Intel Matrix Storage Manager not showing on Asus P5W DH Deluxe?

    - by Leon
    I have set, under "Main" - "IDE Configuration" - "Configure SATA as" to "Raid" and "Onboard Serial-ATA BootRom" to "Enabled", but upon POST I still do not see the Intel Matrix Storage Manager screen where I can press Ctrl+I to set up my raid? I have the latest BIOS version EDIT: Although I had set the rom to "enabled", I then went and reset the bios settings to default. I then set the bootrom setting to "disabled", restarted and then "enabled" again and it seemed to work.


  • Duplicate IIS web site with Web Deploy

    - by gsantovena
    I have a Windows 2008 server with IIS 7, and I want to duplicate one web site, changing only the binding port and the application pool it uses, so that I end up with two web sites (local or remote) with the same configuration but listening on different ports. Is there a way to do this with the Web Deploy tool, in order to deploy this one web site locally or remotely and change the binding ports at the destination?
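    A sketch using Web Deploy's appHostConfig provider, with the site names as placeholders; the binding and application pool are then adjusted on the copy with appcmd:

        msdeploy -verb:sync -source:appHostConfig="Source Site" -dest:appHostConfig="Copy Of Site"
        %windir%\system32\inetsrv\appcmd set site "Copy Of Site" /bindings:http/*:8081:
        %windir%\system32\inetsrv\appcmd set app "Copy Of Site/" /applicationPool:"OtherAppPool"

    For a remote destination, appending ,computerName=servername to the -dest argument is the usual pattern.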


  • http://localhost does not work, http://127.0.0.1 works

    - by dskanth
    I am running Zend with Apache and have noticed some strange behaviour: if I type http://127.0.0.1 in my browser URL bar, it works fine, but after typing http://localhost I get a file download window, reporting the file type as application/x-httpd-php. In my httpd.conf file I have the following under the <VirtualHost *:80> definition:

        ServerName localhost
        DocumentRoot E:\zend\Apache2\htdocs\my_project\public
        <Directory E:\zend\Apache2\htdocs\my_project\public>

    Perhaps some configuration problem... can anyone guide me?
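    A quick way to check whether localhost and 127.0.0.1 are being answered by different virtual hosts (the download behaviour suggests localhost falls through to a vhost without the PHP handler):

        rem run from the Apache bin directory; -S dumps the parsed vhost configuration
        E:\zend\Apache2\bin> httpd -S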


  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools. The config is:

        ;;;;;;;;;;;;;;;;;;;;;;;
        ; Pool 1              ;
        ;;;;;;;;;;;;;;;;;;;;;;;
        [www1]
        user = www
        group = www
        listen = /tmp/php-fpm1.sock
        listen.backlog = -1
        listen.owner = www
        listen.group = www
        listen.mode = 0666
        pm = dynamic
        pm.max_children = 40
        pm.start_servers = 6
        pm.min_spare_servers = 6
        pm.max_spare_servers = 12
        pm.max_requests = 250
        slowlog = /var/log/php/$pool.log.slow
        request_slowlog_timeout = 5s
        request_terminate_timeout = 120s
        rlimit_files = 131072

    Pools 2 and 3 ([www2] and [www3]) are identical except for the pool name and the socket path (/tmp/php-fpm2.sock and /tmp/php-fpm3.sock).

    I calculated the pm.max_children processes according to some example calculations on the web, like 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP; according to the calculations, that is 40 child processes per socket, and I have a total of 3 sockets in my Nginx and FPM configuration.

    My doubt is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf hog and siege, but I could not calculate the accurate memory usage by all the PHP processes (other processes like MySQL and Nginx were also running), and all the sockets were in use. So I seek guidance from anyone who has done this before or knows exactly how pm.max_children works in PHP. Since I have 3 pools/sockets with 40 child processes each, does that amount to 3 x 40 x 40 MB of memory usage? Or is it just 40 max child processes sharing the 3 sockets (with total memory usage of just 40 x 40 MB)?
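    Each pool has its own pm.max_children, so the worst case with this config is 3 x 40 = 120 children; at ~40 MB apiece that is ~4.8 GB, slightly over a 4 GB budget. A measurement sketch under load (Linux procps syntax; note that RSS double-counts memory shared between workers, so this is an upper bound):

        ps --no-headers -o rss -C php-fpm | awk '{ sum += $1 } END { printf "%.0f MB\n", sum/1024 }'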

