Search Results

Search found 25798 results on 1032 pages for 'android xml'.


  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying: SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status ... then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads do take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might receive even more requests. QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full? Below is my server.xml in any case:

      <?xml version='1.0' encoding='utf-8'?>
      <Server port="8005" shutdown="SHUTDOWN">
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <Listener className="org.apache.catalina.core.JasperListener" />
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
        <GlobalNamingResources>
          <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase"
                    description="User database that can be updated and saved"
                    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                    pathname="conf/tomcat-users.xml" />
        </GlobalNamingResources>
        <Service name="Catalina">
          <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" enableLookups="false"
                     useBodyEncodingForURI="true" backlog="150" maxThreads="150"
                     executor="tomcatThreadPool" keepAliveTimeout="5000" connectionTimeout="300000" />
          <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>
      </Server>
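    One knob worth looking at (a sketch of the relevant setting, not a confirmed fix for the crash itself): Tomcat connectors have an acceptCount attribute that caps how many connections may queue while all request-processing threads are busy; once that queue is full, further connections are refused rather than piling up. On the HTTP/1.1 connector the attribute is acceptCount; on the Tomcat 6 AJP connector the equivalent queue length is the backlog attribute already set to 150 above. The value below is purely illustrative:

      <!-- sketch: acceptCount="25" is an example value, tune to the workload -->
      <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000"
                 redirectPort="8443" maxThreads="150" acceptCount="25" />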

    Read the article

  • How can I avoid an error in this .htaccess file?

    - by mipadi
    I have a blog. The blog is stored under the /blog/ prefix on my website. It has the usual URLs for a blog, so articles have URLs in the format /blog/:year/:month/:day/:title/. First and foremost, I want to automatically redirect visitors to the www subdomain (in case they leave that off), and internally rewrite the root URL to /blog/, so that the front page of the blog appears on the front page of the site. I have accomplished that with the following set of rewrite rules in my .htaccess file:

      RewriteEngine On
      # Rewrite monkey-robot.com to www.monkey-robot.com
      RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
      RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]
      RewriteRule ^$ /blog/ [L]
      RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    That works fine. The problem is that the front page of the blog now appears at two distinct URLs: / and /blog/. So I'd like to redirect the /blog/ URL to the root URL. Initially I tried to accomplish this with the following set of rewrite rules:

      RewriteEngine On
      # Rewrite monkey-robot.com to www.monkey-robot.com
      RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
      RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]
      RewriteRule ^$ /blog/ [L]
      RewriteRule ^blog/?$ / [R,L]
      RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But that gave me an infinite redirect (maybe because of the preceding rule?). So then I tried this set:

      RewriteEngine On
      # Rewrite monkey-robot.com to www.monkey-robot.com
      RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
      RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]
      RewriteRule ^$ /blog/ [L]
      RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R,L]
      RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But I got a 500 Internal Server Error with the following log message:

      Invalid command '[R,L]', perhaps misspelled or defined by a module not included in the server configuration

    What gives? I don't think [R,L] is a syntax error.
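    Two observations, offered as guesses rather than a verified diagnosis. The "Invalid command '[R,L]'" error is what Apache reports when the flags end up on a line of their own (for example because the RewriteRule got wrapped when pasted into the live .htaccess), so the file on the server is worth re-checking. As for the loop, the external redirect can be restricted to requests where the client literally asked for /blog/, so the internal rewrite of / to /blog/ no longer re-triggers it. A sketch of that rule pair:

      # Only redirect externally when the original request line was for /blog/
      RewriteCond %{THE_REQUEST} ^[A-Z]+\ /blog/?[\ ?]
      RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R=301,L]
      RewriteRule ^$ /blog/ [L]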

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

      <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
      </IfModule>

    I looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and found it there. But when I track server responses with Firebug, no gzipped file can be found:

      HTTP/1.1 200 OK
      Date: Sat, 08 Sep 2012 21:41:21 GMT
      Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
      Accept-Ranges: bytes
      Cache-Control: max-age=604800
      Expires: Sat, 15 Sep 2012 21:41:21 GMT
      Vary: Accept-Encoding
      Keep-Alive: timeout=3, max=50
      Connection: Keep-Alive
      Content-Type: text/css
      Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()). UPDATE: the request headers:

      GET /d/jquery-ui.css HTTP/1.1
      Host: 127.0.0.1
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip, deflate
      DNT: 1
      Connection: keep-alive
      Pragma: no-cache
      Cache-Control: no-cache
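    Two things that may be worth checking (guesses, not a confirmed diagnosis): AddOutputFilterByType falls under the FileInfo override, so the directive is silently ignored if the directory is not covered by AllowOverride FileInfo (or All); and a response cached before compression was enabled can keep being served uncompressed. Testing from the command line takes Firebug and the browser cache out of the picture:

      # Ask for the file with gzip accepted and look for a Content-Encoding header in the reply
      curl -s -I -H "Accept-Encoding: gzip,deflate" http://127.0.0.1/d/jquery-ui.css | grep -i "content-encoding"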

    Read the article

  • Microsoft ISA 2004 - Failed Connection Attempt

    - by Arief
    I have an issue where users with Android tablets cannot download apps through ISA 2004. This is what I get from the ISA 2004 logging: I did try adding the source IP address and the destination IP address to the "All for All Modified" rule, but that does not fix the problem. I also use GFI Web Monitoring and added the IP address 151.101.13.80 to its whitelist, with no luck. What exactly is a "Failed Connection Attempt", and how can I get past it? The Android tablets throw error 495 ("could not be downloaded"). Thanks everyone.

    Read the article

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from the EPEL repository together with StrongSwan 5.0.4. I set up a simple IPSec connection:

      conn l2tp
          type=transport
          keyexchange=ikev1
          rekey=no
          authby=psk
          leftsubnet=0.0.0.0/0
          rightsubnet=0.0.0.0/0
          compress=yes
          auto=add

    And here is xl2tpd.conf:

      [global]
      ipsec saref = yes

      [lns default]
      ip range = 192.168.0.2-192.168.0.250
      local ip = 192.168.0.1
      ppp debug = yes
      pppoptfile = /etc/ppp/options.xl2tpd
      length bit = yes

    Here is options.xl2tpd:

      ms-dns 8.8.4.4
      auth
      lock
      debug
      proxyarp

    There is only one client - Android 4.2. Android connects successfully:

      Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default'
      Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322
      Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0
      Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0
      Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0
      Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized
      Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled
      Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP
      Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1
      Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2
      Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
      Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0
      Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
      Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated

    In the meanwhile, the Internet works perfectly on the Android client; the VPN connection is stable and fast. However, it always happens that within 2-5 minutes after the connection is established:

      Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing.
      Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout)
      Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated
      Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted

    Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, MacOS 10.8, and Windows 7; there is no disconnection issue on those OSes. Thank you!

    Read the article

  • Ubuntu not installed - "Please remove installation media and close tray and press enter"

    - by Ram
    I downloaded Ubuntu 12.04.1 LTS 32-bit, burned it to DVD and tried to install it on my PC. My PC runs Windows 7 Ultimate 32-bit, installed on the C: drive, and I want to install Ubuntu alongside Windows 7. When I boot from the disc, the Ubuntu installer opens and offers "Try Ubuntu" and "Install Ubuntu". I choose "Install Ubuntu", then the option to install Ubuntu alongside Windows (the first option), and continue the installation. It shows a blank screen with some lines and says "Please remove installation media and close tray and press enter". Then the PC restarts and runs Windows 7 as before, but Ubuntu is not installed. How can I solve this problem and install Ubuntu on my PC properly? Note: I am an Android developer, so I need Ubuntu for Android development.

    Read the article

  • Dropbox alternative with local sync support?

    - by srid
    I am currently using Dropbox. I've just decided to sync my large (about 5 GB) iTunes library (music collection), and for that I would have to subscribe to a paid account. Before I do, I'd like to evaluate the alternatives. Is there an alternative that supports local LAN sync, e.g. syncing my music collection across computers on the local network without uploading/downloading it to the internet? The following would be nice, but are not required:
    - Native Android client, so the music is available in the Android music app / on the SDHC card
    - Selective sync: sync particular folders / exclude certain folders on certain computers, e.g. excluding the porn folder on work computers ;-)
    Just like Dropbox, it MUST work on 64-bit Windows, Linux and Mac. Know of any? (I am currently evaluating SpiderOak. Boy, is it complicated to use.)

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response header for a set of URLs, I send the following request headers with curl:

      foreach ( $urls as $url ) {
          // Setup headers - I used the same headers from Firefox version 2.0.0.6
          $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
          $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
          $header[] = "Cache-Control: max-age=0";
          $header[] = "Connection: keep-alive";
          $header[] = "Keep-Alive: 300";
          $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
          $header[] = "Accept-Language: en-us,en;q=0.5";
          $header[] = "Pragma: "; // browsers keep this blank.

          curl_setopt( $ch, CURLOPT_URL, $url );
          curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
          curl_setopt( $ch, CURLOPT_HTTPHEADER, $header);
          curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com');
          curl_setopt( $ch, CURLOPT_HEADER, true );
          curl_setopt( $ch, CURLOPT_NOBODY, true );
          curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
          curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
          curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
          curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
      }

    Sometimes I receive 200 OK, which is good; other times 301, 302 or 307, which I consider good as well; but other times I receive odd statuses such as 406, 500 or 504, which should indicate an invalid URL, yet when I open those URLs in the browser they are fine. For example the script returns:

      http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable

    while wget returns:

      wget http://www.awe.co.uk/
      --2011-06-23 15:26:26--  http://www.awe.co.uk/
      Resolving www.awe.co.uk... 77.73.123.140
      Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
      HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or sending in excess?
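    One thing that stands out (a guess, not a verified cause for each failing site): the Accept header is split across two array entries, so the part listing text/html never reaches the server as part of the Accept header, and a server doing strict content negotiation can answer exactly with 406 Not Acceptable; spoofing the Googlebot user agent can also trigger bot-blocking rules that wget does not hit. A minimal sketch of those two lines changed, meant to sit inside the same foreach loop:

      // Sketch: one complete Accept header entry and a browser-like user agent
      $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
      curl_setopt( $ch, CURLOPT_USERAGENT,
          'Mozilla/5.0 (Windows NT 6.1; rv:15.0) Gecko/20100101 Firefox/15.0' );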

    Read the article

  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command:

      # firewall-cmd --zone=public --add-port=3000/tcp --permanent

    However I can't access the server on port 3000. From an external box:

      telnet 178.62.16.244 3000
      Trying 178.62.16.244...
      telnet: connect to address 178.62.16.244: Connection refused

    There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too:

      Proto Recv-Q Send-Q Local Address   Foreign Address   State    User   Inode   PID/Program name
      tcp   0      0      0.0.0.0:3000    0.0.0.0:*         LISTEN   99     36797   18662/node

    firewall-cmd doesn't seem to show the port either - see how ports is empty. You can see the forward rule I mentioned earlier.

      # firewall-cmd --list-all
      public (default, active)
        interfaces: eth0
        sources:
        services: dhcpv6-client ssh
        ports:
        masquerade: no
        forward-ports: port=80:proto=tcp:toport=8000:toaddr=
        icmp-blocks:
        rich rules:

    However I can see the rule in the XML config file:

      # cat /etc/firewalld/zones/public.xml
      <?xml version="1.0" encoding="utf-8"?>
      <zone>
        <short>Public</short>
        <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
        <service name="dhcpv6-client"/>
        <service name="ssh"/>
        <port protocol="tcp" port="3000"/>
        <forward-port to-port="8000" protocol="tcp" port="80"/>
      </zone>

    What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
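    A likely explanation (worth verifying on the box): --permanent only writes the zone's XML file; it does not change the running firewall until the configuration is reloaded, which would explain why the port shows up in public.xml but not in --list-all. A sketch of the usual sequence:

      # write the permanent rule, then load it into the running firewall
      firewall-cmd --zone=public --add-port=3000/tcp --permanent
      firewall-cmd --reload
      # (or add the same rule without --permanent to change the runtime state immediately)
      firewall-cmd --zone=public --list-ports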

    Read the article

  • Yum Error Installing Git from kernel.org Repo

    - by Lance
    I want to install the latest version of Git using yum and the RPM repository on kernel.org, but adding the repo to yum.repos.d causes yum to fail with checksum errors. The prevailing solution to this issue seems to be to simply use the repository at Webtatic, as answered here on Super User. I know I can also install an older version of Git using the EPEL repo, or compile from the latest source tarball, but honestly I want to understand why I'm having issues using the kernel.org repo. Here's the workflow, after a clean install of CentOS 5.5 and "yum update":

      [root]# wget -P /etc/yum.repos.d/ http://kernel.org/pub/software/scm/git/RPMS/git.repo
      [root]# yum clean all
      [root]# yum repolist
      Loaded plugins: fastestmirror
      Determining fastest mirrors
       * addons: mirrors.netdna.com
       * base: mirror.clarkson.edu
       * epel: serverbeach1.fedoraproject.org
       * extras: centos.mirror.nac.net
       * updates: mirror.cogentco.com
      addons              |  951 B  00:00
      addons/primary      |  202 B  00:00
      base                | 2.1 kB  00:00
      base/primary_db     | 1.6 MB  00:01
      epel                | 3.7 kB  00:00
      epel/primary_db     | 2.8 MB  00:01
      extras              | 2.1 kB  00:00
      extras/primary_db   | 188 kB  00:00
      git                 | 1.2 kB  00:00
      git/primary         | 155 kB  00:00
      http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
      Trying other mirror.
      git/primary         | 155 kB  00:00
      http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
      Trying other mirror.
      Error: failure: repodata/primary.xml.gz from git: [Errno 256] No more mirrors to try.

    Any suggestions as to a solution, or details on why the kernel.org repo has this issue? (Sorry I can't include more links to my references, but I don't have the reputation for that yet.)
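    One frequently cited cause for "[Errno -3] Error performing checksum" on CentOS 5 (offered as a guess, not a confirmed diagnosis for this repo) is repository metadata published with SHA-256 checksums, which the stock Python 2.4 on CentOS 5 cannot verify. Installing the python-hashlib backport from EPEL and retrying is a low-risk thing to check:

      # assumes the EPEL repo is already configured, as the repolist above suggests
      yum install python-hashlib
      yum clean all
      yum repolist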

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have two versions of a system: (1) a Tomcat web server, and (2) an nginx reverse proxy sitting in front of a Tomcat web server. In version 2, nginx only ever talks to Tomcat over HTTP. A user can configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the Tomcat XML configuration files take care of it; in version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because the upgrade is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
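    One pattern that may fit (sketched under the assumption that the upgrade process controls the nginx config and at least one file Tomcat reads, such as a context fragment): have nginx tell Tomcat which scheme the client used, and let Tomcat's RemoteIpValve mark those requests as secure, so an HTTPS-only constraint can still be satisfied even though the hop from nginx to Tomcat is plain HTTP.

      # nginx side (inside the proxy location): pass the original scheme along
      proxy_set_header X-Forwarded-Proto $scheme;

      <!-- Tomcat side (server.xml or a context file the upgrade can write): -->
      <Valve className="org.apache.catalina.valves.RemoteIpValve"
             remoteIpHeader="X-Forwarded-For"
             protocolHeader="X-Forwarded-Proto" />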

    Read the article

  • nginx: how do I add new site/server_name in nginx?

    - by Neo
    I'm just starting to explore Nginx on my Ubuntu 10.04. I installed Nginx and I'm able to get the "Welcome to Nginx" page on localhost. However I'm not able to add a new server_name, even when I make the changes in the sites-available/default file. I tried reloading/restarting Nginx, but nothing works. One interesting observation: "http://mycomputername" works in the browser, so somewhere a directive like 'server_name $hostname' seems to be overriding my rule.

    File: sites-available/mine.enpass

      server {
          listen 80;
          server_name mine.enpass;
          access_log /var/log/nginx/localhost.access.log;

          location / {
              root /var/www/nginx-default;
              index index.html index.htm;
          }
      }

    File: nginx.conf

      user www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
          # multi_accept on;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          access_log /var/log/nginx/access.log;
          sendfile on;
          #tcp_nopush on;
          #keepalive_timeout 0;
          keepalive_timeout 65;
          tcp_nodelay on;
          gzip on;
          gzip_comp_level 2;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }
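    Two things to check (reasonable guesses given the config shown, not certainties): nginx only reads /etc/nginx/sites-enabled/*, so the new vhost in sites-available needs a symlink there and a reload; and the name mine.enpass has to actually resolve to this machine, which is why http://mycomputername works - that hostname resolves and falls through to the default server. A sketch, with paths assumed to follow the standard Ubuntu layout:

      sudo ln -s /etc/nginx/sites-available/mine.enpass /etc/nginx/sites-enabled/mine.enpass
      echo "127.0.0.1 mine.enpass" | sudo tee -a /etc/hosts   # on the machine you browse from
      sudo nginx -t && sudo /etc/init.d/nginx reload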

    Read the article

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users who access the web app get the updated versions of those files. If I change an HTML file it works perfectly, but when I change CSS and JS files the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

      gzip on;
      gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

      location ~ \.(css|js|htc)$ {
          expires 31536000s;
          add_header Pragma "public";
          add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
      }

      location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
          expires 3600s;
          add_header Pragma "public";
          add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
      }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure that? I've tried the following and it isn't working:

      location /dir1/dir2/ {
          root /var/www/dir1;
          add_header Pragma "no-cache";
          add_header Cache-Control "private";
          expires off;
      }

    Thanks
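    A plausible reason the exception is ignored (based on how nginx selects locations, not on testing this exact config): regex locations such as "location ~ \.(css|js|htc)$" win over an ordinary prefix match, so CSS/JS files under /dir1/dir2/ still get the one-year expiry. Marking the prefix location with "^~" stops the regex locations from being considered for those URIs. A sketch:

      # "^~": if this prefix matches, do not evaluate the regex locations at all
      location ^~ /dir1/dir2/ {
          root /var/www/dir1;
          add_header Pragma "no-cache";
          add_header Cache-Control "private";
          expires off;
      }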

    Read the article

  • Bluetooth not detecting any devices in Windows 7

    - by underDog
    My Lenovo ThinkPad E320 laptop running Windows 7 64-bit has recently been refusing to detect any Bluetooth devices. I have tried to connect, using 'Add a device' under 'Devices and Printers', to two different Bluetooth mice and my HTC Wildfire Android (2.2.1) phone, and none of them are detected in the 'Add a device' dialog.

    History: Bluetooth initially seemed OK when I first got this laptop; I was able to connect to and use my Android phone as a remote with no issues. When I got my first Bluetooth mouse, it paired, but after each restart, or even after sleeping, it would not re-connect (even though it was listed under Bluetooth devices and supposedly 'working'), and I would need to remove the device and add it again. A week or two ago it stopped working altogether: it is not detected at all. I gave up on that mouse and bought another (Lenovo ThinkPad brand), only to find it was not detected either. I subsequently tested my Android phone and discovered it would not be detected either. One thing of note: under 'Devices and Printers' there is now an 'HID Keyboard Device' which, under properties, is listed as a 'Bluetooth HID Device'. It was not there before this problem started, and each time I remove it, or uninstall it from Device Manager, it quickly re-installs itself, even with all my Bluetooth devices switched off.

    My research on this issue (Google and searching this site) has not yielded any definitive answers. I have turned off the Device Manager > Bluetooth > Properties > Power Management setting 'Allow the computer to turn off this device to save power'. I have attempted to uninstall and re-install the Bluetooth hardware, including the 'remove drivers' option, and downloading and running the Lenovo Bluetooth installer package (found at http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014997). Bluetooth is turned on, and all items under Bluetooth properties (Discovery and Connections) are checked. I have tried changing the batteries. I'm not sure what else I can try, apart from perhaps doing a fresh install of Windows. Any suggestions?

    Read the article

  • Rewrite a url on Nginx

    - by Ido B
    I tried to use this:

      location / {
          root /path.to.app/;
          index index.php index.html;
          rewrite ^/(.*)$ /check_register.php?key=$1 break;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
          include fastcgi_params;
      }

    and it didn't work. This is my full config:

      user www-data www-data;
      worker_processes 4;

      events {
          worker_connections 3072;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          access_log off;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay off;
          keepalive_timeout 15;
          gzip on;
          gzip_comp_level 3;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          server {
              listen 80;
              server_name localhost;

              location / {
                  root html;
                  index index.html index.htm;
              }

              location / {
                  root /path.to.app/;
                  index index.php index.html;
                  rewrite ^/(.*)$ /check_register.php?key=$1 break;
                  fastcgi_pass 127.0.0.1:9000;
                  fastcgi_index index.php;
                  fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
                  include fastcgi_params;
              }

              error_page 500 502 503 504 /50x.html;
              location = /50x.html {
                  root html;
              }
          }

          include /usr/local/nginx/sites-enabled/*;
      }

    How can I make it work?
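    One concrete problem visible in the config (independent of the rewrite itself): the server block declares "location /" twice, and nginx refuses to start or reload when the same location is duplicated within a server, so the second block never takes effect. A sketch of the two merged into a single location, keeping the paths from the question:

      # single "location /" combining the rewrite with the FastCGI hand-off
      location / {
          root /path.to.app/;
          index index.php index.html;
          rewrite ^/(.*)$ /check_register.php?key=$1 break;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /path.to.app/$fastcgi_script_name;
          include fastcgi_params;
      }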

    Read the article

  • EPP Protocol create multiple domains in one command

    - by yannis hristofakis
    I've seen that the <domain:check> command can check multiple domains in one command. Is it possible to do the same for <domain:create>?

      <?xml version="1.0" encoding="UTF-8" standalone="no"?>
      <epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
        <command>
          <create>
            <domain:create xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
              <domain:name>example.com</domain:name>
              <domain:period unit="y">2</domain:period>
              <domain:ns>
                <domain:hostObj>ns1.example.com</domain:hostObj>
                <domain:hostObj>ns1.example.net</domain:hostObj>
              </domain:ns>
              <domain:registrant>jd1234</domain:registrant>
              <domain:contact type="admin">sh8013</domain:contact>
              <domain:contact type="tech">sh8013</domain:contact>
              <domain:authInfo>
                <domain:pw>2fooBAR</domain:pw>
              </domain:authInfo>
            </domain:create>
          </create>
          <clTRID>ABC-12345</clTRID>
        </command>
      </epp>

    Read the article

  • Wireshark Not Displaying Packets From Other Network Devices, Even in Promisc Mode

    - by eb80
    System setup:
    1. MacBook running Mountain Lion.
    2. Wireshark installed and capturing packets (I have "capture all in promiscuous mode" checked).
    3. I filter out all packets with my own source and destination IP using the following filter: "ip.dst != 192.168.1.104 && ip.src != 192.168.1.104".
    4. On the same network as the MacBook, I use an Android device (connecting via WiFi) to make HTTP requests.

    Expected results:
    1. Wireshark running on the MacBook sees the HTTP request from the Android device.

    Actual results:
    1. I only see SSDP broadcasts from 192.168.1.1.

    Question: What do I need to do so that Wireshark, like Firesheep, can see and use the packets (particularly HTTP) from other network devices on the same network?
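    A likely missing piece (stated as an assumption about this setup, not a certainty): on Wi-Fi, promiscuous mode alone is not enough to see frames addressed to other stations; the adapter has to be put into monitor mode, and on a WPA/WPA2 network Wireshark additionally needs the pre-shared key plus each client's captured 4-way handshake before it can decrypt their HTTP traffic. On OS X, monitor-mode capture can be done either from Wireshark's capture options ("monitor mode" checkbox) or with the bundled airport utility; a sketch, with the interface and channel as assumptions:

      # capture on channel 6 in monitor mode; writes a pcap that Wireshark can open afterwards
      sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport en0 sniff 6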

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185976 of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

      worker_processes 1;

      # pid of nginx master process
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;

          passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
          passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

          include mime.types;
          default_type application/octet-stream;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;

          # gzip settings
          gzip on;
          gzip_http_version 1.0;
          gzip_comp_level 2;
          gzip_vary on;
          gzip_proxied any;
          gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          # load extra modules from the vhosts directory
          include /opt/nginx/vhosts/*.conf;
      }

    Any advice would be appreciated! :)
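    The usual rule of thumb (a starting point, not a tuned answer for this app): set worker_processes to roughly the number of CPU cores the Linode exposes, keep worker_connections at or below the per-process file-descriptor limit, and treat worker_processes * worker_connections as the theoretical ceiling on concurrent connections - far more than < 1000 requests per day needs, so the defaults above are unlikely to be the bottleneck. Quick checks:

      grep -c ^processor /proc/cpuinfo   # cores available -> candidate worker_processes value
      ulimit -n                          # fd limit -> upper bound for worker_connections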

    Read the article

  • How can I unify my email, calendar and tasks (2 exchange accounts + 1 gmail)

    - by Assaf Stone
    This is my situation: I work as a consultant, and thus work out of multiple computers:
    - my work laptop
    - a desktop at my primary client
    - my desktop at home
    - an Android smartphone
    - an Android tablet

    Likewise, I have multiple accounts:
    - a Microsoft Exchange (2010, AFAIK) account
    - a Microsoft Exchange (2007, AFAIK) account
    - a Gmail account

    The most important thing I need is the ability to have events in one calendar affect the free/busy status of all other accounts (so that if I am busy on Monday 9am with an event from my employer's account, that time will show as busy in my client's account and in the Gmail account). The second thing I need is a unified view of all of my accounts' info: appointments, email, tasks, and contacts (in that order of importance). I've already tried Outlook synchronization tools such as gSyncit to sync both Exchange accounts with Gmail, but this creates a mess when updating appointments (deleted appointments sometimes return, timestamps revert). Is there perhaps some way to at least synchronize the free/busy state so that all of my calendar apps / accounts will look there to see if I can be invited? Just solving that would be well worth my while. Thanks, Assaf

    Read the article

  • Can I create an Infrastructure access point from built-in WiFi (as opposed to Ad-Hoc) on Windows XP?

    - by evilspoons
    I want to use my Windows XP laptop as an access point. What I am trying to achieve is possible under Windows 7 with a myriad of utilities, but the wireless driver stack was different before Windows 7 and those specific APIs don't exist on XP. The reason behind me wanting to do this is that I would like my Android phone to be able to connect via WiFi to a network that is only hard-wired (reverse tethering). Unfortunately, my Android device (Galaxy S Captivate) does not support ad-hoc networks without a serious amount of screwing around. Is it possible to create an "Infrastructure" network with my Dell Latitude D830's built-in WiFi - a "Dell Wireless 1395 WLAN Card", which I am assuming is probably rebadged Broadcom, or is there some fundamental difference between a wireless adapter and an access point that would prevent this?

    Read the article

  • How to Customize the File Open/Save Dialog Box in Windows

    - by Lori Kaufman
    Generally, there are two kinds of Open/Save dialog boxes in Windows. One kind looks like Windows Explorer, with the tree on the left containing Favorites, Libraries, Computer, etc. The other kind contains a vertical toolbar, called the Places Bar. The Windows Explorer-style Open/Save dialog box can be customized by adding your own folders to the Favorites list. You can then click the arrows to the left of the main items (except Favorites) to collapse them, leaving only the list of default and custom Favorites. The Places Bar is located along the left side of the File Open/Save dialog box and contains buttons providing access to frequently used folders. The default buttons on the Places Bar are links to Recent Places, Desktop, Libraries, Computer, and Network; however, you can change these to links to custom folders of your choice. We will show you how to customize the Places Bar using the registry, and using a free tool in case you are not comfortable making changes in the registry.
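    For reference, a sketch of the registry route (the key below is the commonly cited location for Places Bar policy entries; the two folder paths are examples, not defaults - verify on a test machine before importing):

      Windows Registry Editor Version 5.00

      ; Up to five entries, Place0 through Place4, each a folder path or a special-folder ID
      [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar]
      "Place0"="C:\\Users\\Public\\Documents"
      "Place1"="D:\\Projects"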

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized, robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • MonoDroid Article in Visual Studio Magazine

    - by Wallym
    The February edition of Visual Studio Magazine is now online. In it is my article regarding MonoDroid, the implementation of C# and .NET for Android devices. I can't thank Michael Desmond enough for the opportunity. It's fitting, now that Android is the most popular smartphone platform. The article is available online at: Intro to MonoDroid Part 1. Intro to MonoDroid Part 2. Along with the article, check out this short video that I did regarding MonoDroid on the Mac. The articles were written based on MonoDroid Preview 9.1, so a few updates are necessary, but I think they get the basics out. I hope you enjoy the articles. And yes, we're still working on our book on MonoDroid. I've got a great author group and am excited about the book. If you get a chance, come to AnDevCon in San Francisco in March. I'll be presenting on MonoDroid there.

    Read the article

  • Reclaim Vertical UI Space by Adding a Toolbar to the Left or Right Side of Firefox

    - by Asian Angel
    Do you need to make the most efficient use possible of vertical UI space on your screen, but have horizontal space to spare? Now you can shift the toolbar icons and their functionality to a slim sidebar in Firefox using the Vertical Toolbar extension. As you can see above, the sidebar even picked up on our Personas theme to help it blend in nicely with the rest of the browser. You can access the options for the new toolbar by right-clicking within the toolbar area. In the options you can choose the side of Firefox that works best for toolbar placement; adjust display, hiding, and animation settings; define how the buttons display; and add or remove additional buttons as desired. Once you open the Customize Toolbar window, make any desired additions or removals just as you would for the top UI section, and close when finished. Note: Works with Firefox 4.0b7pre – 4.0.* Vertical Toolbar [Mozilla Add-ons]

    Read the article
