Search Results

Search found 57458 results on 2299 pages for 'http response codes'.


  • My Acer Aspire laptop suddenly stopped working, just a blank screen

    - by mazda
    My Acer Aspire laptop suddenly stopped working while it was connected to my Nokia mobile modem through USB. It stopped with the power switch staying on and no response to clicking it off; no irregular sound, nothing but a blank screen. I disconnected the power and battery quickly, but the power switch did not respond at all and even the power indicator didn't turn on again. The display was also gone after disconnecting the power and removing the battery. It shows no response or any sign of power even though both the battery and power supply are working. What do you think? A completely dead mainboard? Does the MB have any safety board or fuse that could be replaced? Any suggestion will be appreciated, thank you.

    Read the article

  • Apache gives empty reply

    - by Jorge Bernal
    It happens randomly, and only on Moodle installations. Apache doesn't add a line to the logs when this happens, and I don't know where to look.

        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        curl: (52) Empty reply from server
        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        HTTP/1.1 200 OK

    The Apache conf is quite straightforward and works perfectly in the other vhosts:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /srv/apache/training.ebox-technologies.com/htdocs
            ServerName training.eboxhq.com
            ErrorLog /var/log/apache2/training.ebox-technologies.com-error.log
            CustomLog /var/log/apache2/training.ebox-technologies.com-access.log combined
            <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 week"
                Header add Cache-Control public
            </FilesMatch>
        </VirtualHost>

    I'm using Apache 2.2.9, PHP 5.2.6 and Moodle 1.9.5+ (Build: 20090722). Any ideas welcome :)
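    One way to narrow this down is to hammer the URL and log only the failures, then correlate the timestamps with Apache's error log at a raised LogLevel. A minimal diagnostic sketch (the URL is the one from the question; the polling interval is an assumption):

        #!/bin/sh
        # Poll the failing URL and record timestamps of empty replies (curl exit 52)
        URL="http://training.ebox-technologies.com/login/signup.php?course=WNA001"
        while true; do
            curl -s -I "$URL" > /dev/null
            rc=$?
            if [ $rc -ne 0 ]; then
                echo "$(date '+%F %T') curl exit $rc" >> empty-replies.log
            fi
            sleep 5
        done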

    Read the article

  • Autoscale Rackspace Cloud, Scalr or DIY?

    - by Andre Jay Marcelo-Tanner
    I'm looking into creating a setup on Rackspace Cloud that will allow me to autoscale my webservers (no db) on demand. Preferably using something like response time. I've read into configuration tools like Puppet/Chef, but I'm thinking I can just launch from prepared server images that are ready to go. Is there any tool out there already that can monitor my existing node response times and then launch or scale up new ones based upon certain variables like average X load over Y time? I see there are commercial offerings like Scalr, Rightscale, but how would I do this myself?
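    As a rough illustration of the DIY route, a cron-driven script can sample response times with curl and call a launch script when the average crosses a threshold. A minimal sketch, where the health URL, the 2-second threshold and launch-node.sh (a hypothetical wrapper that boots one of the prepared server images via the Rackspace API) are all assumptions:

        #!/bin/sh
        # average response time over 5 samples against the app's health URL
        URL="http://www.example.com/health"
        total=0
        for i in 1 2 3 4 5; do
            t=$(curl -s -o /dev/null -w '%{time_total}' "$URL")
            total=$(echo "$total + $t" | bc)
        done
        avg=$(echo "scale=3; $total / 5" | bc)
        # scale up when the average exceeds 2 seconds
        if [ "$(echo "$avg > 2.0" | bc)" -eq 1 ]; then
            ./launch-node.sh
        fi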

    Read the article

  • Microsoft FTP fails to connect after the client requests the list of features (FEAT)

    - by Max
    This is a really weird problem. The first few times I try to connect, Filezilla just hangs on the line "211-Extended features supported:" for a while before coming up and saying "Error: Could not connect to server". The Filezilla log is below:

        Command:  PASS ***********
        Response: 230 User logged in.
        Command:  FEAT
        Response: 211-Extended features supported:
        Error:    Could not connect to server

    The weird thing is that if I keep trying to connect, eventually it just works and connects fine. After Filezilla knows which features the server supports, it stops asking for a while, which lets you connect first time, until Filezilla decides it wants to double-check the features list again. I'm at a loss on how to debug this. Has anyone experienced anything similar?
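    Since the hang happens mid-way through a multi-line 211 response, a packet capture of the control channel would show whether the server actually finishes sending the FEAT reply or the connection dies first; for example (the interface name is an assumption):

        tcpdump -i eth0 -s 0 -w ftp-feat.pcap port 21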

    Read the article

  • Boot XenServer through iPXE

    - by Ghassen Telmoudi
    I want to install XenServer 6.2 through iPXE. I tried different configurations, with no luck making it work so far. I found some examples of booting it from PXE using a TFTP server, like this one:

        default xenserver-auto
        label xenserver-auto
            kernel mboot.c32
            append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 console=com1,vga --- xenserver/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://[pxehost]/answerfile.xml remotelog=[SYSLOG] install --- xenserver/install.img

    The problem is that iPXE uses a different syntax, and I could not figure out how to convert this configuration to work on iPXE. Here is my iPXE file so far:

        #!ipxe
        echo "XEN Server is booting up"
        initrd http://server-ip/pxe/xen/boot/xen.gz
        kernel http://server-ip/pxe/xen/boot/pxelinux/mboot.c32
        boot

    Can anyone supply the correct configuration?
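    A hedged sketch: iPXE can load Multiboot images directly with kernel/module commands, which replaces mboot.c32 entirely (each module carries its own arguments, so the --- separators go away). This assumes an iPXE build with Multiboot support, and the vmlinuz/install.img paths are assumptions mirroring the TFTP layout:

        #!ipxe
        echo XenServer is booting up
        kernel http://server-ip/pxe/xen/boot/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M com1=115200,8n1 console=com1,vga
        module http://server-ip/pxe/xen/boot/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://[pxehost]/answerfile.xml remotelog=[SYSLOG] install
        module http://server-ip/pxe/xen/boot/install.img
        boot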

    Read the article

  • cisco 6500 crash enabling netflow

    - by bleomycin
    Hello everyone, I have a Cisco 6503 running IOS 12.2(33)SXI5 and I'm trying to enable NetFlow, following the instructions here: http://www.manageengine.com/products/netflow/help/cisco-netflow/cisco-ios-netflow.html. After enabling it for interface Vlan 3, shortly after "ip flow-export version 5" the console outputs:

        CPU_MONITOR-6-NOT_HEARD: CPU monitor messages have not been heard for 30 seconds

    It then writes a crash log and reloads the router. The crashlog is here: http://pastebin.com/Niv2H8xD. Has anyone else experienced anything like this before? Here is my running config prior to adding the options in the above link: http://pastebin.com/AgNb1ahG. Thank you for any help!
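    For reference, the steps in that guide boil down to something like the sketch below (the collector address and port are placeholders). If the crash still reproduces with this minimal config, that points at the flow-export feature in this IOS build rather than at a configuration mistake:

        ! minimal sketch of the NetFlow config being applied
        interface Vlan3
         ip flow ingress
        !
        ip flow-export destination 192.0.2.10 9996
        ip flow-export version 5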

    Read the article

  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the storage controller and the cluster controller on separate systems?

        http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 - the picture indicates that CC and SC are two different machines
        http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf - p. 10, 1st paragraph uses the word "machine(s)"
        http://software.intel.com/file/31966 - p. 8 indicates the same separate architecture

    BUT...

        https://help.ubuntu.com/community/UEC/PackageInstallSeparate - indicates that the SC and CC are to be on the same system.
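    For what it's worth, the two roles ship as separate packages, so a split install would look something like the sketch below; whether that topology is actually supported is exactly what the documents above disagree on:

        # on the machine intended as cluster controller
        sudo apt-get install eucalyptus-cc

        # on the machine intended as storage controller
        sudo apt-get install eucalyptus-sc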

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without groups, just using host declarations) for bootp stuff. My configuration file right now looks like:

        subnet 130.123.131.128 netmask 255.255.255.128 {
            allow unknown-clients;
        }

        host dev-mac-09 {
            option vendor-class-identifier "example-identifier";
            hardware ethernet 10:9a:dd:51:ff:83;
        }

    If I put vendor-class-identifier in the global scope, using tcpdump I can see that the client receives the vendor class option successfully. If I take it out and just keep it in the host scope (or group scope), the client never receives the option. Specifying "option dhcp-parameter-request-list 60" doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't a part of the group. As an aside, how do I get detailed logging? At least something to indicate what groups and things got used to generate the response to the client.
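    On the logging aside: dhcpd can be pointed at its own syslog facility, which at least records each DHCPINFORM and the ACK built for it. A minimal sketch, assuming rsyslog (the facility choice is arbitrary):

        # dhcpd.conf
        log-facility local7;

        # /etc/rsyslog.d/30-dhcpd.conf
        local7.*    /var/log/dhcpd.log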

    Read the article

  • "Couldn't resolve host" for any external content

    - by scatteredbomb
    On our site we run a few different scripts against various external sites (uploading to Amazon S3, data from Chartbeat, a script to count Twitter followers) and all of them just stop working from time to time. They work most days, but then some days (like today) they all just stop working. This simple PHP script to get the follower count:

        $url = "http://twitter.com/users/show/username";
        $response = file_get_contents($url);
        $t_profile = new SimpleXMLElement($response);
        $count = $t_profile->followers_count;

    just sits there for a couple of minutes, then finally spits out an error that says "Couldn't resolve host". Any script we use against an external site gives us this error. I'm not really sure where to check what's blocking these connections all of a sudden, and why it seems to work most times, then doesn't for a day or so, then works again. Any tips?

    Update: contents of resolv.conf:

        search 147.225.210.rdns.ubiquityservers.com
        nameserver 72.37.224.5
        nameserver 72.37.224.6
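    "Couldn't resolve host" points at DNS rather than at the scripts themselves, so the next time it breaks, querying each nameserver from resolv.conf directly should show whether the servers stop answering or the resolver configuration is at fault:

        dig @72.37.224.5 twitter.com +short
        dig @72.37.224.6 twitter.com +short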

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to Nginx, and now I have a confusion about nginx configuration. My web site contains folders in different locations:

        location / {
            root /Path1;
        }

        location ^~ /personal {
            alias /Path2;
        }

    When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1. Now I want to add a sub-directory in /personal with specific configuration, so I added:

        location /personal/download {
            autoindex on;
        }

    But I got a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx so that all access to http://mysite/personal/* will be directed to the same directory in /Path2?
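    The new location has no root or alias of its own, so it falls back to the /Path1 root. One sketch that avoids this is giving the sub-directory its own alias (untested):

        location /personal/download {
            alias /Path2/download;
            autoindex on;
        }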

    Read the article

  • What firewall Linux distro appliance could track internet usage per device in my home?

    - by GregH
    Hello. Does anyone know of a community-edition/open-source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., but which has the power to track internet usage per device in my home? So:

        a) Mandatory - track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
        b) Mandatory - a report/graph that would give a breakdown of internet usage, per device (e.g. IP address), per day
        c) Desirable - as in b) above, but per hour
        d) Desirable - a realtime graph (e.g. 5-minute measurement intervals or something) that shows current internet usage per device
        e) Mandatory - handles all internal<->internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
        f) Mandatory - no explicit settings required in clients, i.e. the transparent monitoring concept (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
        g) Mandatory - easy "appliance"-like installation onto a dedicated low-spec PC

    Thanks in advance.

    Read the article

  • Redirecting to a URL that has a question mark in it?

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to Wordpress. They use a service for link exchanges that has a Wordpress plugin. The issue is that the new links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this:

        domain.com/link/category-name.html

    The plugin makes them look like this in WP:

        domain.com/links/?page=category-name.html

    How in the world can I get the redirect to work properly? Here's what I have tried:

        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html

    But none of those have worked. Any help is greatly appreciated!
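    mod_alias gives little control over the query string, but in mod_rewrite a ? in the substitution explicitly starts the new query string, which is the usual workaround. A sketch covering every old category page in one rule (assuming the rules live in the site root's .htaccess with mod_rewrite enabled):

        RewriteEngine On
        RewriteRule ^link/(.+)\.html$ http://www.artisticimages.biz/links/?page=$1.html [R=301,L]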

    Read the article

  • Program for drawing with pen tablet, like Salman Khan's one

    - by Halst
    I do a lot of sketching with my pen tablet. I use Microsoft Paint in Windows 7, and it is just perfect except for its bad anti-aliasing. I found some videos by Salman Khan where the sketching is really smooth and anti-aliased. Do you know what program he might use? You can see a bit of its interface here: http://www.khanacademy.org/press/chronicle.HTML and some more: http://www.khanacademy.org/ http://khanexercises.appspot.com/video?v=GW8ZPjGlk24 Otherwise, you can recommend me something else: I hope to find something like Microsoft Paint in Windows 7, but anti-aliased.

    Read the article

  • php file downloads instead of being processed with ajax on apache

    - by eagleon
    I have a small website where some content is displayed within an HTML tag using AJAX. The content is simply taken from another page on the same web site. However, sometimes instead of loading the parsed PHP file, the browser displays a download box instead. I downloaded the file, and this is what it looks like: a text file mixed with binary or gzipped data. I can't paste the binary stuff here, but here are some of the headers:

        Jul 2012 18:52:16 GMT
        Server: Apache/2
        X-Powered-By: PHP/5.3.10
        Content-Encoding: gzip
        Vary: Accept-Encoding,User-Agent
        Keep-Alive: timeout=1, max=95
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=93
        ETag: "2fc857-409-4c39691c59b40"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=92
        ETag: "2fc854-3e5-4c39691b65900"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=91
        ETag: "2fc847-3e3-4c3969197d480"

    and large blocks of stuff like this: µàl]&BaËÜk#ìÏ
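    The Content-Encoding: gzip header next to raw gzip bytes in the saved file suggests the payload may be compressed more than once (for example PHP's zlib.output_compression or an ob_gzhandler on top of Apache's mod_deflate), leaving the browser with data it can't decode. A cheap test, assuming mod_php, is to switch off the PHP-level layer (a guess, not a confirmed fix):

        # .htaccess
        php_flag zlib.output_compression Off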

    Read the article

  • Vanilla TeX Live 2009 on Ubuntu

    - by reprogrammer
    I installed TeX Live 2009 by following the instructions at http://www.tug.org/texlive/quickinstall. Then, to make my local TeX Live installation work with the Ubuntu package management system, I followed the instructions on http://www.tug.org/texlive/debian.html. That is, I performed the following steps:

        $ sudo aptitude install equivs
        $ mkdir /tmp/tl-equivs && cd /tmp/tl-equivs
        $ equivs-control texlive-local
        # I replaced the contents of texlive-local by http://www.tug.org/texlive/debian-control-ex.txt
        $ equivs-build texlive-local
        $ sudo dpkg -i texlive-local_2009-1~1_all.deb

    However, when I go about installing kile through the Ubuntu package management system, it requires me to install a lot of dependencies that are already provided by my texlive-local package. Does anyone have a suggestion to fix this problem?

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master however does not seem to be following this, as I get these errors on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then let the file server serve everything):

        [files]
            path /apps/puppet/files
            allow *

        [private]
            path /apps/puppet/private/%H
            allow *

        [modules]
            allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Read the article

  • Using .htaccess to protect direct access of files

    - by claydough
    We need to prevent direct access to files on our site from someone just entering a URL in their browser. I got this to work by using an .htaccess file, and it is fine in IE & Safari, but for some reason Firefox doesn't cooperate. I think it has something to do with the way Firefox reports referrers. Here is my code in the .htaccess file:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_REFERER} !^http://(my\.)?bigtimbermedia\.com/.*$ [NC]
        RewriteRule \.(swf|gif|png|jpg|doc|xls|pdf|html|htm|xlsx|docx)$ http://my.bigtimbermedia.com/ [R,L]

    If you want to see an example of this, try accessing this first: http://my.bigtimbermedia.com/books/bpGreyWolvesflip/index.html It blocks it properly in all browsers. Now if you go to this URL and click on the link, it works in IE and Safari, but Firefox chokes and seems like it is in a loop. Any ideas how I can get this to work in Firefox? Thanks!
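    One thing the rule set doesn't allow for is an empty referrer: Firefox privacy settings and various extensions can blank the Referer header, which would push every request, including legitimate clicks, into the redirect and produce the loop described. A sketch that lets blank referrers through (untested):

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://(my\.)?bigtimbermedia\.com/.*$ [NC]
        RewriteRule \.(swf|gif|png|jpg|doc|xls|pdf|html|htm|xlsx|docx)$ http://my.bigtimbermedia.com/ [R,L]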

    Read the article

  • Setup asp.net mvc application as subdomain website

    - by a_m0d
    I'm trying to setup a local application on a subdomain on our company server. There is already an installation of sharepoint running on http://companyweb/, but I would like my application to run on http://orders.companyweb/. I tried creating a new website, leaving the IP address the same as it is for http://companyweb, and just changing the host header value to orders.companyweb. However, no matter where I try to access the site from (different computers around the network, including the server itself), I keep getting 404 errors. I then tried setting up a simple index.html and serving that up as the highest priority; however, I still got 404 errors. This makes me think that I have actually setup the site itself wrong. What should I change to be able to access this application correctly on all the local computers?
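    Two things have to line up for a host-header site: the name orders.companyweb must resolve to the server's IP, and the site binding must match it exactly. A quick way to rule DNS in or out on one test machine is a hosts-file entry (the IP here is a placeholder for the real server address):

        # C:\Windows\System32\drivers\etc\hosts on a client machine
        192.168.1.10    orders.companyweb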

    Read the article

  • route port 3000 to apache2 alias

    - by user223470
    I have a Meteor application running on port 3000. I can successfully connect to it at www.myurl.com:3000, but I would rather connect to it via www.myurl.com/myapp. I started with the instructions on this web site: http://www.andrehonsberg.com/article/deploy-meteorjs-vhosts-ubuntu1204-mongodb-apache-proxy and I have the following Apache configuration file:

        <VirtualHost *:80>
            ServerName myurl.com
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                ProxyPass http://localhost:3000/
                ProxyPassReverse http://localhost:3000/
            </Location>
        </VirtualHost>

    I do not know how to continue from here to get the application on www.myurl.com/myapp. In other situations I would use an Alias within the Apache configuration file, but that doesn't seem like the right direction to go in this case. How do I configure Apache to map port 3000 to www.myurl.com/myapp?
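    Scoping the proxy to the sub-path is mostly a matter of moving the Location block; a sketch (note that Meteor also has to generate /myapp-prefixed asset URLs, e.g. via its ROOT_URL environment variable, or the page will still request files from /):

        <Location /myapp>
            ProxyPass http://localhost:3000/
            ProxyPassReverse http://localhost:3000/
        </Location>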

    Read the article

  • [Apache] mod_rewrite www.site.com/dir/ --> www.site.com/dir/2009/

    - by Casey
    I'm having trouble with this rewrite. I've never really used mod_rewrite before and don't have much experience with regex. Any help is appreciated!

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on

            # prevent nested looping
            RewriteCond %{ENV:REDIRECT_STATUS} ^$

            # re-route incoming requests
            RewriteRule ^(.*)$ %{REQUEST_URI}2009/$1 [L,NE]
        </IfModule>

    This partially works: http://www.site.com/dir/ is routed to http://www.site.com/dir/2009/, but a request like http://www.site.com/dir/css/theme.css fails. I'm hoping to rewrite all requests to the parent directory into the 2009 subdirectory, but I keep encountering infinite loops and server error messages. I haven't found any useful examples out there. I figured this would be a common rewrite... Thanks in advance!
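    A common pattern for this, assuming the rules live in an .htaccess inside /dir, is to key the loop guard on the path itself instead of REDIRECT_STATUS, so sub-requests like the CSS file are handled too (a sketch, untested):

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on
            # skip anything already rewritten into the 2009 subdirectory
            RewriteCond %{REQUEST_URI} !/2009/
            RewriteRule ^(.*)$ 2009/$1 [L]
        </IfModule>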

    Read the article

  • Configuring a Jetty web application on a different port

    - by sHz
    Hi folks, I'm brand new to Jetty. I'd like to ask if it's possible to have Jetty listening on port 8080, but, where specified, serve a specific web application under, say, /var/jetty/webapps/<appname> (the default on CentOS) on, say, port 10000 instead of http://localhost:8080/<appname>, i.e. http://localhost:10000/ = http://localhost:8080/<appname>? If so, what configuration changes would be required to make this work without an additional proxy server? I've googled away, but haven't found a solution (perhaps I've missed something obvious?).
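    Named connectors are the usual Jetty mechanism for this kind of split: add a second connector on 10000 and bind the app's context to that connector at /. A sketch in Jetty 6-era XML (class and property names changed in later Jetty versions, and the connector name here is an assumption):

        <!-- jetty.xml: an extra connector for the app -->
        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">10000</Set>
              <Set name="name">appPort</Set>
            </New>
          </Arg>
        </Call>

        <!-- contexts/appname.xml: serve the webapp at / on that connector only -->
        <Configure class="org.mortbay.jetty.webapp.WebAppContext">
          <Set name="contextPath">/</Set>
          <Set name="war">/var/jetty/webapps/appname</Set>
          <Set name="connectorNames">
            <Array type="String">
              <Item>appPort</Item>
            </Array>
          </Set>
        </Configure>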

    Read the article

  • Existing connections on Apache and mod_proxy_balancer don't fail over to the second JBoss node

    - by Jean-Rémy Revy
    I have a JBoss farm, load balanced by Apache HTTP + mod_proxy_balancer and mod_proxy_ajp, with the following configuration:

        <VirtualHost *:80>
            ServerName web-gui-acceptance.myorg.com
            ServerAlias web-gui-acceptance
            ProxyRequests Off
            ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=On
            ProxyPassReverse /web-gui http://srvlnx01.myorg.com:8080/web-gui
            ProxyPassReverse /web-gui http://srvlnx02.myorg.com:8080/web-gui
            <Proxy *>
                AuthType Kerberos
                [...]
            </Proxy>
            <Proxy balancer://jbosscluster>
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX02_node1
                ProxySet lbmethod=byrequests
            </Proxy>
        </VirtualHost>

    When the first JBoss node fails (the hosting VM is down), my existing connections don't fail over to the second node: the first route is kept (in the table / .shm?) and that gives me 503 errors. Can someone tell me what I missed?
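    Two things in that config are worth a look: the mod_proxy docs say nofailover=On means a session must not fail over when its worker is in error state (which matches the behaviour described), and both BalancerMember lines point at srvlnx01. A sketch with failover enabled, assuming the second member was meant to be srvlnx02:

        ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=Off

        <Proxy balancer://jbosscluster>
            BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
            BalancerMember ajp://srvlnx02.myorg.com:8009 route=SRVLNX02_node1
            ProxySet lbmethod=byrequests
        </Proxy>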

    Read the article

  • How do I set a default host for nginx?

    - by ulf
    I'm trying to figure out how to set a default host for my nginx installation. I found this article in the nginx wiki: http://wiki.nginx.org/NginxVirtualHostExample#A_Default_Catchall_Virtual_Host Unfortunately, this doesn't work. After restarting I get this:

        Restarting nginx: nginx: [emerg] unknown directive "http" in /etc/nginx/sites-enabled/catchall:1
        nginx: configuration file /etc/nginx/nginx.conf test failed

    After removing the http directive I get this:

        Restarting nginx: nginx: [emerg] unknown log format "main" in /etc/nginx/sites-enabled/catchall:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    I'm on Ubuntu 10.04.3, where I'm using the official nginx PPA. Version 1.0.9 of nginx is running.
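    Files in sites-enabled are included from inside the http block of nginx.conf, so the wiki example's outer http { } wrapper and its reference to the "main" log format both have to go; only a bare server block belongs there. A minimal catch-all sketch (return 444 is nginx's drop-the-connection idiom):

        server {
            listen 80 default_server;
            server_name _;
            return 444;
        }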

    Read the article

  • Convert from port numbers to protocol names in wireshark

    - by Berkay
    I'm simply using:

        tshark -r botnet.pcap -T fields -E separator=';' -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport '(tcp.flags.syn == 1 and tcp.flags.ack == 0)'

    to see all the initiated "legal TCP" connections. However, I need the destination port number converted to "http", "netbios" etc. I'm not using the -n option, but I still get:

        128.3.45.128;62259;208.233.189.150;80

    This is what I'm trying to get:

        128.3.45.128;62259;208.233.189.150;http

    or 128.3.45.128;62259;208.233.189.150;80;http is an even better option for me. Any ideas from tshark users? Or any other tool suggestions?
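    Field output (-T fields) prints the raw field value regardless of -n, but the numeric port can be mapped to a service name afterwards from /etc/services. A sketch using getent (assuming glibc's getent accepts port/protocol keys for the services database), producing the five-column form:

        tshark -r botnet.pcap -T fields -E separator=';' \
            -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport \
            '(tcp.flags.syn == 1 and tcp.flags.ack == 0)' |
        while IFS=';' read -r src sport dst dport; do
            name=$(getent services "$dport/tcp" | awk '{print $1}')
            echo "$src;$sport;$dst;$dport;${name:-$dport}"
        done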

    Read the article
