Search Results

Search found 51988 results on 2080 pages for 'http headers'.


  • "remote file operation failed" on Hudson

    - by Aveen
    I am running a Windows slave for Hudson 1.337 (Linux master). When running a project on the Windows node, it fails with the following message: Building remotely on winTestSlave Checking out a fresh workspace because there's no workspace at C:\hudson\***\ejb remote file operation failed It worked yesterday, and I have not upgraded Hudson or changed its configuration (or the slave's configuration) in any way. I establish the connection between the slave and the master by running the following command in a Cygwin prompt on the slave: java -jar slave.jar -jnlpUrl http://myserver/computer/winTestSlave/slave-agent.jnlp I saw the issue http://issues.hudson-ci.org/browse/HUDSON-5374 and applied the suggested workaround, but that did not work. I also tried a newer version of slave.jar (version 1.356), but that did not work either. Does anyone have any idea how to fix this? I really cannot find more information anywhere else!

    Read the article

  • Setting Rails up on a Linode - Nginx Issue

    - by rctneil
    I am extremely new to this, so please don't shoot me down: I have set up a Linode running Ubuntu, and it is all more or less working except Nginx. I am following this guide: http://rubysource.com/deploying-a-rails-application/ And this for nginx: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid When I go to my IP, I get a 500 internal server error. I have tried starting nginx and it looks like it starts fine. I run this: ps awx | grep nginx and I get: 308 ? Ss 0:00 nginx: master process /usr/sbin/nginx 2309 ? S 0:00 nginx: worker process 2311 ? S 0:00 nginx: worker process 2312 ? S 0:00 nginx: worker process 2313 ? S 0:00 nginx: worker process 2850 pts/0 S+ 0:00 grep --color=auto nginx I really am not sure what else to do to get it running. Any help? Neil
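
    Since nginx itself is running (master and worker processes are up), the 500 almost certainly comes from the application behind it. A minimal first diagnostic sketch; the log paths are assumptions based on a default Ubuntu/Passenger install, so adjust them to your setup:

        # Watch the nginx error log while reproducing the 500
        tail -n 50 /var/log/nginx/error.log

        # If Passenger serves the Rails app, its startup errors usually appear in
        # the same log; also check the application's own log
        tail -n 50 /path/to/my/rails/app/log/production.log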

    Read the article

  • Apache domain names are case sensitive

    - by neubert
    The following HTTP request results in a "See the error log for more details; Invalid Value Found For Domain" error: GET / HTTP/1.0 Host: www.MyWebsite.com If I make the hostname all lowercase, however, it works just fine. How can I make Apache case insensitive? Here's my httpd.conf file: <VirtualHost *:80> ServerName mywebsite.com ServerAlias www.mywebsite.com ... </VirtualHost> I tried adding ServerAlias www.MyWebsite.com to that, but it didn't help. In any event, that seems like a poor approach, since the case can be mixed up in so many different ways that trying to account for all of them would result in a huge *.conf file. Any ideas? Thanks!
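
    Apache's own ServerName/ServerAlias matching is already case-insensitive, so the error most likely comes from the application seeing the mixed-case Host value. One commonly used workaround is to normalize the Host header with mod_rewrite in the vhost; a sketch, noting that RewriteMap only works in server/vhost context, not in .htaccess:

        RewriteEngine On
        # Internal map that lowercases whatever is passed to it
        RewriteMap lowercase int:tolower
        # If the Host header contains any uppercase letters, redirect to the lowercase form
        RewriteCond %{HTTP_HOST} [A-Z]
        RewriteRule ^(.*)$ http://${lowercase:%{HTTP_HOST}}$1 [R=301,L]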

    Read the article

  • How to connect to Google App Engine server in internal network iMac?

    - by Will Merydith
    I have 3 iMacs and a Windows machine on my home network, all connected via an Airport Extreme router. I'm developing Google App Engine applications locally on one of the iMacs, and can view applications using http://localhost:8080 (or whatever port I choose). How do I connect to those applications from other iMacs and Windows machines in my network? I've located the IP for the iMac hosting Google App Engine: 10.0.1.7. But when I try http://10.0.1.7:8080 from another machine it will not load the page.
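
    By default the App Engine development server binds only to localhost, which is why the page loads on the hosting iMac but not from other machines. A sketch of starting it bound to all interfaces; the flag is --address on older Python SDKs and --host on newer ones, and the app path is a placeholder:

        # Older Python SDKs
        dev_appserver.py --address 0.0.0.0 --port 8080 /path/to/your/app

        # Newer SDKs use --host instead
        dev_appserver.py --host 0.0.0.0 --port 8080 /path/to/your/app

    After restarting with that flag, http://10.0.1.7:8080 should be reachable from the other machines on the LAN.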

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers. We serve around 400-500 requests/second. I have started getting a large number of upstream timed out errors (110: Connection timed out). The error string in error.log looks like 2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}" Any suggestions on how to debug such errors? I am unable to find a munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and some days it is a more decent 3-digit number; a munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors down to zero?
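
    Two common angles, sketched below: raise the proxy timeouts if the backends are just slow under load, and count the errors from the log until a proper munin plugin turns up. The location block and upstream name are placeholders, and the timeout values are illustrative rather than recommendations:

        # nginx.conf - give slow backends more time before nginx gives up
        location / {
            proxy_pass            http://backend_pool;
            proxy_connect_timeout 10s;    # time to establish the TCP connection
            proxy_read_timeout    120s;   # time between two successive reads of the response
            proxy_send_timeout    120s;
        }

        # Quick daily count for graphing/alerting:
        #   grep -c 'upstream timed out' /var/log/nginx/error.log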

    Read the article

  • VMware - Broadcom 1 Gbps NIC does not link at 100 Mbps to a Cisco switch port

    - by Spirit
    Today we stumbled on a very awkward situation with our VMware server. The server runs ESX 3.5 and has a 1 Gbps NIC. We bought a brand new managed Cisco Linksys switch with 10/100 Mbps ports, but when we plugged the cable into one of the ports the link simply would not come up. Has anyone with more VMware experience ever had a similar problem? From what I know, 1 Gbps NICs are backwards compatible with 100 Mbps switches. This is what we've tried so far, with no success: Tried: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004089 Tried to modify /etc/modules.conf following the guide in this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=813 After the changes I restarted the networking services using # service network restart, # service mgmt-vmware restart and # service vmware-vpxa restart. It seems that no matter how many times, or with whatever approach (GUI or shell), we try to change the speed and duplex of the network adapter and force it to 100 Mbps, it only accepts 1 Gbps. I am starting to go nuts!
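
    For reference, on ESX 3.5 the speed and duplex can also be forced from the service console; a sketch, where vmnic0 is an assumption (esxcfg-nics -l shows the actual adapter names) and the Cisco port should be set to the same fixed speed/duplex:

        # List physical NICs with their current link state, speed and duplex
        esxcfg-nics -l

        # Force the uplink to 100 Mbps full duplex
        esxcfg-nics -s 100 -d full vmnic0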

    Read the article

  • First of all, nice work. How to redirect the URL of old modified directories?

    - by kath
    I really appreciate your hard work. After searching a lot of the web I cannot find the answer, so if you get time please help me work out what I should do. The problem: it's a classified-ads website. I edited a category name, but the old name still shows up in the Google index and even in the browser URL, so is there any way to bypass it or redirect it to the new one? I tried .htaccess but can't get the result. Here are both URLs. Before modification: http://adsbuz.com/vehicles-cars/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm After editing the category name (the modified one): http://adsbuz.com/vehicles-cars-for-sale/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm (the category name was "vehicles-cars" before editing and is "vehicles-cars-for-sale" after). As you can see, both URLs open, which is not good for SEO. Is there any way that when someone opens the wrong URL, the page opens only with the correct URL automatically, just like on your site? Consider me new to this market; I just want a little help here (the website is in PHP). Thanks, kath
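
    A minimal .htaccess sketch for this kind of category rename, assuming Apache with mod_rewrite enabled and that every path under the old category maps one-to-one to the new one:

        RewriteEngine On
        # Permanently redirect anything under the old category to the renamed category,
        # keeping the rest of the path (the ad slug) intact
        RewriteRule ^vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1 [R=301,L]

    A 301 also tells Google to replace the old URL with the new one in its index over time.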

    Read the article

  • .htaccess redirect, from old dirty URL to a clean new URL with parameters

    - by JustAnil
    I have the following two links; I'm not great with .htaccess rules yet. Old URL: http://www.mywebsite.org.uk/donate/donate.php?charity_id=885&project_id=18111 New URL: http://new.mywebsite.org.uk/donation/to/885/18111 I want all the traffic coming from the old URL to go to the new URL (including the parameters charity_id and project_id). I'm trying to learn .htaccess rules, but I find the tutorials online to be kind of vague. I'd really like a simple explanation of the rules. (Give a man a fish, feed him for a day; teach a man to fish, feed him for a lifetime.) The correct answer will be the one with a simple and useful explanation (along with the rules, if possible!).
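
    A sketch of one way to do this in the old site's root .htaccess with mod_rewrite, assuming the two parameters always appear in this order: the RewriteCond captures the IDs from the query string into %1 and %2, and the trailing ? on the target drops the original query string from the redirect.

        RewriteEngine On
        # Capture charity_id and project_id from the query string
        RewriteCond %{QUERY_STRING} ^charity_id=(\d+)&project_id=(\d+)$
        # Redirect the old script to the clean URL on the new host;
        # the trailing ? strips the old query string
        RewriteRule ^donate/donate\.php$ http://new.mywebsite.org.uk/donation/to/%1/%2? [R=301,L]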

    Read the article

  • How to change the URL on my Amazon EC2 webserver

    - by Sarah
    I am at the point in playing around with EC2 where I have launched a webserver. Right now, the website URL looks like http://ec2-<some numbers>.compute-1.amazonaws.com/ I am evaluating the usefulness of these services for my small business; is there a way I can get my URL to look more like http://<mybusiness>.com? Ideally, I would like it to look cleaner, and I would rather not have "amazonaws" as part of it. Is this possible? I'm a newbie to AWS, so apologies if this is an easy question.
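
    Yes: register a domain, give the instance a stable address (an Elastic IP), and point the domain's DNS at it. A sketch of what the zone records might look like, where mybusiness.com stands in for your registered domain and the IP is a placeholder:

        ; Bare domain points at the instance's Elastic IP
        mybusiness.com.       IN  A      203.0.113.10
        ; www points at the same place
        www.mybusiness.com.   IN  CNAME  mybusiness.com.

    Without an Elastic IP the instance's public address changes on stop/start, which would break the A record.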

    Read the article

  • Is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use URL constructed from random characters like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a webserver that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search-engines or curious power-users that poke around my site. I've set up an .htaccess file, just to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warnings messages that confuse my not-too-computer-savy users.

    Read the article

  • DKIM, spam probability, signing with key at mail server vs sender domain?

    - by Andreas
    I'm working on an email marketing tool, and so far we've been recommending that our customers set up an SPF record (Sender ID) and a DKIM record; we also have our own SPF record on the mail server and a shared DKIM record for those who do not set up their own. Those that do not set up their own DKIM records still pass the DKIM test, but with the notice that "identity doesn't match any headers" (according to port25), i.e., it doesn't match the textual sender domain. Does anyone know if this "discrepancy" actually has any impact on spam scoring/probability, i.e., should we continue to recommend that our customers set up a DKIM record (as opposed to just using our shared one), or is that just wasted effort?
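
    For context, the port25 notice refers to the d= domain in the signature not matching the From: domain; a sketch of what the headers look like in the shared-key case (both domains are placeholders, and the hash/signature values are elided):

        From: newsletter@customer-example.com
        DKIM-Signature: v=1; a=rsa-sha256; d=esp-shared-example.com; s=mail;
                h=from:to:subject:date; bh=...; b=...

    With a customer-specific key, d= would be customer-example.com and the identities would align.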

    Read the article

  • Unable to do port forwarding in VirtualBox

    - by dewbot
    I'm using Mac OS X 10.6 and have installed VirtualBox 4.1.0 on it. My guest OS is Ubuntu Server 11.04. I have added a port forwarding rule in VirtualBox: "guestssh" TCP 127.0.1.1 8080 127.0.0.1 1337 Inside the guest OS I'm running a Node.js server; the code is nothing but the simple hello-world example from their site, http://nodejs.org/. In short, I'm running a server on 127.0.0.1, port 1337. According to the rule I have given, all requests from the host machine to 127.0.1.1:8080 should be forwarded to 127.0.0.1:1337 on the guest OS. From the host I'm doing curl http://127.0.1.1:8080 and I'm getting curl: (7) couldn't connect to host Is there something I'm doing wrong? Note: please don't suggest SSH and the like; my ISP does not provide an internal LAN, so that's not possible in my case. All I can do is port forwarding.
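
    For comparison, a sketch of how the NAT rule is usually set up from the command line; the VM name is a placeholder and 10.0.2.15 is the default address VirtualBox's NAT gives the guest. The guest address in the rule must be one the guest actually listens on, so the Node.js example should listen on that interface (or 0.0.0.0) rather than only on 127.0.0.1, and the host side is normally reached via 127.0.0.1:

        # Forward host 127.0.0.1:8080 to the guest's NAT address, port 1337
        # (use "VBoxManage controlvm ... natpf1 ..." instead if the VM is running)
        VBoxManage modifyvm "Ubuntu Server 11.04" --natpf1 "node,tcp,127.0.0.1,8080,10.0.2.15,1337"

        # In the guest, have the Node example listen on all interfaces, e.g.
        #   server.listen(1337, "0.0.0.0");

        # Test from the host
        curl http://127.0.0.1:8080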

    Read the article

  • Search Domain Not Working With Squid

    - by Kyle Brandt
    I just set up a squid proxy as a parent proxy to HAVP. When I or other users try to access a domain with an address like "http://foo", I get the following squid error in the browser: The dnsserver returned: Server Failure: The name server was unable to process this query. However, "http://foo.companyname.com" works fine. The search domain in resolv.conf on both the client and the proxy host is companyname.com. (Is there a better term for "search domain"?) Is there a way to correct this, maybe something in the squid.conf file?
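
    squid resolves hostnames itself and does not apply the resolver search list by default, which is why single-label names fail through the proxy; a squid.conf sketch of the two standard directives usually suggested for this:

        # Append the search domain to unqualified hostnames like "foo"
        append_domain .companyname.com

        # Or let squid honor the search/domain entries from resolv.conf
        dns_defnames on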

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's proxy balancer to balance one subdomain (e.g. subdomain.domain.com) to an application that is located on 2 servers. Here is an extract from my Apache configuration file: <Proxy *> Order deny,allow Allow from all </Proxy> <Proxy balancer://cluster1> BalancerMember http://server1:28081 route=w1 BalancerMember http://server2:28082 route=w2 </Proxy> ProxyPass /path balancer://cluster1/path ProxyPassReverse /path balancer://cluster1/path My question is whether it is possible to decide, based on the source IP address, which BalancerMember should be used for the request, e.g. requests from 1.2.3.4 go to member 1?
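
    mod_proxy_balancer has no built-in per-client-IP routing, but mod_rewrite can proxy selected clients to a fixed backend before the balancer mapping is applied; a sketch for the same vhost, with the IP and ports taken from the question and the rest of the configuration left as above:

        RewriteEngine On
        # Send this one client's requests straight to server1, bypassing the balancer
        RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.4$
        RewriteRule ^/path/(.*)$ http://server1:28081/path/$1 [P,L]

        # All other clients continue to use:
        #   ProxyPass /path balancer://cluster1/path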

    Read the article

  • Difference between key_buffers and recommendation

    - by Typeoneerror
    I'm looking to add a bit of memory to MySQL on a Linode VPS server on which I've got a small Facebook (canvas app) PHP app using MySQL. I'm not super familiar with MySQL optimization, so I'm hoping to find a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase the query_cache_size from 16M to 32M. There's also key_buffer under [mysqld] and key_buffer under [isamchk]. What is the difference between those two? If I have a Linode 2048MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It's definitely much heavier on database access than PHP, and it serves very few HTTP requests.
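
    On the difference: the key_buffer under [mysqld] sizes the MyISAM index cache used by the running server, while the one under [isamchk] (and [myisamchk]) is only read by the offline table check/repair utilities, so it has no effect on normal queries. A my.cnf sketch with the 32M starting point mentioned above, not a tuned recommendation:

        [mysqld]
        # MyISAM index cache used by the server for normal queries
        key_buffer = 32M
        query_cache_size = 32M

        [isamchk]
        # Only used when the isamchk/myisamchk repair tools run
        key_buffer = 16M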

    Read the article

  • Downloading a repository for local use

    - by EBV2010
    I'm trying to get Thunderbird working in such a way that it will work properly with Kolab groupware. For that I need a fixed setup of Thunderbird and add-ons (Lightning, SyncKolab), without automatic updates, and I need the present version of Thunderbird to stay available for the users. What I hope to achieve is that the repository for Thunderbird, as it is now on http://ppa.launchpad.net/mozillateam/thunderbird-stable/, will be available on my local server, so I always use that version even if Thunderbird moves on to a new stable version. Concretely: I copy the content of http://ppa.launchpad.net/mozillateam/thunderbird-stable/ to my server and make it available as a repository on my network. I don't know whether this is possible, or whether it is allowed under the license, etc.
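
    Technically this is doable with standard Debian/Ubuntu tooling; a rough sketch of one approach, where the hostnames and paths are assumptions and whether redistribution is permitted remains a separate licensing question:

        # Mirror the PPA's pool and index files onto the local server
        wget --mirror --no-parent http://ppa.launchpad.net/mozillateam/thunderbird-stable/ubuntu/

        # Rebuild a flat package index for the copied .deb files
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

        # On the clients, replace the PPA line in sources.list with the local copy, e.g.
        #   deb http://myserver/thunderbird-stable ./
        # then pin or hold the packages so they are never upgraded past this version.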

    Read the article

  • Problem running application on Windows Server 2008 instance using Amazon EC2 and WAMP

    - by Siddharth
    I have a basic (small type) Windows Server 2008 instance running on Amazon EC2. I've installed WAMP server onto it and have also loaded my application; I did this using Remote Desktop Connection from my Windows machine. I'm able to run my application locally on the instance. However, when I try to access it from my browser using the public DNS name Amazon gave it, I'm unable to do so. My instance has a security group that is configured to allow HTTP, HTTPS, RDP, SSH and SMTP requests on different ports. In fact I have the exact same security group as the one used in this blog: http://howto.opml.org/dave/ec2/ I did almost everything the same as the blog, except for using a different Amazon Machine Image. This is my first time using Amazon EC2, and I can't figure out what I'm doing wrong here.
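
    With the security group already open, the usual remaining culprits are the Windows Firewall on the instance and WAMP only serving localhost; a sketch of the checks to run on the instance, where the rule name is arbitrary:

        rem Allow inbound HTTP through the Windows Firewall on the instance
        netsh advfirewall firewall add rule name="Allow HTTP" dir=in action=allow protocol=TCP localport=80

        rem In WAMP, also make sure the server is set to "Put Online" and that
        rem httpd.conf contains "Listen 80" (not "Listen 127.0.0.1:80"),
        rem then restart Apache.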

    Read the article

  • How to force or redirect to SSL in nginx?

    - by Callmeed
    I have a signup page on a subdomain like: https://signup.mysite.com It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My http/server block in nginx looks like this: http { server { listen 443; server_name signup.mysite.com; ssl on; ssl_certificate /path/to/my/cert; ssl_certificate_key /path/to/my/key; ssl_session_timeout 30m; location / { root /path/to/my/rails/app/public; index index.html; passenger_enabled on; } } } What can I add so that people who go to http://signup.mysite.com get redirected to https://signup.mysite.com? (FYI, I know there are Rails plugins that can force SSL, but I was hoping to avoid that.)
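
    A sketch of the usual fix: add a second server block for port 80 with the same server_name that simply redirects everything to the HTTPS host (on newer nginx versions return 301 is the tidier equivalent of the rewrite):

        server {
            listen 80;
            server_name signup.mysite.com;
            # Push all plain-HTTP requests over to HTTPS, keeping the request URI
            rewrite ^ https://signup.mysite.com$request_uri? permanent;
            # newer nginx equivalent:
            # return 301 https://signup.mysite.com$request_uri;
        }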

    Read the article

  • X server not starting up after new kernel compilation

    - by tech_learner
    I have compiled the kernel on my 64-bit Debian Dell Studio XPS 1340 system. srikanth@debian:~ - 05:40:52 PM - $ uname -a Linux debian 2.6.32-5-amd64 #1 SMP Thu Mar 22 17:26:33 UTC 2012 x86_64 GNU/Linux The kernel version I downloaded from kernel.org and compiled is 2.6.35.13. I have the NVIDIA driver installed for the old kernel. I took the old config and used the same config to compile the new kernel. Everything went well and I got two Debian packages (image and headers), which I have installed on my system. When I select the new kernel in the boot menu and boot into it, the X server does not start, possibly because I have to "rebuild" (not sure how to do that) according to this link: http://www.linuxquestions.org/questions/slackware-14/x-server-not-starting-after-kernel-compilation-605265/ Can you suggest how to rebuild the NVIDIA module so that I can start X (without a blank screen or an error saying the nvidia module is missing)? PS: The link that I used to compile the kernel is https://help.ubuntu.com/community/Kernel/Compile#Alternate_Build_Method:_The_Old-Fashioned_Debian_Way
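
    The NVIDIA kernel module has to be built once per kernel, against that kernel's headers. A sketch of the two common routes; which one applies depends on how the driver was originally installed, and the .run file name below is a placeholder:

        # If the driver came from NVIDIA's .run installer: boot the new kernel to a
        # console (X stopped) and re-run the installer so it rebuilds the module
        # against the now-running 2.6.35.13 and its installed headers
        sh NVIDIA-Linux-x86_64-<driver-version>.run

        # If it came from Debian packages: rebuild the packaged module with module-assistant
        m-a auto-install nvidia-kernel-source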

    Read the article

  • Protected flash video (requires HAL) on Gentoo

    - by Mala
    I am unable to play "protected" flash video, such as Amazon Prime Instant Video. From what I've read, this seems to be due to HAL not being installed on my computer; confirmation that it is required for protected video can be seen towards the beginning of http://helpx.adobe.com/x-productkb/multi/flash-player-11-problems-playing.html However, hal is not in the Gentoo portage tree, and in any case it has been deprecated and replaced by udev. How can I go about getting Amazon Prime Instant Video to work again? I was considering grabbing the source from http://www.freedesktop.org/wiki/Software/hal but the links there won't load, and trying to install it from old ebuilds or from overlays which claim to still support it (e.g. kde-sunset) results in a compilation error: In file included from addon-generic-backlight.c:38:0: /usr/include/glib-2.0/glib/gmain.h:21:2: error: #error "Only <glib.h> can be included directly." Has anyone else solved this issue?

    Read the article

  • Hide directory contents from showing when accessing the URL directly

    - by SoLoGHoST
    On my site, if you browse to http://example.com/images/ the contents of the entire directory are shown as an automatically generated index listing. How can I make it so that this doesn't show up when people browse directly to http://example.com/images/? Can I create an .htaccess file in that directory, or is there a better way? I really don't want people to be able to do this anywhere on the site (i.e. in any directory). What can I do to prevent this? I figure it's either something that has to be done in Apache, or perhaps a global .htaccess file placed in the public_html folder? EDIT: I worked around this with an index.php file, but I still feel that security is an issue here; how can I fix this permanently?
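
    A sketch of the standard fix: turn off automatic directory indexes, either server-wide in the vhost configuration or in a single .htaccess in public_html, which is inherited by every subdirectory (the .htaccess form needs AllowOverride Options to be permitted):

        # In the vhost / httpd.conf, or in public_html/.htaccess
        Options -Indexes
        # A request for a directory without an index file now returns 403 Forbidden
        # instead of a generated listing.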

    Read the article

  • Setting WMI permissions remotely on Windows Server 2003

    - by user41507
    Hello. I am a programmer; I don't know the server side well. I made a simple program that checks whether a service on a remote server is started or not, using this: http://msdn.microsoft.com/en-us/library/dwd0y33x(v=VS.90).aspx For that to work, the permissions have to be set, and I can't find any documentation on the internet except this one document: http://msdn.microsoft.com/en-us/library/aa393266(VS.85).aspx The engineer says, "Tell me exactly what to do," and there are many DCOM settings. Is there a nice document I can show him? Thanks in advance.

    Read the article

  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in URLs

    - by Tamerax
    Hey! I have two questions for nginx users. 1) I'm trying to set up my Joomla server on my new Linode running nginx, and after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins...sorta. I was using Apache on the old server, and it used mod_rewrite, so life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works BUT it constantly leaves index.php in the URLs. Example: http://mysite.com/index.php/forum. I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla, as far as I'm aware. I know it IS possible in WordPress, but I have to use a plugin. Here is my config file: server { listen 80; server_name mysite.com www.mysite.com; access_log /home/public_html/mysite.com/log/access.log; error_log /home/public_html/mysite.com/log/error.log; root /home/public_html/mysite.com/public/; large_client_header_buffers 4 8k; # prevent some 400 errors index index.php index.html; fastcgi_index index.php; location / { expires 30d; error_page 404 = @joomla; log_not_found off; } # Rewrite location @joomla { rewrite ^(.*)$ /index.php?q=last; } # Static Files location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ { access_log off; expires 30d; } # PHP location ~ \.php { keepalive_timeout 0; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name; } } 2) The second question should be easy, but I can't get it to work. I want to use the same config I posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page, with or without the www, it all gets forwarded to a subdirectory on the server that I called portal. I have tried several variations of: rewrite ^/(.*) http://www.example.com/portal/$1 permanent; but it usually ends with Firefox telling me there is some crazy loop happening, the address bar saying something like mysite.com/portalportalportalportalportal......... on and on. So, any help on either of these issues would be awesome!! Thanks!!
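
    A sketch addressing both points, assuming Joomla's "Use URL rewriting" SEF option is enabled and the rest of the config above stays as is: try_files hands anything that isn't a real file to Joomla's front controller without putting index.php in the URL, and the exact-match location only fires for the front page itself, which avoids the /portalportal... redirect loop.

        # 1) Replace the error_page/@joomla trick: serve real files, otherwise pass
        #    the request to Joomla's index.php
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # 2) Redirect only the front page (exact match on "/") to the portal subdirectory
        location = / {
            rewrite ^ http://mysite.com/portal/ permanent;
        }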

    Read the article

  • What exactly is an invalid HTTP_HOST header

    - by rolling stone
    I've implemented Django's relatively new allowed hosts setting, which is meant to prevent attackers from submitting requests with a fake HTTP Host header. Since adding that setting, I now get anywhere from 20-100 emails a day notifying me of invalid HTTP_HOST headers; I've copied an example of a typical error message below. I'm hosting my site on EC2 and am relatively new to setting up/maintaining a server, so my question is: what exactly is happening here, and what is the best way to manage these invalid and, I assume, malicious requests? [Django] ERROR: Invalid HTTP_HOST header: 'www.launchastartup.com'.You may need to add u'www.launchastartup.com' to ALLOWED_HOSTS.
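
    Most of these are scanners and misdirected bots hitting the server's IP with arbitrary Host headers, and ALLOWED_HOSTS is rejecting them as designed; the example above, though, is the site's own www hostname, which just needs to be added to ALLOWED_HOSTS as the message says. To stop the email noise from genuinely bogus hosts, a common approach is to drop unknown Host values at the web server before they ever reach Django; a sketch for nginx, assuming nginx fronts the app (Apache has an equivalent default-vhost approach):

        # Catch-all server block: any request whose Host doesn't match a real
        # server_name lands here and is dropped without a response
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # nginx-specific code: close the connection silently
        }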

    Read the article

  • DHCP and reservations in Windows Server 2003

    - by Fri13th
    Hello everybody! I have a problem configuring DHCP reservations. On the client, ipconfig shows the address lease is 192.168.188.20 (http:/i160.photobucket.com/albums/t171/dungttvn/123.png). I then run ipconfig /release on the client, configure a reservation with the fixed IP address 192.168.188.100 on the server (through vmnet1), and run ipconfig /renew on the client, but it doesn't work: the address lease is still always 192.168.188.20 (http:/i160.photobucket.com/albums/t171/dungttvn/456.png). Can someone help me? Many thanks!
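
    A reservation only takes effect when its MAC address exactly matches the client adapter's physical address and the client then renews its lease; a command-line sketch of the same steps, where the scope and MAC values are placeholders based on the addresses above:

        rem On the client: note the adapter's Physical Address (MAC)
        ipconfig /all

        rem On the DHCP server: reserve 192.168.188.100 for that MAC in the scope
        netsh dhcp server scope 192.168.188.0 add reservedip 192.168.188.100 001122334455

        rem Back on the client: release the old lease and renew
        ipconfig /release
        ipconfig /renew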

    Read the article
