Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • Why is Apache refusing requests to www.mydomain.org when mydomain.org works fine?

    - by sendos
    This is the setup I have working for mydomain.org:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mydomain.org
            ServerAlias *.mydomain.org
            DocumentRoot /var/www/
        </VirtualHost>

    If I request www.mydomain.org, Apache throws a 403 with the following appended to the log:

        (13)Permission denied: /root/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    I've tried changing ServerName to www.mydomain.org, and changing ServerAlias to www.mydomain.org as well, with no luck. What am I missing?
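    One thing worth checking (an inference from the error text, not something confirmed in the thread): the error references /root/.htaccess, which suggests the www request is not matching this vhost at all but falling through to another vhost, or the global default, whose DocumentRoot resolves under /root. Apache can report its name-to-vhost mapping directly:

        # Ask Apache which vhost each ServerName/ServerAlias maps to
        # (the binary may be apache2ctl on Debian/Ubuntu systems):
        apachectl -S

    If www.mydomain.org is listed under an unexpected vhost there, fixing that vhost's DocumentRoot (or making this vhost the default) should clear the 403.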

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the .crt file to PEM using these commands from a tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but that field was optional. Could this be the cause of my issue? If so, how do I create a certificate chain? I have tried googling that last question, but I'm getting more confused than before. Please help; many thanks in advance.

    UPDATE: Thanks all for the helpful advice. If you make a request to VeriSign, they will give you a certificate chain; this chain includes the public crt, the intermediate crt, and the root crt. Make sure to remove the public crt (the topmost certificate) from the chain before adding it to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above may not work for older Android versions such as 2.1 and 2.2. To make it work on older Android, see https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US : click on the "retail ssl" tab, then on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. The direct link is https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409. If you are using GeoTrust certificates, the solution is much the same for Android devices; you just need to copy and paste their intermediate certs instead. PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".
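    A minimal sketch of building and checking a chain file (file names are placeholders; the exact intermediates depend on your CA):

        # Chain file = intermediate cert(s) only, topmost first, no server cert:
        cat intermediate.pem > chain.pem
        # Check that the server certificate validates through the chain:
        openssl verify -untrusted chain.pem server.pem

    Paste the contents of chain.pem into the load balancer's certificate chain box.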

  • Mixing different technologies using an ARR reverse proxy

    - by Jaepetto
    I'm currently trying to put together a proof of concept on mixing various technologies on one web site in order to ease migrations and add flexibility. The idea is to create one 'mashup' site behind an IIS 7.5 ARR reverse proxy. For the time being, the ARR reverse proxy forwards all requests to our main site. The requests are as follows:

        client -> ARR:      GET /
        ARR -> Server 1:    GET /
        Server 1 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ...so far so good. Let's say I want to add a new site (the root of another server) as a subsite of my main website. A simple inbound rule does the trick:

        <rule name="sub1" stopProcessing="true">
          <match url="^mySubsite(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
          <action type="Rewrite" url="http://server2/{R:1}" />
        </rule>

    The requests now are:

        client -> ARR:      GET /mySubsite
        ARR -> Server 2:    GET /
        Server 2 -> ARR:    200: /index.htm
        ARR -> client:      200: /index.htm

    ...still OK. The issue comes when the site on server2 sends a redirect (e.g. to a login page). In the case of SharePoint, it redirects the user to /_layouts/Authenticate.aspx?Source=%2F, which does not exist on the proxy:

        client -> ARR:      GET /mySubsite
        ARR -> Server 2:    GET /
        Server 2 -> ARR:    301: /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      301: /_layouts/Authenticate.aspx?Source=%2F
        client -> ARR:      GET /_layouts/Authenticate.aspx?Source=%2F
        ARR -> client:      404: Not Found

    Does anyone know how to write an outbound rule that rewrites the response from server 2 from "301: /_layouts/Authenticate.aspx?Source=%2F" to "301: /mySubsite/_layouts/Authenticate.aspx?Source=%2FmySubsite%2F"?
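    A sketch of the outbound-rule shape (untested; the rule and precondition names are made up, and the encoded Source=%2F query parameter would still need its own handling): IIS URL Rewrite can match the Location response header through the RESPONSE_Location server variable:

        <outboundRules>
          <rule name="prefixSubsiteRedirects" preCondition="isRedirect">
            <match serverVariable="RESPONSE_Location" pattern="^/(_layouts/.*)" />
            <action type="Rewrite" value="/mySubsite/{R:1}" />
          </rule>
          <preConditions>
            <preCondition name="isRedirect">
              <add input="{RESPONSE_STATUS}" pattern="^3\d\d$" />
            </preCondition>
          </preConditions>
        </outboundRules>

    This prefixes redirect targets with /mySubsite; rewriting Source=%2F to %2FmySubsite%2F would take a second rule along the same lines.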

  • rhn_register through HTTP Proxy with Authentication

    - by kjloh
    Is there any limitation to the proxy authentication support of rhn_register? The proxy on the network I'm on sends the following 407:

        HTTP/1.1 407 Proxy Authentication Required
        ( The ISA Server requires authorization to fulfill the request.
          Access to the Web Proxy filter is denied. )
        Via: 1.1 VANESSA
        Proxy-Authenticate: Negotiate
        Proxy-Authenticate: Kerberos
        Proxy-Authenticate: NTLM

    It seems that rhn_register is not able to handle any of the authentication schemes above. Any advice?
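    The schemes offered (Negotiate/Kerberos/NTLM) are the likely catch: tools that only speak Basic proxy auth fail against an ISA server configured this way. One common workaround (an assumption, not something confirmed for rhn_register specifically) is to run a local NTLM-bridging proxy such as cntlm and point rhn_register at it; all values below are placeholders:

        # /etc/cntlm.conf (sketch)
        Username    youruser
        Domain      YOURDOMAIN
        Password    yourpass        # or store hashes generated by 'cntlm -H'
        Proxy       vanessa.example.com:8080
        Listen      3128

    Then set rhn_register's proxy to localhost:3128 with no credentials; cntlm answers the NTLM challenge upstream.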

  • Set up DNS to redirect all HTTP requests to a specific machine on the LAN

    - by mox601
    I need to set up the following configuration with two machines: machine A issues HTTP requests, and machine B serves the pages requested by A. For testing purposes, I want EVERY HTTP request issued by machine A to be served by machine B. For example, when machine A's browser tries to access www.website.com/article.php?1234, machine B has a folder on its HTTP server with the content and replies to A. How can I set up a DNS on machine B that points ALL requests to itself? Thanks
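    One way to do this (a sketch, assuming you can run dnsmasq on machine B and that B's LAN address is 192.168.1.20; adjust to taste):

        # /etc/dnsmasq.conf on machine B: answer every DNS query with B's IP
        address=/#/192.168.1.20

    Then set machine B as the only DNS server in machine A's network settings. Every hostname A looks up (www.website.com included) resolves to B, so B's HTTP server receives all the requests, and a catch-all vhost on B can serve the prepared content.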

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

        - Backbone.js frontend
        - Rails 3.2
        - PostgreSQL
        - Resque + S3 for storage

    The flow of the app is as follows:

        1) Request from frontend: upload a video.
        2) Store the video.
        3) Query external APIs.
        4) Process / encode the video.
        5) Post the result to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (such as duplicating the app across several instances), but since I don't really have expertise in backend system administration, there may be some fundamental mistakes, and I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

        A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
        B) Backend server (BS) with the main database. Gets requests from A, posts jobs to D and E, saves uploads to C.
        C) S3 storage.
        D) Server for querying external APIs: basically just Resque workers that post results back to B.
        E) Server for video encoding: processes videos uploaded to C and uploads them back.

    So I will have:

        A) frontend
            \
        B) MAIN_APP/DB ------------- C) S3 Storage (files)
            /         \                /           /
        D) ExternalAPI_queries    E) Video_Processing
           (redundant DB)            (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, and I would run a bare-bones application there that just accepts requests and starts processing them. Questions:

    1) In this setup I will have the main database on B, and all other servers will communicate with it via HTTP requests (and also keep duplicate copies of the database, I guess, for safety reasons). Is that the right approach, or should I have one database that everything connects to (and if so, how)?

    2) Is it a good idea to separate the API queries from the video processing? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is far more intensive.

    3) What should I use to distribute calls between the backend apps based on load?
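    Since the plan already centers on Resque, here is a minimal sketch of distributing the work through the queue rather than via direct HTTP calls between the apps (class, queue, and helper names are made up for illustration):

        # app/jobs/video_encode_job.rb (runs on the encoding boxes)
        class VideoEncodeJob
          @queue = :video_encoding

          def self.perform(video_id)
            video   = Video.find(video_id)    # assumes an ActiveRecord Video model
            file    = download_from_s3(video) # hypothetical helpers
            encoded = encode(file)
            upload_to_s3(video, encoded)
          end
        end

        # enqueued from the main app on B:
        Resque.enqueue(VideoEncodeJob, video.id)

    With this shape, D and E just run `rake resque:work QUEUE=api_queries` / `QUEUE=video_encoding` against a shared Redis, which also answers question 3: the Redis-backed queue distributes the load, and you scale by adding workers.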

  • Stripping cookies from image files?

    - by iTech
    I want to achieve cookie-free image serving as discussed here: http://code.google.com/speed/page-speed/docs/request.html#ServeFromCookielessDomain. I have created a new subdomain "static.example.com" serving only images, JavaScript, and CSS (file-serving restrictions made via a filesmatch.conf file). Please tell me how to make it serve cookie-free images. Thanks
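    Cookies are keyed to domains, so as long as nothing on static.example.com ever sets one, requests to it arrive cookie-free; there is nothing to strip from the images themselves. The one caveat (an assumption about the setup): cookies set for .example.com with a leading dot are sent to subdomains too, in which case a belt-and-braces vhost can drop them (requires mod_headers):

        <VirtualHost *:80>
            ServerName static.example.com
            DocumentRoot /var/www/static
            # never emit cookies from this vhost, and ignore any sent to it
            Header unset Set-Cookie
            RequestHeader unset Cookie
        </VirtualHost>

    If .example.com-wide cookies are unavoidable, the fully cookieless route is a separate domain, which is the approach the Page Speed doc describes.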

  • Awstats LogFormat typo?

    - by user66700
    I've been through the awstats docs for a while now; it just seems to be failing on the LogFormat: http://pastebin.com/raw.php?i=J1Ecfu4c

    I'm using the following in awstats:

        LogFormat = "%host - - %host_r %time1 %methodurl %code %bytesd %refererquot %uaquot %otherquot"

    and this in nginx:

        log_format main '$remote_addr - $remote_user [$time_local] $request '
                        '"$status" $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log logs/access.log main;

    Sample hits: http://pastebin.com/raw.php?i=qD9PKN52
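    One mismatch that stands out (worth checking against the pastebin samples): the awstats line expects "%host - - %host_r" before the timestamp while nginx writes "$remote_addr - $remote_user [...]", and the nginx format logs $request unquoted but "$status" quoted, so the parser's fields never line up. A low-risk sketch of a fix is to log the standard combined format on the nginx side and let awstats use its built-in parser:

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent"';

    with the awstats side set to the predefined format:

        LogFormat = 1

    (LogFormat = 1 is awstats' shorthand for the Apache/NCSA combined log format, which the nginx format above matches field for field.)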

  • Outlook unable to print email

    - by Jason Taylor
    I have an installation of Outlook 2003 that won't print emails as of this morning. It can print calendars, and every other program prints fine. Outlook won't print emails to any printer, not even a PDF creator. As soon as you hit "Print", nothing happens; there is no notification that the print job was sent, and the print server never receives a request from the computer.

  • Is there a way to make nautilus display the "recently used" files and directories?

    - by Peltier
    Is there a way to make Nautilus display the "recently used" files and directories, just like the "open file" dialog does? To make my question clearer, I had attached two screenshots: the GTK open-file dialog, showing the recently used items, and a Nautilus window, which doesn't offer to display them. EDIT: This has been added as a feature request to Nautilus. Don't hesitate to make your voice heard if you want it to happen!

  • Add a machine key to machine.config in a load-balanced environment with multiple versions of the .NET Framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has applications identical to the other's. There was no issue until the load balancer's config changed from source-address persistence to least connections. Now in some applications I receive this error:

        Server Error in '/' Application.
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    The rest of the error page is the standard ASP.NET boilerplate about recompiling in debug mode (the Debug="true" directive or the compilation section of web.config) to see the source of the unhandled exception.

    How do I add a machine key to the machine.config file? Do I do it at server level in IIS, or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers, or are they different? Should they be different for each machine.config version of .NET? I cannot find any documentation for this scenario.
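    A sketch of the relevant element (key values are elided placeholders; generate real ones with a machine-key generator rather than copying them from anywhere): in a web farm, validationKey and decryptionKey must be set explicitly and be identical on both servers, because AutoGenerate produces per-machine keys, which is exactly what the error is complaining about.

        <system.web>
          <machineKey
            validationKey="[same hex value on both servers]"
            decryptionKey="[same hex value on both servers]"
            validation="SHA1"
            decryption="AES" />
        </system.web>

    It can go in machine.config or the root web.config to cover the whole server (each installed .NET version has its own machine.config, so each needs the entry), or in an individual application's web.config to scope it to that app; per-application keys are the usual practice so applications don't share them.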

  • IP address doesn't get passed with Squid as a reverse proxy

    - by Arcath
    I'm using Squid as a reverse proxy to host multiple web servers on one internet IP. It works fine and has been doing so for the past few months. I have just noticed that every request sent to my servers is logged as coming from the Squid server's IP address. Is there any way to make Squid pass the originating IP to the web servers?
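    What usually covers this (a sketch; Squid adds the client address in an X-Forwarded-For header by default, so the fix is typically on the web-server side): make sure Squid isn't configured to suppress the header, then log it at the backend.

        # squid.conf: the default is 'on'; 'off' or 'delete' would hide the client IP
        forwarded_for on

        # Apache backend: log the forwarded address instead of the proxy's
        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff
        CustomLog /var/log/apache2/access.log xff

    (On Apache 2.4, mod_remoteip can go further and rewrite the connection's client IP itself, so all modules see the real address.)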

  • WRTP54G: bypass admin login

    - by vonhogen
    I've been trying to log into my WRTP54G router, but I forgot the password. Is there any way to temporarily disable the login like for the wrt54g: http://www.velocityreviews.com/forums/t519535-help-my-linksys-wrt54g-router-was-broken-into-using-the-curl-command.html If anyone has this router, could they examine the page to turn off admin login, and see what I would need to send in a POST request?

  • limit_req causing 503 Service Unavailable

    - by Hermione
    I'm frequently getting 503 Service Unavailable when I have limit_req turned on. In my logs:

        [error] 22963#0: *70136 limiting requests, excess: 1.000 by zone "blitz", client: 64.xxx.xxx.xx, server: dat.com, request: "GET /id/85 HTTP/1.1", host: "dat.com"

    My nginx configuration:

        limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;
        limit_req zone=blitz;

    How do I resolve this issue? Isn't 60m already big enough? All my static files are hosted on Amazon S3.
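    A likely misreading worth flagging (inferred from the config, not confirmed in the thread): 60m is only the size of the shared-memory zone that stores per-IP state, not an allowance; the 503s come from the rate. With rate=5r/s and no burst, any two requests from one IP closer together than 200 ms are rejected outright. A sketch of a more forgiving setup:

        limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;

        server {
            # queue up to 20 excess requests instead of failing them immediately;
            # nodelay serves the queued ones at once rather than throttling them
            limit_req zone=blitz burst=20 nodelay;
        }

    Tune burst to the number of near-simultaneous requests a single page load legitimately makes.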

  • Buffer requests in nginx while a symlink switches on backend

    - by Quintin Par
    In a release deployment, I would like client requests arriving at nginx (in reverse proxy mode) to be buffered for possibly 1-2 seconds while a pdsh command switches symlinks on the backend servers to /var/www/html/current. After the switch is complete, I want to release the buffer while avoiding a thundering herd. Is this possible in nginx? Can someone help? Edit: The idea is not to lose requests; from the nginx forums I've learned that retries can sometimes result in CPU spins.
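    A different angle that may remove the need for buffering altogether (a standard deploy trick, not something from the thread): make the symlink switch itself atomic, so every request sees either the old tree or the new one and there is no window to buffer over.

        # build the new symlink beside the live one, then rename over it;
        # rename(2) is atomic, so requests never observe a missing docroot
        ln -sfn /var/www/releases/v2 /var/www/html/current.tmp
        mv -T /var/www/html/current.tmp /var/www/html/current

    Run that via pdsh on each backend; nginx just keeps proxying, and no herd builds up behind a pause.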

  • Identify the executable creating network traffic

    - by jeffspost
    I've got some application on my Windows XP machine that is generating an HTTP request to aaronsw.com every half hour. We've trapped the packets in Wireshark, but Wireshark doesn't say which application generated them. Is there any utility that looks at network traffic AND tells which executable produced it?
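    Two standard options on XP, neither needing a capture file: Sysinternals TCPView shows live connections alongside the owning process, and netstat has a flag for the same thing:

        REM -b prints the executable that owns each connection (needs an
        REM administrator prompt); -n skips DNS lookups; the trailing 5
        REM refreshes the listing every 5 seconds
        netstat -b -n 5

    Leave it running until the half-hourly request fires and the owning .exe shows up next to the aaronsw.com connection.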

  • Is it possible to add a wildcard ServerAlias to a VirtualHost without modifying httpd.conf manually?

    - by Favourite Chigozie Onwuemene
    Is it possible to add a wildcard ServerAlias (example: *.somesite.com) to an Apache server without modifying httpd.conf manually? I use a DNS provider separate from my hosting server, and I have added a wildcard A record to my DNS to point all requests (like test.somesite.com, test2.somesite.com) to my hosting server's IP, but I don't see any way of adding wildcard ServerAliases to Apache's httpd.conf in my cPanel. Is there a solution?
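    For reference, the directive the question is after is a single line in the domain's vhost; this sketch (IP and paths are placeholders) is roughly what cPanel generates if your host's cPanel allows creating a subdomain literally named "*", which is its usual no-manual-edit route to a wildcard alias:

        <VirtualHost 203.0.113.10:80>
            ServerName somesite.com
            ServerAlias *.somesite.com
            DocumentRoot /home/user/public_html
        </VirtualHost>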

  • Monitor a SOAP 1.2 web service using OpManager

    - by Tim Mahy
    Does anyone here know whether it is possible to monitor a SOAP 1.2 web service with OpManager and then assert on the result of the request? In the feature list I only see that an HTTP GET is possible for monitoring websites; what I need is an HTTP POST to call the SOAP web service. Thanks in advance, Tim
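    For reference, this is what any monitor would have to send (a sketch with a placeholder endpoint and envelope file; SOAP 1.2 uses the application/soap+xml content type rather than SOAP 1.1's text/xml plus SOAPAction header):

        curl -s -X POST \
          -H 'Content-Type: application/soap+xml; charset=utf-8' \
          --data @request.xml \
          http://example.com/MyService.svc

    If OpManager's built-in probes can't POST, one common fallback is wrapping a script like this, exiting non-zero when the response lacks the expected element, in whatever external-command or script monitor the tool offers.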

  • Squid: allow some domains without an authentication prompt

    - by Tobia
    I have a Squid proxy. I would like to leave free access (without a proxy login prompt) to some domains, and request a user login for all other domains. I tried this configuration:

        http_access allow allowDomains
        http_access deny !loggedUser

    where allowDomains is an ACL with all the free domains, and loggedUser is the user authentication ACL. With this configuration I can access the free domains even with an empty login, but what I would like is for no authentication prompt to appear at all for them. How can I do that? Thanks.
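    A sketch of the usual pattern (ACL names kept from the question; domain values are placeholders). The key Squid behaviour is that the 407 challenge is sent whenever Squid has to evaluate the proxy_auth ACL for a request, so the free domains must be allowed before any line that references it, and a plain "deny all" terminal rule is preferred over a deny on the negated auth ACL:

        acl allowDomains dstdomain .freesite1.com .freesite2.com
        acl loggedUser proxy_auth REQUIRED

        # free domains match first, so the auth ACL is never evaluated for them
        http_access allow allowDomains
        # everything else must authenticate
        http_access allow loggedUser
        http_access deny all

    If a prompt still appears on the free domains, check that allowDomains actually matches the request (e.g. leading-dot notation for subdomains) and that no earlier http_access line references loggedUser.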

  • MaxRequestLen error when I use HTTPS

    - by david
    When I got MaxRequestLen errors on a file-upload page, I set MaxRequestLen to 31457280 using:

        <IfModule mod_fcgid.c>
            MaxRequestLen 31457280
            FcgidIOTimeout 90
        </IfModule>

    Now the file upload works when I use the HTTP URL. I have recently configured SSL for my site, and when I use the HTTPS URL for the same upload page, I get the same error:

        HTTP request length 131073 (so far) exceeds MaxRequestLen (131072)

    Is there a different setting for HTTPS? Please help. Thank you.
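    One thing the 131072 in the error gives away: the HTTPS requests are still hitting mod_fcgid's default limit (128 KB), so the MaxRequestLen override apparently isn't in effect for the SSL vhost. A sketch of scoping it there explicitly (assuming the SSL site lives in its own <VirtualHost>, e.g. in ssl.conf or the site's SSL vhost file):

        <VirtualHost *:443>
            ServerName example.com
            # ... SSLEngine, certificates, DocumentRoot ...
            <IfModule mod_fcgid.c>
                MaxRequestLen 31457280
                FcgidIOTimeout 90
            </IfModule>
        </VirtualHost>

    If the original IfModule block sits inside the port-80 vhost (or in a file only included for it), the port-443 vhost never sees it, which would explain exactly this behaviour.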

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. To fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power-cycle the router. This seemed to solve the problem until the exact same thing happened a day later and I had to go through the same process again.

    That's when I noticed something weird. One wireless host (a Windows Vista laptop) seemed to cause the router to disconnect the other hosts whenever it connected; while this was happening, only that laptop could use the wireless. I disconnected it (by disabling its wireless adapter), reconnected it (by re-enabling the adapter), and then it, like the other hosts, couldn't connect. I've never seen anything this strange happen on our network, so I restored the router to factory settings, and the problem seems to have vanished, except for one crucial thing.

    Another host (a Windows 7 laptop) that connected fine before the router issues, and even in between the crashes and power cycles, now says it is connected and able to reach the Internet, but all requests time out. In every browser I've tried, the tab says "connecting to [site]..." for a solid minute and then reports that the request timed out. Pinging google.com from cmd also says "request timed out". In frustration, I booted into the dual-boot Ubuntu installation on that laptop, and to my surprise the connection works fine there, which is where I am now typing this rather long question.

    I haven't looked through the Windows event log yet, but I will post anything I find in an edit. I also haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests it's Windows and not the router, but I didn't change any wireless settings in Windows between when it could reach the Internet and when it couldn't. Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old, but I would like to know what's going on here. Thanks in advance! P.S. Sorry for how long my question is; I'm a little anxious.
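    Given that Ubuntu works on the same hardware, the Windows TCP/IP or Winsock stack is a prime suspect. A standard first step (generic Windows troubleshooting, not something confirmed for this case) is resetting both from an elevated command prompt and rebooting:

        netsh winsock reset
        netsh int ip reset
        ipconfig /flushdns
        ipconfig /release
        ipconfig /renew

    If requests still time out after that, comparing ipconfig /all in Windows against the working Ubuntu settings (gateway, DNS, MTU) usually narrows the difference down.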
