Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 443/683

  • PHP script unable to email on OpenBSD Apache

    - by MattC
    I have a webserver running OpenBSD 4.7 and PHP 5.2.12 out of the ports tree. There is a small contact page that is supposed to send an email to a specific address. When I fill in the form using a web browser, it sends the AJAX request to the PHP page, which claims it worked successfully, but there is no email. The maillog is empty as well. I created a small PHP script that replicates this functionality, and when I run it by hand using the "php -f" command, it sends an email without a problem. I think this has to do with being chrooted, but I can't seem to get it to work. Furthermore, I can't seem to get PHP to log: I told it to log to /var/www/logs/php_errors.log and restarted, but can't get it to send anything to the file. Does anyone have any tips for debugging this sort of thing on OpenBSD?
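
    A minimal sketch of the kind of check that could narrow this down; the recipient address and the note about sendmail are assumptions (under OpenBSD's chrooted Apache, PHP's mail() can only work if the sendmail binary it is configured to call actually exists inside /var/www):

        <?php
        // Force PHP errors somewhere visible. The path is interpreted relative to the
        // chroot when run under the chrooted Apache, and as-is under "php -f".
        ini_set('log_errors', '1');
        ini_set('error_log', '/logs/php_errors.log'); // /var/www/logs outside the chroot

        $ok = mail('you@example.com', 'chroot mail test', 'sent at ' . date('c'));
        error_log('mail() returned: ' . var_export($ok, true));
        error_log('sendmail_path is: ' . ini_get('sendmail_path'));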

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in the SERPs, offering to translate the page into the language of a small Eastern European country from the language the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation settings through Google Webmaster Tools... all to no avail. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are no longer active (they either return 404 or 301 to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it picked up a lot of possibly spammy links in that language. I can't even ask the linking sites to remove those links, because the pages they pointed to no longer physically exist - they only show up in Google Webmaster Tools and/or Google's internal data... Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page, and also not to count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)

    Read the article

  • ErrorDocument 404 not found in non-existent subdomain

    - by Question Overflow
    I am trying to get the Apache server to issue a custom 404 error for invalid subdomains. The following is the relevant part of the httpd configuration:

        Alias /err/ "/var/www/error/"
        ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var

        <VirtualHost *:80>
            # the default virtual host
            ServerName site_not_found
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias ??.example.com
        </VirtualHost>

    What I get instead is this:

        Not Found
        The requested URL / was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    I don't understand why a URL to non-existent-subdomain.example.com produces a 404 without the custom error page, as shown above, while a URL to eg.example.com/non-existent-file produces the full custom 404 error. Can someone advise on this? Thanks.
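
    One possible reading of the "Additionally, a 404 Not Found error was encountered..." message is that the ErrorDocument's internal redirect to /err/HTTP_NOT_FOUND.html.var is itself caught by the blanket Redirect 404 / in the catch-all vhost, so Apache falls back to its built-in page. A sketch of a catch-all vhost that keeps the error document reachable - the ServerName and paths are illustrative, not taken from the original config, and this is untested:

        <VirtualHost *:80>
            ServerName catchall.invalid          # hypothetical name for the default vhost
            DocumentRoot "/var/www/error"
            ErrorDocument 404 /HTTP_NOT_FOUND.html.var
            # Return 404 for everything except the error document itself
            RedirectMatch 404 "^/(?!HTTP_NOT_FOUND\.html\.var$)"
        </VirtualHost>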

    Read the article

  • Cisco RV016 dual WAN and VPN setup

    - by sklr
    I have a VPN of several RV016 routers and I want to set some of them up with two ISPs. I plug the two ISP cables into WAN ports 1 and 2 and configure the router to "Intelligent Balancer (Auto Mode)". It works OK like that, but the VPN tunnels that I set up use the public IP of the provider. For example, if I have 5 VPN tunnels set up for ISP1 (WAN1) and the balancer sends the request through WAN2, it can't use any of the configured VPN tunnels because the public IP is different. How do I deal with this problem?

    Read the article

  • I get a 403 when requesting a JS file from CloudFront

    - by Roland
    This is new to me, so please excuse me if I have no idea what I'm talking about (: I'm trying to set up my own CDN with CloudFront and S3 on a subdomain, by adding a CNAME for that subdomain pointing to the CloudFront distribution. I get a 403 when trying to load the file. This is the original S3 link: https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js - it works after setting the permission for everyone to open/download. But when I use the subdomain to request the file - http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js - I get that 403. Could anyone help me out with this one?

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple webservice and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server but is not necessarily secret [D]; and a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this webservice to be easily accessible and not have trouble with different file formats. I was thinking of an alternative, which would allow the use of a short (5-10 character) random string [R] rather than the message for authentication, e.g. H = HMAC(K, R). The user then passes the random string to the server and the server checks the HMAC server-side (using the random string + shared secret). As far as I can see, this produces the following issues: there is no message integrity (this is OK - message integrity is not important for this service); a user could re-use the hash with a different message (I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once); and since the client is in control of the random string, it is easier to look for collisions. I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
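
    For reference, a sketch in PHP of the proposed scheme with the timestamp variant; the SHA-256 choice, variable names and the idea of sending the values as headers are illustrative assumptions, not part of the original design:

        <?php
        // Client side: H = HMAC(K, R . T), where R is a short random nonce and T a
        // Unix timestamp, sent alongside the consumer ID D.
        $sharedSecretK = 'replace-with-shared-secret';
        $consumerIdD   = 'consumer-123';
        $nonceR        = bin2hex(openssl_random_pseudo_bytes(5)); // 10 hex chars
        $timestampT    = time();
        $hashH = hash_hmac('sha256', $nonceR . $timestampT, $sharedSecretK);
        // The request then carries D, R, T and H (e.g. as custom headers); the server
        // looks up K from D, recomputes the HMAC, checks T is inside its allowed
        // window, and rejects any R it has already seen.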

    Read the article

  • Wireshark Not Displaying Packets From Other Network Devices, Even in Promisc Mode

    - by eb80
    System setup: (1) MacBook running Mountain Lion; (2) Wireshark installed and capturing packets (I have "capture all in promiscuous mode" checked); (3) I filter out all packets with my own source and destination IP using the filter ip.dst != 192.168.1.104 && ip.src != 192.168.1.104; (4) on the same network as the MacBook, I use an Android device (connected via WiFi) to make HTTP requests. Expected result: Wireshark running on the MacBook sees the HTTP requests from the Android device. Actual result: I only see SSDP broadcasts from 192.168.1.1. Question: what do I need to do so that Wireshark, like Firesheep, can see and use the packets (particularly HTTP) from other network devices on the same network?

    Read the article

  • Can't access an EC2-hosted website

    - by Himanshu Page
    For some reason, I am unable to access our website www.doccaster.com ("Bad request nginx"). We are hosted on Amazon EC2 with an Elastic IP associated with the instance. The weird part is: (a) I can access it through the public DNS URL http://ec2-184-73-195-180.compute-1.amazonaws.com, and (b) my co-founder, who is located in another city, can access it via www.doccaster.com. I observed that my instance was failing its reachability check, so I launched a new one and assigned it the Elastic IP. I tried to ping the IP address 184.73.195.180 from my machine, but with no success. Any help will be really appreciated. More details: I ran netstat -lntp | grep -E 'apache|httpd' on my server and it displays :::80 for httpd. Is this accurate? Should it be 0:0:0:80, or doesn't it matter?

    Read the article

  • Is it possible to extend a 504 timeout in nginx on a per location basis

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx returning a 504 from a long-running PHP script (PHP-FPM)?

        location /myurlsegment/ {
            client_body_timeout 1000000;
            send_timeout 1000000;
            fastcgi_read_timeout 1000000;
        }

    This has no effect when making a request to example.com/myurlsegment; the timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts.
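
    One detail worth checking (an assumption, since the rest of the config isn't shown): fastcgi_read_timeout only takes effect in the location that actually contains the fastcgi_pass handing the request to PHP-FPM, so placing it in a plain prefix location will not override the 60-second default used by a separate PHP location. A sketch of scoping the long timeout to this path only, with an assumed PHP-FPM socket:

        location /myurlsegment/ {
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed socket path
                fastcgi_read_timeout 1000000;
                fastcgi_send_timeout 1000000;
            }
        }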

    Read the article

  • How to host an ASP.NET application externally?

    - by Josh
    I have an ASP.NET application that I can get to locally by going to 192.168.1.102:81/TestApp. I would like to host the application externally by going to domain.com:81/TestApp (I already have my domain pointing to my router and this works fine - I have apache running on port 80 on another server). I modified the router settings to point any request coming through port 81 to 192.168.1.102. I am still having trouble accessing the ASP.NET site (I get the error message that "This link appears to be broken"). Am I missing something? How can I redirect domain.com:81/TestApp to my ASP.NET application? Thanks.

    Read the article

  • Kerberos authentication between 2 applications

    - by Spivi
    We work in a Server 2003 and Server 2008 R2 environment. I'm familiar with the basic usage of the Kerberos protocol, where the protocol authenticates a client when it tries to use a shared resource (server, folder, printer, etc.). We have three distinct and independent .NET applications that we develop in-house (app A, app B & app C), but they need to communicate for a given reason (A receives messages only from B and C, and C receives messages only from B). Is it possible to configure the Kerberos services to authenticate messages/requests between two .NET apps? (Instead of user-server authentication, we would have application-application authentication.)

    Read the article

  • Mod_rewrite issue with GoDaddy web hosting

    - by MrFoh
    I am trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error logs show this:

        AH00124: Request exceeded the limit of 10 internal redirects due to probable
        configuration error. Use 'LimitInternalRecursion' to increase the limit if
        necessary. Use 'LogLevel debug' to get a backtrace.

    And the .htaccess file is this:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The webroot has multiple sub-folders which are document roots for different domains, and I am working with one of these sub-folders. What is causing this error and how can it be fixed?

    Read the article

  • 12.10 Wireless networking

    - by user108594
    I downloaded Ubuntu 12.10 using Wubi and cannot connect to the internet. I removed it and downloaded Ubuntu 12.04, and still cannot connect; this, I assume, rules out the program being the problem. I reinstalled 12.10. When it loaded I got the same message with a red (x) stating the internet is not connected. I went to the settings drop-down box and it does not show the network list, but "Enable Networking" has a check mark. I am running an HP laptop with Windows 7 64-bit that has a wireless kill switch which indicates orange (no connection). I downloaded 12.10 on my desktop (on the same network) and everything is OK. I tried to follow the instructions in the help menu but got lost and confused. Sincerely, Dan. Additional info per request: Broadcom 802.11b/g WLAN, internal, PC [HP laptop]. P.S. I've been out of town for about a month, thanks for getting back to me. I did install 12.10 via CD and everything was OK, but when I retried it alongside Windows 7 I was unable to connect to the internet. I also took the laptop and hard-wired it using an ethernet cable and everything was OK. Stumped again and running out of ideas!

    Read the article

  • Removing values from an array in a foreach (PHP)

    - by user104531
    I have an array like this:

        Array
        (
            [0] => Array
                (
                    [id] => 68
                    [type] => onetype
                    [type_id] => 131
                    [name] => name1
                )
            [1] => Array
                (
                    [id] => 32
                    [type] => anothertype
                    [type_id] => 101
                    [name] => name2
                )
        )

    I need to remove some of the entries from it depending on whether the user has permission to see that type. I am thinking of doing it with a foreach, with the needed ifs inside to either remove an entry or leave it as is. My question is: what's the most efficient way to do this? The array will have no more than 100 records, but several users will request it and do the filtering over and over.
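
    For comparison, a sketch using array_filter instead of a manual foreach; $allowedTypes and the in_array check are hypothetical stand-ins for whatever permission check actually applies:

        <?php
        // $rows is the array shown above.
        $allowedTypes = array('onetype');
        $visible = array_filter($rows, function ($row) use ($allowedTypes) {
            return in_array($row['type'], $allowedTypes, true);
        });
        $visible = array_values($visible); // re-index if sequential keys are needed

    For ~100 records per request the difference from a foreach is negligible; readability is the main reason to prefer one over the other.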

    Read the article

  • getaddrinfo(3) failed

    - by user101289
    I'm trying to connect to a webservice using a PHP wrapper (which is using curl under the covers). On my local Linux machine running PHP 5.3 it works perfectly. However, when I move to a remote server (also running PHP 5.3 on Linux), the call to the webservice URL returns: getaddrinfo(3) failed for http://server.host.com:8080/login. I get a similar error from a ping on the remote host: ping: unknown host http://server.host.com:8080/login. But when I issue a curl request from the command line, it returns the expected URL. Can anyone shed any light on this issue? Thanks!

    Read the article

  • DNS works, can ping, but cannot load web pages in browser

    - by user1224595
    Yesterday I changed routers, and my desktop computer started acting up. I could ping websites, and nslookup was able to resolve names to addresses, but neither Chrome, Firefox, nor IE could load any web pages. None of my other computers connected to the same wireless router have any problems. I connect my desktop to the router through a cheap WiFi dongle. I did a Wireshark capture of the browser request, and I have uploaded the pcap here: https://drive.google.com/file/d/0B7AsPdhWc-SwbTV0bUJLQXo4UUE/edit?usp=sharing. One strange thing I noticed was the spamming of SSDP packets. I am not super familiar with networking, but it seems that it is not a problem with the router, as DNS works and so does DHCP (the desktop is assigned an address correctly). Any help would be appreciated.

    Read the article

  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that is using Windows Authentication. Response time is very slow on the first request to the site, when the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by many seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow access to static files for all authenticated users to see if that helps. The authentication method in use is NTLM. How should we go forward in pinpointing why authentication degrades performance so drastically?

    Read the article

  • Exchange 2007 mailbox reassignment

    - by John Virgolino
    I am trying to move a mailbox from one user account to another within a single AD domain. We are using Exchange 2007. I have followed these steps: disable the old account; then, from the "Disconnected Mailbox" container, connect the mailbox to the new account (I get a success message). When I try to log in to OWA using the new user account, I get this message:

        Outlook Web Access could not connect to Microsoft Exchange. If the problem continues, contact technical support for your organization.
        Request Url: https://mail.somedomain.com:443/owa/default.aspx
        User host address: 1.2.3.4
        Exception type: Microsoft.Exchange.Data.Storage.ConnectionFailedTransientException
        Exception message: Cannot open mailbox /o=cgsexchangeorganization/ou=exchange administrative group (fydibohf23spdlt)/cn=recipients/cn=someuser.

    NOTE: I changed some identifying information for security purposes. I have tried multiple times and end up in the same place. When I log in to OWA with the old account, I get an error that the mailbox cannot be found, which makes total sense. Does anybody have any ideas on this? Thanks!

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's proxy balancer to balance one subdomain (e.g. subdomain.domain.com) to an application which is located on two servers. Here is an extract from my Apache configuration file:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        <Proxy balancer://cluster1>
            BalancerMember http://server1:28081 route=w1
            BalancerMember http://server2:28082 route=w2
        </Proxy>

        ProxyPass /path balancer://cluster1/path
        ProxyPassReverse /path balancer://cluster1/path

    My question is whether it's possible to decide, based on the source IP address, which BalancerMember should be used for a request - e.g. send requests from 1.2.3.4 to member 1?
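
    As far as I know, mod_proxy_balancer has no built-in source-IP routing, so one commonly suggested workaround is to proxy matching clients past the balancer with mod_rewrite before the ProxyPass applies. A sketch, not taken from the original config and not tested (the client IP is a placeholder):

        RewriteEngine On
        # Send this one client's requests straight to member w1 ...
        RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.4$
        RewriteRule ^/path/(.*)$ http://server1:28081/path/$1 [P,L]
        # ... every other client keeps using the existing
        # ProxyPass /path balancer://cluster1/path

    Requests proxied this way may also need a matching ProxyPassReverse http://server1:28081/path entry so redirects from that member are rewritten correctly.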

    Read the article

  • Apache LocationMatch throws 500 and AddOutputFilterByType does nothing

    - by tackleberry
    I need to add the directives below to Apache, but I get a 500 when I add these lines:

        <LocationMatch "^/assets/.*$">
            Header unset ETag
            FileETag None
            # RFC says only cache for 1 year
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
        </LocationMatch>

    Additionally, the response is not gzipped when I add:

        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript

    The Apache version is Apache/2.2.22 (Unix); the app is a Rails 3.2 app. When I checked the request and response headers for the gzip problem, I see that the browser requested gzip (Accept-Encoding: gzip, deflate) but the response is not gzipped.
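
    A frequent cause of a 500 with exactly these directives is that the module providing one of them (mod_headers, mod_expires or mod_deflate) is not loaded, or that they were placed in a context where they are not allowed (<LocationMatch> cannot be used in .htaccess). A sketch that guards each directive with its module, purely as an illustration of how to isolate which piece fails - the module names are standard, everything else is assumed:

        <IfModule mod_headers.c>
          <LocationMatch "^/assets/.*$">
            Header unset ETag
          </LocationMatch>
        </IfModule>

        <IfModule mod_expires.c>
          <LocationMatch "^/assets/.*$">
            # RFC says only cache for 1 year
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>
        </IfModule>

        <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript
        </IfModule>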

    Read the article

  • Problems setting Hyper-V permissions

    - by Drew Burchett
    I am using a Windows 2012 Hyper-V server to host some test PCs. Our support personnel should be able to take snapshots of these machines and roll a test machine back to a specific snapshot, but they should not have any other permissions. I have followed the directions in this article and, at the suggestion of another article, have added the specific AD group to the local Hyper-V Administrators group, but whenever one of the support personnel attempts to connect to the server to take a snapshot, they get an error stating that they do not have permission to connect to that server. I'm sure I'm missing something, but at this point I'm at a loss as to what it would be. Can anyone tell me how to properly set these permissions? Edit: per request, I am attaching a screenshot of the permissions I have set for this group.

    Read the article

  • Google Webmaster Tools reports soft 404 on a 301

    - by Daniel
    I'm seeing in Google Webmaster Tools that my page is generating soft 404 errors (https://support.google.com/webmasters/answer/181708?hl=en). Google says: "We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page." But I've got redirects set up that send old pages to the proper new pages using a 301. The website's links changed because of the use of a framework, which makes them more consistent, but the old pages still have links out there pointing to them. Should I be worried about this? Is Google penalizing the site for this? (Using IIS 8, Tomcat, CF10, Win)

    Read the article

  • How to reduce the CPU load on a hosting with WordPress installed as a CMS? [on hold]

    - by Akky Awesøme
    I have been using HostGator's Hatchling plan for three months. I got an email from the host saying that my website is creating an overload on the CPU. They said that I am eating up their processor and, as a precaution, they have temporarily suspended my account. When I contacted their customer support, they said: "You have to optimize your database and use some sort of caching mechanism, where the script does not need to generate a new page with every request; that helps to lower the load that a script will cause." I am not a technical geek, and I am wondering how I will do this. I don't have the resources to hire a web developer to do this job, and my website has been down for 48 hours. I was using WP Super Cache along with CloudFlare's free service. Now I have installed the optimize-db plugin and optimized my database. Please provide me with some more tips on how to optimize my database and reduce CPU usage. Any help would be appreciated.

    Read the article

  • My laptop (HP/Compaq 2510p) running Ubuntu 10.04 LTS keeps losing the WLAN connection

    - by Ernelli
    I am using Wicd and can successfully connect to my ADSL router (Thomson TG 787) using WPA-PSK, but at regular intervals I lose the ability to connect to the Internet. I can ping the gateway, and can actually ping servers on the Internet, but not connect to them using HTTP (tested with both Firefox and wget). I would suspect the router were it not for the fact that the problem does not show up when running Windows XP on the same computer, and also that, when the problem arises, a simple disconnect/connect in Wicd solves it, which does not involve the router (except for the DHCP request). I have searched the Ubuntu forums without luck; most problems described relate to specific network drivers or other issues. Does anyone have the same experience with Linux/Ubuntu and WLAN?

    Read the article
