Search Results

Search found 15040 results on 602 pages for 'request servervariables'.


  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) caused by an external source, such as the database server being down or a stored procedure not existing. I thought I might write this up in case it helps someone. To reproduce the issue, I just rename the database to something different. The orchestration was failing at the point where I make a SQL request via a Request-Response Port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port". After scratching my head for a while (as a newbie to BTS 2004) to find a way to catch the exceptions from the SQLAdapter in an orchestration, here is the solution I found:
    · Put the Send and Receive shapes inside a Scope shape
    · Set the Scope's transaction type to "Long Running"
    · Add a Catch block expecting type "System.Exception"
    · Set the "Delivery Notification" of the associated Port to "Transmitted"
    · Change the "Retry Count" of the associated port to 0 (this makes sure BizTalk raises an exception, instead of a warning, so you can capture it)
    · Capture and handle the exception inside the Catch block, as in the sketch below
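    For instance, the Catch block's Expression shape could write the failure to the event log (a sketch; "ex" stands for whatever you named the exception object in the Catch block, and the event source name is made up):

        // Expression shape inside the Catch Exception block
        System.Diagnostics.EventLog.WriteEntry(
            "BizTalkOrchestration",                    // assumed event source
            "SQL send port failed: " + ex.Message);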

    Read the article

  • Log incoming requests on Ubuntu (ports 80, 443)

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service that is open to the internet. Sometimes it gets a sudden spike of traffic and goes down. There is nothing unusual in the Tomcat access logs. I guess this is because some of the requests are so 'heavy' that they never finish and hence are never recorded in the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests in the following format: Date, Time, URL (with query string params), IP address (of client)? There should be one line per request. Each request should be logged before it is executed. Only incoming requests to ports 80 and 443 should be logged.
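    One avenue (a sketch, with the caveat that iptables sees packets, not URLs): log each new connection in the kernel log before Tomcat ever handles it. Capturing the URL itself would need a sniffer such as tcpdump, since the request line lives in the TCP payload.

        # Log new inbound connections to ports 80/443 with timestamp and client IP
        iptables -I INPUT -p tcp --dport 80  -m state --state NEW -j LOG --log-prefix "HTTP-IN: "
        iptables -I INPUT -p tcp --dport 443 -m state --state NEW -j LOG --log-prefix "HTTPS-IN: "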

    Read the article

  • Does Apache 2.2 (Windows) have any default bandwidth limit?

    - by igino manfre'
    I'm running Apache on a server in the cloud (Windows Server 2008 R2 on VMware, 1 Gbps of bandwidth, http://95.110.164.61 ). I'm streaming many live DVB MPEG transport streams, pre-encoded and looping (not Flash), generated by VLC on ports 640xx and then reverse-proxied by Apache on port 80. The server's firewall is open for VLC and Apache on all ports. Above 1.5 Mbps, playback is affected by continuous stop & go. Please note that if you request a stream generated by VLC directly at http://95.110.164.61:64087/mpg2_6.4 you see a correct stream, while if you request http://95.110.164.61/mpg2_6.4 you do not. I know that Flash streaming server uses Apache to stream on port 80 (and it works). I'm not an expert with Apache; can anyone tell me if any "special" module is required to increase the bandwidth?
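    For reference, stock Apache 2.2 imposes no default bandwidth cap; throttling only appears if a third-party module such as mod_bw has been loaded, so the stutter is more likely proxy buffering or tuning than a built-in limit. The reverse-proxy piece for one of these streams presumably looks something like this sketch (path and port taken from the question; mod_proxy and mod_proxy_http assumed loaded):

        ProxyRequests Off
        ProxyPass        /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4
        ProxyPassReverse /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4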

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir: even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting: IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?
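    One avenue worth trying (an unverified sketch against the IIS 6 metabase; site ID 1 assumed): explicitly clear the inherited HttpRedirect property on the virtual directory from the command line, which sometimes sticks when the MMC checkbox does not.

        REM Clear the redirect inherited by /specialdir, then verify it
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/specialdir/HttpRedirect ""
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/1/ROOT/specialdir/HttpRedirect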

    Read the article

  • Ping myself works with IPv6 but not IPv4 in Windows 7

    - by user68546
    Hi! I've tried to solve the following problem with no luck and I need some professional help. The following is possible: pinging all computers (that I tried) in the domain without problems; pinging myself via localhost, which uses ::1; pinging myself with my given IPv6 address; internet access. The following is not possible: no one can ping me (request timeout) by computer name, IPv4 or IPv6 address, and I cannot ping myself with my given IPv4 address or 127.0.0.1 (request timeout). I have tried enabling/disabling TCP/IPv4: same issue. I turned off the Windows firewall and added an inbound rule to allow ICMP (just in case): same result. Does anyone out there have any idea what the issue could be? Any help would be most appreciated!
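    A few standard Windows command-line checks that might help narrow this down (run from an elevated prompt):

        :: confirm the IPv4 address is the expected one, not 169.254.x.x (APIPA)
        ipconfig /all
        :: force IPv4 explicitly on the loopback
        ping -4 127.0.0.1
        :: a missing or stale ARP entry would break IPv4 but leave IPv6 working
        arp -a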

    Read the article

  • ArchBeat Link-o-Rama for December 12, 2012

    - by Bob Rhubart
    "Cloud Integration in Minutes" – True or False? | Bruce Tierney
    The answer is "True, but..." according to Bruce Tierney. "Connecting on-premise and cloud applications 'in minutes' is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points." Get the rest of the story in Bruce's detailed post.

    Tech World Discovers New Species: The Cloud Architect | Wired Enterprise | Wired.com
    This Wired article by Cade Metz boils down to one essential conclusion: cloud computing is a significant departure from "data center designs of the past," and the demand for the specialized skills of the cloud architect will only increase. But you already knew that, right?

    Oracle B2B - Synchronous Request Reply | A-Team - SOA
    "Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet," says C. D. Wright of the Fusion Middleware A-Team. His post includes a demo and everything you need to run it.

    Thought for the Day
    "Don't worry about what anybody else is going to do… The best way to predict the future is to invent it." — Alan Kay
    Source: SoftwareQuotes.com

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have the links removed. I've already submitted a request to Google asking to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?
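    On the "old URLs" point, one standard step (a sketch; the paths are made-up placeholders) is to 301-redirect each old URL to its Drupal 7 replacement, e.g. in the site's .htaccess, so that Google replaces the stale entries instead of treating them as duplicates or 404s:

        # Permanently redirect a pre-rebuild URL to its new Drupal path
        Redirect 301 /old-page.html http://www.example.com/new-page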

    Read the article

  • Proxy HTTPS requests to an HTTP backend with NGINX.

    - by Mike
    I have nginx configured to be my externally visible webserver, which talks to a backend over HTTP. The scenario I want to achieve is:
    1. Client makes HTTPS request to nginx
    2. nginx proxies the request over HTTP to the backend
    3. nginx receives the response from the backend over HTTP
    4. nginx passes this back to the client over HTTPS
    My current config (where "backend" is configured correctly) is:

        server {
            listen 80;
            server_name localhost;

            location ~ .* {
                proxy_pass http://backend;
                proxy_redirect http://backend https://$host;
                proxy_set_header Host $host;
            }
        }

    My problem is that the response to the client (step 4) is sent over HTTP, not HTTPS. Any ideas?
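    For what it's worth, the block above only listens on port 80, so nginx never terminates TLS at all. A sketch of the missing piece (the certificate paths are placeholders):

        server {
            listen 443 ssl;
            server_name localhost;

            ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder path
            ssl_certificate_key /etc/nginx/ssl/server.key;   # placeholder path

            location / {
                proxy_pass http://backend;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;    # let the backend know
            }
        }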

    Read the article

  • Please explain some of the features of the URL Rewrite module for a newbie [closed]

    - by kunjaan
    I am learning to use the IIS URL Rewrite module, and some of the "features" listed on its page are confusing me. It would be great if somebody could explain them to me and give a first-hand account of when you would use each feature. Thanks a lot!
    · Rewriting within the content of specific HTML tags
    · Access to server variables and HTTP headers
    · Rewriting of server variables and HTTP request headers (what are the "server variables", and when would you define or redefine them?)
    · Rewriting of HTTP response headers
    · HtmlEncode function (why would you use HtmlEncode on the server?)
    · Reverse proxy rule template
    · Support for IIS kernel-mode and user-mode output caching
    · Failed Request Tracing support
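    To make "rewriting of server variables" concrete, here is a sketch of a web.config rule (the names are invented; note that any variable you set must first be listed under the module's allowed server variables):

        <rewrite>
          <rules>
            <rule name="PassOriginalUrl">
              <match url="^old/(.*)$" />
              <serverVariables>
                <!-- becomes the X-Original-Url request header seen by the app -->
                <set name="HTTP_X_ORIGINAL_URL" value="{REQUEST_URI}" />
              </serverVariables>
              <action type="Rewrite" url="new/{R:1}" />
            </rule>
          </rules>
        </rewrite>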

    Read the article

  • Persisting high score table in Flash game without a network (featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something to keep a record of these scores. The only problem is that the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic, like blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server would pretty much exist to keep a safe copy of the scores that the game can add to and request, and occasionally the server will write the scores to a flat text file. But HttpListener is giving me Error Code 87, 'The parameter is incorrect.' Any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:66666/");
        listener.Start();

    The exception happens at listener.Start(), and the stack trace is:

        at System.Net.HttpListener.AddAllPrefixes()
        at System.Net.HttpListener.Start()
        at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
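    Worth noting (a near-certain culprit): TCP ports only go up to 65535, so the "66666" in the prefix is itself the invalid parameter behind error 87. A minimal working variant:

        using System;
        using System.Net;

        class ScoreServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                // Any port <= 65535 works; 6666 is an arbitrary stand-in.
                listener.Prefixes.Add("http://localhost:6666/");
                listener.Start();
                Console.WriteLine("Listening for score requests...");
            }
        }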

    Read the article

  • Using command line to connect to a wireless network with an http login

    - by Shane
    I'm trying to connect to a wifi network that hijacks all requests and redirects you to a page where you have to agree to its terms of use before it lets you reach the actual outside world. This is a pretty common practice, and usually doesn't pose much of a problem. However, I've got a computer running Ubuntu 9.10 server with no windowing system. How can I use the command line to agree to the terms of use? I don't have internet access on the computer to download packages via apt-get or anything like that. Sure, I can think of any number of workarounds, but I suspect there's an easy way to use wget or curl or something. Basically, I need a command-line solution for sending an HTTP POST request, essentially clicking on a button. For future reference, it'd be helpful to know how to send a POST request with, say, a username and password, if I ever find myself in that situation in another hotel or airport.
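    Both wget and curl can do this (a sketch; the portal URL and form field names are invented, so read the real ones out of the portal's HTML form first):

        # Inspect the login page to find the form's action URL and field names
        curl -s http://portal.example.com/ | grep -i "<form\|<input"

        # Submit the "agree" form
        curl -d "agree=yes" http://portal.example.com/accept
        # or, with wget:
        wget --post-data "agree=yes" -O - http://portal.example.com/accept

        # The username/password case is the same idea:
        curl -d "username=me&password=secret" http://portal.example.com/login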

    Read the article

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical webserver with Apache2, common PHP scripts and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, e.g., that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication was present? In other words: could somebody from somewhere on the internet easily forge a request so that the webserver would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there is a risk, could it be dealt with at a lower level than the application, e.g. by using iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
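    For what it's worth, the Linux kernel already treats packets that claim a 127.0.0.0/8 source but arrive on a real interface as "martians" and drops them by default; a belt-and-braces iptables rule expressing the same policy would be:

        # Drop any packet claiming a loopback source that did not come in on lo
        iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP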

    Read the article

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a heavy real-time updating website. The amount of resources needed per user is quite high; I'll give you an example.

    Setup
    Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL.
    Every second (any more than a second would just be too long): the website needs to update in real time, so every second there is an AJAX call that updates the page. Resources: jQuery, Apache, PHP, MySQL.
    Average spend for a single user (spending one minute, visiting 3 pages):
    · Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    · PHP: +/- 63 requests/responses
    · MySQL: +/- 63 requests/responses
    · jQuery: +/- 60 requests/responses

    Optimization
    I want to optimize this process, but I suspect it might end up being much the same. Before implementing and testing (which will take weeks) I wanted to get some second opinions from you guys.
    Every visit: I want to start off by putting nginx in front, working as a proxy to deliver the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of load off Apache.
    Every second: for the script that runs every second, I want to set up Node.js (server-side JavaScript) with nginx in front. The idea is that jQuery makes a request once a minute, and node.js streams the data to the client every second (see the sketch below). Resources: jQuery, nginx, node.js, MySQL.
    Average spend for a single user (spending one minute, visiting 3 pages):
    · nginx: 4 requests/responses serving mostly static content (img, css, js)
    · Apache: 3 requests/responses, only the pages
    · PHP: 3 requests/responses, only the pages
    · node.js: 1 request / 60 responses
    · jQuery: 1 request / 60 responses
    · MySQL: 63 requests/responses

    As you can see, the optimization lifts load from Apache and PHP and places it on nginx and node.js, which are known for their light footprint and good performance. But I still have my doubts, because that is two extra programs loaded in memory, consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up, I would like to know whether it will be worth the while.
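    A sketch of the node.js piece described above (plain http module; the payload and port are invented):

        // One long-lived response per client; one pushed update per second.
        var http = require('http');

        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            var timer = setInterval(function () {
                // In the real app this data would come from MySQL
                res.write(JSON.stringify({ time: Date.now() }) + '\n');
            }, 1000);
            req.on('close', function () {   // client went away: stop pushing
                clearInterval(timer);
            });
        }).listen(8124);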

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i..
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache(2) server returning the correct Access-Control headers with this block:

        Header set Access-Control-Allow-Origin "*"
        Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (it's stored in the REQUEST_METHOD environment variable), returning 200 OK instead. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the Access-Control headers are lost:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]
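    One thing worth trying (a hedged suggestion): plain "Header set" only operates on ordinary successful responses, so the headers can vanish on internally generated ones; the "always" condition attaches them to every response, including the rewritten 200.

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"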

    Read the article

  • Choose IP Address for Process to use on launch [duplicate]

    - by user1436026
    This question already has an answer here: "How to set which IP to use for a HTTP request?"
    Say my server has the following IP addresses: 123.456.78.0, 123.467.79.1, 123.456.77.1, 123.456.68.0, etc. Say I want to launch a process, say wget, from the command line. Normally, I would do something like this:

        wget http://www.google.com/

    Except that I would like to choose the IP address that my server uses to make this request. Is there a way to use wget, or launch another command, with a choice of one of my own IP addresses, like the following pseudo-command?

        with-ip 123.456.68.0 wget http://www.google.com/
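    As it happens, wget supports this directly, and curl has an equivalent flag (shown with the question's placeholder address):

        wget --bind-address=123.456.68.0 http://www.google.com/
        curl --interface 123.456.68.0 http://www.google.com/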

    Read the article

  • Design question for a transportation agency/workflow system

    - by George2
    I am designing a transportation agency/workflow system involving three types of people: customers who request that some goods be transported, drivers who deliver the goods, and truck managers who coordinate source/destination trucking and communicate with and organize the drivers. The system is expected to be a website, and all three kinds of people could use it to submit requests, accept requests, monitor the status of a specific shipment, etc. The website is more like an open agency or a workflow system. I am wondering whether there are any existing technologies, tools or projects (preferably open source, but not a must) on which I could build my application faster? I prefer to use .NET technologies, but that's not a must either. Thanks in advance!

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which appear hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-side communication as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication:

        varnishlog -d -c -m ReqStart:1427305652

    while this only shows the resulting backend communication:

        varnishlog -d -b -m TxHeader:1427305652

    Is there a one-liner to show the entire request?
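    One possibility (an untested sketch using varnishlog 2.x/3.x syntax): -o groups log lines into whole transactions, and a trailing "tag regex" pair keeps only the groups where that tag matches, so each side's full conversation comes out together:

        varnishlog -d -o ReqStart 1427305652    # entire client-side transaction
        varnishlog -d -o TxHeader 1427305652    # entire matching backend transaction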

    Read the article

  • How can I make my browser(s) finish AJAX requests instead of stopping them when I switch to another page?

    - by Tom Wijsman
    I usually need to deal with things on a page right before switching to another page; this ranges from "liking/upvoting a comment or post" up to "an important action", and doesn't always come with feedback on whether the action actually went through. This is a huge problem! I assume the action will proceed once I start the particular AJAX request, but because I switch to another page it doesn't actually happen: the AJAX request gets aborted. Several times this has left me coming back to a page and seeing my action didn't take place at all; to give you an idea how bad this is, it even happened once when commenting on Super User! Is there a way to tell my browser(s) not to drop these AJAX connections but simply let them finish?

    Read the article

  • Ping reply not reaching LAN machines but seen on Linux router gateway

    - by Kevin Parker
    I have configured Ubuntu 12.04 as a gateway machine. It has two interfaces: eth0 with IP 192.168.122.39 (static), and eth1 connected to the modem with IP address 192.168.2.3 (through DHCP). IP forwarding is enabled on the router box. The client machine is configured with IP address 192.168.122.5 and gateway 192.168.122.39. Client machines can ping the router box (192.168.122.39), but when they ping 8.8.8.8 the reply never reaches them; in the tcpdump output on the gateway I can see the echo request for 8.8.8.8 but never the echo reply. Is this because requests from the 192.168.122.0 network are not being forwarded to the 192.168.2.0 network? Can you please help me fix this?
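    One likely cause (a sketch based on the symptoms described): without NAT, the modem sends its replies to 192.168.122.5, an address it has no route back to, which matches "echo request seen, echo reply never returns". Masquerading the LAN behind eth1 would be:

        iptables -t nat -A POSTROUTING -o eth1 -s 192.168.122.0/24 -j MASQUERADE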

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3   <-- From <Location /foo/bar> in myserver.com vhost
        Require valid user       <-- From <Directory /var/www/foo> in global configuration
        Satisfy any              <-- From <File bar.html> in global configuration

    [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which lets you check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log this information for each incoming request?
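    Not the interactive explainer asked for, but stock Apache can show fragments of this picture (standard commands):

        apachectl -S                    # how vhosts and their config files were parsed
        apachectl -t -D DUMP_MODULES    # which modules (and thus directives) are active

    mod_info's /server-info handler also dumps the fully merged configuration per context, which helps when checking what a given <Directory> or <Location> block ended up containing.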

    Read the article

  • How to choose between using a Domain Event, or letting the application layer orchestrate everything

    - by Mr Happy
    I'm taking my first steps into domain-driven design; I bought the blue book and all, and I find myself seeing three ways to implement a certain solution. For the record: I'm not using CQRS or Event Sourcing. Let's say a user request comes into the application service layer. The business logic for that request is (for whatever reason) separated into a method on an entity and a method on a domain service. How should I go about calling those methods? The options I have gathered so far are:
    1. Let the application service call both methods.
    2. Use method injection/double dispatch to inject the domain service into the entity, letting the entity do its thing and then call the method on the domain service (or the other way around, letting the domain service call the method on the entity).
    3. Raise a domain event in the entity method, a handler of which calls the domain service. (The kind of domain events I'm talking about are described at http://www.udidahan.com/2009/06/14/domain-events-salvation/; see the sketch after this list.)
    I think these are all viable, but I'm unable to choose between them. I've been thinking about this a long time, and I've come to a point where I no longer see the semantic differences between the three. Do you know of some guidelines on when to use what?
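    For reference, option 3 in the style of the linked Udi Dahan article looks roughly like this (a sketch; the entity, event and handler names are invented):

        // The entity raises the event; a registered handler calls the domain service.
        public class Order
        {
            public void Ship()
            {
                // ... entity-level business logic ...
                DomainEvents.Raise(new OrderShipped(Id));
            }
        }

        public class OrderShippedHandler : Handles<OrderShipped>
        {
            public void Handle(OrderShipped args)
            {
                // delegate to the domain service here
            }
        }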

    Read the article

  • mod_rewrite SSL redirect

    - by Thomas
    Hi all, I want to use mod_rewrite to ensure that certain pages are served over SSL and all others normally, but I am having trouble getting it to work. This works (redirect to SSL when the request URI is for users or cart):

        RewriteCond %{SERVER_PORT} 80
        RewriteCond %{REQUEST_URI} users [OR]
        RewriteCond %{REQUEST_URI} cart
        RewriteRule ^(.*)$ https://secure.host.tld/$1 [R,L]

    So, to ensure a user doesn't keep browsing the site over SSL when requesting other URIs, I tried the following (when the port is 443 and the request URI is not one of the URIs that need SSL, redirect back to the normal host), but it doesn't work:

        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{REQUEST_URI} !^/users [OR]
        RewriteCond %{REQUEST_URI} !group
        RewriteRule ^/?(users|groups)(.*)$ http://host.tld/$1 [R,L]

    Any help? Thanks
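    Two things stand out in that second block (a hedged diagnosis with a sketch): with [OR] between two negated conditions the combined test is true for almost every URL, so it excludes nothing; and the rule pattern ^/?(users|groups)(.*)$ only matches the very URLs that should stay on SSL. Something along these lines is probably closer (URI names taken from the first block; the conditions are implicitly ANDed):

        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{REQUEST_URI} !users
        RewriteCond %{REQUEST_URI} !cart
        RewriteRule ^(.*)$ http://host.tld/$1 [R,L]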

    Read the article

  • Running Tomcat 7 and Apache 2 on the same server

    - by Thorn
    Part of my site needs to run over HTTPS, and I'm creating a sub-domain for that part. I have Apache httpd 2 AND Tomcat 7 running on the same server with the same IP; Apache is on port 80, of course, while Tomcat is running on port 8080. Right now I am doing domain forwarding for requests that need to run off Tomcat. For example, mathteamhosting.com/mathApp can forward to mathteamhosting.com:8080/mathApp. I would like to have Tomcat handle the HTTPS requests for that subdomain; I don't think this forwarding technique can work in this case. How do I set things up so that Tomcat receives the requests on port 443 while Apache handles port 80? To be more specific:
    http://proctinator.com == request goes to the Apache web server
    https://private.proctinator.com == request goes to Tomcat
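    Since Apache only binds port 80 here, Tomcat can take port 443 itself with an HTTPS connector in server.xml (a sketch; the keystore path and password are placeholders, and the certificate must cover private.proctinator.com):

        <Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
                   clientAuth="false" sslProtocol="TLS" />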

    Read the article

  • WAMP server REDIRECT_STATUS

    - by user143039
    I have noticed that my remote server returns $_SERVER['REDIRECT_STATUS'] with every request, for example: [REDIRECT_STATUS] = 200. But my localhost WAMP server does not. Interestingly, when I manually set a header, for example:

        header('HTTP/1.1 302 Redirect This');

    the variable $_SERVER['REDIRECT_STATUS'] is still not set, though I can view the headers, including the one I set, with HttpFox. $_SERVER['REDIRECT_STATUS'] is set when .htaccess triggers an ErrorDocument, like ErrorDocument 404 for example. Any ideas how to get the WAMP server to set $_SERVER['REDIRECT_STATUS'] with every request?
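    Some background plus a workaround sketch: REDIRECT_STATUS normally appears when Apache performs an internal redirect (an ErrorDocument, or PHP run as CGI/FastCGI via an Action handler), which is probably what the remote host does; WAMP's mod_php involves no internal redirect, so the variable is never created. mod_rewrite can set it explicitly in .htaccess (untested):

        RewriteEngine On
        # E= can create the variable under its literal name on every request
        RewriteRule .* - [E=REDIRECT_STATUS:200]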

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:
    · User opens page to view web camera image.
    · JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    · An FTP connection is enabled for the camera's FTP user.
    · The web camera opens an FTP connection to the server.
    · The web camera begins taking photos.
    · The web camera sends each photo to the FTP server.
    On each image URL request:
    · The server reads the latest image for that camera from the hard drive, where it was uploaded via FTP.
    · The server deletes any older images from the disk.
    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of the files being read from the local hard drive, the web server would open an FTP connection and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.
    The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. The web camera issues POST requests of the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:
    · User opens page to view web camera image.
    · JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    · The camera is sent an HTTP request to start posting images to a provided URL.
    · The web camera begins taking photos.
    · The web camera sends POST requests to the server as fast as it can.
    On each image URL request:
    · The server reads the latest image from Redis.
    · The server tells Redis to delete the older image.
    My questions are:
    · Are there any greater overheads to transferring images via HTTP instead of FTP?
    · Is there a simple way to calculate how many potential cameras we could have streaming at once?
    · Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    · Is Redis a good solution to this problem?
    · Should I abandon the PHP/Nginx combination and go for something else?
    · Is this proposed solution actually any good?
    · Will adding HTTPS to the mix cause posting the images to become too slow?
    Thanks in advance, Alan
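    A sketch of the Redis-backed pieces as described (PHP with the phpredis extension assumed; the key names and the camera_id parameter are invented):

        <?php
        // upload.php -- the URL the cameras POST frames to
        $redis  = new Redis();
        $redis->connect('127.0.0.1', 6379);
        $camera = $_GET['camera_id'];                 // how cameras identify themselves is an assumption
        $image  = file_get_contents('php://input');   // raw POST body (the JPEG frame)
        $redis->set("camera:$camera:latest", $image); // SET overwrites, so the old frame disappears

        // latest.php -- the URL the browsers poll every second
        header('Content-Type: image/jpeg');
        echo $redis->get("camera:{$_GET['camera_id']}:latest");

    One design note: because SET simply replaces the previous value, the explicit "delete the older image" step from the flow above falls away; only ever one frame per camera lives in memory.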

    Read the article
