Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 217/683 | < Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >

  • How to install flareGet_1.0-1(beta)_deb_rpm.tar.gz?

    - by Suhail cholassery
    I have heard a lot about the Flareget download manager and I wanted to install it on my computer. I visited the website, downloaded the file flareGet_1.0-1(beta)_deb_rpm.tar.gz and saved it to /home/suhailcholassery/Download/flareGet_1.0-1(beta)_deb_rpm.tar.gz. The thing is that I am new to Ubuntu and don't know how to install it. Following the readme instructions enclosed in the file, I tried typing the following command in the terminal: yum install flareGet-1.0-1.i386.rpm, but the reply was: You need to be root to perform this command. I don't know how that is done. Isn't there a simpler way to install this package?
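
    A minimal sketch of the usual approach on Ubuntu, assuming the archive contains a .deb package alongside the .rpm (the exact file name inside the tarball is an assumption); sudo runs the install as root, which is what the error message is asking for:

        cd ~/Download
        tar -xzf "flareGet_1.0-1(beta)_deb_rpm.tar.gz"
        # install the Debian package rather than the RPM (yum is Fedora/Red Hat tooling)
        sudo dpkg -i flareGet-1.0-1_i386.deb
        # pull in any dependencies dpkg could not resolve on its own
        sudo apt-get -f install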

    Read the article

  • Is the Spark View Engine for ASP.NET dead?

    - by AUser
    Is the Spark View Engine for ASP.NET dead? I'm considering using it for a new project. I tried to get approved for the Google Group, but nobody would approve me; the last message posted there was in May. I tried emailing the developers, but nobody replied. At the moment I'm not feeling good about using Spark for a major project of mine. Is this project now dead, especially after Razor came out?

    Read the article

  • Proxy HTTPS requests to an HTTP backend with NGINX

    - by Mike
    I have nginx configured as my externally visible webserver, which talks to a backend over HTTP. The scenario I want to achieve is: the client makes an HTTPS request to nginx; nginx proxies the request over HTTP to the backend; nginx receives the response from the backend over HTTP; nginx passes this back to the client over HTTPS. My current config (where backend is configured correctly) is:

        server {
            listen 80;
            server_name localhost;
            location ~ .* {
                proxy_pass http://backend;
                proxy_redirect http://backend https://$host;
                proxy_set_header Host $host;
            }
        }

    My problem is that the response to the client (step 4) is sent over HTTP, not HTTPS. Any ideas?
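
    A minimal sketch of the missing piece, assuming certificate files of your own (the paths are placeholders): to return the response over HTTPS, nginx itself has to terminate SSL, which means a server block listening on 443 with ssl enabled rather than on port 80.

        server {
            listen 443 ssl;
            server_name localhost;

            ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder path
            ssl_certificate_key /etc/nginx/ssl/server.key;   # placeholder path

            location / {
                proxy_pass http://backend;                   # the backend still speaks plain HTTP
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;    # tells the backend the original scheme
            }
        }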

    Read the article

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a website with heavy real-time updates. The amount of resources needed per user is quite high; I'll give you an example.

    Setup

    Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL.

    Every second (anything longer than a second would just be too long): the website needs to be updated in real time, so every second there is an AJAX call that updates the page. Resources: jQuery, Apache, PHP, MySQL.

    Average spending for a single user (spending one minute and visiting 3 pages):
    Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    PHP: +/- 63 requests/responses
    MySQL: +/- 63 requests/responses
    jQuery: +/- 60 requests/responses

    Optimization

    I want to optimize this process, but I think it might end up being just the same. Before implementing and testing (which will take weeks) I wanted to get some second opinions from you guys.

    Every visit: I want to start off with nginx in front, acting as a proxy that delivers the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of load off Apache.

    Every second: for the script that loads every second I want to set up Node.js (server-side JavaScript) behind nginx. I want jQuery to make one request per minute, with Node.js streaming the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL.

    Average spending for a single user (spending one minute and visiting 3 pages):
    nginx: 4 requests/responses, serving mostly static content (img, css, js)
    Apache: 3 requests, only the pages
    PHP: 3 requests, only the pages
    Node.js: 1 request / 60 responses
    jQuery: 1 request / 60 responses
    MySQL: 63 requests/responses

    As you can see, in the optimized setup the load from Apache and PHP is lifted and placed on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still two extra programs loaded in memory consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up I would like to know whether it will be worth the while.
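
    A minimal sketch of the nginx front end described above, assuming Apache listens on port 8080 and Node.js on port 3000 (the ports, paths and domain are assumptions):

        server {
            listen 80;
            server_name example.com;

            # static files are served directly by nginx
            location ~* \.(css|js|png|jpg|gif|ico)$ {
                root /var/www/example;
                expires 1h;
            }

            # the once-a-minute polling/streaming endpoint goes to Node.js
            location /stream {
                proxy_pass http://127.0.0.1:3000;
            }

            # everything else (the dynamic PHP pages) still goes to Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }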

    Read the article

  • Using the command line to connect to a wireless network with an HTTP login

    - by Shane
    I'm trying to connect to a wifi network that hijacks all requests and redirects you to a page where you have to agree to a terms of use before it lets you through to the actual outside world. This is a pretty common practice, and usually doesn't pose much of a problem. However, I've got a computer running Ubuntu 9.10 Server with no windowing system. How can I use the command line to agree to the terms of use? I don't have internet access on the computer to download packages via apt-get or anything like that. Sure, I can think of any number of workarounds, but I suspect there's an easy way to use wget or curl or something. Basically, I need a command-line solution for sending an HTTP POST request, essentially the equivalent of clicking a button. For future reference, it'd also be helpful to know how to send a POST request with, say, a username and password, if I ever find myself in that situation in another hotel or airport.
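
    A minimal curl sketch, assuming the portal's form posts to a /login URL with an "accept" field; the gateway address, path and field names are assumptions you would read out of the portal page's HTML first:

        # fetch any page: the portal intercepts it, so save its HTML and look at the <form>
        curl -s -L http://example.com -o portal.html

        # submit the form; the field names below are placeholders taken from portal.html
        curl -v -d "accept_terms=yes&submit=Agree" http://10.0.0.1/login

        # the same idea with a username and password
        curl -v -d "username=guest&password=secret" http://10.0.0.1/login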

    Read the article

  • Should a new programmer nowadays start with C/C++ or OOP language? [closed]

    - by deviDave
    I've been a programmer for 15+ years. In my time we all started with C or C++ and then moved to C# or Java; at the time that was the usual path. Now my brother wants to follow in my steps and I am not sure what advice to give him, so I am asking the community for an opinion. Should a new programmer with zero programming knowledge nowadays start with procedural languages (C, C++, etc.), or should he start directly with OOP languages (Java, C#, etc.)? The answer should be considered in the context of my brother's future assignments: he will mainly work on Java mobile applications as well as ASP.NET web apps, and will hardly touch desktop apps, low-level programming, drivers, etc. This is why I am not sure whether he will ever need to learn the procedural languages.

    Read the article

  • Design question for a transportation agency/workflow system

    - by George2
    I am designing a transportation agency/workflow system. It involves three types of people: customers who request to have goods transported, drivers who deliver the goods, and a truck manager who coordinates source/destination trucks and communicates with and organizes the drivers. The system is expected to be a web site, and all three kinds of people would use it to submit requests, accept requests, monitor the status of a specific shipment, and so on. The web site is more like an open agency or a workflow system. I am wondering whether there are any existing technologies, tools or projects (preferably open source, but not a must) on which I could build my application faster. I prefer .NET technologies, but that is not a must either. Thanks in advance!

    Read the article

  • Persisting a high score table in a Flash game without a network (featuring: HttpListenerException)

    - by bearcdp
    Hi everyone, this question is very programming-centric, but it's for a game so I figured I might as well post it here. I'm doing polishing work on a GGJ '11 game because it will be shown at an indie arcade tomorrow afternoon, and they're expecting our final build in the morning. We'd like to have a high score table that displays during attract mode, but since it's Flash (Flixel) it would require some networking, Mochi, or something similar to keep a record of these scores. The only problem is that the machine we'd be running on probably won't have network access. As a quick solution, I thought I'd write up a dinky little high score server in C#/.NET that could take basic GET requests for submitting scores and getting the score list. We're talking REAL basic: blocking while waiting for an incoming request, a run-and-forget console app, etc. There's no guarantee that our .swf won't get reloaded, and we'd like the scores to persist, so this server pretty much exists to keep a safe copy of the scores that the game can add to and request, with the server occasionally writing the scores to a flat text file. But HttpListener is giving me Error Code 87, 'The parameter is incorrect.' Do you have any idea what I'm doing wrong? Or better yet, am I barking up the wrong tree and missing an obviously simpler solution? This is all I've got so far in my Main():

        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:66666/");
        listener.Start();

    The exception happens at listener.Start(), and the stack trace is:

        at System.Net.HttpListener.AddAllPrefixes()
        at System.Net.HttpListener.Start()
        at WOSEBCE_ScoreServer.Program.Main(String[] args) in C:\Users\Michael\Documents\Visual Studio 2010\VS2010 Projects\WOSEBCE_ScoreServer\WOSEBCE_ScoreServer\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
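
    One hedged observation: 66666 is outside the valid TCP port range (ports only go up to 65535), so the prefix itself is what HttpListener is rejecting. A minimal sketch of the same loop with a legal port (the port number and the score handling are placeholders, not the original project's code):

        using System;
        using System.Net;
        using System.Text;

        class ScoreServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                // any free port <= 65535 works; 6666 is an arbitrary placeholder
                listener.Prefixes.Add("http://localhost:6666/");
                listener.Start();

                while (true)
                {
                    // blocks until the game sends a request
                    HttpListenerContext context = listener.GetContext();
                    string query = context.Request.Url.Query;   // e.g. "?name=abc&score=100"

                    // echo something back so the .swf knows the score was received
                    byte[] body = Encoding.UTF8.GetBytes("OK " + query);
                    context.Response.ContentLength64 = body.Length;
                    context.Response.OutputStream.Write(body, 0, body.Length);
                    context.Response.OutputStream.Close();
                }
            }
        }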

    Read the article

  • Binding services to localhost and using SSH tunnels - can requests be forged?

    - by Martin
    Given a typical webserver with Apache 2, common PHP scripts and a DNS server, would it be sufficient from a security perspective to bind administration interfaces like phpMyAdmin to localhost and access them via SSH tunnels? Or could somebody who knew, e.g., that phpMyAdmin (or any other commonly available script) is listening on a certain port on localhost easily forge requests that would be executed if no other authentication were present? In other words: could somebody from somewhere on the internet easily forge a request such that the webserver would accept it, thinking it originated from 127.0.0.1, if the server is listening on 127.0.0.1 only? If there is a risk, could it somehow be dealt with at a lower level than the application, e.g. by using iptables? The idea being that if someone found a weakness in a PHP script or Apache, the network would still block the request because it did not arrive via an SSH tunnel.
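
    A minimal iptables sketch of the lower-level rule hinted at above, assuming eth0 is the public interface (the interface name is an assumption): traffic that genuinely arrives on loopback is accepted, while packets arriving on the public NIC that claim a loopback source are dropped. Most Linux kernels already discard such spoofed "martian" packets when reverse-path filtering is enabled, so this mainly makes the intent explicit.

        # accept anything that really arrives on the loopback interface
        iptables -A INPUT -i lo -j ACCEPT
        # drop spoofed packets claiming a 127.0.0.0/8 source on the public interface
        iptables -A INPUT -i eth0 -s 127.0.0.0/8 -j DROP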

    Read the article

  • Choose IP address for a process to use on launch [duplicate]

    - by user1436026
    This question already has an answer here: How to set which IP to use for an HTTP request? (2 answers) Say my server has the following IP addresses: 123.456.78.0, 123.467.79.1, 123.456.77.1, 123.456.68.0, etc. Say I want to launch a process, say wget, from the command line. Normally, I would do something like this: wget http://www.google.com/ Except that I would like to choose the IP address that my server uses to make this request. Is there a way to use wget, or to launch another command, with a choice of one of my own IP addresses, like the following pseudo-command: with-ip 123.456.68.0 wget http://www.google.com/
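
    A minimal sketch of two common ways to do this, assuming the chosen address is actually configured on one of the server's interfaces (the addresses below are the question's own placeholders):

        # wget has a flag for exactly this
        wget --bind-address=123.456.68.0 http://www.google.com/

        # curl offers the same via --interface, which accepts an address or an interface name
        curl --interface 123.456.68.0 http://www.google.com/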

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which appear hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-communications as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication:

        varnishlog -d -c -m ReqStart:1427305652

    while this only shows the resulting backend communication:

        varnishlog -d -b -m TxHeader:1427305652

    Is there a one-liner to show the entire request?
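
    A crude but hedged sketch, not a purpose-built varnishlog filter: since the XID shows up in both the client records (ReqStart) and the backend records (TxHeader), dumping both sides of the buffer and grepping around the XID usually surfaces the whole exchange. The context size is an arbitrary assumption.

        # omit -c and -b so both client- and backend-side records are shown,
        # then search the buffer around the XID
        varnishlog -d | grep -C 40 1427305652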

    Read the article

  • Browser Compatibility Testing - 3 [on hold]

    - by user3702046
    I'm a senior test engineer working for a startup in Bangalore, India. For my current project, which involves browser compatibility testing of a website and/or mobile application, I'm digging into IE's frequently occurring issues and the HTML, CSS and JS support boundaries across all IE versions. To be very specific, I'm concentrating on the IE browser at the moment; once I have a clear understanding of IE rendering, I will pick up Firefox and Chrome. So I'm requesting your valuable inputs, suggestions, advice, opinions, shared knowledge and book recommendations. I will be eagerly waiting for your reply. Thanks, Tej

    Read the article

  • A Lot of Pirated Material on Someone's Website - What Can Be Done?

    - by The Russian Shop
    Hi, we've been developing our website for over 10 years. Recently a website in China has begun to pirate our images and product descriptions (they are showing "new" March offerings). Nearly all of the items they "offer" are ours. Their whois record leads to an outfit in France which, as it happens, also hosts a website for steroids, and both the steroid site and the pirate site share an 800 phone number (coincidence?). The pirate site lists "their" products as being available in mass quantities, though very often the actual product is one of a kind. The images they've stolen appear in Google searches alongside our own! Clicking on any product at the pirate site leads nowhere, calls to the 800 number reach only a recorded answer, and there has been no reply to our emails (it would have been funny if there had been!). Any suggestions on what to do? Much obliged for any help. https://www.collectiblereview.com (the pirates) http://www.anabolic-store.com (steroids)

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i..
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache 2 server returning the correct access control headers with this block:

        Header set Access-Control-Allow-Origin "*"
        Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (the method is stored in the REQUEST_METHOD environment variable), returning 200 OK instead. How can I configure Apache to respond with 200 OK when the request method is OPTIONS? I've tried this mod_rewrite block, but the access control headers are lost:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]
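
    A hedged sketch of the usual fix, assuming Apache 2.2 or later: headers added with a plain Header set are not applied to responses that mod_rewrite short-circuits, while the always condition attaches them to every response the server sends.

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]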

    Read the article

  • mod_rewrite SSL redirect

    - by Thomas
    Hi all, I want to use mod_rewrite to ensure that certain pages are served over SSL and all others normally, but I am having trouble getting it to work. This works (redirect to SSL when the request URI is for users or cart):

        RewriteCond %{SERVER_PORT} 80
        RewriteCond %{REQUEST_URI} users [OR]
        RewriteCond %{REQUEST_URI} cart
        RewriteRule ^(.*)$ https://secure.host.tld/$1 [R,L]

    So, to let a user stop browsing the site over SSL when requesting the other URIs, I thought of the block below (when the port is 443 and the request URI is not one of the URIs that need to be served over SSL, redirect back to the normal host), but it doesn't work:

        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{REQUEST_URI} !^/users [OR]
        RewriteCond %{REQUEST_URI} !group
        RewriteRule ^/?(users|groups)(.*)$ http://host.tld/$1 [R,L]

    Any help? Thanks
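
    A hedged sketch of a corrected second block, keeping the users/cart names from the first block (which URIs must stay on SSL is an assumption): the two negated conditions joined by [OR] are almost always true together, and the rule pattern only matches the very URIs that should stay on SSL, so the conditions need to be AND-ed and the rule should match everything else.

        RewriteCond %{SERVER_PORT} 443
        RewriteCond %{REQUEST_URI} !^/users
        RewriteCond %{REQUEST_URI} !^/cart
        RewriteRule ^(.*)$ http://host.tld/$1 [R,L]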

    Read the article

  • How can I make my browser(s) finish AJAX requests instead of stopping them when I switch to another page?

    - by Tom Wijsman
    I usually need to deal with things on a page right before switching to another page; this ranges from "liking / upvoting a comment or post" up to "an important action", and doesn't always come with feedback on whether the action actually went through. This is a huge problem! I assume the action completes once the particular AJAX request starts, but because I switch to another page it doesn't actually happen, as the AJAX request gets aborted. Several times this has left me coming back to a page and seeing that my action didn't take place at all; to give you an idea of how bad this is, it even happened once when commenting on Super User! Is there a way to tell my browser not to drop these AJAX connections but simply let them finish?

    Read the article

  • WAMP server REDIRECT_STATUS

    - by user143039
    I have noticed that my remote server returns $_SERVER['REDIRECT_STATUS'] with every request, for example: [REDIRECT_STATUS] => 200. But my localhost WAMP server does not. Interestingly, when I manually set a header, for example header('HTTP/1.1 302 Redirect This');, the variable $_SERVER['REDIRECT_STATUS'] is still not set, though I can view the headers, including the one I set, with HttpFox. $_SERVER['REDIRECT_STATUS'] is set when .htaccess triggers an ErrorDocument, like ErrorDocument 404 for example. Any ideas how to get the WAMP server to set the variable $_SERVER['REDIRECT_STATUS'] with every request?
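
    A hedged note and a minimal PHP sketch: REDIRECT_STATUS is a CGI-style variable that Apache only populates when it has internally redirected the request (an ErrorDocument, or PHP invoked through a CGI/Action handler, which is a common remote-host setup), so a WAMP install running PHP as an Apache module will usually never see it. Rather than forcing the variable locally, the safer pattern is to fall back when it is absent:

        <?php
        // fall back to 200 when Apache did not perform an internal redirect
        $status = isset($_SERVER['REDIRECT_STATUS'])
            ? (int) $_SERVER['REDIRECT_STATUS']
            : 200;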

    Read the article

  • How do I capture and playback http web requests against multiple web servers?

    - by KevM
    My overall goal is to capture the HTTP POSTs sent to a web application, without interrupting a production system, so that I can reverse engineer the telemetry coming from a closed application. I have control over the transmitter of the HTTP POSTs but not over the receiving web application. It seems like I need a request "forking" proxy: sort of a reverse proxy that pushes each request to two endpoints, a master and a slave, relaying only the response from the master endpoint back to the requester. I am not a server geek, so something like this may exist, but I don't know the term of art for what I am looking for. Another possibility could be a simple logging proxy: capture a log of the web requests, rewrite the log to target my "slave" web application, and play back the log with curl or something. Thank you for your assistance.
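
    A minimal sketch of the playback half of the logging-proxy idea, assuming the captured requests were written out as one body file per POST and that the slave copy listens at the URL shown (both are assumptions about a setup that does not exist yet):

        # replay each captured POST body against the slave application
        for body in captured/*.json; do
            curl -s -X POST -H "Content-Type: application/json" \
                 --data-binary "@$body" http://slave.example.com/telemetry
        done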

    Read the article

  • Can't set up Usermin correctly to allow users to log in from outside the local network, what am I missing?

    - by thecraic
    I'm fairly new to running a server, and the biggest problem I'm currently having is getting Usermin set up so that it is accessible from outside the LAN. I talked to other people who use it and was told that all I need to do is type url:20000 to reach the login screen, but that doesn't work. I have also tried ip:20000 and that doesn't lead anywhere. Instead I get the error message: "Error - Bad Request. This web server is running in SSL mode. Try the URL https://hostname:10000/ instead." (where hostname is my server's hostname). I know it must be a configuration issue, but I have checked all my settings and as far as I can tell I don't have the ports blocked anywhere. I have the correct ports forwarded on my router and my server firewall doesn't block the port either. Is there anything I am missing? Any help would be appreciated and I will add more information upon request. Thank you.
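
    A hedged sketch of what to check: the error itself means a miniserv instance answered but expects HTTPS, and port 10000 in the message is Webmin's default rather than Usermin's, so the request may not even be reaching Usermin. The config path below is the standard Usermin location and is an assumption about this particular install.

        # Usermin's web server settings (Webmin's equivalent is /etc/webmin/miniserv.conf)
        grep -E '^(port|ssl)=' /etc/usermin/miniserv.conf

        # if ssl=1, browse to https://your.host:20000/ (note the https)
        # restart after any change (the init script name may vary by distribution)
        sudo /etc/init.d/usermin restart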

    Read the article

  • Does an incomplete update affect my programs?

    - by Junior
    I saw a pop-up that said "updates are available", so I clicked it. I completely forgot that the installation was incomplete and logged off. Now when I came back, it told me to do a partial update, and from what I read on the internet a partial update isn't the safest thing to do. I also tried to log in to Skype and it said another Skype instance may exist; that wasn't true, Skype wasn't open. I'm not sure if it's because of the incomplete update, but I'm quite thrown off and in trouble. Please reply. Thank you. Regards, Junior

    Read the article

  • How to choose between using a Domain Event or letting the application layer orchestrate everything

    - by Mr Happy
    I'm taking my first steps into domain-driven design, bought the blue book and all, and I find myself seeing three ways to implement a certain solution. For the record: I'm not using CQRS or event sourcing. Let's say a user request comes into the application service layer. The business logic for that request is (for whatever reason) separated into a method on an entity and a method on a domain service. How should I go about calling those methods? The options I have gathered so far are:

    1. Let the application service call both methods.
    2. Use method injection/double dispatch to inject the domain service into the entity, letting the entity do its thing and then call the method of the domain service (or the other way around, letting the domain service call the method on the entity).
    3. Raise a domain event in the entity method, a handler of which calls the domain service (the kind of domain events I'm talking about: http://www.udidahan.com/2009/06/14/domain-events-salvation/).

    I think these are all viable, but I'm unable to choose between them. I've been thinking about this for a long time and I've come to a point where I no longer see the semantic differences between the three. Do you know of any guidelines on when to use which?
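
    A minimal C# sketch of option 3 in the spirit of the linked article; every type and member name here is made up purely for illustration, not a prescribed API:

        using System;

        // a static dispatcher entities can raise events through
        public static class DomainEvents
        {
            public static event Action<object> Raised;
            public static void Raise(object domainEvent) => Raised?.Invoke(domainEvent);
        }

        public class OrderPlaced { public int OrderId; }

        public class Order
        {
            public int Id { get; set; }

            public void Place()
            {
                // ...entity-level business logic...
                DomainEvents.Raise(new OrderPlaced { OrderId = Id });
            }
        }

        // a hypothetical domain service, called from the event handler rather than the entity
        public class PricingService
        {
            public void Recalculate(int orderId) { /* domain-service logic */ }
        }

        // wired up once at application start
        public class OrderPlacedHandler
        {
            private readonly PricingService pricing;
            public OrderPlacedHandler(PricingService pricing) { this.pricing = pricing; }

            public void Subscribe()
            {
                DomainEvents.Raised += e =>
                {
                    if (e is OrderPlaced placed) pricing.Recalculate(placed.OrderId);
                };
            }
        }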

    Read the article

  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to add an additional application server and load balance between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx both for load distribution, to serve the requests to the application servers, and for its SSL support. If a request uses SSL, nginx accepts the request, terminates SSL and pipes it over plain HTTP to Apache (the app servers). Now, that's all good, but I've yet to figure out how we can let nginx handle multiple domains using SSL. We're potentially looking at using UCC SSL certificates, so we can support 150 domains on a single certificate, with each certificate on a single IP. I'm new to all of this (my experience is just with physical load balancers and single domains over SSL), so any advice would be very much appreciated.
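
    A minimal nginx sketch of that shape, assuming one UCC certificate bound to one IP and two Apache application servers; every address, name and path below is a placeholder:

        upstream app_servers {
            server 10.0.0.11:80;   # app server 1
            server 10.0.0.12:80;   # app server 2
        }

        server {
            listen 192.0.2.10:443 ssl;             # the IP this UCC certificate is bound to
            server_name site-a.com site-b.com;     # every domain covered by the certificate

            ssl_certificate     /etc/nginx/ssl/ucc-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/ucc.key;

            location / {
                proxy_pass http://app_servers;     # SSL terminated here; Apache sees plain HTTP
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
            }
        }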

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3   <-- from <Location /foo/bar> in the myserver.com vhost
        Require valid-user       <-- from <Directory /var/www/foo> in the global configuration
        Satisfy any              <-- from <Files bar.html> in the global configuration

    [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool that lets you check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log this information for each incoming request?
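
    Not a full per-request explainer, but a hedged sketch of the closest built-in aids (mod_info must be enabled for the last part, and access to /server-info should be restricted):

        # check syntax and dump how virtual hosts and their settings were parsed
        apachectl -t
        apachectl -S

        # with mod_info loaded, the merged configuration can be browsed at /server-info:
        #   <Location /server-info>
        #       SetHandler server-info
        #   </Location>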

    Read the article

  • Running Tomcat 7 and Apache 2 on the same server

    - by Thorn
    Part of my site needs to run over HTTPS and I'm creating a subdomain for that part. I have Apache httpd 2 and Tomcat 7 running on the same server with the same IP; Apache is on port 80, of course, while Tomcat is running on port 8080. Right now I am doing domain forwarding for requests that need to run off Tomcat. For example, mathteamhosting.com/mathApp can forward to mathteamhosting.com:8080/mathApp. I would like to have Tomcat handle the HTTPS requests for that subdomain, and I don't think this forwarding technique can work in this case. How do I set it up so that Tomcat receives the requests on port 443 while Apache handles port 80? To be more specific: a request to http://proctinator.com should go to the Apache web server, while a request to https://private.proctinator.com should go to Tomcat.
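
    A minimal sketch of the Tomcat side, assuming a JSSE keystore (the paths and password are placeholders); note that binding to port 443 as a non-root user additionally needs something like authbind or an iptables port redirect:

        <!-- conf/server.xml: HTTPS connector for private.proctinator.com -->
        <Connector port="443" protocol="HTTP/1.1"
                   SSLEnabled="true" scheme="https" secure="true"
                   keystoreFile="/opt/tomcat/conf/keystore.jks"
                   keystorePass="changeit"
                   clientAuth="false" sslProtocol="TLS" />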

    Read the article

  • Odd SVN checkout failures occur frequently on VMware virtual machines

    - by snowballhg
    We've recently been experiencing seemingly random SVN checkout failures on our Hudson build system. A Google search has failed me; I'm hoping the Super User community can help me out :-) We occasionally receive the following SVN error when our Hudson build jobs check out source via the Hudson Subversion plug-in (which uses SVNKit):

        ERROR: Failed to check out http://server/svnroot/trunk
        org.tmatesoft.svn.core.SVNException: svn: Processing REPORT request response failed: XML document structures must start and end within the same entity. (/svnroot/!svn/vcc/default)
        svn: REPORT request failed on '/svnroot/!svn/vcc/default'

    This issue seems to occur only when checking out from our virtual machines (Windows XP, Fedora 9, Fedora 12) using Hudson's SVN plug-in; systems that use the traditional SVN client seem to work. SVN server version: 1.6.6. Hudson version: 1.377. Hudson SVN plugin version: 1.17. Has anyone dealt with this issue, or do you have any suggestions? Thanks

    Read the article
