Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • Can't access EC2-hosted website

    - by Himanshu Page
    For some reason, I am unable to access our website www.doccaster.com (nginx returns "Bad request"). We are hosted on Amazon EC2 with an Elastic IP associated with the instance. The weird part is: a) I can access it through the public DNS URL http://ec2-184-73-195-180.compute-1.amazonaws.com, and b) my co-founder, who is located in another city, can access it via www.doccaster.com. I observed that my instance was failing its reachability check, so I launched a new one and assigned it the Elastic IP. I tried to ping the IP address 184.73.195.180 from my machine, but with no success. Any help will be really appreciated. More details: I ran netstat -lntp | grep -E 'apache|httpd' on my server and it shows httpd listening on :::80. Is this accurate? Should it be 0.0.0.0:80, or does it not matter?
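
    As for the netstat output: ":::80" means httpd is bound to the IPv6 wildcard address, which on most Linux configurations also accepts IPv4 connections, so that by itself is usually not the problem. To separate DNS trouble from connectivity trouble as seen from a given machine, it can help to test name resolution and a raw TCP connection to port 80 independently; a minimal Python 3 sketch (using the hostnames from the question) might look like this:

        import socket

        HOSTS = [
            "www.doccaster.com",                            # the domain that fails
            "ec2-184-73-195-180.compute-1.amazonaws.com",   # public DNS name that works
            "184.73.195.180",                               # the Elastic IP itself
        ]

        for host in HOSTS:
            try:
                addr = socket.gethostbyname(host)           # DNS step
            except socket.gaierror as err:
                print(f"{host}: DNS lookup failed ({err})")
                continue
            try:
                with socket.create_connection((addr, 80), timeout=5):  # TCP step
                    print(f"{host} -> {addr}: port 80 reachable")
            except OSError as err:
                print(f"{host} -> {addr}: port 80 NOT reachable ({err})")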

    Read the article

  • How did Google get on my Mac?

    - by SamGoody
    I'm running a MacBook Pro and have never installed Chrome, Google Earth, or anything blatantly Google. I just installed Little Snitch (are there no good free firewalls for Mac?) and see that curl is sending data to Google every few minutes, as is a request to Google Update and more. Little Snitch doesn't say which program set up these requests. So, how do I find out how Google got on my machine, why it is sending so many requests (every minute or so), and how do I remove it (and is it there for any reason other than to help Google spy on me)?

    Read the article

  • Apache domain names are case-sensitive

    - by neubert
    The following HTTP request results in a "See the error log for more details; Invalid Value Found For Domain" error: GET / HTTP/1.0 Host: www.MyWebsite.com If I make the hostname all lowercase, however, it works just fine. How can I make Apache case-insensitive? Here's my httpd.conf file: <VirtualHost *:80> ServerName mywebsite.com ServerAlias www.mywebsite.com ... </VirtualHost> I tried adding ServerAlias www.MyWebsite.com to that, but that didn't help. In any event, that seems like a poor approach anyway, since the case can be mixed up in so many different ways that trying to account for all of them would result in a huge *.conf file. Any ideas? Thanks!

    Read the article

  • Retrieving database column using JSON [migrated]

    - by arokia
    I have a database table consisting of 4 columns (id, symbol, name, contractnumber). All 4 columns and their data are displayed in the user interface using JSON. There is a function which is responsible for adding a new column to the database, e.g. countrycode. The column is added successfully to the database, BUT I am not able to show the newly added column in the user interface. Below is my code that displays the columns. Can you help me? table.php: $(document).ready(function () { // prepare the data var theme = getDemoTheme(); var source = { datatype: "json", datafields: [ { name: 'id' }, { name: 'symbol' }, { name: 'name' }, { name: 'contractnumber' } ], url: 'data.php', filter: function() { // update the grid and send a request to the server. $("#jqxgrid").jqxGrid('updatebounddata', 'filter'); }, cache: false }; var dataAdapter = new $.jqx.dataAdapter(source); // initialize jqxGrid $("#jqxgrid").jqxGrid( { source: dataAdapter, width: 670, theme: theme, showfilterrow: true, filterable: true, columns: [ { text: 'id', datafield: 'id', width: 200 }, { text: 'symbol', datafield: 'symbol', width: 200 }, { text: 'name', datafield: 'name', width: 100 }, { text: 'contractnumber', filtertype: 'list', datafield: 'contractnumber' } ] }); }); data.php: <?php #Include the db.php file include('db.php'); $query = "SELECT * FROM pricelist"; $result = mysql_query($query) or die("SQL Error 1: " . mysql_error()); $orders = array(); // get data and store in a json array while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) { $pricelist[] = array( 'id' => $row['id'], 'symbol' => $row['symbol'], 'name' => $row['name'], 'contractnumber' => $row['contractnumber'] ); } echo json_encode($pricelist); ?>
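
    Note that the grid only renders fields declared in its source, so the new column also has to be added to the datafields array and the columns array in table.php (e.g. { name: 'countrycode' } plus a matching column entry), and to the row array built in data.php. On the server side, one way to avoid touching the code for every new column is to build each row from whatever columns the query returns. A rough sketch of that idea in Python (the original endpoint is PHP, so this is only an illustration, with SQLite standing in for the MySQL connection):

        import json
        import sqlite3  # stands in for the MySQL connection used by data.php

        def pricelist_json(db_path="pricelist.db"):
            conn = sqlite3.connect(db_path)
            cur = conn.execute("SELECT * FROM pricelist")
            cols = [d[0] for d in cur.description]     # id, symbol, name, contractnumber, countrycode, ...
            rows = [dict(zip(cols, row)) for row in cur.fetchall()]
            conn.close()
            return json.dumps(rows)

        if __name__ == "__main__":
            print(pricelist_json())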

    Read the article

  • Install/import SSL certificate on Windows Server 2003/IIS 6.0

    - by ChristianSparre
    Hi. A couple of months ago we ordered an SSL certificate for a client's server using the certificate request wizard in IIS 6.0. That worked fine, and the pending request was completed when we received the certificate. But about 2 weeks ago the server crashed and had to be restored, and now I can't get the site running again. I have the .cer file, but what is the correct procedure to import the certificate? I hope some of you can help me. -- Christian

    Read the article

  • GLES 2.0 3D Android game performance and multi-threading the update?

    - by Ofer
    I have profiled my mixed Java/C++ Android game and got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink area is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate DrawLists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I have been able to improve the update from 9 ms down to about 7 ms, but I would like to ask whether I would benefit from multi-threading the update. As I understand the diagram, the function that takes the most time is the one whose color you see on the timeline, so the pink area is taken up mostly by the update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn at the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works here. Does it calculate things in parallel with the CPU, or is it part of the CPU usage, as in an SoC? Anyway, in case GPU work does happen in parallel with the CPU, I would guess that if I run this C++ update in parallel with the thread that makes the OpenGL calls, I might make use of "dead CPU time" due to GPU stalling, or maybe have the GPU calls processed earlier because they won't have to wait for the update to finish. How do you suggest I improve performance based on that? Thanks.
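
    On the multi-threading question: a common pattern is to double-buffer the draw lists, so the update thread builds the next frame's list while the render thread submits the previous one, and the renderer never waits for an update in the middle of a frame. A minimal, language-agnostic sketch of that pattern (written in Python purely for brevity; all names are illustrative, not taken from the engine above):

        import queue
        import threading
        import time

        draw_lists = queue.Queue(maxsize=1)   # at most one frame of update/render overlap

        def update_loop(running):
            frame = 0
            while running.is_set():
                draw_list = [f"draw command for frame {frame}"]   # stand-in for the C++ update
                draw_lists.put(draw_list)                         # blocks if the renderer falls behind
                frame += 1

        def render_loop(running, frames=5):
            for _ in range(frames):
                draw_list = draw_lists.get()                      # wait for the next finished list
                print("rendering", draw_list)                     # stand-in for the GLES 2.0 draw calls
                time.sleep(0.016)                                 # pretend a frame takes ~16 ms
            running.clear()

        running = threading.Event()
        running.set()
        threading.Thread(target=update_loop, args=(running,), daemon=True).start()
        render_loop(running)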

    Read the article

  • How far should one take e-mail address validation?

    - by Mike Tomasello
    I'm wondering how far people should take the validation of e-mail addresses. My field is primarily web development, but this applies anywhere. I've seen a few approaches: 1) simply checking if there is an "@" present, which is dead simple but of course not that reliable; 2) a more complex regex test for standard e-mail formats; 3) a full regex against RFC 2822 (the problem with this is that an e-mail address might be valid but probably not what the user meant); 4) DNS validation; 5) SMTP validation. As many people might know (but many don't), e-mail addresses can have a lot of strange variations that most people don't usually consider (see RFC 2822 3.4.1), but you have to think about the goals of your validation: are you simply trying to ensure that mail can be sent to the address, or that it is what the user probably meant to put in (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses)? An option I've considered is simply giving a warning for a more esoteric address but still allowing the request to go through, but this adds more complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS server/SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out there handle this? Are there any other approaches than the ones I've listed? Edit: I completely forgot the most obvious of all, sending a confirmation e-mail! Thanks to the answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved: the user has to fetch some e-mail, and the developer needs to keep user data around before it is even confirmed as valid.
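
    For what it's worth, a sketch of the middle-ground approach described above: a deliberately loose format check plus an optional MX lookup, with the confirmation e-mail as the final arbiter. This is Python; the dnspython package is an assumption here (pip install dnspython), and the regex is intentionally permissive rather than a full RFC 2822 grammar:

        import re

        EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")   # loose "looks like an address" check

        def looks_like_email(address):
            return EMAIL_RE.match(address) is not None

        def domain_has_mx(address):
            import dns.resolver   # from dnspython; treat lookup failures as "unknown", not invalid
            domain = address.rsplit("@", 1)[-1]
            try:
                return len(list(dns.resolver.resolve(domain, "MX"))) > 0
            except Exception:
                return False

        if __name__ == "__main__":
            for addr in ["user@example.com", "not-an-email"]:
                print(addr, looks_like_email(addr))

    Even when both checks pass, the confirmation e-mail remains the only reliable proof that the mailbox exists and belongs to the user.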

    Read the article

  • eCryptfs on Ubuntu Server: how to keep the home directory mounted when not connected over SSH?

    - by Bebeoix
    I have a daemon that needs to read a file saved somewhere in my home folder. But every time I close my SSH connection, the daemon can't read the file, because it appears that eCryptfs unmounts the home directory. Is there perhaps an option to force eCryptfs to keep it mounted, rather than only mounting it during an SSH session? I didn't find one. Thanks. PS: I know this thread, http://askubuntu.com/questions/165608/why-is-ecryptfs-only-mounting-private-home-directory-over-ssh, but it is not the proper way to deal with this.

    Read the article

  • Suggestion for setting web application parameters

    - by user40730
    I'm creating a web application in GWT. I'm using the MVP pattern with activities and places. I have an XML config file containing some parameters to be used by the application. The content of this XML file is sent to the client using an HttpRequest; I'm using a singleton class to hold the information from the XML file. Right now, the application fetches the data when the user starts on the home page, and that works well. Now, since I'm using activities and places, a user can bookmark a page and start the application at any other page (Place). And here comes the problem: since I'm using some of the information from the XML file to set up some UI widgets, I have to check whether the XML config file was read and the application already has the parameters (I do this by checking the singleton class). But the XML file is read using an HttpRequest, so I get errors because the application needs some parameters to initialize some UI widgets, and these parameters aren't ready in time. I was thinking of using a synchronous request to fix the problem, but that seems complicated and not recommended. So, I'd like to hear some other suggestions. Thanks.

    Read the article

  • 12.10 Wireless networking

    - by user108594
    I installed Ubuntu 12.10 using Wubi and cannot connect to the internet. I removed it and installed Ubuntu 12.04, and still cannot connect; I assume this rules out the release being the problem. I reinstalled 12.10. When it loads I get the same message with a red (x) stating the internet is not connected. I went to the Settings drop-down and it does not show the network list, although "Enable networking" has a check mark. I am running an HP laptop with Windows 7 64-bit that has a wireless kill switch, which shows orange (no connection). I installed 12.10 on my desktop (on the same network) and everything is OK. I tried to follow the instructions in the help menu but got lost and confused. Sincerely, Dan. Additional info per request: Broadcom 802.11b/g WLAN, internal, HP laptop. P.S. I've been out of town for about a month; thanks for getting back to me. I did install 12.10 via CD and everything was OK, but when I retried it alongside Windows 7 I was unable to connect to the internet. I also took the laptop and hard-wired it with an ethernet cable, and everything was OK. Stumped again and running out of ideas!

    Read the article

  • Unity , libgdx, or something else to develop my first game for Android?

    - by capcom
    I want to start by saying that I absolutely love Unity (even more when I team it up with Blender). I really want to start developing games for Android, but it seems like Unity poses way too many roadblocks in terms of which devices it supports (and even if it does support them, it doesn't work well on all of them). I've been looking around for alternatives, and found something called libgdx. Well, it's nothing like Unity unfortunately, but at least it seems like I may be able to reach a larger audience in the market. I'd like to start by making 2D games, but with 3D graphics (say, imported from Blender). I can do this very easily in Unity, and it seems like it should be alright with libgdx too. But I really want to know if ditching Unity is a smart idea, considering how comfortable I am with it already, and how much I like it. Finally, is libgdx something you would recommend considering my requirements/situation? BTW, I am quite familiar with Eclipse too. Many thanks. Feel free to request further details.

    Read the article

  • cPanel: every URL is being redirected to http://:2083

    - by Frank
    On my cPanel server, I restored about 50 accounts from a crashed cPanel server. All of the sites were working fine, but suddenly, without my changing anything, every site started being redirected to the URL "http://:2083/". There is nothing in the logs, no errors. When I do a wget it says: wget grinfeld.com.br --2012-09-04 13:18:23-- http://grinfeld.com.br/ Resolving grinfeld.com.br... 198.101.221.254 Connecting to grinfeld.com.br|198.101.221.254|:80... connected. HTTP request sent, awaiting response... 301 Moved Location: https://:2083/ [following] https://:2083/: Invalid host name.
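
    When debugging something like this, it helps to capture the raw 301 response without following it; the Location and Server headers usually show whether Apache or the cPanel service itself (which listens on port 2083) is answering. A small sketch using the Python requests library (an assumption here, not part of the original setup):

        import requests

        resp = requests.get("http://grinfeld.com.br/", allow_redirects=False, timeout=10)
        print(resp.status_code)                      # expected: 301
        print(resp.headers.get("Location"))          # the broken target, e.g. https://:2083/
        print(resp.headers.get("Server"))            # which server answered
        for name, value in resp.headers.items():     # full header dump for comparison across sites
            print(f"{name}: {value}")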

    Read the article

  • Is the Facebook Like JavaScript related to the "Time spent downloading a page" increase in GWT?

    - by donaldthe
    Hi, I installed the JavaScript version of the Facebook Like button on my website on December 15th. Take a look at this report from Google Webmaster Central (crawl stats, Googlebot activity in the last 90 days). The crawl stats are from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in "Time spent downloading a page"? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag: <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true}); }; (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; document.getElementById('fb-root').appendChild(e); }()); </script> and this creates the Like box: <fb:like show_faces="false"></fb:like> If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • DNS works, can ping, but cannot load web pages in browser

    - by user1224595
    Yesterday I changed routers, and my desktop computer started acting up. I can ping websites, and nslookup is able to resolve names to addresses, but neither Chrome, Firefox, nor IE can load any web pages. None of my other computers connected to the same wireless router have any problems. I connect my desktop to the router through a cheap Wi-Fi dongle. I did a Wireshark capture of the browser request, and I have uploaded the pcap here: https://drive.google.com/file/d/0B7AsPdhWc-SwbTV0bUJLQXo4UUE/edit?usp=sharing One strange thing I noticed was the spamming of SSDP packets. I am not super familiar with networking, but it seems that it is not a problem with the router, as DNS works, and so does DHCP (the desktop is assigned an address correctly). Any help would be appreciated.
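
    Since ping and nslookup already work, the failure is somewhere between opening a TCP connection and completing an HTTP exchange. A small stdlib-only Python sketch, run from the affected desktop, that tests each layer in turn (the target host is just an example):

        import socket
        import urllib.request

        host = "www.google.com"   # any reliable site works here

        addr = socket.gethostbyname(host)                       # DNS (already known to work)
        print("DNS:", host, "->", addr)

        with socket.create_connection((addr, 80), timeout=5):   # raw TCP connect, no HTTP involved
            print("TCP: port 80 connects")

        with urllib.request.urlopen("http://" + host + "/", timeout=10) as resp:   # full HTTP GET
            print("HTTP:", resp.status, "-", len(resp.read()), "bytes received")

    If the TCP step hangs even though ping works, the Wi-Fi dongle's driver or an MTU problem is a more likely suspect than the router.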

    Read the article

  • Rewrite redirect issue in Debian Squeeze

    - by hd01
    My server OS is Debian Squeeze. I have these lines in the .htaccess file of my website to redirect non-www to www: RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC] RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301] but they cause this error in Firefox: "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete. This problem can sometimes be caused by disabling or refusing to accept cookies." When I comment out those lines in .htaccess, my site appears, but in the non-www form. I'm sure this worked fine before on Ubuntu, but I don't know why it doesn't work now. Would you help me?

    Read the article

  • Configure Akamai to ignore favicon errors [on hold]

    - by Aki
    We host our services through Akamai and have configured an alert in Akamai to notify us of 404 errors. We don't want to serve a favicon from our services (they are REST web services that are not consumed by humans, so there is no point in serving favicons). But whenever these web services are accessed from a browser, the browser sends a request for the favicon, which ends up being logged as a 404, and Akamai sends us an alert for it. Is there a way to configure Akamai so that it understands that favicon 404s should not contribute to the alert?

    Read the article

  • Is it possible to extend a 504 timeout in nginx on a per-location basis?

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx returning a 504 from a long-running PHP script (PHP-FPM)? location /myurlsegment/ { client_body_timeout 1000000; send_timeout 1000000; fastcgi_read_timeout 1000000; } This has no effect when making a request to example.com/myurlsegment; the timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts.

    Read the article

  • R12 Diagnostic Script for Purchasing Encumbrance Issues

    - by Oracle_EBS
    Do you have a Release 12 Purchasing document with an accounting encumbrance error? Get all the relevant data in one step using the new diagnostic in Doc ID 1483743.1, ‘R12: Diagnostic Script to help troubleshoot Purchasing Encumbrance Issues’, and avoid the back-and-forth pinging with Support for data collection. Query the document ID in My Oracle Support and add it to your Favorites using the star icon for quick access. The note explains when to use the script and how to use it. The script produces a user-friendly HTML output that contains information relevant to encumbrance issues, along with some data validation checks to identify common data corruption issues on your document. For example, this one diagnostic provides information on the following: Cross Product Setup, Document Data Dump, Funds Availability, Subledger Accounting information, GL and AP Invoice Data, and Debug and Trace. This output is ideal for self-service, as it provides known issues in the Data Validation section (related to the document) with links to key documentation; alternatively, the report can be uploaded to Support when logging a Service Request. To see more about the diagnostic, attend our September 11, 2012 webcast ‘Overview of Procurement Patching and New Tools for Issue Resolution’. Visit Doc ID 1479718.1 to sign up. Note: this topic will not be listed as it has just been added.

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers. We serve around 400-500 requests/second. I have started getting a large number of "upstream timed out (110: Connection timed out)" errors. The error string in error.log looks like: 2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}" Any suggestions on how to debug such errors? I am unable to find a Munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and other days it's a more decent 3-digit number. A Munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors down to zero?
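
    In the absence of a ready-made Munin plugin, a small log-scraping script can at least put a number and a per-backend breakdown on the problem. A sketch in Python; the log path is an assumption, and the regex matches the error line format quoted above:

        import re
        from collections import Counter

        LOG_PATH = "/var/log/nginx/error.log"        # adjust to the server's error_log setting
        UPSTREAM_RE = re.compile(r'upstream timed out.*?upstream: "([^"]+)"')

        counts = Counter()
        with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = UPSTREAM_RE.search(line)
                if match:
                    counts[match.group(1)] += 1      # group by the upstream URL that timed out

        for upstream, total in counts.most_common():
            print(f"{total:6d}  {upstream}")
        print(" total:", sum(counts.values()))

    Run once a day, or wrapped as a Munin plugin that reports the count since the last poll, this makes it easy to see whether the timeouts cluster on one backend.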

    Read the article

  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that is using Windows Authentication. Response time is very slow on the first request to the site, when the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by many seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow all authenticated users access to static files to see if that helps. The authentication method in use is NTLM. How should we go about pinpointing why authentication decreases performance so drastically?

    Read the article

  • mod_rewrite issue with GoDaddy web hosting

    - by MrFoh
    I am trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error logs show this: AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace. And the .htaccess file is this: <IfModule mod_rewrite.c> Options -MultiViews Options +FollowSymLinks RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php/$1 [L] </IfModule> The webroot has multiple sub-folders that are document roots for different domains; I am working in one of these sub-folders. What is causing this error and how can it be fixed?

    Read the article

  • Chrome Crash Investigation

    - by iamcreasy
    Chrome is crashing very frequently (every 2-3 minutes). It becomes unresponsive. How can I start investigating why it is crashing so much? It feels to me that certain components of some web pages are triggering the crash. I also checked "C:\Users\irfan\AppData\Local\Google\CrashReports", but the folder is empty. I'm looking for some sort of process-tracking tool that can keep an eye on which request is being made just before the crash, or something like that. Any software suggestions? I'm using Windows 7. Please don't suggest reinstalling Chrome; I want to know why this is happening.

    Read the article

  • Postfix sending mail back to itself? (Ubuntu 9.10)

    - by webo
    I set up Dovecot and Postfix using the "dovecot-postfix" package, with SASL and all that. The Dovecot part seems to be working fine, but I'm having issues with Postfix. Whenever I send a message to another address through the Postfix server, two things happen: 1) the message never gets to the other address (even when I request a delivery notification it says it has been delivered, but it's not in the other inbox or its spam folder or anywhere); 2) the message comes back to my inbox through Dovecot as though I had sent it to myself internally. E.g. I send an email through my Postfix server to my Gmail account; 10 minutes later nothing shows up in my Gmail account, but the message comes back to me as though I had sent it to my internal address (with no errors). Any ideas?

    Read the article

  • Python web applications: what is the right way to handle registrations, logins/logouts, and cookies? [on hold]

    - by Phil
    I am working on a simple Python web application for learning purposes. I have chosen a very minimalistic and simple framework. I have done a significant amount of research, but I couldn't find a source clearly explaining what I need, which is the following: I would like to learn more about 1) user registration, 2) user log-ins, 3) user log-outs, and 4) user auto-logins. I have successfully handled items 1 and 3 due to their simple nature. However, I am confused by item 2 (log-ins) and item 4 (auto-logins). When a user enters a username and password, and after hashing with salts and matching it in the DB, what information should I store in the cookies in order to keep the user logged in during the session? Do I keep the username+password but encrypt them? Both or just the password? Do I keep the username and a generated key matching their password? If I want the user to be able to auto-login (when they leave and come back to the web page), what information is then kept in the cookies? I don't want to use modules or libraries that handle these things automatically. I want to learn the basics and why something is the way it is. I would also like to point out that I do not mind reading anything you might offer on the topic that explains the hows and whys, possibly with algorithm diagrams to show the process. Some information: I know about setting headers, cookies, encryption (up to some level, obviously not an expert!), request objects, SQLAlchemy, etc. I don't want any data kept in a single web application server's store. I want multiple app servers to be able to handle a user, with whatever needs to be kept on the server stored in Postgres/MySQL via SQLAlchemy (I think this is called stateless?). Thank you.
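
    On the cookie question specifically: the usual answer is to store neither the username+password nor anything reversible, but only a session identifier or a signed token that the server can verify. A minimal stdlib-only sketch of a signed, expiring cookie value (the secret key and the token layout are illustrative, not a standard scheme):

        import hashlib
        import hmac
        import time
        from typing import Optional

        SECRET_KEY = b"change-me-and-keep-me-private"   # known only to the app servers

        def make_token(user_id, ttl_seconds=3600):
            # Cookie value is "user_id|expiry|signature"; the password never appears in it.
            expiry = str(int(time.time()) + ttl_seconds)
            payload = f"{user_id}|{expiry}"
            sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
            return f"{payload}|{sig}"

        def check_token(token) -> Optional[str]:
            try:
                user_id, expiry, sig = token.split("|")
            except ValueError:
                return None
            payload = f"{user_id}|{expiry}"
            expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
            if hmac.compare_digest(sig, expected) and time.time() < int(expiry):
                return user_id
            return None

        token = make_token("42")
        print(token, "->", check_token(token))

    For auto-login ("remember me"), the same idea applies with a longer expiry plus a server-side record of issued tokens (e.g. a table managed through SQLAlchemy) so they can be revoked; because that state lives in Postgres/MySQL rather than on any one app server, any server can validate the cookie, which fits the stateless, multi-server setup described above.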

    Read the article

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do, I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are: I assume that for each request the rewrite block will be parsed and the regexes evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any "best practices" to follow when doing a lot of rewrites?

    Read the article
