Search Results

Search found 55652 results on 2227 pages for 'http response'.

Page 595/2227 | < Previous Page | 591 592 593 594 595 596 597 598 599 600 601 602  | Next Page >

  • Doctrine - get the offset of an object in a collection (implementing an infinite scroll)

    - by dan
    I am using Doctrine and trying to implement an infinite scroll on a collection of notes displayed in the user's browser. The application is very dynamic, therefore when the user submits a new note, the note is added to the top of the collection straightaway, besides being sent (and stored) to the server. Which is why I can't use a traditional pagination method, where you just send the page number to the server and the server will figure out the offset and the number of results from that. To give you an example of what I mean, imagine there are 20 notes displayed, then the user adds 2 more notes, therefore there are 22 notes displayed. If I simply request "page 2", the first 2 items of that page will be the last two items of the page currently displayed to the user. Which is why I am after a more sophisticated method, which is the one I am about to explain. Please consider the following code, which is part of the server code serving an AJAX request for more notes: // $lastNoteDisplayedId is coming from the AJAX request $lastNoteDisplayed = $repository->findBy($lastNoteDisplayedId); $allNotes = $repository->findBy($filter, array('createdAt' => 'desc')); $offset = getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId); // retrieve the page to send back so that it can be appended to the listing $notesPerPage = 30; $notes = $repository->findBy( array(), array('createdAt' => 'desc'), $notesPerPage, $offset ); $response = json_encode($notes); return $response; Basically I would need to write the method getLastNoteDisplayedOffset which, given the whole set of notes and one particular note, gives me its offset, so that I can use it for the pagination of the previous Doctrine statement. I know a possible implementation would probably be: getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId) { $i = 0; foreach ($allNotes as $note) { if ($note->getId() === $lastNoteDisplayedId->getId()) { break; } $i++; } return $i; } I would prefer not to loop through all notes because performance is an important factor. I was wondering if Doctrine has got a method for this itself or if you can suggest a different approach.
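    One possible alternative, sketched below: since the listing is ordered by createdAt descending, the offset of the last displayed note is simply the number of notes created after it, and the database can count that directly instead of the code looping. The repository and the createdAt field come from the question; the query-builder call, the alias and the assumption that createdAt values are distinct are mine.

        // Sketch (assumption, not the asker's code): count the notes newer than the
        // last displayed one; with a DESC ordering and distinct createdAt values that
        // count is exactly the note's offset.
        $lastNoteDisplayed = $repository->find($lastNoteDisplayedId);

        $offset = (int) $repository->createQueryBuilder('n')
            ->select('COUNT(n.id)')
            ->where('n.createdAt > :lastCreatedAt')
            ->setParameter('lastCreatedAt', $lastNoteDisplayed->getCreatedAt())
            ->getQuery()
            ->getSingleScalarResult();

        // then page exactly as before
        $notes = $repository->findBy(array(), array('createdAt' => 'desc'), $notesPerPage, $offset);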

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream: public void Send(Message message) { //Write the message to a temporary stream so we can send it all-at-once MemoryStream tempStream = new MemoryStream(); message.WriteToStream(tempStream); //Write the serialized message to the stream. //The BinaryWriter is a little redundant in this //simplified example, but here because //the production code uses it. byte[] data = tempStream.ToArray(); BinaryWriter bw = new BinaryWriter(stream); bw.Write(data, 0, data.Length); bw.Flush(); } So the question I have is, is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
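    For what it's worth, Stream.Write makes no atomicity guarantee for two threads writing to the same NetworkStream, so a PING can interleave with a large response. Below is a minimal sketch of the explicit-synchronization option, mirroring the Send body from the question; the lock object is an addition of mine, and other designs (e.g. a single writer thread fed by a queue) are equally valid.

        // Sketch: serialize writers so one complete framed message hits the stream at a time.
        private readonly object _sendLock = new object();

        public void Send(Message message)
        {
            using (MemoryStream tempStream = new MemoryStream())
            {
                message.WriteToStream(tempStream);
                byte[] data = tempStream.ToArray();

                lock (_sendLock)   // worker thread and heartbeat thread take turns here
                {
                    BinaryWriter bw = new BinaryWriter(stream);
                    bw.Write(data, 0, data.Length);
                    bw.Flush();
                }
            }
        }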

    Read the article

  • why libxml2 quotes starting double slash in CDATA with JavaScript

    - by Vincenzo
    This is my code: <?php $data = <<<EOL <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <script type="text/javascript"> //<![CDATA[ var a = 123; // JS code //]]> </script> </html> EOL; $dom = new DOMDocument(); $dom->preserveWhiteSpace = false; $dom->formatOutput = false; $dom->loadXml($data); echo '<pre>' . htmlspecialchars($dom->saveXML()) . '</pre>'; This is the result: <?xml version="1.0"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <script type="text/javascript"><![CDATA[ //]]><![CDATA[ var a = 123; // JS code //]]><![CDATA[ ]]></script></html> When I remove the DOCTYPE notation from the XML document, CDATA works properly and the leading/trailing double slash is not turned into CDATA. What is the problem here? Bug in libxml2? PHP version is 5.2.13 on Linux. Thanks.

    Read the article

  • Is there an easier way to do Classic ASP "relative path"?

    - by Alex.Piechowski
    Right now, I'm having trouble. First of all I have a page, let's call it "http://blah.com/login". That obviously goes straight to "index.asp". A line of Main.asp: <!--#include file="resource/menu.asp"--> Page top includes all of what I need for my menu... so: Part of resource/menu.htm: <div id="colortab" class="ddcolortabs"> <ul> <li><a href="main.asp" title="Main" rel="dropmain"><span>Main</span></a></li> ... </ul> </div> <!--Main drop down menu --> <div id="dropmain" class="dropmenudiv_a"> <a href="main/announcements.asp">Announcements</a> <a href="main/contacts.asp">Contact Information</a> <a href="main/MeetingPlans.asp">Meeting Plan</a> <a href="main/photos.asp">Photo Gallery</a> <a href="main/events.asp">Upcoming Events</a> </div> Let's say I click on the "announcements" (http://blah.com/login/main/announcements.asp) link... Now I'm at the announcements page! But wait, I include the same menu file. Guess what happens: I get sent to "http://blah.com/login/main/main/announcements.asp", which doesn't exist... My solution: Make a menu_sub.asp include for any subpages. But wait a second... this WORKS, but it gets REALLY REALLY messy... What can I do to use just one main "menu.asp" instead of "menu_sub.asp"? Using "/main/announcements.asp" WON'T be an option because this is a web application that will be on different directories per server. Any ideas? PLEASE
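    One low-tech possibility, sketched below as an assumption rather than anything from the question: keep a single application-root constant in one include that gets edited once per deployment, and build every menu href from it, so the same menu.asp resolves correctly from any page depth and any install directory.

        <%
        ' resource/config.asp - sketch: the one value edited per server/deployment
        Const APP_ROOT = "/login/"   ' e.g. "/" where the app is hosted at the site root
        %>

        <!-- resource/menu.asp then builds root-based links instead of relative ones -->
        <li><a href="<%= APP_ROOT %>main.asp" title="Main" rel="dropmain"><span>Main</span></a></li>
        <a href="<%= APP_ROOT %>main/announcements.asp">Announcements</a>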

    Read the article

  • jQuery AJAX with two domains

    - by Andrew Burns
    OK here is the situation: I have an externally hosted CMS which works great for 99% of our needs. However on the more advanced things I inject my own CSS+JS and do magic. The problem I am running into is loading a simple HTML page from jQuery.ajax() calls. It appears to work in the sense that no warnings or errors are thrown; however in my success handler (which IS run), the response is blank! I have been scratching my head for the whole morning trying to figure this out and the only thing I can think of is that it has something to do with the cross domain issue (even though it appears to work). Injected JavaScript: $(document).ready(function() { doui(); }); function doui() { $.ajax({ url: 'http://apps.natronacounty-wy.gov/css/feecalc/ui.htm', cache: false, success: ajax_createUI, charset: "utf-8", error: function(e) { alert(e); } }); } function ajax_createUI(data, textStatus) { alert(data); $("#ajax-content").html(data); } My ajax_createUI() success handler is called and textStatus is "success"; however data is empty. This JS file resides @ http://apps.natronacounty-wy.gov/css/js/feecalc.js however the CMS website (which gets the JS injected into it) resides @ http://www.natronacounty-wy.gov/ Am I just being stupid or is it a bug that it looks like it should be working but isn't?
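    A note on the likely cause, plus a sketch of the usual fix: www.natronacounty-wy.gov and apps.natronacounty-wy.gov are different origins, and a cross-origin XMLHttpRequest that the target server hasn't explicitly allowed completes with an empty response body, which matches the "success handler runs but data is blank" symptom. The question doesn't say what serves the apps host, so the line below is only an illustration of the response header that host would need to send (older IE versions needed XDomainRequest instead); the jQuery code itself would not change.

        Access-Control-Allow-Origin: http://www.natronacounty-wy.gov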

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to Production Server: Here's my newest source of confusion... The response that I cited above made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pit-falls?

    Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
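    For reference, a minimal sketch of the post-receive idea described above, with paths and the branch name as assumptions: the bare repo receives the push and the hook checks the pushed branch out into the web-served directory (the production server would use the same hook with master and its own web root).

        #!/bin/sh
        # hooks/post-receive in the bare repo on the staging server (sketch; adjust paths/branch)
        GIT_WORK_TREE=/var/www/staging git checkout -f release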

    Read the article

  • Facebook not recognising open graph tags

    - by Pratik Poddar
    My object page looks like: <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en-US" xmlns:fb="https://www.facebook.com/2008/fbml"> <head prefix="og: http://ogp.me/ns# cliprin: http://ogp.me/ns/apps/cliprin#"> <meta property="fb:app_id" content="143944345745133" /> <meta property="og:type" content="cliprin:product" /> <meta property="og:url" content="https://itsourstudio.com/" /> <meta property="og:title" content="LED Ice Cubes (Set Of 4)" /> <meta property="og:sitename" content="Its Our Studio" /> <meta property="og:image" content="https://s-static.ak.fbcdn.net/images/devsite/attachment_blank.png" /> <meta property="og:description" content="Blah Blah Blah" /> </head> </html> The Facebook link debugger for the page shows that og:type is website and gives the following warnings: Open Graph Warnings That Should Be Fixed Inferred Property: The 'og:url' property should be explicitly provided, even if a value can be inferred from other tags. Inferred Property: The 'og:title' property should be explicitly provided, even if a value can be inferred from other tags. Inferred Property: The 'og:description' property should be explicitly provided, even if a value can be inferred from other tags. Inferred Property: The 'og:image' property should be explicitly provided, even if a value can be inferred from other tags. Tiny og:image: All the images referenced by og:image must be at least 200px in both dimensions. Please check all the images with the og:image tag in the given url and ensure that they meet the minimum specification.

    Read the article

  • Get Classic ASP variable from posted JSON

    - by Will
    I'm trying to post JSON via AJAX to a Classic ASP page, which retrieves the value, checks a database and returns JSON to the original page. I can post JSON via AJAX. I can return JSON from ASP. I can't retrieve the posted JSON into an ASP variable. For POST you use Request.Form, for GET you use Request.QueryString... what do I use for JSON? I have JSON libraries but they only show creating a string in the ASP script and then parsing that. I need to parse JSON that is passed in from an external request. JavaScript: var thing = $(this).val(); $.ajax({ type: "POST", url: '/ajax/check_username.asp', data: "{'userName':'" + thing + "'}", contentType: "application/json; charset=utf-8", dataType: "json", cache: false, async: false, success: function() { alert('success'); } }); ASP file (check_username.asp): Response.ContentType = "application/json" sEmail = request.form() -- THE PROBLEM Set oRS = Server.CreateObject("ADODB.Recordset") SQL = "SELECT SYSUserID FROM WCE_UK.dbo.t_SYS_User WHERE Username='"&sEmail&"'" oRS.Open SQL, oConn if not oRS.EOF then sStatus = (new JSON).toJSON("username", true, false) else sStatus = (new JSON).toJSON("username", false, false) end if response.write sStatus
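    A sketch of the usual way to get at a JSON body in Classic ASP: a request posted with a Content-Type of application/json is not form-encoded, so it never shows up in Request.Form; instead you read the raw bytes of the body and convert them to a string, then hand that string to whatever JSON library is already in use (the parse call depends on that library, so it is only hinted at in a comment).

        <%
        ' check_username.asp - sketch (assumes a UTF-8 JSON payload)
        Dim body, stream
        If Request.TotalBytes > 0 Then
            Set stream = Server.CreateObject("ADODB.Stream")
            stream.Type = 1                               ' binary
            stream.Open
            stream.Write Request.BinaryRead(Request.TotalBytes)
            stream.Position = 0
            stream.Type = 2                               ' text
            stream.Charset = "utf-8"
            body = stream.ReadText
            stream.Close
        End If
        ' body now holds the posted JSON string, e.g. {'userName':'...'};
        ' parse it with the JSON library and read userName from the result.
        %>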

    Read the article

  • Google Chrome Extension : Port: Could not establish connection. Receiving end does not exist

    - by tcornelis
    I have been looking for an answer for almost a week now, but having read all the Stack Overflow items I can't seem to find a solution that is working for me. The error that I'm having is: Port: Could not establish connection. Receiving end does not exist. lastError:30 set lastError:30 dispatchOnDisconnect messaging:277 Folder layout: img developer_icon.png js sidebar.js main.js jquery-2.0.3.js manifest.json My manifest.json file looks something like this (it is manifest version 2): "browser_action": { "default_icon": "./img/developer_icon.png" }, "content_scripts": [ { "matches": ["*://*/*"], "js": ["./js/sidebar.js"], "run_at": "document_end" } ], "background" : { "scripts" : ["./js/main.js","./js/jquery-2.0.3.js"] }, I want to handle the user clicking the extension icon so I could inject a sidebar in the existing website (because the extension I would like to develop requires that amount of space). So in main.js: chrome.browserAction.onClicked.addListener(function(tab) { chrome.tabs.getSelected(null, function(tab){ chrome.tabs.sendMessage( //Selected tab id tab.id, //Params inside a object data {callFunction: "toggleSidebar"}, //Optional callback function function(response) { console.log(response); } ); }); }); and in sidebar.js: chrome.runtime.onMessage.addListener(function(req,sender,sendResponse){ console.log("sidebar handling request"); toggleSidebar(); }); but I'm never able to see the console.log in my console because of the error. Does someone know what I did wrong? Thanks in advance!
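    As a sketch of things worth checking (assumptions on my part, not facts from the question): "Receiving end does not exist" typically means the active tab has no copy of the content script - for instance a chrome:// page, or a tab that was loaded before the extension was installed or reloaded, since content scripts are only injected into pages loaded afterwards. The variant below uses the non-deprecated chrome.tabs.query and logs chrome.runtime.lastError so a delivery failure is visible instead of silent.

        // main.js sketch: message the active tab and surface delivery errors
        chrome.browserAction.onClicked.addListener(function (tab) {
          chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
            chrome.tabs.sendMessage(tabs[0].id, { callFunction: "toggleSidebar" }, function (response) {
              if (chrome.runtime.lastError) {
                console.log(chrome.runtime.lastError.message);  // e.g. no content script in this tab
              } else {
                console.log(response);
              }
            });
          });
        });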

    Read the article

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process:

    1. Ping these addresses to find available machines
    2. Connect to a known socket on the available machines
    3. Send a message to the successfully established sockets
    4. Compare the response to the expected response

    Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout it could take forever. But with a small timeout, I may miss some machines that are available. Sometimes pings appear to be failing even when I know that the address points to an active machine. Do I need to ping twice in the event of the request getting discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources(). !? Any pointers on the best approach to scanning a range of IPs like this?
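    A sketch of the concurrent approach, so short timeouts don't multiply into a long scan. Ping.SendPingAsync is only available from .NET 4.5, and the 1-second timeout and method name are illustrative choices, so treat this as the shape of the idea rather than a drop-in.

        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Net.NetworkInformation;
        using System.Threading.Tasks;

        static async Task<IPAddress[]> FindAliveAsync(IEnumerable<IPAddress> addresses)
        {
            var tasks = addresses.Select(async addr =>
            {
                using (var ping = new Ping())
                {
                    try
                    {
                        PingReply reply = await ping.SendPingAsync(addr, 1000);   // short timeout; all pings run in parallel
                        return reply.Status == IPStatus.Success ? addr : null;
                    }
                    catch (PingException)
                    {
                        return null;   // treat errors the same as "not reachable"
                    }
                }
            });

            var results = await Task.WhenAll(tasks);
            return results.Where(a => a != null).ToArray();
        }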

    Read the article

  • Fixed div flickers once the page is scrolled

    - by jasondavis
    I am trying to have an advertisement block/div that starts half way down the page; once you scroll down the page to this point it will stick to the top. Here is a demo of what I am trying to do and the code I am using to do it with... http://jsfiddle.net/jasondavis/6vpA7/3/embedded/result/ In the demo it works perfectly how I am wanting it to be, however when I implement it on my live site, http://goo.gl/zuaZx it works but when you scroll down the div flickers in and out of view on each scroll or down key press. On my site, to see the problem live, it is the block on the right sidebar that says "Recommended Books". Here is the code I am using... $(document).ready( function() { $(window).scroll( function() { if ($(window).scrollTop() > $('#social-container').offset().top) $('#social').addClass('floating'); else $('#social').removeClass('floating'); } ); } ); CSS: #social.floating { position: fixed; top: 0; } My demo jsfiddle where it works correctly: http://jsfiddle.net/jasondavis/6vpA7/3/ The only thing different on my live site is the div/id name is different. As you can see it is somewhat working on my live site except the flickering in and out of view as you scroll down the page. Anyone have any ideas why this would happen on my live site and not on my jsfiddle demo?
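    A sketch of one common explanation and fix, offered as an assumption rather than a confirmed diagnosis: when #social switches to position: fixed it stops taking up space, so #social-container collapses, its offset().top changes, the scroll condition flips back, the class is removed, the container grows again, and the cycle repeats as a flicker. Caching the trigger point once before any class toggling (and/or giving the container a fixed height in CSS) breaks that loop; the selector names are the ones used on the live site.

        $(document).ready(function () {
            var triggerTop = $('#social-container').offset().top;   // measured once, before any toggling

            $(window).scroll(function () {
                if ($(window).scrollTop() > triggerTop) {
                    $('#social').addClass('floating');
                } else {
                    $('#social').removeClass('floating');
                }
            });
        });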

    Read the article

  • Why does my Jabber bot only work if I'm debugging my Perl script?

    - by TheGNUGuy
    I am trying to make a jabber bot from scratch and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl and I'm using Eclipse with the Perl plugin to run a debug the script. The funny behavior I'm experiencing occurs when I run or debug the script. If I run the script using the debugger it works fine, meaning I can send messages to the bot and it can send messages to me. However when I just execute the script normally the bot sends the successful connection message then it disconnects from my jabber server and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is it has something to do with the subroutines and sending the presence of the bot. (I know for sure that it has something to do with sending the bot's presence because if the presence code is removed, the script behaves as expected except the bot doesn't appear to be online.) If anyone can help me with this that would be great. I originally had everything in 1 file but separated them into several trying to figure out my problem here are the pastebin links to my source code. jabberBot.pl: http://pastebin.com/cVifv0mm chatRoutine.pm: http://pastebin.com/JXmMT7av trimSpaces.pm: http://pastebin.com/SkeuWtu1 Thanks again for any help!

    Read the article

  • Bookmarkable URLs after Ajax for Wicket

    - by Wolfgang
    There is this well-known problem that browsers don't put Ajax requests in the request history, which causes problems for bookmarkability, the forward/back buttons, and refresh. Also, there is a common solution to that problem that appends the hash symbol # and some additional parameters to the URL by using JavaScript window.location.hash = .... In this question a basic solution to this problem is proposed, for example. My question is whether such a solution has been integrated in Wicket, so that existing Wicket facilities are used and no custom JavaScript has to be added. If not, I'd be interested in how this could be done. Such a solution would have to answer the question of what should be put after the hash. I like the idea that the bookmarkable URL that (in the non-Ajax case) was in front of the hash could be put behind it. For example, when you are on http://host/catalog and reach a page http://host/product/xyz the Ajax-triggered URL would be http://host/catalog#/product/xyz. Then it would be easy to write an onload handler that checks for the # and does a redirect to the URL after the hash.
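    For the last part, a plain-JavaScript sketch of that onload handler is below; it is not a Wicket facility (whether Wicket offers something equivalent out of the box is exactly the open question), and the #/... fragment convention is the one proposed above.

        // if the URL looks like http://host/catalog#/product/xyz, go to the real page behind the hash
        window.onload = function () {
            var hash = window.location.hash;
            if (hash && hash.charAt(1) === '/') {
                window.location.replace(hash.substring(1));   // "/product/xyz"
            }
        };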

    Read the article

  • web service client authentication

    - by Jack
    I want to consume a Java-based web service with a C#.NET client. The problem is, I couldn't authenticate to the service. It didn't work with this: mywebservice.Credentials = new System.Net.NetworkCredential(userid, userpass); I tried to write a base class for my client proxy. public class ClientProtocols : SoapHttpClientProtocol { protected override WebRequest GetWebRequest(Uri uri) { System.Net.WebRequest request = base.GetWebRequest(uri); if (null != Credentials) request.Headers.Add("Authorization", GetAuthHeader()); return request; } protected override WebResponse GetWebResponse(WebRequest request) { WebResponse response = base.GetWebResponse(request); return response; } private string GetAuthHeader() { StringBuilder sb = new StringBuilder(); sb.Append("Basic "); NetworkCredential cred = Credentials.GetCredential(new Uri(Url), "Basic"); string s = string.Format("{0}:{1}", cred.UserName, cred.Password); sb.Append(Convert.ToBase64String(Encoding.ASCII.GetBytes(s))); return sb.ToString(); } } How can I use this class and authorize to the web service? Thanks.
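    One way to make the override take effect, sketched as an assumption about the project setup rather than a known fact: the generated proxy (Reference.cs for a Web Reference) has to derive from ClientProtocols instead of SoapHttpClientProtocol, otherwise GetWebRequest is never called; note the edit is lost if the reference is regenerated. MyWebService and SomeOperation below are placeholder names.

        // after changing the generated proxy's base class to ClientProtocols:
        var service = new MyWebService();                                   // hypothetical proxy class
        service.Credentials = new NetworkCredential("userid", "userpass");  // picked up by GetAuthHeader()
        var result = service.SomeOperation();                               // hypothetical web method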

    Read the article

  • regular expression for emails NOT ending with replace script

    - by corroded
    I'm currently modifying my regex for this: http://stackoverflow.com/questions/2782031/extracting-email-addresses-in-an-html-block-in-ruby-rails Basically, I'm making another obfuscator that uses ROT13 by parsing a block of text for all links that contain a mailto referrer (using Hpricot). One use case this doesn't catch is if the user just typed in an email address (without turning it into a link via TinyMCE). So here's the basic flow of my method:

    1. Parse a block of text for all tags with href="mailto:..."
    2. Replace each tag with a JavaScript function that changes this into ROT13 (using this script: http://unixmonkey.net/?p=20)
    3. Once all links are obfuscated, pass the resulting block of text into another function that parses for all emails (this one has an email regex that reverses the email address and then adds a span to that email - to reverse it back)

    Step 3 is supposed to clean the block of text for remaining emails that AREN'T in href tags (meaning they weren't parsed by Hpricot). The problem with this is that the emails that were converted to ROT13 are still found by my regex. What I want to catch are just the emails that WEREN'T CONVERTED to ROT13. How do I do this? Well, all emails that WERE CONVERTED have a trailing "'.replace" in them, meaning I need to get all emails WITHOUT that string. So far I have this regex: /\b([A-Z0-9._%+-]+@[A-Z0-9.-]+.[A-Z]{2,4}('.replace))\b/i but this gets all the emails with the trailing '.replace. I want to get the opposite and I'm currently stumped with this. Any help from regex gurus out there? MORE INFO: Here's the regex + the block of text I'm parsing: http://www.rubular.com/r/NqXIHrNqjI As you can see, the first two 'email addresses' are already obfuscated using ROT13. I need a regex that gets the emails [email protected] and [email protected]
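    A sketch of one way to say "an email address not immediately followed by '.replace" with a negative lookahead, assuming the obfuscated form always carries exactly that trailing string (Ruby syntax, mirroring the character classes already used above; the sample addresses are placeholders):

        # matches plain addresses but skips ones immediately followed by '.replace
        pattern = /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b(?!'\.replace)/i

        "[email protected]'.replace(...) and [email protected]".scan(pattern)
        # => ["[email protected]"]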

    Read the article

  • Address family not supported by protocol exception

    - by srg
    I'm trying to send a couple of values from an Android application to a web service which I've set up. I'm using HTTP POST to send them, but when I run the application I get the error "request time failed: java.net.SocketException: Address family not supported by protocol". I get this while debugging with both the emulator and a device connected by wifi. I've already added the internet permission using: <uses-permission android:name="android.permission.INTERNET" /> This is the code I'm using to send the values: void insertData(String name, String number) throws Exception { String url = "http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/"; HttpClient client = new DefaultHttpClient(); HttpPost post = new HttpPost(url); try { List<NameValuePair> params = new ArrayList<NameValuePair>(2); params.add(new BasicNameValuePair("a", name)); params.add(new BasicNameValuePair("b", number)); post.setEntity(new UrlEncodedFormEntity(params)); HttpResponse response = client.execute(post); }catch(Exception e){ e.printStackTrace(); } } Also I know that my web service works fine because when I send the values from an HTML page it works fine: <form name="form1" action="http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/" method="post"> <input type="text" name="a"/> <input type="text" name="b"/> <input type="submit"/> I've seen questions about similar problems but haven't really found a solution. Thanks

    Read the article

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project):

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote: To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8 master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)
        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku
        To [email protected]:your_app_name.git
           3bf6e8d..642f01a master -> master

    Read the article

  • key-words highlight in <textarea> (again)

    - by Halst
    Wait, I know! I know that this "syntax highlight in textarea" question has been raised like a million times on Stack Overflow! But, please, listen. Offtopic: I'm not a web-developer, and technically I'm not a programmer at all. I study mechatronics and deal mostly with control engineering and digital hardware. And I'm so pissed off that whenever I want to share some application (that would be helpful in my field) and embed it into the web, I need to know such a crazy amount of technologies, like HTML, CSS, JavaScript, Flash, etc., and that takes time which I could have been spending for the benefit of my own field. Right now I'm playing with hardware description languages and I'm writing some Python libraries to convert one HDL into another. And I wanted to embed such a feature on the web: http://xhdl2vhdl.appspot.com/ I wanted to implement some basic syntax highlighting (only keyword highlighting will be enough) so that the code could be readable. But the whole idea of highlighting something in a textarea is not trivial at all. The other difficulty is that the languages I work with are rare, and there are no out-of-the-box solutions for them. I tried to dig into these solutions, but they are very complicated for me: http://www.nicolarizzo.com/gamesroom/experimental/CodeEditor.html http://marijn.haverbeke.nl/codemirror/jstest.html and there are no clear descriptions of how to use them (for my level of knowledge of web development). So, is there a simple solution, just to highlight a bunch of key-words in a textarea, or to do something equivalent? Thank you.

    Read the article

  • GlassFish 3: how do you change the (default) logging format?

    - by Kawu
    The question originated from here: http://www.java.net/forum/topic/glassfish/glassfish/configuring-glassfish-logging-format - without an answer. The default GlassFish 3 logging format is very annoying, much too long. [#|2012-03-02T09:22:03.165+0100|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=113;_ThreadName=AWT-EventQueue-0;| MESSAGE... ] This is just a horrible default IMO. The docs just explain all the fields, but not how to change the format: http://docs.oracle.com/cd/E18930_01/html/821-2416/abluk.html Note that I deploy SLF4J along with my webapp, which should pick up the format as well. How do you change the logging format? FYI: The links here are outdated: Install log formater in glassfish... The question here hasn't been answered: How to configure GlassFish logging to show milliseconds in timestamps?... The posting here resulted in nothing: http://www.java.net/forum/topic/glassfish/glassfish/cant-seem-configure-... It looks like GlassFish logging configuration is an issue of its own. Can anybody help?

    Read the article

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C# but I learned cURL via C++. My question is: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it. I believe that is what is causing me these problems. When I run this code it will quite literally time out and not call my header function or writer function. If I comment out the first two curlopt lines (the two proxy lines) my code runs with no problems. In Firefox I set the HTTP proxy and SOCKS host separately; they are different IPs and ports. How do I set the SOCKS part? The code below has a dummy proxy set but I can't figure out the SOCKS part. static void Main(string[] args) { SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL); var curl = new Easy(); { curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234"); curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5); curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup"); curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1); curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5"); curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf); curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data); curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf); curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw); curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0); curl.Perform(); var sz = sw.ToString(); var myrealip = sz.IndexOf("12.34.56.78") !=-1; } //Console.WriteLine(sz); SeasideResearch.LibCurlNet.Curl.GlobalCleanup(); }
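    A sketch of the point that usually trips people up here, stated as my reading of libcurl rather than anything from the question: libcurl talks to a single proxy per handle, so there is no separate "HTTP proxy plus SOCKS host" pair as in Firefox's settings. For a SOCKS proxy, CURLOPT_PROXY takes the SOCKS server's own host and port (no http:// scheme) and CURLOPT_PROXYTYPE tells libcurl to speak SOCKS5 to it; the address below is a placeholder.

        // the SOCKS server itself goes in CURLOPT_PROXY
        curl.SetOpt(CURLoption.CURLOPT_PROXY, "127.0.0.1:9050");                    // placeholder host:port
        curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);  // speak SOCKS5, not HTTP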

    Read the article

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS7 with GZIP. That is great... but, for one particular page, I need to stream it to the user, using Response.Flush() when needed. But when the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to Flush the content out to the user. Is there a way that I can have this one page opt out of the compression? One possible option: I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it. E.g. Response.ContentType = "x-text/html". This works okay with IE8, as it falls back to display the HTML. But Firefox will ask the user what to do with the unknown file type. This could work if there were another MIME type I could use that browsers would accept as HTML and that is not matched in applicationhost.config. For reference, these are the MIME types that will be compressed: <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> Other options? Are there other ways to opt out of compression?
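    A sketch of another option worth trying, offered as an assumption about this particular setup: with IIS 7's integrated pipeline the urlCompression setting can be scoped to a single URL in web.config, which switches dynamic compression off for just the streaming page and leaves every other page compressed. The page name below is a placeholder.

        <!-- in web.config, inside <configuration> -->
        <location path="StreamingPage.aspx">
          <system.webServer>
            <urlCompression doDynamicCompression="false" />
          </system.webServer>
        </location>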

    Read the article

  • Java getInputStream 400 errors

    - by Bill Szerdy
    When I contact a web service using a Java HttpUrlConnection it just returns a 400 Bad Request (IOException). How do I get the XML information that the server is returning; it does not appear to be in the getErrorStream of the connection nor is it in any of the exception information. When I run the following PHP code against a web service: <?php $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "https://www.myclientaddress.com/here/" ); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1 ); curl_setopt($ch, CURLOPT_POST,1 ); curl_setopt($ch, CURLOPT_POSTFIELDS,"username=ted&password=scheckler&type=consumer&id=123456789&zip=12345"); $result=curl_exec ($ch); echo $result; ?> it returns the following: <?xml version="1.0" encoding="utf-8"?> <response> <status>failure</status> <errors> <error name="RES_ZIP">Zip code is not valid.</error> <error name="ZIP">Invalid zip code for residence address.</error> </errors> </response> so I know the information exists
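    For reference, a sketch of the usual pattern for pulling the body of a 4xx response out of HttpURLConnection is below: read getErrorStream() instead of letting getInputStream() throw, after sending the same form-encoded fields the working PHP/cURL call sends. If the error stream really does come back null, comparing request headers against the cURL request would be the next thing to check - the class name is made up and the literal values are just the ones from the question.

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ErrorBodyDemo {
            public static void main(String[] args) throws Exception {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("https://www.myclientaddress.com/here/").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write("username=ted&password=scheckler&type=consumer&id=123456789&zip=12345"
                            .getBytes("UTF-8"));
                }

                int status = conn.getResponseCode();
                InputStream body = status >= 400 ? conn.getErrorStream() : conn.getInputStream();
                if (body != null) {
                    try (BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"))) {
                        for (String line; (line = reader.readLine()) != null; ) {
                            System.out.println(line);   // the <response>...</response> XML, if the server sent one
                        }
                    }
                }
            }
        }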

    Read the article

  • Fix a 404: missing parameters error from a GET request to CherryPy

    - by norabora
    I'm making a webpage using CherryPy for the server-side, HTML, CSS and jQuery on the client-side. I'm also using a mySQL database. I have a working form for users to sign up to the site - create a username and password. I use jQuery to send an AJAX POST request to the CherryPy which queries the database to see if that username exists. If the username exists, alert the user, if it doesn't, add it to the database and alert success. $.post('submit', postdata, function(data) { alert(data); }); Successful jQuery POST. I want to change the form so that instead of checking that the username exists on submit, a GET request is made as on the blur event from the username input. The function gets called, and it goes to the CherryPy, but then I get an error that says: HTTPError: (404, 'Missing parameters: username'). $.get('checkUsername', getdata, function(data) { alert(data); }); Unsuccessful jQuery GET. The CherryPy: @cherrypy.expose def submit(self, **params): cherrypy.response.headers['Content-Type'] = 'application/json' e = sqlalchemy.create_engine('mysql://mysql:pw@localhost/6470') c = e.connect() com1 = "SELECT * FROM `users` WHERE `username` = '" + params["username"] + "'" b = c.execute(com1).fetchall() if not len(b) > 0: com2 = "INSERT INTO `6470`.`users` (`username` ,`password` ,`website` ,`key`) VALUES ('" com2 += params["username"] + "', MD5( '" + params["password"] + "'), '', NULL);" a = c.execute(com2) c.close() return simplejson.dumps("Success!") #login user and send them to home page c.close() return simplejson.dumps("This username is not available.") @cherrypy.expose def checkUsername(self, username): cherrypy.response.headers['Content-Type'] = 'application/json' e = sqlalchemy.create_engine('mysql://mysql:pw@localhost/6470') c = e.connect() command = "SELECT * FROM `users` WHERE `username` = '" + username + "'" a = c.execute(command).fetchall(); c.close() sys.stdout.write(str(a)) return simplejson.dumps("") I can't see any differences between the two so I don't know why the GET request is giving me a problem. Any insight into what I might be doing wrong would be helpful. If you have ideas about the jQuery, CherryPy, config files, anything, I'd really appreciate it.
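    A hedged sketch of the first thing to check: CherryPy's "Missing parameters: username" means the query string that reached checkUsername had no username key, so making sure the object passed to $.get uses exactly that key (producing /checkUsername?username=...) is the quickest test. The selector and variable names below are assumptions about the surrounding client code.

        // on blur, ask the server about the typed username
        $('#username').blur(function () {
            var getdata = { username: $(this).val() };   // key must match the handler argument
            $.get('checkUsername', getdata, function (data) {
                alert(data);
            });
        });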

    Read the article

  • making clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make it so the page has no extraneous html output, only what I want. You can view the page here, http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this, <?php // $Id$ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language ?>" lang="<?php print $language->language ?>" dir="<?php print $language->dir ?>"> <head> </head> <body> <?php print $content; ?> </body> </html> Where is that script tag coming from? and how do I get rid of it? If you need more information just ask.

    Read the article

  • How do I integrate a Ruby on Rails app with Postful using POST and XML?

    - by Angela
    Hi, this is a pretty basic question but I'm not entirely clear how to do this. I am trying to use a third-party service that has a RESTful service. The service is called Postful. But I'm not clear on what exactly to do. http://www.postful.com/service/mail is one of the services, but to upload an image I have to post the following (and I'm not sure how I actually do this). Thanks!

    Read the article
