Search Results

Search found 19603 results on 785 pages for 'variable length'.


  • Embedded SQL in OO languages like Java

    - by Steve De Caux
    One of the things that annoys me when working with SQL in OO languages is having to define SQL statements in strings. When I used to work on IBM mainframes, the languages used an SQL preprocessor to parse SQL statements out of the native code, so the statements could be written in cleartext SQL without the obfuscation of strings. For instance, in Cobol there is an EXEC SQL ... END-EXEC syntax construct that allows pure SQL statements to be embedded in the Cobol code:

        <pure cobol code, including assignment of value to local variable HOSTVARIABLE>
        EXEC SQL
            SELECT COL_A, COL_B, COL_C
            INTO :COLA, :COLB, :COLC
            FROM TAB_A
            WHERE COL_D = :HOSTVARIABLE
        END-EXEC
        <more cobol code, variables COLA, COLB, COLC have been set>

    This makes the SQL statement really easy to read and check for errors. Between the EXEC SQL ... END-EXEC tokens there are no constraints on indentation, line breaking etc., so you can format the SQL statement according to taste. Note that this example is for a single-row select; when a multiple-row result set is expected, the coding is different (but still very easy to read). So, taking Java as an example: what made the "old COBOL" approach undesirable? Not only SQL, but system calls could be made much more readable with that approach. Let's call it the embedded foreign language preprocessor approach. Would an embedded foreign language preprocessor for SQL be useful to implement? Would you see a benefit in being able to write native SQL statements inside Java code? Edit: I'm really asking if you think SQL in OO languages is a throwback, and if not, what could be done to make it better.
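
    For contrast, the conventional string-based approach in Java that the question objects to might look like the sketch below (plain JDBC; the connection handling is assumed, and the table and column names simply mirror the COBOL example):

        import java.sql.*;

        class TabAQuery {
            // Minimal JDBC sketch of the string-based style the question criticizes.
            // Host variables become '?' placeholders bound through PreparedStatement.
            static void fetchRow(Connection conn, String hostVariable) throws SQLException {
                String sql = "SELECT COL_A, COL_B, COL_C FROM TAB_A WHERE COL_D = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, hostVariable);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            int colA = rs.getInt("COL_A");   // COL_B, COL_C read the same way
                        }
                    }
                }
            }
        }

    SQLJ took roughly the preprocessor route (#sql { ... } blocks translated to JDBC by a tool), which is the closest Java has come to the EXEC SQL style.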

    Read the article

  • Makefile for DOS/Windows and Cygwin

    - by Thomas Matthews
    I need to have a makefile work under DOS (Windows) and Cygwin. I'm having problems with the makefile detecting the OS correctly and setting appropriate variables. The objective is to set variables for the following commands, then invoke the commands in rules using the variables: delete file (rm in Cygwin, del in DOS), remove directory (rmdir, with different parameters in Cygwin and DOS), copy file (cp in Cygwin, copy in DOS), testing for file existence (test in Cygwin, IF EXIST in DOS), and listing the contents of a file (cat in Cygwin, type in DOS). Here is my attempt, which always takes the else clause:

        OS_KIND = $(OSTYPE)   # OSTYPE is an environment variable set by Cygwin.
        ifeq ($(OS_KIND), cygwin)
            ENV_OS = Cygwin
            RM = rm -f
            RMDIR = rmdir -r
            CP = cp
            REN = mv
            IF_EXIST = test -a
            IF_NOT_EXIST = ! test -a
            LIST_FILE = cat
        else
            ENV_OS = Win_Cmd
            RM = del -f -Q
            RMDIR = rmdir /S /Q
            IF_EXIST = if exist
            IF_NOT_EXIST = if not exist
            LIST_FILE = type
        endif

    I'm using the forward slash character, '/', as a directory separator. This is a problem with the DOS commands, which interpret it as a program argument rather than a separator. Does anybody know how to resolve this issue? Edit: I am using make with MinGW in both the Windows console (DOS) and Cygwin.
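
    A sketch of one common detection approach (not the poster's makefile): cmd.exe sets OS=Windows_NT and Cygwin inherits it, while OSTYPE is a shell variable in Cygwin's bash that is usually not exported to make, which would explain why the ifeq never matches. Probing uname is one way to tell Cygwin apart; the variable assignments below are illustrative:

        # OS is set to Windows_NT by cmd.exe (and inherited under Cygwin), so
        # probe uname to tell the two environments apart. Under plain cmd the
        # probe simply fails and UNAME stays empty.
        ifeq ($(OS),Windows_NT)
            UNAME := $(shell uname -s)
            ifneq (,$(findstring CYGWIN,$(UNAME)))
                RM := rm -f
                CP := cp
                LIST_FILE := cat
            else
                RM := del /F /Q
                CP := copy
                LIST_FILE := type
            endif
        endif

    For the path-separator problem, one common workaround is converting paths with $(subst /,\,$(SOME_PATH)) only when they are handed to the DOS built-ins.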

    Read the article

  • Git with SSH on Windows

    - by pankar
    Hello all, I've gone through the excellent guide provided by Tim Davis (http://www.timdavis.com.au/git/setting-up-a-msysgit-server-with-copssh-on-windows/), which is about configuring Git to work with SSH under Windows in order to produce a Git server as a main place for my DVCS. I am in the process of creating a clone of my project. I've gone through all the steps up to this point, but I keep getting this from TortoiseGit:

        git.exe clone -v "ssh://[email protected]:22/SSH/Home/administrator/myapp.git" "E:\GitTest\myapp"
        bash: [email protected]: command not found
        Initialized empty Git repository in E:/GitTest/myapp/.git/
        fatal: The remote end hung up unexpectedly

    Success is reported but nothing gets cloned. BTW: TortoisePLink comes up just before this message appears and asks me "login as:" (I thought that this info was given in the command, i.e. Administrator@blahblah). My home variable is set to the correct place; from a Git Bash shell:

        echo $HOME
        /c/SSH/home/Administrator

    I've also tried using PuTTY's plink instead of TortoisePLink (in both Git's and TortoiseGit's installation). This time the error was narrowed down to:

        git.exe clone -v "ssh://[email protected]:22/c:/SSH/Home/administrator/myapp.git" "E:\GitTest\myapp"
        Initialized empty Git repository in E:/GitTest/myapp/.git/
        fatal: The remote end hung up unexpectedly

    Any help is more than welcome! Thanks, Panagiotis

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (a .NET fat client with an IIS-hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious. For example, today we serialize an array of ints as follows: [4 bytes (array length)][4 bytes][4 bytes]. Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: saving 7 bits per bool may seem like a waste of time, but when you are dealing with large volumes of data (which we are), it adds up very fast. Note: we want to avoid general compression algorithms because of the latency associated with them; Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
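
    One standard trick in this space is variable-length integer encoding, which pays off when most values in an array are small. The sketch below is illustrative only (a plain 7-bit/varint scheme, not the poster's ISerializable code):

        // Each byte carries 7 bits of payload; the high bit flags "more bytes follow",
        // so small ints cost 1-2 bytes instead of a fixed 4.
        static void WriteVarUInt(System.IO.BinaryWriter w, uint value)
        {
            while (value >= 0x80)
            {
                w.Write((byte)(value | 0x80));   // low 7 bits + continuation flag
                value >>= 7;
            }
            w.Write((byte)value);
        }

        static uint ReadVarUInt(System.IO.BinaryReader r)
        {
            uint result = 0;
            int shift = 0;
            byte b;
            do
            {
                b = r.ReadByte();
                result |= (uint)(b & 0x7F) << shift;
                shift += 7;
            } while ((b & 0x80) != 0);
            return result;
        }

    Signed values are usually zig-zag mapped first (e.g. (value << 1) ^ (value >> 31)) so that small negative numbers also stay short, and delta-encoding sorted or slowly-varying arrays before the varint step compounds the saving.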

    Read the article

  • Javascript: Collision detection

    - by jack moore
    Hello, could someone please help me to understand how collision detection works in JS? I can't use jQuery or gameQuery - I'm already using prototype - so I'm looking for something very simple. I'm not asking for a complete solution, just point me in the right direction. Let's say there's:

        <div id="ball"></div>
        <div id="someobject0"></div>

    Now the ball is moving (any direction). "Someobject" (0-X) is already pre-defined and there are 20-60 of them, randomly positioned like this:

        #someobject {position: absolute; top: RNDpx; left: RNDpx;}

    I can create an array with the "someobject(X)" positions and test for collisions while the "ball" is moving... something like:

        for(var c=0; c<objposArray.length; c++){
            // ... code to check the ball's current position vs all objects, one by one ...
        }

    But I guess this would be a "noob" solution and it looks pretty slow. Is there anything better?
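
    The per-pair test itself is usually a simple axis-aligned bounding-box check; the sketch below assumes each cached entry stores left/top/width/height measured once up front (the names are illustrative):

        // True when two axis-aligned rectangles overlap.
        function boxesOverlap(a, b) {
            return !(a.left + a.width  < b.left ||
                     b.left + b.width  < a.left ||
                     a.top  + a.height < b.top  ||
                     b.top  + b.height < a.top);
        }

        // On each animation tick, test the moving ball against the cached boxes.
        function hitAnything(ball, boxes) {
            for (var i = 0; i < boxes.length; i++) {
                if (boxesOverlap(ball, boxes[i])) return boxes[i];
            }
            return null;
        }

    With only 20-60 static objects a linear scan per frame is normally fine; the usual next step, if it ever isn't, is to bucket the objects into a coarse grid so the ball is only tested against its own cell's occupants.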

    Read the article

  • Unable to load images into each MC?

    - by Hwang
    The images only load into the last MC. How do I make them load into each MC?

        private function imageHandler():void {
            imageBox=new MovieClip();
            imageBox.graphics.lineStyle(5, 0xFFFFFF);
            imageBox.graphics.beginFill(0xFF0000);
            imageBox.graphics.drawRect(0,0,150,225);
            imageBox.graphics.endFill();
            allImage.addChild(imageBox);
        }

        private function getPhoto():void {
            for (i=0; i<myXMLList.length(); i++) {
                placePhoto();
                imageHandler();
                imagesArray.push(imageBox);
                imagesArray[i].x=20+(200*i);
            }
            addChild(allImage);
            allImage.x=-(allImage.width+20);
            allImage.y=-(allImage.height+50);
        }

        private function placePhoto():void {
            loadedPic=myXMLList[i].@PIC;
            galleryLoader = new Loader();
            galleryLoader.load(new URLRequest(loadedPic));
            galleryLoader.contentLoaderInfo.addEventListener(Event.COMPLETE,picLoaded);
        }

        private function picLoaded(event:Event):void {
            bmp=new Bitmap(event.target.content.bitmapData);
            bmp.smoothing=true;
            imageBox.addChild(bmp);
        }
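
    The pattern that usually causes this: imageBox and galleryLoader are single member variables, so by the time each COMPLETE event fires they both point at whatever was created in the last loop iteration. One common fix (a sketch, not the poster's code) is to record which box belongs to which loader, for example with a Dictionary:

        import flash.utils.Dictionary;

        private var boxForLoader:Dictionary = new Dictionary();

        private function placePhoto(box:MovieClip):void {
            var loader:Loader = new Loader();
            boxForLoader[loader.contentLoaderInfo] = box;          // remember the target box
            loader.contentLoaderInfo.addEventListener(Event.COMPLETE, picLoaded);
            loader.load(new URLRequest(myXMLList[i].@PIC));
        }

        private function picLoaded(event:Event):void {
            var info:LoaderInfo = LoaderInfo(event.target);
            var loadedBmp:Bitmap = new Bitmap(Bitmap(info.content).bitmapData);
            loadedBmp.smoothing = true;
            MovieClip(boxForLoader[info]).addChild(loadedBmp);     // add to the matching box
            delete boxForLoader[info];
        }

    getPhoto() would then create the box first (imageHandler returning it) and pass it into placePhoto().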

    Read the article

  • Creative ways to punish (or just curb) laziness in coworkers

    - by FerretallicA
    Like the subject suggests, what are some creative ways to curb laziness in co-workers? By laziness I'm talking about things like using variable names like "inttheemplrcd" instead of "intEmployerCode", or not keeping their projects synced with SVN - not just people who use the last of the sugar in the coffee room and don't refill the jar. So far the two most effective things I've done both involve the core library my company uses. Since most of our programs are in VB.NET, the lack of case sensitivity is abused a lot. I've got certain features of the library using Reflection to access data in the client apps, which has a negligible performance hit and introduces case sensitivity in a lot of places where it is used. In instances where we have an agreed standard which is compromised by blatant laziness I take it a step further, like the DatabaseController class, which will flatly reject any DataTable passed to it that isn't named dtSomething (i.e. it must begin with dt and the third letter must be capitalised). It's frustrating to have to resort to things like this, but it has also gradually helped drill more attention to detail into their heads. Another is adding some code to the library's initialisation function to display a big and potentially embarrassing (only if seen by a client) message advising that the program is running in debug mode. We have had many instances where projects were sent to clients built in debug mode, which has a lot of implications for us (especially with regard to error recovery), and doing this has made sure they always build to release before distributing. Any other creative (i.e. not StyleCop etc.) approaches like this?

    Read the article

  • Mercurial 1.5 pager on Windows

    - by alexandrul
    I'm trying to set the pager used by Mercurial, but the output is empty, even if I specify the command in the [pager] section or as the PAGER environment variable. I noticed that the command provided is launched with cmd.exe. Is this the cause of the empty output, and if so, what is the right syntax? Environment: Mercurial 1.5, Mercurial 1.4.3. hgrc:

        [extensions]
        pager =

        [pager]
        pager = d:\tools\less\less.exe

    Sample command lines (from Process Explorer):

        hg diff
        c:\windows\system32\cmd.exe /c "d:\tools\less\less.exe 2> NUL:"
        d:\tools\less\less.exe

    UPDATE: In pager.py, by replacing

        sys.stderr = sys.stdout = util.popen(p, "wb")

    with

        sys.stderr = sys.stdout = subprocess.Popen(p, stdin = subprocess.PIPE, shell=False).stdin

    I managed to obtain the desired output for hg status and diff. BUT I'm sure it's wrong (or at least incomplete), and I have no control over the pager app (less.exe): the output is shown in the cmd.exe window, and I can see the less prompt (:), but any further input is fed into cmd.exe. It seems that the pager app is still active in the background: after typing exit in the cmd.exe window, I get control of the pager app and can terminate it normally. Also, it makes no difference what I choose as the pager app (more behaves the same). UPDATE 2: Issue1677 - [PATCH] pager for "hg help" output on windows

    Read the article

  • How do I access data from local XML files in a webOS application on the Palm Pre?

    - by Brijesh Patel
    I am new to the Mojo framework and Palm webOS. I want to retrieve data from XML files using XMLHttpRequest (Ajax). I am trying to get data with the following script:

        this.items = [];
        var that = this;
        var request = new Ajax.Request("first/movies.xml", {
            method: 'get',
            evalJSON: 'false',
            onSuccess: function(transport){
                var movieTags = transport.responseXML.getElementsByTagName('movie');
                for( var i = 0; i < movieTags.length; i++ ){
                    var title = movieTags[i].getAttribute('title');
                    that.items.push({text: title});
                }
            },
            onFailure: function(){
                alert('Something went wrong...')
            }
        });

    My XML file is at first/movies.xml, and that is what I am trying to access and retrieve data from, but nothing is displayed on the screen of the Palm Pre emulator. Does anyone have any idea about this issue? Please give a link where I can find sample source code for getting data from XML files in webOS.

    Read the article

  • What are some funny loading statements to keep users amused?

    - by Oli
    Nobody likes waiting, but unfortunately in the Ajax application I'm working on at the moment there is one fair-sized pause (1-2 seconds a go) that users have to undergo each and every time they want to load up a chunk of data. I've tried to make the load as interactive as possible. There's an animated GIF alongside a very plain, very dull "Loading..." message. So I thought it might be quite fun to come up with a batch of 50-or-so funny-looking messages and pick from them randomly so the user never knows what they're going to see. The time they would have spent growing impatient is fruitfully used. Here's what I've come up with so far, just to give you an idea:

        var randomLoadingMessage = function() {
            var lines = new Array(
                "Locating the required gigapixels to render...",
                "Spinning up the hamster...",
                "Shovelling coal into the server...",
                "Programming the flux capacitor"
            );
            return lines[Math.round(Math.random()*(lines.length-1))];
        }

    (Yes - I know some of those are pretty lame - that's why I'm here :) The funniest I see today will get the prestigious "Accepted Answer" award. Others get votes for participation. Enjoy!!

    Read the article

  • Race condition during thread start?

    - by user296353
    Hi, I'm running the following code to start my threads, but they don't start as intended. For some reason, some of the threads start with the same objects (and some don't even start). If I try to debug, they start just fine (extra delay added by me clicking F10 to step through the code). These are the functions in my forms app:

        private void startWorkerThreads()
        {
            int numThreads = config.getAllItems().Count;
            int i = 0;
            foreach (ConfigurationItem tmpItem in config.getAllItems())
            {
                i++;
                var t = new Thread(() => WorkerThread(tmpItem, i));
                t.Start();
                //return t;
            }
        }

        private void WorkerThread(ConfigurationItem cfgItem, int mul)
        {
            for (int i = 0; i < 100; i++)
            {
                Thread.Sleep(10*mul);
            }
            this.Invoke((ThreadStart)delegate()
            {
                this.textBox1.Text += "Thread " + cfgItem.name + " Complete!\r\n";
                this.textBox1.SelectionStart = textBox1.Text.Length;
                this.textBox1.ScrollToCaret();
            });
        }

    Anyone able to help me out? Cheers!
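
    The usual explanation for this symptom is that the lambda captures the loop variables themselves rather than their values, so threads that start a little later see later values of tmpItem and i. A minimal sketch of the common fix (copy the values into locals declared inside the loop):

        foreach (ConfigurationItem tmpItem in config.getAllItems())
        {
            i++;
            ConfigurationItem itemCopy = tmpItem;   // each iteration gets its own variable,
            int indexCopy = i;                      // so each closure captures its own copy
            var t = new Thread(() => WorkerThread(itemCopy, indexCopy));
            t.Start();
        }

    Stepping through in the debugger hides the problem only because the delay lets each thread read the variables before the loop moves on.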

    Read the article

  • Params order in Foo.new(params[:foo]), need one before the other (Rails)

    - by Jeena
    I have a problem which I don't know how to fix. It has to do with the unsorted params hash. I have an object Reservation which has a virtual time= attribute and a virtual eating_session= attribute. When I set time= I also want to validate it via an external server request. I do that with the help of the method times(), which does a lookup on another server and saves all possible times in the @times variable. The problem is that the method times() needs the eating_session attribute to find out which times are valid, but Rails sometimes calls the time= method first, before there is any eating_session in the Reservation object, when I just do @reservation = Reservation.new(params[:reservation]).

        class ReservationsController < ApplicationController
          def new
            @reservation = Reservation.new(params[:reservation])
            # ...
          end
        end

        class Reservation < ActiveRecord::Base
          include SoapClient
          attr_accessor :date, :time
          belongs_to :eating_session

          def time=(time)
            @time = times.find { |t| t[:time] == time }
          end

          def times
            return @times if defined? @times
            @times = []
            response = call_soap :search_availability {
              # eating_session is sometimes nil
              :session_id => eating_session.code,   # <- HERE IS THE PROBLEM
              :dining_date => date
            }
            response[:result].each do |result|
              @times << {
                :time => "#{DateTime.parse(result[:time]).strftime("%H:%M")}",
                :correlation_data => result[:correlation_data]
              }
            end
            @times
          end
        end

    I have no idea how to fix this; any help is appreciated.

    Read the article

  • Accessing frame info in gdb

    - by Maelstrom
    In gdb, is there a way to access the contents of info frame in a script? I'm debugging a problem somewhere between Apache, PHP, APC and my own code, and I have about a hundred cores to choose from. Following the instructions at http://bugs.php.net/bugs-generating-backtrace.php I end up with a stack trace like:

        #0  0x0121a31a in do_bind_function (opline=0xa94dd750, function_table=0x9b9cf98, compile_time=0 '\0')
            at /usr/src/debug/php-5.2.7/Zend/zend_compile.c:2407
        #1  0x0124bb2e in ZEND_DECLARE_FUNCTION_SPEC_HANDLER (execute_data=0xbfef7990)
            at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:498
        #2  0x01249dfa in execute (op_array=0xb79d5d3c)
            at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
        #3  0x01261e31 in ZEND_INCLUDE_OR_EVAL_SPEC_VAR_HANDLER (execute_data=0xbfef80d0)
            at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:7809
        #4  0x01249dfa in execute (op_array=0xb79d55ec)
            at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
        ...
        #26 0x09caa894 in ?? ()
        #27 0x00000000 in ?? ()

    The stack will always look similar, with the function execute and ZEND_something interleaved several times. I need to go up to the last instance of execute (up 2 in this case) and print myVar. Obviously gdb knows the function names, but does it surface them in any user variables I could access? Typing frame 2 shows a one-line version, and info frame shows a single stack frame in detail. I want to do something like

        while ($current_frame.function_name != "execute") { up; }
        print myVar

    but I don't see how to do it strictly within gdb. Is there a variable / structure / special memory location / something that allows access to gdb's information on either the whole stack (like bt) or the current stack frame (like info frame)?
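
    For reference, gdb's Python scripting (gdb 7.0 and later, when built with Python support) exposes exactly this information; a minimal sketch, loaded with "source find_execute.py" inside gdb:

        # find_execute.py - walk up to the most recent frame named "execute"
        # and print myVar there.
        import gdb

        frame = gdb.selected_frame()
        while frame is not None and frame.name() != "execute":
            frame = frame.older()

        if frame is not None:
            frame.select()                 # make it the current frame for print/up
            gdb.execute("print myVar")
        else:
            gdb.write("no frame named 'execute' found\n")

    The same walk can be wrapped in a gdb.Command subclass to run it over each core file in a loop.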

    Read the article

  • How do I detect if there is already a similar document stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not considered exact matches, but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText),
            0.8f,
            0);
        hits = _searcher.Search(query);

    The idea was to set the minimal similarity to 0.8 (which I think is high enough), so only similar documents would be found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document. The variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't detect even an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text",
            text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below and the results are: TermQuery doesn't return any results. A query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document that has the exact match gets the maximum score, and several other documents have similar content.

    Read the article

  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. In the GUI class the user can add files from the file system to an AWT list. After the user clicks on a file in the list I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected:

        private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) {
            int selection = schemaList.getSelectedIndex();
            File selectedFile = schemas.get(selection);
            long fileSize = selectedFile.length();
            fileInfoLabel.setText("Size: " + fileSize + " bytes");
            schemaParser = new XSDParser(selectedFile.getAbsolutePath());
            TreeModel model = schemaParser.generateTreeModel();
            schemaTree.setModel(model);
        }

    I've updated the code to correspond to the accepted answer. The JTree now updates correctly based on which file I select in the list.

    Read the article

  • Better way of looping to detect change

    - by Dremation
    As of now I'm using a while(true) loop to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without a huge performance loss. Does anyone have ideas on this?

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); //I do this to cut down on cpu usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(new char[] { Convert.ToChar(",") });
                    //MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString())
                        {
                            //Ok, it changed, let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        }//end if
                    }//end try
                    catch { }
                }//end for
            }//end while
        }//end void
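
    If the goal is simply to stop dedicating a spinning thread to the polling, one common alternative is a System.Threading.Timer (a sketch, not the poster's code; the 30-second interval mirrors the original Sleep and the class name is illustrative):

        using System;
        using System.Threading;

        static class MemWatcher
        {
            // Fires ScanOnce on a thread-pool thread every 30 seconds;
            // no dedicated thread sits in a while(true) loop.
            private static Timer memTimer;

            public static void StartScanning(string[] addy)
            {
                memTimer = new Timer(_ => ScanOnce(addy), null,
                                     TimeSpan.Zero, TimeSpan.FromSeconds(30));
            }

            private static void ScanOnce(string[] addy)
            {
                foreach (string entry in addy)
                {
                    string[] values = entry.Split(',');
                    // compare the current memory value against values[1], as in the original loop
                }
            }
        }

    MemWatcher.StartScanning(addy) would then replace creating and starting memScan; shortening the interval is still a trade-off between CPU usage and detection latency.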

    Read the article

  • PayPal sandbox a/c in .NET: IPN response invalid

    - by Sam
    Hi, I am integrating PayPal into my site. I use a sandbox account - one buyer a/c and one seller a/c - and downloaded the code below from the PayPal site:

        string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);
        //Set values for the request back
        req.Method = "POST";
        req.ContentType = "application/x-www-form-urlencoded";
        byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
        string strRequest = Encoding.ASCII.GetString(param);
        strRequest += "&cmd=_notify-validate";
        req.ContentLength = strRequest.Length;

        //for proxy
        //WebProxy proxy = new WebProxy(new Uri("http://url:port#"));
        //req.Proxy = proxy;

        //Send the request to PayPal and get the response
        StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
        streamOut.Write(strRequest);
        streamOut.Close();
        StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
        string strResponse = streamIn.ReadToEnd();
        streamIn.Close();

        if (strResponse == "VERIFIED")
        {
            //check the payment_status is Completed
            //check that txn_id has not been previously processed
            //check that receiver_email is your Primary PayPal email
            //check that payment_amount/payment_currency are correct
            //process payment
        }
        else if (strResponse == "INVALID")
        {
            //log for manual investigation
        }
        else
        {
            //log response/ipn data for manual investigation
        }

    When I add this snippet to the Page_Load event of the success page, the IPN response I get is INVALID, even though the amount is paid successfully. Any help? The PayPal docs are not clear. Thanks in advance.

    Read the article

  • Problems with jQuery load and getJSON only when using Chrome

    - by leftend
    I'm having an issue with two jQuery calls. The first is a "load" that retrieves HTML and displays it on the page (it does include some Javascript and CSS in the code that is returned). The second is a "getJSON" that returns JSON - the JSON returned is valid. Everything works fine in every other browser I've tried - except Chrome for either Windows or Mac. The page in question is here: http://urbanistguide.com/category/Contemporary.aspx When you click on a restaurant name in IE/FF, you should see that item expand with more info - and a map displayed to the right. However, if you do this in Chrome all you get is the JSON data printed to the screen. The first problem spot is when the "load" function is called here:

        var fulllisting = top.find(".listingfull");
        fulllisting.load(href2, function() {
            fulllisting.append("<div style=\"width:99%;margin-top:10px;text-align:right;\"><a href=\"#\" class=\"" + obj.attr("id") + "\">X</a>");
            itemId = fulllisting.find("a.listinglink").attr("id");
            ...

    In the above code, the callback function doesn't seem to get invoked. The second problem spot is when the "getJSON" function is called:

        $.getJSON(href, function(data) {
            if (data.error.length > 0) {
                //display error message
            } else {
                ...
            }

    In this case it just seems to follow the link instead of performing the callback... and yes, I am doing a "return false;" at the end of all of this to prevent the link from executing. All of the rest of the code is inline on that page if you want to view the source code. Any ideas?? Thanks

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design decision to implement most of the business logic in CLR stored procedures? I have read a lot about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture:

        - The Data Layer would access the database.
        - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
        - The GUI Layer would be the means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages to this solution.

        Pros:
        - The business logic runs close to the data, meaning less network traffic.
        - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

        Cons:
        - Scattering of the business logic: some part is here, some part is there.
        - A questionable design decision that may run into unknown problems.
        - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client which, when submitted by the client to Apache, causes Apache to return the following:

        HTTP/1.1 400 Bad Request
        Date: Mon, 08 Mar 2010 21:21:21 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Length: 7274
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Size of a request header field exceeds server limit.<br />
        <pre>
        Cookie: ::: A REALLY LONG COOKIE :::
        </pre>
        </p>
        <hr>
        <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
        </body></html>

    After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this; I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is: how do I reset the large cookie on the client if every time the client tries to submit a request to Apache, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.
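
    One avenue worth noting (an illustrative sketch, not a tested fix for this exact setup): the limit being tripped is Apache's LimitRequestFieldSize, which defaults to 8190 bytes per header field. Raising it temporarily lets the oversized Cookie header through long enough to serve a page that expires the cookie, after which the limit can go back to the default:

        # httpd.conf - temporary, while oversized client cookies are being flushed.
        # Default is 8190; the offending cookie was reported at over 7000 characters,
        # so double the limit to be safe.
        LimitRequestFieldSize 16380

    The page served under the raised limit would send Set-Cookie headers with a past Expires date (and the matching path/domain) for the runaway cookie.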

    Read the article

  • Visual Studio expression containing a term named "by" cannot be evaluated in the watch window

    - by Andrei Pana
    Consider my C++ code below:

        int _tmain(int argc, _TCHAR* argv[])
        {
            int by = 10;
            printf("%d\n", by);
            int bx = 20;
            printf("%d\n", (by + bx));
            return 0;
        }

    which works fine. The funny thing is with the "by" variable. If I try to add a watch for a simple expression that contains by, the result will be CXX0030: Error: expression cannot be evaluated. For example, on a breakpoint on return 0, if I add the following watches I get the results mentioned:

        by        : 10
        bx        : 20
        by + 5    : CXX0030: Error: expression cannot be evaluated
        bx + 5    : 25
        by + bx   : CXX0030: Error: expression cannot be evaluated
        (by) + bx : 30
        by + (bx) : CXX0030: Error: expression cannot be evaluated
        bx + (by) : CXX0014: Error: missing operrand

    This happens in VS2010 and VS2008 on multiple computers. So, more out of curiosity, what is happening with "by"? Is it some kind of strange operator? Why doesn't bx get the same treatment? (I've tried Google on this but it is quite difficult to get relevant hits with terms like "by".)

    Read the article

  • jQuery image hover popup can't detect browser edge and change its direction

    - by Salman
    Hi guys, I am trying to implement a jQuery image hover popup but I'm facing a problem: when the popup is close to the browser edge it goes beyond the edge. I want it to change direction when it finds that there is not enough space to show the popup. I have seen this effect in many plugins where popups, tooltips and drop-down menus change direction when they are close to the browser window's edge. Can anyone point me in the right direction? Here is a screenshot for reference: http://img512.imageshack.us/img512/4990/browseredge.png Here is the jQuery hover code:

        function imagePreview(){
            /* CONFIG */
            xOffset = 10;
            yOffset = 30;
            // these 2 variables determine the popup's distance from the cursor
            // you might want to adjust to get the right result
            /* END CONFIG */
            $("a.preview").hover(function(e){
                this.t = this.title;
                this.title = "";
                var c = (this.t != "") ? "<br>" + this.t : "";
                var newName = this.name;
                //console.log(this.name);
                newName=newName.replace("/l/","/o/");
                //console.log(newName);
                $("body").append("<p id='preview'><img src='"+ this.name +"' alt='Image preview' style='margin-bottom:5px;'>"+ c +"</p>");
                $("#preview img").error(function () {
                    $("#preview img").attr("src" ,newName).css({'width': '400px', 'height': 'auto'});
                });
                $("#preview")
                    .css("top",(e.pageY - xOffset) + "px")
                    .css("left",(e.pageX + yOffset) + "px")
                    .fadeIn("fast");
            }, function(){
                this.title = this.t;
                $("#preview").remove();
            });
            $("a.preview").mousemove(function(e){
                $("#preview")
                    .css("top",(e.pageY - xOffset) + "px")
                    .css("left",(e.pageX + yOffset) + "px");
            });
        };

    Any help will be appreciated. Thanks, Salman
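
    The usual approach is to measure the popup and the viewport before positioning it, and flip to the other side of the cursor when it would overflow. A sketch that could replace the two .css("top"...).css("left"...) calls above (names follow the original code; treat it as illustrative):

        function positionPreview(e) {
            var $p = $("#preview"),
                w  = $p.outerWidth(),
                h  = $p.outerHeight(),
                x  = e.pageX + yOffset,
                y  = e.pageY - xOffset;

            // flip horizontally if the popup would pass the right edge of the viewport
            if (x + w > $(window).width() + $(window).scrollLeft()) {
                x = e.pageX - yOffset - w;
            }
            // flip vertically if it would pass the bottom edge
            if (y + h > $(window).height() + $(window).scrollTop()) {
                y = e.pageY - h;
            }
            $p.css({ top: y + "px", left: x + "px" });
        }

    Both the hover and mousemove handlers would then call positionPreview(e) instead of setting top/left directly.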

    Read the article

  • jQuery Autocomplete Json Ajax cross browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance autocomplete suggestion service, which returns a result set in JSON. What I am trying to do is go off to the GSA to get suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows:

        http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich

    The JSON which is returned is as follows:

        {
            "query": "re",
            "results": [
                {"name": "red",  "type": "suggest"},
                {"name": "read", "type": "suggest"}
            ]
        }

    The jQuery autocomplete code is as follows:

        $("#q").autocomplete(searchUrl, {
            width: 320,
            dataType: 'json',
            highlight: false,
            scroll: true,
            scrollHeight: 300,
            parse: function(data) {
                var array = new Array();
                for(var i=0;i<data.results.length;i++) {
                    array[i] = {
                        data: data.results[i],
                        value: data.results[i].name,
                        result: data.results[i].name
                    };
                }
                return array;
            },
            formatItem: function(row) {
                return row.name;
            }
        });

    This works in IE but fails in Firefox, as the data returned in the parse function is null. Any ideas why this would be the case? Workaround: I created an aspx page to call the GSA suggest service and to return the JSON from the suggest service. Using this page as a proxy and setting it as the URL in the jQuery autocomplete worked in both IE and Firefox.

    Read the article

  • AS3 microphone recording/saving works, in-flash PCM playback double speed

    - by Lowgain
    I have a working mic recording script in AS3 which I have been able to use successfully to save .wav files to a server through AMF. These files play back fine in any audio player with no weird effects. For reference, here is what I am doing to capture the mic's ByteArray (within a class called AudioRecorder):

        public function startRecording():void {
            _rawData = new ByteArray();
            _microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, _samplesCaptured, false, 0, true);
        }

        private function _samplesCaptured(e:SampleDataEvent):void {
            _rawData.writeBytes(e.data);
        }

    This works with no problems. After the recording is complete I can take the _rawData variable and run it through a WavWriter class, etc. However, if I play this same ByteArray as a sound using the following code, which I adapted from the Adobe cookbook (within a class called WavPlayer):

        public function playSound(data:ByteArray):void {
            _wavData = data;
            _wavData.position = 0;
            _sound.addEventListener(SampleDataEvent.SAMPLE_DATA, _playSoundHandler);
            _channel = _sound.play();
            _channel.addEventListener(Event.SOUND_COMPLETE, _onPlaybackComplete, false, 0, true);
        }

        private function _playSoundHandler(e:SampleDataEvent):void {
            if(_wavData.bytesAvailable <= 0) return;
            for(var i:int = 0; i < 8192; i++) {
                var sample:Number = 0;
                if(_wavData.bytesAvailable > 0)
                    sample = _wavData.readFloat();
                e.data.writeFloat(sample);
            }
        }

    the audio file plays at double speed! I checked recording bitrates and such and am pretty sure those are all correct, and I tried changing the buffer size and whatever other numbers I could think of. Could it be a mono vs stereo thing? Hope I was clear enough here, thanks!
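
    The mono vs stereo hunch is the usual explanation: Sound's SAMPLE_DATA callback expects 44.1 kHz stereo, i.e. two floats (left, right) per frame, while the Microphone delivers one mono float per frame. Feeding mono data as if it were stereo halves the duration, which is heard as double speed. A sketch of the common fix (write each recorded sample to both channels; this assumes the mic was set to a 44.1 kHz rate, otherwise resampling is also needed):

        private function _playSoundHandler(e:SampleDataEvent):void {
            for (var i:int = 0; i < 8192 && _wavData.bytesAvailable > 0; i++) {
                var sample:Number = _wavData.readFloat();
                e.data.writeFloat(sample); // left channel
                e.data.writeFloat(sample); // right channel
            }
        }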

    Read the article

  • Why are my attempts to open a file using open for writing failing? Ada 95

    - by mat_geek
    When I attempt to open a file for writing I get an Ada.IO_Exceptions.Name_Error. The procedure call is Ada.Text_IO.Open. The file name is "C:\CC_TEST_LOG.TXT". This file does not exist. This is on Windows XP on an NTFS partition. The user has permissions to create and write to the directory. The filename is well under the WIN32 max path length.

        name_2 : String := "C:\CC_TEST_LOG.TXT";

        if name_2'last > name_2'first then
           begin
              Ada.Text_IO.Open(file, Ada.Text_IO.Out_File, name_2);
              Ada.Text_IO.Put_Line(
                 "CC_Test_Utils: LogFile: ERROR: Open, File " & name_2);
              return;
           exception
              when The_Error : others =>
                 Ada.Text_IO.Put_Line(
                    "CC_Test_Utils: LogFile: ERROR: Open Failed; " &
                    Ada.Exceptions.Exception_Name(The_Error) & ", File " & name_2);
           end;
        end if;
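
    For reference, a sketch of the distinction that matters here (standard Ada.Text_IO semantics): Open expects the external file to exist already and raises Name_Error when it does not, while Create is the call that makes a new file for writing:

        --  Create a new external file for writing; Open would be used to
        --  reopen an existing one.
        Ada.Text_IO.Create (File => file,
                            Mode => Ada.Text_IO.Out_File,
                            Name => name_2);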

    Read the article
