Search Results

Search found 13068 results on 523 pages for 'copy and paste'.


  • jquery question for events

    - by OM The Eternity
    I have to copy the text from one text box to another, controlled by a checkbox, using jQuery. I have applied a jQuery event in my code and it works only partially. The code is as follows:

        <html>
        <head>
        <script src="js/jquery.js" ></script>
        </head>
        <body>
        <form>
        <input type="text" name="startdate" id="startdate" value=""/>
        <input type="text" name="enddate" id="enddate" value=""/>
        <input type="checkbox" name="checker" id="checker" />
        </form>
        <script>
        $(document).ready(function(){
            $("#startdate").change(function(o){
                if($("#checker").is(":checked")){
                    $("#enddate").val($("#startdate").val());
                }
            });
        });
        </script>
        </body>
        </html>

    The code works like this: the checkbox is checked by default, so whenever I enter the start date and tab away, the start date gets copied to the end date. My problem: if I uncheck the checkbox, change the start date, and then recheck the checkbox, the start date is not copied. What should be done in this situation? Please help me.
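
    One way to cover the recheck case (not from the original post, just a sketch) is to run the same copy logic from the checkbox's own change event as well, so ticking the box again triggers the copy even though the start date has not changed since. A minimal TypeScript sketch using plain DOM calls and the same element ids:

        // Sketch: copy startdate to enddate when the date changes (and the box is
        // checked) and also when the box itself is re-checked.
        const startdate = document.getElementById("startdate") as HTMLInputElement;
        const enddate = document.getElementById("enddate") as HTMLInputElement;
        const checker = document.getElementById("checker") as HTMLInputElement;

        function syncDates(): void {
          if (checker.checked) {
            enddate.value = startdate.value;
          }
        }

        startdate.addEventListener("change", syncDates); // existing behaviour
        checker.addEventListener("change", syncDates);   // covers the re-check case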

    Read the article

  • How can I fetch Google static maps with TIdHTTP?

    - by cloudstrif3
    I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows:

        procedure TForm1.GetGoogleMap();
        var
          t_GetRequest: String;
          t_Source: TStringList;
          t_Stream: TMemoryStream;
        begin
          t_Source := TStringList.Create;
          try
            t_Stream := TMemoryStream.Create;
            try
              t_GetRequest :=
                'http://maps.google.com/maps/api/staticmap?' +
                'center=Brooklyn+Bridge,New+York,NY' +
                '&zoom=14' +
                '&size=512x512' +
                '&maptype=roadmap' +
                '&markers=color:blue|label:S|40.702147,-74.015794' +
                '&markers=color:green|label:G|40.711614,-74.012318' +
                '&markers=color:red|color:red|label:C|40.718217,-73.998284' +
                '&sensor=false';
              IdHTTP1.Post(t_GetRequest, t_Source, t_Stream);
              t_Stream.SaveToFile('google.html');
            finally
              t_Stream.Free;
            end;
          finally
            t_Source.Free;
          end;
        end;

    However, I keep getting the response HTTP/1.0 403 Forbidden. I assume this means I don't have permission to make this request, but if I copy the URL into my web browser (IE 8) it works fine. Is there some header information that I need, or something else?
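
    Not part of the original question, but worth noting: the Static Maps endpoint is designed to be fetched with a plain GET of the assembled URL rather than a POST. Purely as a point of comparison, here is a TypeScript sketch of an equivalent GET request (assumes Node 18+ for the built-in fetch; current versions of the API additionally expect HTTPS and an API key):

        // Sketch: fetch a Static Maps style URL with a plain GET and save the result.
        import { writeFile } from "node:fs/promises";

        const url =
          "http://maps.google.com/maps/api/staticmap?center=Brooklyn+Bridge,New+York,NY" +
          "&zoom=14&size=512x512&maptype=roadmap&sensor=false";

        async function main(): Promise<void> {
          const response = await fetch(url);                 // GET, not POST
          if (!response.ok) throw new Error(`HTTP ${response.status}`);
          await writeFile("map.png", Buffer.from(await response.arrayBuffer()));
        }

        main().catch(console.error);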

    Read the article

  • New NSData with range of old NSData maintaining bytes.

    - by umop
    I have a fairly large NSData (or NSMutableData if necessary) object from which I want to take a small chunk and leave the rest. Since I'm working with large amounts of NSData bytes, I don't want to make a big copy, but instead just truncate the existing bytes. Basically:

        NSData *source:      <a few bytes I want to discard> + <big chunk of bytes I want to keep>
        NSData *destination: <big chunk of bytes I want to keep>

    There are truncation methods on NSMutableData, but they only truncate the end, whereas I want to truncate the beginning. My thought is to do this with the methods:

        - getBytes:range:
        - initWithBytesNoCopy:length:freeWhenDone:

    However, I'm trying to figure out how to manage memory with these. I'm guessing the process will be like this (I've placed ????s where I don't know what to do):

        void *buffer;
        // Get range of bytes
        [source getBytes:buffer range:NSMakeRange(myStart, myLength)];
        // Somehow (m)alloc the memory which will be freed up in the following step
        ?????
        // Release the source, now that I've allocated the bytes
        [source release];
        // Create a new data, recycling the bytes so they don't have to be copied
        NSData *destination = [[NSData alloc] initWithBytesNoCopy:buffer
                                                            length:myLength
                                                      freeWhenDone:YES];

    Thanks for the help!
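
    This is outside the original question, but the keep-the-tail-without-copying idea is easy to illustrate with typed arrays, where subarray() returns a view over the same underlying buffer instead of a copy. A TypeScript sketch, purely for illustration:

        // Sketch: drop the first `headerLength` bytes without copying the payload.
        const original = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
        const headerLength = 3;

        // subarray() creates a view sharing the same ArrayBuffer; no bytes are copied.
        const payload = original.subarray(headerLength);

        console.log(payload);                             // Uint8Array [4, 5, 6, 7, 8]
        console.log(payload.buffer === original.buffer);  // true: same underlying storage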

    Read the article

  • Finding the last focused element

    - by Joshua Cody
    I'm looking to determine which element had the last focus in a series of inputs that are added dynamically by the user. This code can only get the inputs that are available on page load:

        $('input.item').focus(function(){
            $(this).siblings('ul').slideDown();
        });

    And this code sees all elements that have ever had focus:

        $('input.item').live('focus', function(){
            $(this).siblings('ul').slideDown();
        });

    The HTML structure is this:

        <ul>
          <li><input class="item" name="goals[]">
            <ul>
              <li>long list here</li>
              <li>long list here</li>
              <li>long list here</li>
            </ul>
          </li>
        </ul>
        <a href="#" id="add">Add another</a>

    On page load, a single input loads. Then with each "Add another", a new copy of the top unordered list's contents is made and appended, and the new input gets focus. When each input gets focus, I'd like to show the list beneath it. But I don't seem to be able to "watch for the most recently focused element, which exists now or in the future."

    To clarify: I'm not looking for the last occurrence of an element in the DOM tree. I'm looking to find the element that currently has focus, even if that element was not present on the original page load. So in the image above, if I were to focus on the second element, the list of words should appear under the second element. My focus is currently on the last element, so the words are displayed there. Do I have some sort of fundamental assumption wrong?
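
    Not from the original post, but one common pattern for "inputs that exist now or in the future" is a single delegated listener on an ancestor, using the bubbling focusin event, which also makes it easy to hide every other list when one input gains focus. A minimal TypeScript sketch with plain DOM calls (the wrapper id is an assumption):

        // Sketch: delegate focus handling so dynamically added inputs are covered too.
        const container = document.getElementById("goal-list")!; // hypothetical wrapper around the outer <ul>

        container.addEventListener("focusin", (event: FocusEvent) => {
          const target = event.target as HTMLElement;
          if (!target.matches("input.item")) return;

          // Hide every sub-list, then reveal the one next to the focused input.
          container.querySelectorAll<HTMLUListElement>("input.item ~ ul")
            .forEach(ul => { ul.style.display = "none"; });
          const ownList = target.nextElementSibling as HTMLElement | null;
          if (ownList) ownList.style.display = "";
        });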

    Read the article

  • Hibernate overriding database modifications with detached object state

    - by EugeneP
    I'm going to go with this design: create an object and keep it alive during the whole web-app session, and synchronize its state with the database state. What I want to achieve is this: if, between my database operations (modifications that I persist to the db), someone intentionally spoils table rows, then on the next save to the database all those changes WOULD BE OVERWRITTEN with the object state, which always contains valid data. Which Hibernate methods do you recommend for persisting the modifications to the database? saveOrUpdate() is a possible solution, but maybe there's something better? Again, I repeat how it looks. First I create an object without collections and persist it (save()). Then the user provides us with additional data. In the service layer, again, we modify our object in memory (say, populate it with collections) and then persist it again. So every service-layer operation of the next step must simply guarantee that the database contains the exact persistent copy of the object we have in memory. If the data in the database differ, they MUST BE OVERRIDDEN with the (in-memory) object state. What Session operations do you recommend?

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all: I'm currently creating a large number of SharePoint folders within a list (e.g. ~800 folders), with each folder containing a different number of items. The way it is currently done is that the code programmatically reads the content types, items, event listeners and the like off the same folder in another web, then creates the same folder in the current web. That ran reasonably fine and fast in a dev environment. However, when it goes to an environment with WFEs and farms, it slows down a lot. I have checked that there are no leaks in the code, and that the code follows SharePoint coding best practices. At the moment I'm looking at it at the code level. From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items? EDIT: I'm currently using the SharePoint API, but will look at moving to the Web Service in the future. I'm interested in looking at both options, though. Code-wise, it's just the general reading of a folder and its content types plus items and their details, then creating the same folder in the same list with the same content types, then copying over the items using patch update. I want to know whether there are more efficient ways of doing the above. Thanks.

    Read the article

  • After adding the "readonly" attribute to a text box, unable to remove it in one event

    - by Sreedhar K
    Steps to reproduce (using Internet Explorer):

    1. Check the "unlimited" check box.
    2. Click on the text box (this removes the tick/check from the check box).
    3. Try to enter text in the text box; we cannot enter anything.
    4. Click the text box again; now we are able to enter text.

    We tried:

    1. Setting the readOnly attribute to false, i.e. $('#myinput').attr('readOnly', false);
    2. Calling $('#myinput').click();

    Below is the HTML code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Make input read only</title>
          <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js"></script>
        </head>
        <body>
          <input id="myinput" type="text" />
          <input id="mycheck" type="checkbox" />
          <script type="text/javascript">
            /* on check box click */
            $('#mycheck').click(function () {
              if ($(this).attr('checked')) {
                $('#myinput').attr('readOnly', 'readOnly');
              } else {
                $('#myinput').removeAttr('readOnly');
                /* also tried
                 * $('#myinput').attr('readOnly', false);
                 * $('#myinput').attr('readOnly', ''); */
              }
            });
            /* on text box click */
            $('#myinput').click(function () {
              $('#mycheck').removeAttr('checked');
              $('#myinput').removeAttr('readOnly');
              /* also tried
               * $('#myinput').attr('readOnly', false);
               * $('#myinput').attr('readOnly', ''); */
            });
          </script>
        </body>
        </html>

    Live copy
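
    One workaround that is often suggested for this IE behaviour (not from the original post, and only a sketch) is to clear the boolean readOnly DOM property rather than the attribute, and re-focus the input in the same handler so the caret is placed after the flag is gone. A TypeScript sketch with plain DOM calls, assuming the same element ids:

        // Sketch: clear the readOnly flag and immediately restore focus so typing
        // works on the first click.
        const input = document.getElementById("myinput") as HTMLInputElement;
        const check = document.getElementById("mycheck") as HTMLInputElement;

        check.addEventListener("change", () => {
          input.readOnly = check.checked;        // boolean DOM property, not the attribute
        });

        input.addEventListener("click", () => {
          check.checked = false;
          input.readOnly = false;
          input.focus();                         // give the caret back to the now-editable input
        });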

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable jar so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as httpclient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types until the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration? EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, and a few that are specific to each of YouTube and Wrzuta. On top of this I have a basic GUI common to both scrapers, plus a few Wrzuta- and YouTube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the GUI for the respective website. Can Eclipse look at each of these to determine which types are referenced?

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to run a filter on an 8bppIndexed digital image, writing the pixel values into a byte[] buffer to be converted again (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct. Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e. It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully controlled all the parameters: OutputImageCols, OutputImageRows and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help identify where the problem is? Thanks a lot.
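
    A detail worth checking here (my observation, not part of the post): LockBits pads each row so that it starts on a 4-byte boundary, so for a 254-pixel-wide 8bpp image the stride is 256 bytes, and copying a tightly packed 254 x 254 buffer with a single Marshal.Copy shifts every row by the padding. The usual cure is to copy row by row to data.Scan0 + row * data.Stride. The idea is language-independent; a small TypeScript sketch of the row-wise copy:

        // Sketch: copy a tightly packed (width x height) image into a stride-padded buffer.
        function copyWithStride(src: Uint8Array, width: number, height: number, stride: number): Uint8Array {
          const dst = new Uint8Array(stride * height);   // stride >= width, padded to 4 bytes in GDI+
          for (let row = 0; row < height; row++) {
            const srcRow = src.subarray(row * width, row * width + width);
            dst.set(srcRow, row * stride);               // each destination row starts at row * stride
          }
          return dst;
        }

        // Example: 254 pixels per row, but rows padded to 256 bytes.
        const packed = new Uint8Array(254 * 254).fill(128);
        const padded = copyWithStride(packed, 254, 254, 256);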

    Read the article

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic. In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams weren't well-designed, they wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts whether IOStreams can compare to, e.g., the STL from an overall architectural point of view. Read, e.g., this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL. What surprises me in particular:

    1. It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this — does anyone know good resources?).
    2. Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there are probably even worse examples). This makes it so much harder to understand the overall design and how the individual parts co-operate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: if you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello experts. Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, and patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

    1. Speed of development with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations:

    1. As we need to strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2. We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there is no licensing issue on a per-copy basis.

    Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whatever you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck on some EF 4.0 feature or have a serious performance issue, and, in the reverse case, if we go ahead and choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and please advise on the above questions.

    Read the article

  • Show image from clipboard in the default image viewer of Windows using C#.NET

    - by Rajesh Rolen- DotNet Developer
    I am using the function below to make an image of the current form and put it on the clipboard:

        Image bit = new Bitmap(this.Width, this.Height);
        Graphics gs = Graphics.FromImage(bit);
        gs.CopyFromScreen(this.Location, new Point(0, 0), bit.Size);
        Guid guid = System.Guid.NewGuid();
        string FileName = guid.ToString();
        //Copy that image in the clipboard.
        Image imgToCopy = Image.FromFile(Path.Combine(Environment.CurrentDirectory, FileName + ".jpg"));
        Clipboard.SetImage(imgToCopy);

    Now my image is on the clipboard and I am able to show it in a picture box on another form using the code below:

        mypicturebox.Image = Clipboard.GetImage();

    The problem is that I want to show it in the system's default image viewer. I think we can do that using System.Diagnostics.Process.Start, but I don't know how to find the default image viewer and how to pass the clipboard's image to it. Please help me out. If there is no direct solution, then I am thinking of saving the file from the clipboard to the hard disk and then viewing it in Windows' default image viewer. Please help me resolve my problem. I am using C#.NET.

    Read the article

  • rake test not copying development postgres db with sequences

    - by Robert Crida
    I am trying to develop a Rails application on PostgreSQL, using a sequence to increment a field instead of a default Ruby approach based on validates_uniqueness_of. This has proved challenging for a number of reasons:

    1. This is a migration of an existing table, not a new table or column.
    2. Using the parameter :default => "nextval('seq')" didn't work because it tries to set the value in parentheses.
    3. Eventually I got the migration working in two steps:

        change_column :work_commencement_orders, :wco_number_suffix, :integer, :null => false #, :options => "set default nextval('wco_number_suffix_seq')"
        execute %{
          ALTER TABLE work_commencement_orders
          ALTER COLUMN wco_number_suffix SET DEFAULT nextval('wco_number_suffix_seq');
        }

    Now this would appear to have done the correct thing in the development database, and the schema looks like:

        wco_number_suffix | integer | not null default nextval('wco_number_suffix_seq'::regclass)

    However, the tests are failing with:

        PGError: ERROR: null value in column "wco_number_suffix" violates not-null constraint
        : INSERT INTO "work_commencement_orders" ("expense_account_id", "created_at", "process_id", "vo2_issued_on", "wco_template", "updated_at", "notes", "process_type", "vo_number", "vo_issued_on", "vo2_number", "wco_type_id", "created_by", "contractor_id", "old_wco_type", "master_wco_number", "deadline", "updated_by", "detail", "elective_id", "authorization_batch_id", "delivery_lat", "delivery_long", "operational", "state", "issued_on", "delivery_detail")
          VALUES(226, '2010-05-31 07:02:16.764215', 728, NULL, E'Default', '2010-05-31 07:02:16.764215', NULL, E'Procurement::Process', NULL, NULL, NULL, 226, NULL, 276, NULL, E'MWCO-213', '2010-06-14 07:02:16.756952', NULL, E'Name 4597', 220, NULL, NULL, NULL, 'f', E'pending', NULL, E'728 Test Road; Test Town; 1234; Test Land') RETURNING "id"

    The explanation can be found when you inspect the schema of the test database:

        wco_number_suffix | integer | not null

    So what happened to the default? I tried adding

        task:
          template: smmt_ops_development

    to the database.yml file, which has the effect of issuing

        create database smmt_ops_test template = "smmt_ops_development" encoding = 'utf8'

    I have verified that if I issue this then it does in fact copy the default nextval. So clearly Rails is doing something after that to suppress it again. Any suggestions as to how to fix this? Thanks, Robert

    Read the article

  • Java Bucket Sort on Strings

    - by Michael
    I can't figure out the best way to use bucket sort to sort a list of strings that will always be the same length. An algorithm would look like this:

        For the last character position down to the first:
            For each word in the list:
                Place the word into the appropriate bucket by its current character
            For each of the 26 buckets (ArrayLists):
                Copy every word back to the list

    I'm writing in Java and I'm using an ArrayList for the main list that stores the unsorted strings. The strings are five characters long each. This is what I started. It just abruptly stops within the second for loop because I don't know what to do next, or whether I did the first part right.

        ArrayList<String> count = new ArrayList<String>(26);
        for (int i = wordlen; i > 0; i--) {
            for (int j = 0; i < myList.size(); i++)
                myList.get(j).charAt(i)
        }

    Thanks in advance.
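
    Not part of the original question, and in TypeScript rather than the poster's Java, but here is a sketch of the pass the pseudocode above describes: bucket by the current character, concatenate the 26 buckets back in order, and move from the last character position to the first. It assumes lowercase a-z words of equal length:

        // Sketch: LSD bucket sort for fixed-length lowercase words.
        function bucketSort(words: string[], wordLen: number): string[] {
          let list = words.slice();                       // work on a copy
          for (let pos = wordLen - 1; pos >= 0; pos--) {  // last character position down to the first
            const buckets: string[][] = Array.from({ length: 26 }, () => []);
            for (const word of list) {
              const bucket = word.charCodeAt(pos) - "a".charCodeAt(0);
              buckets[bucket].push(word);                 // place word by its current character
            }
            list = buckets.flat();                        // copy every word back, bucket by bucket
          }
          return list;
        }

        console.log(bucketSort(["cabed", "abcde", "bacde", "aacde"], 5));
        // -> ["aacde", "abcde", "bacde", "cabed"]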

    Read the article

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Liftweb, but I get this error: error: value rewrite is not a member of object net.liftweb.http.LiftRules. It is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch, using the Lift Maven blank archetype. Some more info:

        [INFO] ------------------------------------------------------------------------
        [INFO] Building Joseph3
        [INFO]    task-segment: [tomcat:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing tomcat:run
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 0 resource
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [scala:compile {execution: default}]
        [INFO] Checking for multiple versions of scala
        [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling
        [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910
        [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules
        [INFO] LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") {
        [INFO] ^
        [ERROR] one error found
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1)
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 19 seconds
        [INFO] Finished at: Thu May 27 03:02:07 CEST 2010
        [INFO] Final Memory: 20M/175M
        [INFO] ------------------------------------------------------------------------
        Process finished with exit code 1

    Read the article

  • SVN authz, path-based authentication woes

    - by Ronny
    My authz file:

        [groups]
        developer = a,b,c
        doc = r,x

        [/doc]
        @doc = rw
        @developer = rw

        [/]
        @developer = rw
        * =

    If a member of the group doc now tries to check out the documentation, it does not work. I want members of doc to be able to check out only the sub-directory doc; everything else should be forbidden. Any ideas how to achieve this? Kind regards, ronny

    [update] Client: svn, version 1.5.4 (r33841); server: svn, version 1.4.6 (r28521); access via svn+ssh:/user@host/fullpath-to-repos. 1. It has worked perfectly for two years. 2. Might be - see the version numbers above (I'll contact our admin immediately). 3. No, just ssh. 4. Nope. 5. Nope.

    [update] Using client version svn 1.4.6 (r28521) does not work either; same errors. I use plain command-line access: svn co svn+ssh://....

    [update] Server: Linux 2.6.16.60-0.39.3-default9 i686 athlon i386 GNU/Linux (SUSE 10 or something like that, I think); client: Kubuntu 9.04; connection via the OpenSSH SSH client. The server rejects svn:// connections from localhost (any connection). I'll have to try it with a copy at home sometime soon.

    [update 4] This is not my own server, so I cannot do whatever I want with it. It is a very old server, running for at least 10 years, with hundreds of users. Standard things should work; correct me if I am missing something.

    [update 5] Believe it or not, I was using the wrong path, and now everything works perfectly well. I am sorry to have wasted your time. I'll give the bounty to FoxyBOA for his effort.

    Read the article

  • Flash CS4 compiler Error 1120 when embedding pngs into class instance variables.

    - by theolagendijk
    I have a Flash CS4 (Flash 9, ActionScript 3.0) project that compiles and runs perfectly on my machine. However, it is part of a big batch of FLAs that I want to compile on another (faster) machine. When I copy the project (the FLA and all ActionScript and asset files) to the faster machine, its Flash CS4 compiler gives me compiler error 1120, "Access of undefined property ButtonPause_PauseNormal". The property "PauseNormal" is an embedded PNG. The PNG is available, and there are no transcoder errors. Here's the ActionScript for class ButtonPause:

        package nl.platipus.NissanESM.buttons {
            import flash.display.*;
            import flash.events.*;

            public class ButtonPause extends Sprite {
                [Embed(source="../../../../player/pause.png")]
                private var PauseNormal:Class;
                [Embed(source="../../../../player/pause_mo.png")]
                private var PauseMouseOver:Class;

                private var stateNormal:Bitmap;
                private var stateMouseOver:Bitmap;

                public function ButtonPause() {
                    stateNormal = new PauseNormal();
                    stateNormal.width = 29;
                    stateNormal.height = 14;
                    stateNormal.alpha = 1;
                    addChild(stateNormal);

                    stateMouseOver = new PauseMouseOver();
                    stateMouseOver.width = 29;
                    stateMouseOver.height = 14;
                    stateMouseOver.alpha = 0;
                    addChild(stateMouseOver);

                    width = 29;
                    height = 14;

                    addEventListener(MouseEvent.MOUSE_OVER, handleMouseOver);
                    addEventListener(MouseEvent.MOUSE_OUT, handleMouseOut);
                }

                private function handleMouseOver(evt:MouseEvent):void {
                    stateNormal.alpha = 0;
                    stateMouseOver.alpha = 1;
                }

                private function handleMouseOut(evt:MouseEvent):void {
                    stateNormal.alpha = 1;
                    stateMouseOver.alpha = 0;
                }
            }
        }

    (Both machines run the exact same Flash CS4 Professional Version 10.0.2 installation, and both have the exact same publish settings and ActionScript 3.0 settings.) What's going on?

    Read the article

  • Deleting a non-owned dynamic array through a pointer

    - by ayanzo
    Hello all, I'm relatively novice when it comes to C++, as I was weaned on Java for much of my undergraduate curriculum ('tis a shame). Memory management has been a hassle, but I've purchased a number of books on ANSI C and C++. I've poked around the related questions but couldn't find one that matched this particular criterion. Maybe it's so obvious nobody mentions it? This question has been bugging me, but I feel as if there's a conceptual point I'm not utilizing. Suppose:

        char original[56];
        cstr[0] = 'a';
        cstr[1] = 'b';
        cstr[2] = 'c';
        cstr[3] = 'd';
        cstr[4] = 'e';
        cstr[5] = '\0';
        char *shaved = shavecstr(cstr);
        delete[] cstrn;

    where

        char* shavecstr(char* cstr)
        {
            size_t len = strlen(cstr);
            char* ncstr = new char[len];
            strcpy(ncstr,cstr);
            return ncstr;
        }

    The whole point is to have 'original' be a buffer that fills with characters and routinely has its copy shaved and used elsewhere. To prevent leaks, I want to free up the memory held by 'shaved' so it can be used again after it passes through some arguments. There is probably a good reason why this is restricted, but there should be some way to free the memory, because with this configuration there is no way to access the original owner (pointer) of the data.

    Read the article

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c:

        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int result;
            int in_file;
            int out_file;

            in_file = open(argv[1], O_RDONLY);
            out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            result = sendfile(out_file, in_file, NULL, 1);
            if (result == -1)
                perror("sendfile");
            close(in_file);
            close(out_file);
            return 0;
        }

    And when I run the following commands:

        $ gcc sendfile_test.c
        $ ./a.out infile

    the output is

        sendfile: Bad file descriptor

    which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?

    Read the article

  • Issue with XSLT Processing on PHP

    - by monksy
    I'm getting a few errors from XSLTProcessor:

        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]: Invalid or inclomplete context
        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]:
        XSLTProcessor::transformToDoc() [function.XSLTProcessor-transformToDoc]: xsltValueOf: text copy failed in

    which come from parsing this XSLT line:

        <xsl:apply-templates select="page/sections/section" mode="subset"/>

    The section's template is:

        <xsl:template match="page/sections/section" mode="subset">
            <a href="#{shorttitle}">
                <xsl:value-of select="title"/>
            </a>
            <xsl:if test="position() != last()"> | </xsl:if>
        </xsl:template>

    The XML that the section is parsing is:

        <shorttitle>About</shorttitle>
        <title>#~ About</title>

    The PHP XSLT code is:

        $xslt = new XSLTProcessor();
        $XSL = new DOMDocument();
        $XSL->load( $xsltFile, LIBXML_NOCDATA);
        $xslt->importStylesheet( $XSL );
        print $xslt->transformToXML( $XML );

    My suspicion is that the errors are due to the content. I'm not getting these errors with Firefox's XSLT rendering, nor am I getting an invalid XML document on the back end. I'm not getting errors on the load; it's just the transformToXML function. Does anyone have a clue how to solve this? This is with PHP 5.

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called CreateAudit; it is a pass-through stored procedure to another database on the same machine, MyOtherDB, which has a stored procedure also called CreateAudit. In other words, in MyDb I have CreateAudit, which does EXEC dbo.MyOtherDB.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output of the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" at the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses; I'm very interested in hearing how this can happen.

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool, I get an error on connection:

        AutoReconnect: could not find master/primary

    But when I change the _Pool class in the pymongo.connection module directly, it works pretty well. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with the same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need to implement a pool for all mongo connections (for all threads). I use this code:

        import os
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            #sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                    self.sock = None

        pymongo.connection._Pool = GPool

    Read the article

  • Email as a view.

    - by Hal
    I've been in some discussion recently about where email (notifications, etc.) should be sent from in an ASP.NET MVC application. My nemesis (grin) argues that it only makes sense for the email to be sent from the controller. I argue that an email is simply an alternate or augmented view delivered through a different channel. Much as I would download a file as the payload of an ActionResult, the email is simply delivered through a different protocol. I've written an extension method that allows me to do the following, which I actually include within the view that is displayed on the screen:

        <% Html.RenderEmail(model.FromAddress, model.ToAddress, model.Subject); %>

    The beauty is that, based on convention, if I call RenderEmail from a parent view named MyView.ascx, I attempt to render the contents of a view named MyViewEmail.ascx; if it is not found, I simply email a copy of the parent view. It certainly does make things testable (I still have an ISMTPService injected for testing), but I wondered if anyone had thoughts on whether or not this breaks from good practice. In use it has been extremely handy when we needed to easily send an email or to vary the contents of the emailed results versus the browser-rendered results. Thanks, Hal

    Read the article

  • Reference 3.5 assembly from 4.0 winforms phail

    - by Dean Lunz
    So I have this utility library that is compiled as a DLL under .NET 3.5, and it is used by my ASP.NET 3.5 website. I created a .NET 4.0 WinForms app to push data onto the website, and I want to make use of the functionality in the utilities library from this WinForms app. The problem is that when I reference the utilities library and use the code in it, IntelliSense barks at me, saying that it can't find the objects in that library. I could switch the WinForms app to 3.5, which fixes the problem, but I am using Tasks, which require 4.0. And because my website and utilities library both run on 3.5, and my website is hosted at GoDaddy, which currently only supports ASP.NET 3.5, compiling my utilities library under 4.0 for my WinForms app is not going to work because it breaks my website. I have tried the app.config trick with useLegacyV2RuntimeActivationPolicy="true", but that did not help. Obviously I could start a new utilities project for 4.0, copy the code files from the existing utilities library, and then reference the new 4.0 utilities library in my WinForms app, but that strikes me as rather overkill when all I want to do is reference the library and use its functionality. Not to mention that I would have two utility libraries containing the exact same code, and if I update the code in one I will need to make sure the other is also updated. I could use "add file as link", but you get the idea. So is there anything else I could try, or any other way to solve or get around this? Or am I just going to have to break down and create an identical clone of the utilities library for 4.0?

    Read the article

  • VEMap and a GeoRSS feed (hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain, as a number of different applications have access to it. A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object). Now, VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is in regards to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error ("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (e.g. "feed.xml") there is no error. The order of operations is currently: remote feed, then local handler, then VEMap import. If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
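
    Outside the original question, but the usual shape of such a same-origin proxy is simply: fetch the remote feed server-side, then return it unchanged with an XML content type, since a feed served as text/html or wrapped in an error page is a common reason a map control fails to parse it. The poster's real handler is an .ashx, so the following is only a hypothetical TypeScript sketch of the same idea using Node's built-in http module (Node 18+ for the global fetch; the feed URL is made up):

        // Sketch: minimal same-origin proxy that forwards a remote GeoRSS feed.
        import { createServer } from "node:http";

        const FEED_URL = "http://example.com/service/georss";  // hypothetical remote feed

        createServer(async (req, res) => {
          try {
            const upstream = await fetch(FEED_URL);
            const body = await upstream.text();
            // Return the feed verbatim, with an XML content type the map control can parse.
            res.writeHead(200, { "Content-Type": "text/xml; charset=utf-8" });
            res.end(body);
          } catch {
            res.writeHead(502);
            res.end("feed unavailable");
          }
        }).listen(8080);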

    Read the article
