Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 348/402 | < Previous Page | 344 345 346 347 348 349 350 351 352 353 354 355  | Next Page >

  • jQuery and executing code until mouseout is called

    - by Tomaszewski
    Good day all, I am tasked with building a slider for our site. Here is my goal:

        <div id="abc">
            <div id="slider">...</div>
        </div>

    I need to move "slider" left 30px at a time when one button is hovered over, and right 30px when another button is hovered over. My problem is that there doesn't seem to be a reliable way to tell the code that the mouse hasn't left the area in question, unless there is something I haven't thought of or read about yet. In other words, while the mouse is OVER a button, the code that moves "slider" left should keep executing until mouseout is called. I'm not really sure how to do this. The only way I can think of is to read the offsetLeft and offsetTop DOM properties, compare them to the mouse position, then run checks to see whether the mouse is within the bounds of the box, and if it isn't, stop executing the code. Is there a better way to do this?
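
    A minimal sketch of the hover-driven loop I have in mind (the button id and the 100 ms step interval are made up, and it assumes #slider is positioned so its left offset can be adjusted):

        var slideTimer = null;

        $('#moveLeftBtn').hover(
            function () {
                // while the button is hovered, nudge the slider 30px left every 100 ms
                slideTimer = setInterval(function () {
                    var left = parseInt($('#slider').css('left'), 10) || 0;
                    $('#slider').css('left', (left - 30) + 'px');
                }, 100);
            },
            function () {
                // mouseout: stop the loop
                clearInterval(slideTimer);
            }
        );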

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been:

    Create 600 socket threads, each with 60 sockets, and use WSAEventSelect to wait for data to read. As soon as a file download completes, add the memory address of the downloaded file to a pipeline (a simple vector) and fire another request. When the total downloaded data across all socket threads exceeds 50 MB, write all the downloaded files to disk and free the memory.

    So far this approach has not been very successful: the rate I can hit doesn't go beyond 2900 connections/s, and the downloaded data rate is even lower. Can somebody suggest an alternative approach that could give me better numbers? I am working on a Windows Server 2008 machine with 8 GB of memory. Also, do we need to hack the kernel so that we can use more threads and memory? Currently I can create a maximum of 1500 threads, and memory usage doesn't go beyond 2 GB (which technically should be much more, as this is a 64-bit machine). IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!
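
    To make the buffering step concrete, here is a rough sketch of the pipeline-and-flush logic described above (the struct, the names and the WriteToDisk helper are made up, and the socket/WSAEventSelect plumbing is omitted):

        #include <mutex>
        #include <string>
        #include <utility>
        #include <vector>

        struct DownloadedFile {
            std::string url;
            std::string body;                                        // ~15 KB on average
        };

        static std::mutex g_pipelineLock;
        static std::vector<DownloadedFile> g_pipeline;
        static size_t g_pipelineBytes = 0;
        static const size_t kFlushThreshold = 50 * 1024 * 1024;     // 50 MB

        void WriteToDisk(const DownloadedFile& file);                // hypothetical helper

        // Called by a socket thread each time a download completes.
        void OnFileDownloaded(DownloadedFile file) {
            std::lock_guard<std::mutex> guard(g_pipelineLock);
            g_pipelineBytes += file.body.size();
            g_pipeline.push_back(std::move(file));
            if (g_pipelineBytes >= kFlushThreshold) {
                for (const DownloadedFile& f : g_pipeline)
                    WriteToDisk(f);                                  // flush, then free the memory
                g_pipeline.clear();
                g_pipelineBytes = 0;
            }
        }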

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE CASCADE setting. The tricky part is that there can be indirect relationships buried up to 5 levels deep (example: great-grandchild -FK3-> grandchild -FK2-> child -FK1-> main table). We need to dig up the leaf tables/columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern: it'll be run on a backup copy of the production db to fix any relational issues for the future.

    I did SELECT * FROM sys.foreign_keys, but that gives me the name of the constraint - not the names of the child/parent tables and the columns in the relationship (the juicy bits). Plus, the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. This is the way we're adding constraints in SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK
            ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
            FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId])
            ON DELETE CASCADE
        GO

        ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspired this question.
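
    For the direct (single-level) relationships, something along these lines is what I've been sketching against the catalog views - the recursive walk to the deeper levels would still have to be layered on top (the column aliases are mine):

        SELECT fk.name                                                      AS constraint_name,
               OBJECT_NAME(fkc.parent_object_id)                            AS child_table,
               COL_NAME(fkc.parent_object_id, fkc.parent_column_id)         AS child_column,
               OBJECT_NAME(fkc.referenced_object_id)                        AS parent_table,
               COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS parent_column,
               fk.delete_referential_action_desc                            AS on_delete
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc
            ON fkc.constraint_object_id = fk.object_id
        WHERE fk.delete_referential_action_desc <> 'CASCADE';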

    Read the article

  • java.lang.IllegalStateException: The content of the adapter has changed but ListView... in spite of calling notifyDataSetChanged()

    - by Mistaken
    What are the best practices to follow when updating the contents of a ListActivity from a background thread (AsyncTask)?

    1) I am calling notifyDataSetChanged() to update the adapter as soon as I manipulate the contents of the adapter, but my app still force closes while the user scrolls or clicks on the list. Any pointers to prevent this would be very helpful. Logcat: java.lang.IllegalStateException: The content of the adapter has changed but ListView did not receive a notification.

    2) Where exactly should I update the contents of the ListActivity? Inside doInBackground() or onProgressUpdate()?

    3) I am experiencing regular crashes when the user clicks a list item. Will disabling click events on the ListActivity during the background operation solve the problem? If so, I am not sure how to remove or set item click listeners dynamically on the ListActivity. Please instruct me on that too.

    4) I don't think blocking all UI interactions during the background AsyncTask execution is the only way to solve the problem. I know there is a simple way of doing this, but I need some help. Thanks in advance.

    This is my onCreate:

        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.fa);
            tvStatus = (TextView) findViewById(R.id.tvStatus);
            adapter = new SimpleAdapter(
                    this, mostPopularList, R.layout.list_item,
                    new String[] {"title", "author", "views", "date"},
                    new int[] {R.id.textView1, R.id.textView2, R.id.textView4, R.id.textView3});
            //populateList();
            setListAdapter(adapter);
        }

    My AsyncTask:

        private class LongOperation extends AsyncTask<String, Void, String> {
            @Override
            protected String doInBackground(String... params) {
                // code for adding new ListActivity items
            }

            @Override
            protected void onPostExecute(String networkStatus) {
                adapter.notifyDataSetChanged();
            }

            @Override
            protected void onPreExecute() { }

            @Override
            protected void onProgressUpdate(Void... values) { }
        }
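
    For question 2, this is the kind of restructuring I have been considering, where the backing list is only touched on the UI thread via publishProgress()/onProgressUpdate() (the fetchTitlesFromNetwork() helper is made up, and it assumes mostPopularList is the list of maps the SimpleAdapter was built on):

        private class LongOperation extends AsyncTask<String, String, String> {
            @Override
            protected String doInBackground(String... params) {
                for (String title : fetchTitlesFromNetwork()) {   // hypothetical helper
                    publishProgress(title);                       // hand each item to the UI thread
                }
                return "ok";
            }

            @Override
            protected void onProgressUpdate(String... titles) {
                // Runs on the UI thread: mutate the adapter's backing list and notify
                // in the same place, so the ListView never sees the data change
                // without a matching notification.
                HashMap<String, String> row = new HashMap<String, String>();
                row.put("title", titles[0]);
                mostPopularList.add(row);
                adapter.notifyDataSetChanged();
            }
        }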

    Read the article

  • JQuery transition animation

    - by kk-dev11
    This program randomly selects two employees from a JSON-object Employees array; winnerPos is already defined. For a better user experience I wrote these functions to change the pictures one by one. The animation stops when the randomly selected person is shown on the screen. The slideThrough function is triggered when the start button is pressed.

        function slideThrough() {
            counter = 0;
            start = true;
            clearInterval(picInterval);
            picInterval = setInterval(function () {
                changePicture();
            }, 500);
        }

        function changePicture() {
            if (start) {
                if (counter > winnerPos) {
                    setWinner();
                    start = false;
                    killInterval();
                } else {
                    var employee = Employees[counter];
                    winnerPic.fadeOut(200, function () {
                        this.src = 'img/' + employee.image;
                        winnerName.html(employee.name);
                        $(this).fadeIn(300);
                    });
                    counter++;
                }
            }
        }

    The problem is that the animation doesn't run smoothly. The first time it works, but not perfectly. The second time the transition happens in an irregular way, i.e. the speed differs and the fadeIn/fadeOut varies from picture to picture. Could anyone help me fine-tune the transition?
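
    One variation I've been experimenting with (I'm not sure yet that it's the right fix) is flushing any fade that is still queued or running before starting the next one, so the 500 ms ticks can't pile animations on top of each other:

        function changePicture() {
            if (!start) return;
            if (counter > winnerPos) {
                setWinner();
                start = false;
                killInterval();
                return;
            }
            var employee = Employees[counter];
            // stop(true, true): clear the animation queue and jump to the end state,
            // so every tick starts its fade from a fully settled picture
            winnerPic.stop(true, true).fadeOut(200, function () {
                this.src = 'img/' + employee.image;
                winnerName.html(employee.name);
                $(this).fadeIn(300);
            });
            counter++;
        }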

    Read the article

  • Why does reusing arrays increase performance so significantly in c#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, tempDoubleArray goes out of scope.

    Profiling reveals that the calls to construct the arrays are time consuming. So, I decided to try to reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 s to 22 s for execution of all tasks).

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array shouldn't be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in which cases allocation causes performance issues.
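
    For reference, a sketch of the reuse pattern I ended up with (the TaskSpec type and RunTask helper are invented for illustration; the real code differs):

        // Size the shared buffer once to the largest M across all tasks,
        // then hand the same array to every task instead of allocating ~500 times.
        static double[] staticDoubleArray;

        static void RunAllTasks(List<TaskSpec> tasks)
        {
            int maxM = 0;
            foreach (TaskSpec task in tasks)
                if (task.M > maxM) maxM = task.M;          // the extra pass mentioned above

            staticDoubleArray = new double[maxM];           // one allocation for the whole run

            foreach (TaskSpec task in tasks)
            {
                double[] tempDoubleArray = staticDoubleArray;   // reuse, don't allocate
                RunTask(task, tempDoubleArray);                 // only task.M entries are meaningful
            }
        }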

    Read the article

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one. Background I'm creating a large number of objects in a core data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that on save is merging back to the main managed object context. The Problem When the number of threads running at once is more than 1, I get the exception logged on save of my core data store: NSExceptionHandler has recorded the following exception: NSInternalInconsistencyException -- optimistic locking failure What I've Tried My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying core data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the Managed Object Context I was saving to. I set this up and as this question states, the save error goes away, but the exception remains. My Question Do I need to worry about these exceptions? If I do need to get rid of the exceptions, any ideas on how I do it?

    Read the article

  • multi-threading in MFC

    - by kiddo
    Hello all, in my application there is a small piece of functionality that reads files to get some information; the file count would be at least 50, so I thought of implementing threading. Say the user gives 50 files: I want to split them up as 5 * 10, creating 5 threads so that each thread can handle 10 files, which should speed up the process. As you can see from the code below, some variables are shared. I have read some articles about threading and I am aware that only one thread should access a variable/control at a time (CCriticalSection can be used for that). As a beginner, I am finding it hard to implement what I have learned about threading. Could somebody please give me some ideas based on the code shown below? Thanks in advance. The file read function:

        void CMyClass::GetWorkFilesInfo(CStringArray& dataFilesArray, CString* dataFilesB,
                                        int* check, DWORD noOfFiles, LPWSTR path)
        {
            CString cFilePath;
            int cIndex = 0;
            int exceptionInd = 0;
            wchar_t** filesForWork = new wchar_t*[noOfFiles];
            int tempCheck;
            int localIndex = 0;
            for (int index = 0; index < noOfFiles; index++)
            {
                tempCheck = *(check + index);
                if (tempCheck == NOCHECKBOX)
                {
                    *(filesForWork + cIndex) = new TCHAR[MAX_PATH];
                    wcscpy(*(filesForWork + cIndex), *(dataFilesB + index));
                    cIndex++;
                }
                else // CHECKED or UNCHECKED
                {
                    dataFilesArray.Add(*(dataFilesB + index));
                    *(check + localIndex) = *(check + index);
                    localIndex++;
                }
            }
            WorkFiles(&cFilePath, dataFilesArray, filesForWork, path, cIndex);
            dataFilesArray.Add(cFilePath);
            *(check + localIndex) = CHECKED;
        }
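
    A rough sketch of the kind of split I am imagining, with one MFC worker thread per slice of files and a CCriticalSection guarding the shared array (the WorkerContext struct and the ReadOneFile helper are invented for illustration):

        struct WorkerContext
        {
            CStringArray*     pDataFilesArray;   // shared output array
            CCriticalSection* pLock;             // guards pDataFilesArray
            wchar_t**         filesForWork;      // this worker's slice of ~10 files
            int               fileCount;
        };

        UINT FileWorker(LPVOID pParam)
        {
            WorkerContext* ctx = static_cast<WorkerContext*>(pParam);
            for (int i = 0; i < ctx->fileCount; ++i)
            {
                CString info = ReadOneFile(ctx->filesForWork[i]);   // hypothetical helper
                ctx->pLock->Lock();          // only one thread touches the shared array at a time
                ctx->pDataFilesArray->Add(info);
                ctx->pLock->Unlock();
            }
            return 0;
        }

        // Launch one worker per 10-file slice, e.g.:
        //   AfxBeginThread(FileWorker, &contexts[i]);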

    Read the article

  • C++ and its type system: How to deal with data with multiple types?

    - by sub
    "Introduction" I'm relatively new to C++. I went through all the basic stuff and managed to build 2-3 simple interpreters for my programming languages. The first thing that gave and still gives me a headache: Implementing the type system of my language in C++ Think of that: Ruby, Python, PHP and Co. have a lot of built-in types which obviously are implemented in C. So what I first tried was to make it possible to give a value in my language three possible types: Int, String and Nil. I came up with this: enum ValueType { Int, String, Nil }; class Value { public: ValueType type; int intVal; string stringVal; }; Yeah, wow, I know. It was extremely slow to pass this class around as the string allocator had to be called all the time. Next time I've tried something similar to this: enum ValueType { Int, String, Nil }; extern string stringTable[255]; class Value { public: ValueType type; int index; }; I would store all strings in stringTable and write their position to index. If the type of Value was Int, I just stored the integer in index, it wouldn't make sense at all using an int index to access another int, or? Anyways, the above gave me a headache too. After some time, accessing the string from the table here, referencing it there and copying it over there grew over my head - I lost control. I had to put the interpreter draft down. Now: Okay, so C and C++ are statically typed. How do the main implementations of the languages mentioned above handle the different types in their programs (fixnums, bignums, nums, strings, arrays, resources,...)? What should I do to get maximum speed with many different available types? How do the solutions compare to my simplified versions above?

    Read the article

  • Faster way to convert from a String to generic type T when T is a valuetype?

    - by Kumba
    Does anyone know of a fast way in VB to go from a String to a generic type T constrained to a value type (Of T As Structure), when I know that T will always be some number type? This is too slow for my taste:

        Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T)

    But it seems to be the only sane method of getting from a String to T. I've tried using Reflector to see how Convert.ChangeType works, and while I can convert from the String to a given number type via a hacked-up version of that code, I have no idea how to jam that type back into T so it can be returned.

    I'll add that part of the speed penalty I'm seeing (in a timing loop) is because the return value is getting assigned to a Nullable(Of T) value. If I strongly type my class for a specific number type (e.g. UInt16), then I can vastly increase the performance, but then the class would need to be duplicated for each numeric type that I use. It'd almost be nice if there were a converter to/from T while working on it in a generic method/class. Maybe there is and I'm oblivious to its existence?

    Read the article

  • GTK+: How do I process RadioMenuItem choice without marking it chosen? And vice versa

    - by eugene.shatsky
    In my program, I've got a menu with a group of RadioMenuItem entries. Choosing one of them should trigger a function which can either succeed or fail. If it fails, this RadioMenuItem shouldn't be marked chosen (the previous one should persist). Besides, sometimes I want to set the marked item without running the choice processing function. Here is my current code:

        # Update seat menu list
        def update_seat_menu(self, seats, selected_seat=None):
            seat_menu = self.builder.get_object('seat_menu')
            # Delete seat menu items
            for menu_item in seat_menu:
                # TODO: is it a good way? does remove() delete obsolete menu_item from memory?
                if menu_item.__class__.__name__ == 'RadioMenuItem':
                    seat_menu.remove(menu_item)
            # Fill menu with new items
            group = []
            for seat in seats:
                menu_item = Gtk.RadioMenuItem.new_with_label(group, str(seat[0]))
                group = menu_item.get_group()
                seat_menu.append(menu_item)
                if str(seat[0]) == selected_seat:
                    menu_item.activate()
                menu_item.connect("activate", self.choose_seat, str(seat[0]))
                menu_item.show()

        # Process item choice
        def choose_seat(self, entry, seat_name):
            # Looks like this is called when item is deselected, too; must check if active
            if entry.get_active():
                # This can either succeed or fail
                self.logind.AttachDevice(seat_name, '/sys' + self.device_syspath, True)

    The chosen RadioMenuItem gets marked irrespective of the result of choose_seat(), and the only way to set the marked item without triggering choose_seat() is to re-run update_seat_menu() with the selected_seat argument, which is overkill. I tried connecting choose_seat() to 'button-release-event' instead of 'activate' and calling entry.activate() in choose_seat() if AttachDevice() succeeds, but that resulted in the whole X desktop locking up until AttachDevice() timed out, and the chosen item still got marked.
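
    For the "set the marked item without running the choice processing function" part, the approach I've been reading about is to keep the handler id returned by connect() and block it around the state change - a minimal sketch, assuming the handler id is stored alongside each menu item:

        def mark_seat(self, menu_item, handler_id):
            # Temporarily mute the "activate" callback, flip the state, then restore it,
            # so choose_seat() is not run for this programmatic change.
            menu_item.handler_block(handler_id)
            menu_item.set_active(True)
            menu_item.handler_unblock(handler_id)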

    Read the article

  • Java JRE vs GCJ

    - by Martijn Courteaux
    Hi, I have these results from a speed test I wrote in Java:

        Java
        real 0m20.626s
        user 0m20.257s
        sys  0m0.244s

        GCJ
        real 3m10.567s
        user 3m5.168s
        sys  0m0.676s

    So, what is the point of GCJ then? With these results I'm certainly not going to compile with GCJ! I tested this on Linux; are the results on Windows maybe better than that? This was the code from the application:

        public static void main(String[] args) {
            String str = "";
            System.out.println("Start!!!");
            for (long i = 0; i < 5000000L; i++) {
                Math.sqrt((double) i);
                Math.pow((double) i, 2.56);
                long j = i * 745L;
                String string = new String(String.valueOf(i));
                string = string.concat(" kaka pipi"); // "Kaka pipi" is a kind of childly call in Dutch.
                string = new String(string.toUpperCase());
                if (i % 300 == 0) {
                    str = "";
                } else {
                    str += Long.toHexString(i);
                }
            }
            System.out.println("Stop!!!");
        }

    Read the article

  • Looking for a good C++/.net book

    - by Michael Minerva
    I have recently started to feel that I need to greatly improve my C++ skills, especially in the realm of .NET. I graduated from a good four-year university with a degree in computer science about 9 months ago, and I have since been doing full-time contract work for a small software company in my local area. Most of my work has been done using Java/Lisp/Cocoa/XML, and before that most of my programming in my senior year was in Java/C#. I did a decent amount of C++ in my sophomore year and in my free time before that, but I feel that my general knowledge of C++/.NET is very lacking for the opportunities that are now coming my way. Can anyone recommend a good book that could help me get up to speed? I don't think I need a very basic introduction to C++, but something that covers the fundamentals of .NET would be good for me. So basically what I need is a book (or books) that would be good for a .NET novice and a C++ developer who is just beyond novice. Also, a book that would help in an interview by giving me a conversational understanding of C++ would be great. Thanks a lot!

    Read the article

  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple example running here: http://jsbin.com/idowi/34 The code is extremely simple:

        setTimeout("refreshImage()", 100);

        function refreshImage() {
            var date = new Date();
            var ticks = date.getTime();
            $('#image').attr('src', 'http://127.0.0.1:2723/signature?' + ticks.toString());
            setTimeout("refreshImage()", 100);
        }

    Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for the users to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of the current state of the signature pad and sends it to the browser. This has no problems in any browser I've tested (IE7, IE8 and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript, but that just made things worse.

    For a bit more information: it seems that Firefox is executing the JavaScript at the correct frame rate, since on the server the requests are coming in at a constant speed. But the images are refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). I have also tried using different image formats, all with the same results. The formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor flicker problem in Chrome, though). Could it be that Firefox is somehow caching my images, causing a slight lag? I do send these headers, though:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0
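
    One variation I've been meaning to try (just a sketch, I haven't verified that it fixes Firefox) is to load each frame into an off-screen Image first and only swap the visible src - and schedule the next request - once the frame has actually arrived:

        function refreshImage() {
            var ticks = new Date().getTime();
            var preload = new Image();
            preload.onload = function () {
                $('#image').attr('src', preload.src);  // swap only when the frame is ready
                setTimeout(refreshImage, 100);         // ask for the next frame afterwards
            };
            preload.src = 'http://127.0.0.1:2723/signature?' + ticks;
        }

        setTimeout(refreshImage, 100);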

    Read the article

  • Debugging Widget causes ANR

    - by Salv0
    I'm trying to debug an AppWidget, but I ran into a problem. I placed a breakpoint at the top of this method:

        public void onReceive(Context context, Intent intent) {
            Log.v(TAG, "onReceive 1"); // BP on this line
            super.onReceive(context, intent);
            String action = intent.getAction();
            // Checks on action and computations ...
            Log.v(TAG, "onReceive 2");
            updateWidget(context);
            Log.v(TAG, "onReceive 3");
        }

    The breakpoint stops the execution as expected, but then the process dies. The problem is that the breakpoint (I guess) causes an ANR, and the ActivityManager kills the process. That's the log:

        01-07 14:32:38.886: ERROR/ActivityManager(72): ANR in com.salvo.wifiwidget
        01-07 14:32:38.886: INFO/Process(72): Sending signal. PID: 475 SIG: 9
        ......
        ......
        01-07 14:32:38.906: INFO/ActivityManager(72): Process com.salvo.wifiwidget (pid 475) has died.

    This causes the debug session to stop. So the question is: is there a way to debug the widget without triggering the ANR? Thanks in advance for the answers.

    Read the article

  • What is the best (Windows) program launcher?

    - by AR
    One of the biggest general productivity boosters I've used is a good program launcher. I was a long-time user of SlickRun, and I've tried a few others. My current favorite is Executor - by far the best I've used. Other options:

        - Executor: My current favorite
        - Vista Start Menu: Pretty good, actually, but Executor is similar (binds to Win+Z) and much more flexible.
        - Quicksilver: For Macs only, but it seems to be the gold standard against which most other launchers are measured.
        - Google Desktop: Press Ctrl+Ctrl and it's a quick launcher!
        - AutoHotKey: Much, much more than just a launcher - more than I need, really.
        - SlickRun: simple and unobtrusive
        - Launchy: Seems to be the launcher of choice for many StackOverflow users :)
        - Colibri: "Type Ahead - Information at the tip of your wings". Quite a cool concept.
        - Many, many others. Scott Hanselman outlines some more here.

    I realize that everyone will have their own preferences, but the question is: is there anything that really stands out in terms of speed, features, and especially productivity increase?

    Read the article

  • Pthread - setting scheduler parameters

    - by Andna
    I wanted to use the reader-writer locks from the pthread library in such a way that writers have priority over readers. I read in my man pages that

        If the Thread Execution Scheduling option is supported, and the threads involved
        in the lock are executing with the scheduling policies SCHED_FIFO or SCHED_RR,
        the calling thread shall not acquire the lock if a writer holds the lock or if
        writers of higher or equal priority are blocked on the lock; otherwise, the
        calling thread shall acquire the lock.

    so I wrote a small function that sets up the thread scheduling options:

        void thread_set_up(int _thread)
        {
            struct sched_param *_param = malloc(sizeof(struct sched_param));
            int *c = malloc(sizeof(int));
            *c = sched_get_priority_min(SCHED_FIFO) + 1;
            _param->__sched_priority = *c;
            long *a = malloc(sizeof(long));
            *a = syscall(SYS_gettid);
            int *b = malloc(sizeof(int));
            *b = SCHED_FIFO;
            if (pthread_setschedparam(*a, *b, _param) == -1)
            {
                // depending on which thread calls this function, a few things can happen
                if (_thread == MAIN_THREAD)
                    client_cleanup();
                else if (_thread == ACCEPT_THREAD)
                {
                    pthread_kill(params.main_thread_id, SIGINT);
                    pthread_exit(NULL);
                }
            }
        }

    Sorry for all those a, b, c variables, but I tried to malloc everything. I still get SIGSEGV on the call to pthread_setschedparam, and I am wondering why.
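
    For comparison, a minimal sketch of the call as I understand the man page intends it (a pthread_t from pthread_self() rather than a kernel tid, and the return value treated as an error number instead of -1/errno):

        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>
        #include <string.h>

        static void make_self_fifo(void)
        {
            struct sched_param param;
            param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;

            int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
            if (err != 0)
                fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
        }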

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

        - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
        - the 'generic' team developed the parts that were common/generic to all applications (user-interface-related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy:

        - the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team
        - the 'generic' team's focus seems to be more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed, but instead they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that it's not a good idea anymore to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...).

    How do you split up the work in different teams if you have different applications?

        - everybody can write generic code and donates it to a central 'generic' team?
        - everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)
        - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)
        - there is no overlap in code between the different applications
        - some other way?

    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that:

        - code is written in a more flexible way
        - it's easier to debug the code, since you can easily step into the 'generic' code in the debugger

    But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100 MB which is full of arrays (only arrays). I've made a script that includes this file (for processing). First it exhausted the default XAMPP 128 MB memory limit; I raised it to 1024 MB, but it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include and an echo so I know when it finishes executing, and it does the same thing (takes forever). I've also tried to run the 100 MB file on its own - same thing again. A 10 MB file also takes forever, but a similar 1 MB file is read and executed almost instantly, so the problem must be more than just the file size.

    I was avoiding using C++ for a project as simple as this and would rather not, as PHP is easier for me and the task that will be executed doesn't need to benefit from the added speed it would get from C++. But if I have no luck solving this problem, I guess I'll have to.

    EDIT - Reasons for not using a database:

        1 - Whoever made it didn't use a database, and it will be pretty hard to store this in an organized database if I'm not able to do something with it first, like just reading it, copying parts from it, or putting it in memory.
        2 - I don't have experience working with databases, as pretty much all the stuff I've ever done in PHP didn't need large amounts of stored data - 50 KB at best. If I were thinking about a big project or huge chunks of data like this one, I definitely would, but I didn't make this mess to start with and now I have to undo it.
        3 - The logic of having to store a small portion of data like 10 MB on the hard drive, when nowadays every computer has enough RAM to fit the whole OS in it, is pretty much incomprehensible to me unless someone gives a good explanation for it. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project: this is the only file that will be accessed at a given time. This isn't even for a website; it's to run a few times and be done with it.

    Read the article

  • Help regarding multi-threading in MFC, please help me friends!

    - by kiddo
    Hello all, in my application there is a small piece of functionality that reads files to get some information; the file count would be at least 50, so I thought of implementing threading. Say the user gives 50 files: I want to split them up as 5 * 10, creating 5 threads so that each thread can handle 10 files, which should speed up the process. As you can see from the code below, some variables are shared. I have read some articles about threading and I am aware that only one thread should access a variable/control at a time (CCriticalSection can be used for that). As a beginner, I am finding it hard to implement what I have learned about threading. Could somebody please give me some ideas based on the code shown below? Thanks in advance. The file read function:

        void CMyClass::GetWorkFilesInfo(CStringArray& dataFilesArray, CString* dataFilesB,
                                        int* check, DWORD noOfFiles, LPWSTR path)
        {
            CString cFilePath;
            int cIndex = 0;
            int exceptionInd = 0;
            wchar_t** filesForWork = new wchar_t*[noOfFiles];
            int tempCheck;
            int localIndex = 0;
            for (int index = 0; index < noOfFiles; index++)
            {
                tempCheck = *(check + index);
                if (tempCheck == NOCHECKBOX)
                {
                    *(filesForWork + cIndex) = new TCHAR[MAX_PATH];
                    wcscpy(*(filesForWork + cIndex), *(dataFilesB + index));
                    cIndex++;
                }
                else // CHECKED or UNCHECKED
                {
                    dataFilesArray.Add(*(dataFilesB + index));
                    *(check + localIndex) = *(check + index);
                    localIndex++;
                }
            }
            WorkFiles(&cFilePath, dataFilesArray, filesForWork, path, cIndex);
            dataFilesArray.Add(cFilePath);
            *(check + localIndex) = CHECKED;
        }

    Read the article

  • Why won't JPA delete owned entities when the owner entity loses the reference to them?

    - by Nick
    Hi! I've got a JPA entity "Request" that owns a List of Answers (also JPA entities). Here's how it's defined in Request.java:

        @OneToMany(cascade = CascadeType.ALL, mappedBy = "request")
        private List<Answer> answerList;

    And in Answer.java:

        @JoinColumn(name = "request", referencedColumnName = "id")
        @ManyToOne(optional = false)
        private Request request;

    In the course of program execution, the Request's List of Answers may have Answers added or removed from it, or the actual List object may be replaced. My problem is this: when I merge a Request to the database, the Answer objects that used to be in the List are kept in the database - that is, Answer objects that the Request no longer holds a reference to (indirectly, via a List) are not deleted. This is not the behaviour I want, because if I merge a Request to the database and then fetch it again, its Answers List may not be the same.

    Am I making some programming mistake? Is there an annotation or setting that will ensure that the Answers in the database are exactly the Answers in the Request's List? One solution is to keep references to the original Answers List and then use the EntityManager to remove each old Answer before merging the Request, but it seems like there should be a cleaner way. Thank you!
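
    The kind of annotation-level switch I'm hoping for is something like JPA 2.0's orphanRemoval on the owning side - sketched below, though I haven't verified that it also covers the case where the whole List object is replaced:

        @OneToMany(cascade = CascadeType.ALL, mappedBy = "request", orphanRemoval = true)
        private List<Answer> answerList;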

    Read the article

  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following:

        insert into the_table (...)
        select ... from some_other_table

    Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why.

    The details: I think this is best explained through code examples. I have a table:

        create table the_table
        (
            a int not null,
            b smallint not null,
            c tinyint not null
        );

    If I add a primary key, this insert query is terribly slow:

        alter table the_table
        add constraint PK_the_table primary key(a, b);

        -- Inserting ~880,000 rows
        insert into the_table (a,b,c)
        select a,b,c from some_view;

    Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me.

    More info:

        - The estimated execution plan shows 0% total query time spent on the clustered index insert
        - SQL Server 2008 R2 Developer edition, 10.50.1600

    Any ideas?

    Read the article

  • NSView only redraws on breakpoint

    - by Jacopo
    I have a custom view inside an NSPopover. It should change according to user input, and it does the first time the user interacts with it, but it fails to redraw the following times. I put an NSLog inside the -drawRect: method and it doesn't get called during normal execution. When I try to debug and put a breakpoint inside the method, it gets called normally and the app works as it should. I explicitly call the view's -setNeedsDisplay: method every time I need it to redraw. I don't understand why a breakpoint should make a difference.

    Here is the code that updates the state of the view. These calls are made from the NSTextField delegate method -textDidChange:, and I have checked that they run every time the user types something in the text field associated with the popover:

        [tokenCloud tokensToHighlight:[NSArray arrayWithObject:completeSuggestionString]];
        tokenCloud.tokens = filteredTokens;
        [tokenCloud setNeedsDisplay:YES];

    The view is a series of recessed buttons. The first line updates the state of all the buttons in the popover and the second adds or deletes buttons. They both work properly, because the first time they are called the view is updated properly. I have also checked that both the state of the buttons in tokenCloud and its tokens property are updated correctly. The problem is that the NSView subclass, tokenCloud, doesn't redraw, so the changes are not reflected in the UI the second time. Here is the draw method of the view:

        - (void)drawRect:(NSRect)rect
        {
            [self recalculateButtonLocations];
            NSLog(@"Redrawn");
        }

    Again, this method gets called normally every time I update the view if I place a breakpoint on [self recalculateButtonLocations];. If instead I let the app run normally, nothing gets logged to the console the second time I update the view. The same happens if I put the NSLog in the recalculateButtonLocations method: nothing gets logged the second time, meaning the method is not called.

    Read the article

  • How does git fetch commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I've covered almost everything, like objects, refs, pack files etc. But I have a problem. Let's say I have a big 300 MB repository (in a pack file) and I want to find all the commits which changed the file /some/deep/inside/file. What I'm doing now is:

        - fetch the last commit
        - find the file in it by:
            - fetching the parent tree
            - finding the tree inside it
            - repeating recursively until I get to the file
            - additionally, checking the hash of each subfolder on my way to the file: if one of them is the same as in the commit before, I assume the file was not changed (because its parent dir didn't change)
        - then I store the hash of the file and fetch the parent commit
        - find the file again and check whether its hash changed
        - if yes, then the original commit (i.e. the one before the parent) changed the file

    And I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take up to 3 minutes (for a 300 MB pack). Is there any way to speed it up? I tried to avoid putting such large objects in memory, but right now I don't see any other way. And even then, the initial memory load will take forever :( Greets and thanks for any help!

    Read the article

  • Why won't my code segfault on Windows 7?

    - by Trevor
    This is an unusual question to ask but here goes: In my code, I accidentally dereference NULL somewhere. But instead of the application crashing with a segfault, it seems to stop execution of the current function and just return control back to the UI. This makes debugging difficult because I would normally like to be alerted to the crash so I can attach a debugger. What could be causing this? Specifically, my code is an ODBC Driver (ie. a DLL). My test application is ODBC Test (odbct32w.exe) which allows me to explicitly call the ODBC API functions in my DLL. When I call one of the functions which has a known segfault, instead of crashing the application, ODBC Test simply returns control to the UI without printing the result of the function call. I can then call any function in my driver again. I do know that technically the application calls the ODBC driver manager which loads and calls the functions in my driver. But that is beside the point as my segfault (or whatever is happening) causes the driver manager function to not return either (as evidenced by the application not printing a result). One of my co-workers with a similar machine experiences this same problem while another does not but we have not been able to determine any specific differences.

    Read the article
