Search Results

Search found 8268 results on 331 pages for 'difference'.


  • Problem in homemade function to merge objects

    - by Eric
    I'm trying to make a function that merges arrays. The reason is, I have a function that is supposed to get the settings of an entity and merge them with the global defaults.

        // So for example, let's say globalOptions is something like this
        var globalOptions = {opt1:'foo', opt2:'something'};
        // and this is entityOptions
        var entityOptions = {opt1:'foofoo', opt2:null};

    The only difference is that it has objects in objects, and objects in objects in objects, so what I made was a function that loops through all the objects, thinking I would later easily be able to loop through it all. Please ignore the array handling; it is defective, but unneeded.

        function loopObj(obj, call, where, objcall, array) {
            if ($.isArray(obj) || $.isPlainObject(obj)) {
                for (i in obj) {
                    if ($.isArray(obj)) {
                        if (array) {
                            loopObj(obj[i], call, where[where.length] = i, true);
                            if (objcall) { call(obj[i], where, true); }
                        } else {
                            loopObj(obj[i], call, where + '[' + i + ']', false);
                            if (objcall) { call(obj[i], where, true); }
                        }
                    } else {
                        if (array) {
                            loopObj(obj[i], call, where[where.length] = parseInt(i), true);
                            if (objcall) { call(obj[i], where, true); }
                        } else {
                            loopObj(obj[i], call, where + '[\'' + i + '\']', false);
                            if (objcall) { call(obj[i], where, true); }
                        }
                    }
                }
            } else {
                return call(obj, where);
            }
        }

    Then I made this function to convert it:

        function mergeObj(a, b) {
            temp.retd = new Object();
            loopObj(a, function (c, d) {
                if (c) {
                    eval(d.replace('%par%', 'temp.retd')) = c;
                } else {
                    eval(d.replace('%par%', 'temp.retd')) = eval(d.replace('%par%', 'b'));
                }
            }, '%par%', true);
            return temp.retd();
        }

    I get the error:

        Uncaught ReferenceError: Invalid left-hand side in assignment
            (anonymous function)  base.js:51
            loopObj               base.js:40
            loopObj               base.js:31
            mergeObj              base.js:46
            (anonymous function)  base.js:72

    I know what it means: the eval returns an anonymous value (a copy of the variable), so I can't set it, only get it.
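
    A minimal sketch of the underlying idea without eval, assuming jQuery is available (as in the original loopObj); the function name mergeDefaults is illustrative, not from the original code. It recursively copies the defaults and overwrites each entry with the entity's value whenever that value is not null/undefined:

        function mergeDefaults(defaults, overrides) {
            var result = {};
            for (var key in defaults) {
                if ($.isPlainObject(defaults[key]) && $.isPlainObject(overrides && overrides[key])) {
                    // Recurse into nested option objects
                    result[key] = mergeDefaults(defaults[key], overrides[key]);
                } else if (overrides && overrides[key] != null) {
                    result[key] = overrides[key];   // entity value wins when present
                } else {
                    result[key] = defaults[key];    // fall back to the global default
                }
            }
            return result;
        }

        // Usage with the example objects from the question:
        var globalOptions = { opt1: 'foo', opt2: 'something' };
        var entityOptions = { opt1: 'foofoo', opt2: null };
        var merged = mergeDefaults(globalOptions, entityOptions);
        // merged => { opt1: 'foofoo', opt2: 'something' }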

  • Writing a program which uses voice recognition... where should I start?

    - by Katsideswide
    Hello! I'm a design student currently dabbling with Arduino code (based on C/C++) and Flash AS3. What I want to do is write a program with a voice-control input.

    The program prompts the user to spell a word. The user spells out the word. The program recognizes whether this is right, adds one to a score if it's correct, and corrects the user if it's wrong. So I'm picturing a big list of words, each with an audio file of the word being read out, with the voice recognition part checking to see if the reply matches the input.

    Ideally I'd like to be able to interface this with an Arduino microcontroller so that a physical output with a motor could also be achieved in reaction. Thing is, I'm not sure if I can make this program in Flash, in Processing (associated with Arduino), or if I need another CS3 program-making-program. I guess I need to download a good voice recognition program, but how can I interface it with anything else?

    Also, I'm on a Mac (not sure if this makes a difference). I apologize for my cluelessness; any hints would be great! -Susan

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.

    The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount.

    From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

  • How do I gather data from the same column in multiple worksheets in a single workbook?

    - by infiniteloop91
    Okay, so here is what I want to accomplish. For this example I have a single workbook composed of 4 data sheets plus a totals sheet. Each of the 4 data sheets has a similar name following the same pattern, where the only difference is the date (e.g. 9854978_1009_US.txt, where 1009 is the date that changes while the rest of the name stays the same).

    In each of those sheets, column F contains a series of numbers that I would like to sum, but I will have no idea how many cells in F actually contain numbers. (However, there will never be additional information below the numbers, so I could in theory just add up the entire F column.) I will also add new files to the workbook over time and do not want to have to rewrite the code with which I gather my data from column F.

    Essentially, I would like the 'totals' sheet to take every column F from sheets in the workbook whose name matches '9854978_????_US.txt', where the question marks change based on the file name. How would I go about doing this in pure Excel code? A VBA sketch of one approach follows below.
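
    A VBA sketch of one way to do this (a macro rather than a pure worksheet formula), assuming the summary sheet is named 'totals'; the macro name and the output cell A1 are illustrative:

        Sub SumFColumns()
            Dim ws As Worksheet
            Dim total As Double
            For Each ws In ThisWorkbook.Worksheets
                ' Match sheet names like 9854978_1009_US.txt, where the date part varies
                If ws.Name Like "9854978_????_US.txt" Then
                    total = total + Application.WorksheetFunction.Sum(ws.Range("F:F"))
                End If
            Next ws
            ThisWorkbook.Worksheets("totals").Range("A1").Value = total
        End Sub

    Because the loop inspects every sheet each time it runs, files added to the workbook later are picked up without changing the code.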

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and need some ideas regarding my domain model.

    The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want without limitations.

    I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and uses partial views to render each of them. Works kind of fine. Basically I have a common ScriptControl class, and then I can create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :)

    The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think a relational model that handles every possible control is not the way to go. My current thinking is using a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing it at Runtime, but I'm not sure how well that will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example.

    Any ideas out there? Thanks in advance, and excuse the length. ;)

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language; however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone, and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (the list is not in any particular order):

    - Know its syntax so you can write a simple program in it
    - Compare its underlying concepts with concepts of other languages
    - Know best practices
    - Know what libraries are available
    - Know in what situations to use it
    - Understand the flow of a more complex program
    - At least know most of what you do not know

    I would probably look for a good book and pick an open source project for both of these languages to start with. My questions:

    - Is it best to spend 5 months learning language #1 and then 5 months learning language #2, or should you mix the two? By mixing them I mean you work on them in parallel.
    - Should you pick two languages that are similar or different? Are there any advantages/disadvantages of, let's say, learning Lisp in tandem with Ruby?
    - Is it a good idea to pick two languages with similar syntax, or would it be too confusing?

    Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

  • SQL Standard Regarding Left Outer Join and Where Conditions

    - by Ryan
    I am getting different results from a query depending on where I place a filter condition. My questions are:

    - Is there a technical difference between these queries?
    - Is there anything in the SQL standard that explains the different result sets?

    Given the simplified scenario:

        -- Table: Parent   Columns: ID, Name, Description
        -- Table: Child    Columns: ID, ParentID, Name, Description

        -- Query 1
        SELECT p.ID, p.Name, p.Description, c.ID, c.Name, c.Description
        FROM Parent p
        LEFT OUTER JOIN Child c ON (p.ID = c.ParentID)
        WHERE c.ID IS NULL OR c.Description = 'FilterCondition'

        -- Query 2
        SELECT p.ID, p.Name, p.Description, c.ID, c.Name, c.Description
        FROM Parent p
        LEFT OUTER JOIN Child c ON (p.ID = c.ParentID AND c.Description = 'FilterCondition')

    I assumed the queries would return the same result sets and was surprised when they didn't. I am using MS SQL 2005; in the actual queries, Query 1 returned ~700 rows and Query 2 returned ~1100 rows, and I couldn't detect a pattern in which rows were returned and which were excluded. There were still many rows in Query 1 with child rows containing data and NULL data. I prefer the style of Query 2 (and I think it is more optimal), but I thought the queries would return the same results.
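
    A minimal worked example of why the two differ, using hypothetical data: a parent whose only children fail the filter is dropped by Query 1, because its joined rows have a non-NULL c.ID and a non-matching description, but it is kept by Query 2, where the filter is evaluated as part of the join and the parent survives with NULL child columns.

        INSERT INTO Parent (ID, Name, Description) VALUES (1, 'P1', 'parent');
        INSERT INTO Child  (ID, ParentID, Name, Description) VALUES (10, 1, 'C1', 'SomethingElse');

        -- Query 1: returns 0 rows. The only join result has c.ID = 10 (not NULL)
        --          and c.Description <> 'FilterCondition', so the WHERE clause removes it.
        -- Query 2: returns 1 row: (1, 'P1', 'parent', NULL, NULL, NULL). The ON condition
        --          found no matching child, and the LEFT JOIN preserves the parent row.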

  • ARC and __unsafe_unretained

    - by J Shapiro
    I think I have a pretty good understanding of ARC and the proper use cases for selecting an appropriate lifetime qualifier (__strong, __weak, __unsafe_unretained, and __autoreleasing). However, in my testing, I've found one example that doesn't make sense to me.

    As I understand it, both __weak and __unsafe_unretained do not add a retain count. Therefore, if there are no other __strong pointers to the object, it is instantly deallocated. The only difference in this process is that __weak pointers are set to nil, and __unsafe_unretained pointers are left alone.

    If I create a __weak pointer to a simple, custom object (composed of one NSString property), I see the expected (null) value when trying to access a property:

        Test * __weak myTest = [[Test alloc] init];
        myTest.myVal = @"Hi!";
        NSLog(@"Value: %@", myTest.myVal); // Prints Value: (null)

    Similarly, I would expect the __unsafe_unretained lifetime qualifier to cause a crash, due to the resulting dangling pointer. However, it doesn't. In this next test, I see the actual value:

        Test * __unsafe_unretained myTest = [[Test alloc] init];
        myTest.myVal = @"Hi!";
        NSLog(@"Value: %@", myTest.myVal); // Prints Value: Hi!

    Why doesn't the __unsafe_unretained object become deallocated?

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions. This time increases under high load, and the time consumed may be due to complex calculations or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, or I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to load-test my web app; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds per request with 100 requests vs. 1.25 seconds for a single request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the first hit the REST API response gets cached, so it makes no difference afterwards
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many organizations for so many big sites, so there must be some high-end debugging and memory/CPU analysis tools. All those I found were casual snippets of code that perform profiling.
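
    A minimal sketch of separating CPU time from wall-clock time for a block of code, assuming a Unix host; the do_work() function is a placeholder for the code under test. If wall time greatly exceeds CPU time, the code is mostly waiting (I/O, contention) rather than computing:

        import time
        import resource

        def cpu_seconds():
            # User + system CPU time consumed by this process so far
            usage = resource.getrusage(resource.RUSAGE_SELF)
            return usage.ru_utime + usage.ru_stime

        def do_work():
            # Placeholder for the code being measured
            return sum(i * i for i in range(10 ** 6))

        wall_start = time.time()
        cpu_start = cpu_seconds()

        do_work()

        wall_elapsed = time.time() - wall_start
        cpu_elapsed = cpu_seconds() - cpu_start
        print("wall: %.3fs  cpu: %.3fs" % (wall_elapsed, cpu_elapsed))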

  • Heapsort not working in Python for list of strings using heapq module

    - by VSN
    I was reading the Python 2.7 documentation when I came across the heapq module. I was interested in the heapify() and heappop() methods, so I decided to write a simple heapsort program for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = map(int, user_input.split(","))
        new_data = []
        for i in range(len(data)):
            heapify(data)
            new_data.append(heappop(data))
        print new_data

    This worked like a charm. To make it more interesting, I thought I would take away the integer conversion and leave the values as strings. Logically, it should make no difference and the code should work as it did for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = user_input.split(",")
        new_data = []
        for i in range(len(data)):
            heapify(data)
            print data
            new_data.append(heappop(data))
        print new_data

    Note: I added a print statement in the for loop to see the heapified list. Here's the output when I ran the script:

        $ python heapsort.py
        Enter numbers to be sorted: 4, 3, 1, 9, 6, 2
        [' 1', ' 3', ' 2', ' 9', ' 6', '4']
        [' 2', ' 3', '4', ' 9', ' 6']
        [' 3', ' 6', '4', ' 9']
        [' 6', ' 9', '4']
        [' 9', '4']
        ['4']
        [' 1', ' 2', ' 3', ' 6', ' 9', '4']

    The reasoning I applied was that since the strings are being compared, the tree should be the same as if they were numbers. As is evident, the heapify didn't work correctly after the third iteration. Could someone help me figure out if I am missing something here? I'm running Python 2.4.5 on Red Hat 3.4.6-9.

    Thanks, VSN
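
    A sketch of what the comparison is actually doing, for illustration: after split(","), every element except the first keeps its leading space, and ' 9' < '4' because a space sorts before any digit, so the heap is ordering the strings correctly as strings. Stripping the whitespace (and remembering that string order still differs from numeric order, e.g. '10' < '9') makes the behaviour easier to see:

        from heapq import heapify, heappop

        user_input = "4, 3, 1, 9, 6, 2"                            # sample input, hard-coded for the sketch
        data = [item.strip() for item in user_input.split(",")]    # drop the leading spaces

        new_data = []
        while data:
            heapify(data)
            new_data.append(heappop(data))
        print(new_data)   # ['1', '2', '3', '4', '6', '9'] -- correct string order for single digits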

  • Color banding only on Android 4.0+

    - by threeshinyapples
    On emulators running Android 4.0 or 4.0.3, I am seeing horrible colour banding which I can't seem to get rid of. On every other Android version I have tested, gradients look smooth.

    I have a SurfaceView which is configured as RGBX_8888, and the banding is not present in the rendered canvas. If I manually dither the image by overlaying a noise pattern at the end of rendering, I can make the gradients smooth again, though obviously at a cost to performance which I'd rather avoid. So the banding is being introduced later. I can only assume that, on 4.0+, my SurfaceView is being quantized to a lower bit-depth at some point between it being drawn and being displayed, and I can see from a screen capture that gradients are stepping 8 values at a time in each channel, suggesting a quantization to 555 (not 565).

    I added the following to my Activity onCreate function, but it made no difference:

        getWindow().setFormat(PixelFormat.RGBA_8888);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_DITHER);

    I also tried putting the above in onAttachedToWindow() instead, but there was still no change. (I believe that RGBA_8888 is the default window format anyway for 2.2 and above, so it's little surprise that explicitly setting that format has no effect on 4.0+.)

    Which leaves the question: if the source is 8888 and the destination is 8888, what is introducing the quantization/banding, and why does it only appear on 4.0+? Very puzzling. I wonder if anyone can shed some light?

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: This is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute force approach.

    Problem: There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days.

    Script source code: the comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -IEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    Question: How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?

    System constraints: Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
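
    A sketch of a fingerprint-based alternative, under the assumption that the normalization below (dropping all whitespace and lower-casing) is an acceptable stand-in for the diff flags used above: each file is hashed once, so the work is linear in the number of files, and files sharing a hash are reported as candidate clones.

        #!/bin/bash
        # Hash each normalized file once, then group identical hashes.
        for f in $(find sql/ -type f -name '*.sql'); do
            hash=$(tr -d '[:space:]' < "$f" | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
            echo "$hash $f"
        done | sort | awk '$1 == prev { print prevfile " :: " $2 } { prev = $1; prevfile = $2 }' > clones.txt

    With 37,497 files this is roughly 37k hash computations instead of about 700 million pairwise diffs; the original pairwise diff could still be run inside each hash group as a confirmation step.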

  • Changing UIViews during UIInterfaceOrientation on iPad

    - by FreeAppl3
    I am trying to change views on rotation because my views have to be significantly different from portrait to landscape. The code I am using works once, then the app freezes when trying to rotate back. The direction does not make a difference. For example: if I am in landscape and rotate to portrait, everything works great until I rotate back to landscape; then it freezes and does absolutely nothing.

    Here is the code I am using to achieve this. In my viewDidLoad method:

        [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(didRotate:)
                                                     name:UIDeviceOrientationDidChangeNotification
                                                   object:nil];

    Then I call this for the rotation:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return YES;
        }

        - (void)didRotate:(NSNotification *)notification {
            UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
            if ((orientation == UIDeviceOrientationLandscapeLeft) || (orientation == UIDeviceOrientationLandscapeLeft)) {
                // present the other viewController, it's only viewable in landscape
                [self.view addSubview:landScapeView];
            }
            if ((orientation == UIDeviceOrientationLandscapeRight) || (orientation == UIDeviceOrientationLandscapeRight)) {
                // present the other viewController, it's only viewable in landscape
                [self.view addSubview:landScapeView];
            } else if ((orientation == UIDeviceOrientationPortrait) || (orientation == UIDeviceOrientationPortrait)) {
                // get rid of the landscape controller
                [self.view addSubview:portrait];
            } else if ((orientation == UIDeviceOrientationPortraitUpsideDown) || (orientation == UIDeviceOrientationPortraitUpsideDown)) {
                // get rid of the landscape controller
                [self.view addSubview:portrait];
            }
        }

  • APC decreasing PHP performance? (PHP 5.3, Apache 2.2, Windows Vista 64-bit)

    - by M.M.
    Hi, I have Apache/2.2.15 (VC9) and PHP/5.3.2 (VC9, thread safe) running as an Apache module on a Vista 64-bit machine. All running fine. The project I'm benchmarking (with Apache's ab utility) is basically a standard Zend Framework project with no DB connection involved. The average (median) Apache response is about 0.15 seconds.

    After I installed APC (3.1.4-dev, VC9, thread safe) with standard settings, the request response time suddenly rose to 1.3 seconds (!), which is unacceptable. All APC settings always looked good (through the apc.php script: enough shm memory, cache never full, fragmentation 0%). The only change that made a difference was disabling the stat lookup (apc.stat = 0). Then the response dropped to 0.09 seconds, which was finally better than without APC.

    IIRC, it's expected and obvious that the stat lookup creates some overhead, but shouldn't it still be far more performant than running without the APC extension at all? Or, put differently, why is apc.stat creating so much overhead? Apparently something is not working as it should; I don't really know where to start looking.

    Thank you for your time/answers/direction in advance. Cheers, m.

  • Is there a set-based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are:

    - Vacation balance is calculated as vacation earned minus vacation used.
    - Vacation used is always applied against the oldest vacation earned amount first.
    - We need to return the rows for vacation earned that have not been offset by vacation used.
    - If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference.

    For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such.

    The only way we have thought of to do this is to get all of the vacation earned records into a temporary table. Then, get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting that value from the total vacation used until the total vacation used is zero. We could clean it up for the case where the remaining vacation used covers only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records. This works, but it is very inefficient and performs poorly. Also, the performance will just degrade over time as more and more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
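
    A set-based sketch, under the assumption that the data lives in a single table called VacationLedger (substitute the real table name) and that "oldest first" means ordering earned rows by Date, then ID. For each earned row it computes the running total of hours earned up to that row and compares it with the employee's total hours used; fully consumed rows are dropped and the last partially consumed row is capped.

        SELECT  e.ID,
                e.EmployeeID,
                e.[Date],
                e.Category,
                CASE WHEN ce.CumEarned - ISNULL(u.TotalUsed, 0) >= e.Hours
                     THEN e.Hours
                     ELSE ce.CumEarned - ISNULL(u.TotalUsed, 0)
                END AS Hours
        FROM    VacationLedger e
        CROSS APPLY (SELECT SUM(e2.Hours) AS CumEarned
                     FROM   VacationLedger e2
                     WHERE  e2.EmployeeID = e.EmployeeID
                       AND  e2.Category   = 'Vacation Earned'
                       AND (e2.[Date] < e.[Date]
                            OR (e2.[Date] = e.[Date] AND e2.ID <= e.ID))) ce
        LEFT JOIN (SELECT EmployeeID, SUM(Hours) AS TotalUsed
                   FROM   VacationLedger
                   WHERE  Category = 'Vacation Used'
                   GROUP BY EmployeeID) u
               ON u.EmployeeID = e.EmployeeID
        WHERE   e.Category = 'Vacation Earned'
          AND   ce.CumEarned - ISNULL(u.TotalUsed, 0) > 0

    Against the sample data this returns rows 1 and 4 with 1.0 hour each. CROSS APPLY and ISNULL are SQL Server syntax, so the running total would need the equivalent construct on another engine.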

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums.

    However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then apply *= to a log-based value. I have overloaded the *= operator for my log-based class, and the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are floating-point division, log(), and floating-point summation.

    My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with a floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation.

    In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler and architecture dependent, so I'll say that I use gcc 4.2 from Apple on Darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
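
    A minimal micro-benchmark sketch for timing the two operations in isolation; the loop count and the volatile accumulator are only there to keep the work observable, real results depend on compiler flags, and measuring the full code path (including the conversion constructors) would be more representative:

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main() {
            const int N = 10000000;
            volatile double sink = 0.0;          // volatile so the loops are not optimized away

            std::clock_t t0 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + 1.0 / i;           // floating-point division

            std::clock_t t1 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + std::log(static_cast<double>(i));   // log()

            std::clock_t t2 = std::clock();
            std::printf("division: %.3fs   log(): %.3fs\n",
                        static_cast<double>(t1 - t0) / CLOCKS_PER_SEC,
                        static_cast<double>(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }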

  • Can I modify the way Windows draws the Aero UI?

    - by LonelyPixel
    Windows 7 with Aero Glass basically looks quite nice, I think. But it has some major drawbacks regarding readability: I cannot easily tell whether a window is currently active or not. I've been tweaking the colours and transparency levels a lot recently, but the only safe indicator is the close button: it's red when the window is active and colourless otherwise. Then there's the window title text. It is always painted black, on however dark a background, again regardless of whether the window is active or not.

    I've seen WindowBlinds and the tons of themes available for it. Browsing through the most popular or highest rated in several categories, I was really scared. I don't want to face the Terminator every day, feel like I'm in the jungle, or be fooled that I had an Apple computer, which I do not. All I want is to make a greater colour difference between active and inactive windows and to invert the window title text colour for dark backgrounds. (Including that visibility hack of a spray-brush background.)

    Is there some Windows API to alter the way Windows draws its windows, or does it take the years of private research from Stardock to hook into that? I mean, they say it's approved by Microsoft, so I assume there's some official documentation for it; I just couldn't find any.

  • How can I prevent the scaling of a UIWebView's content after reorientation?

    - by frankhermes
    I'm building an iOS 5/6 app that has a UIWebView. It loads some HTML that I have embedded in my app. Now when I rotate my device, the web view changes its size (as I want it to fill the entire width of the screen). And then it gets weird: some content gets scaled up and some content doesn't. See this image with some example text in it: as you can see, the header (H6) stays the same, while the paragraph gets scaled up.

    Does anybody have an idea how to prevent this? I want the HTML to look the same in landscape as it does in portrait mode. I've tried setting the viewport scaling to 1:

        <meta name="viewport" content="initial-scale=1.0,max-scale=1.0">

    but that doesn't help. The body's font-size style is set to 14px, but changing that to 14pt or a percentage also made no difference. Setting the width of the body to 100% didn't help either. Strangely, removing the line break (<br/>) that's in the text fixes it, but I need line breaks to be in there, so that's no solution.

    The only thing that does work is reloading the UIWebView's content after an orientation change, but that doesn't prevent it from looking wrong during rotation, and it resets any scrolling the user may have done. Any ideas?
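
    A sketch of one commonly suggested CSS tweak for this symptom, offered as an assumption to test rather than a confirmed fix: the selective enlargement of long paragraphs but not short headers looks like WebKit's automatic text inflation, which kicks in when the layout width changes, and it can be switched off per page:

        <style>
          /* Disable WebKit's automatic font boosting so paragraphs keep their
             declared size when the web view changes width on rotation. */
          body { -webkit-text-size-adjust: none; }
        </style>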

  • Why is no encoding set in the response by Tomcat? How can I deal with it?

    - by Dishayloo
    I recently had a problem with the encoding of pages generated by a servlet that occurred when the servlets were deployed under Tomcat, but not under Jetty. I did a little bit of research and simplified the problem to the following servlet:

        public class TestServlet extends HttpServlet implements Servlet {
            @Override
            public void service(HttpServletRequest request, HttpServletResponse response) throws IOException {
                response.setContentType("text/plain");
                Writer output = response.getWriter();
                output.write("öäüÖÄÜß");
                output.flush();
                output.close();
            }
        }

    If I deploy this under Jetty and point the browser at it, it returns the expected result. The data is returned as ISO-8859-1, and if I look at the headers, Jetty returns:

        Content-Type: text/plain; charset=iso-8859-1

    The browser detects the encoding from this header. If I deploy the same servlet in Tomcat, the browser shows strange characters. Tomcat also returns the data as ISO-8859-1; the difference is that no header says so. So the browser has to guess the encoding, and that goes wrong.

    My question is: is that behaviour of Tomcat correct, or is it a bug? And if it is correct, how can I avoid the problem? Sure, I can always add response.setCharacterEncoding("UTF-8"); to the servlet, but that means I set a fixed encoding that the browser might or might not understand. The problem is even more relevant if it is not a browser but another service that accesses the servlet. So how should I deal with the problem in the most flexible way?
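
    A minimal sketch of making the charset explicit rather than relying on container defaults, shown with ISO-8859-1 to match the observed output (the class name ExplicitCharsetServlet is illustrative); any charset works as long as it is declared before getWriter() is called, so the container includes it in the Content-Type header and no client has to guess:

        import java.io.IOException;
        import java.io.Writer;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ExplicitCharsetServlet extends HttpServlet {
            @Override
            public void service(HttpServletRequest request, HttpServletResponse response) throws IOException {
                // Declaring the charset in the content type makes the container
                // send it in the Content-Type header regardless of its defaults.
                response.setContentType("text/plain; charset=ISO-8859-1");
                Writer output = response.getWriter();
                output.write("öäüÖÄÜß");
                output.close();
            }
        }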

  • Please tell me what is wrong with my threading!!!

    - by kiddo
    I have a function that compresses a bunch of files into a single compressed file. It takes a long time, so I tried implementing threading in my application. Say I have 20 files for compression: I split them up as 5*4=20, i.e. 5 files per thread across 4 threads, and to do that I use separate variables (which are used for compression) for each of the 4 threads in order to avoid locks, and I wait until the 4 threads finish. Now, the threads are working, but I see no improvement in performance. Normally it takes 1 minute for 20 files (for example); after implementing threading there is only a 3 or 5 second difference, and sometimes none at all. Here I will show the code for 1 thread (it is the same for the other 3 threads):

        //main thread
        myClassObject->thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1, myClassObject);
        ....
        HANDLE threadHandles[4];
        threadHandles[0] = myClassObject->thread1->m_hThread;
        ....
        WaitForSingleObject(myClassObject->thread1->m_hThread, INFINITE);

        UINT MyThreadFunction(LPARAM lparam)
        {
            CMerger* myClassObject = (CMerger*)lparam;
            CString outputPath = myClassObject->compressedFilePath.GetAt(0); // contains the o/p path
            wchar_t* compressInputData[] = {myClassObject->thread1outPath, COMPRESS,
                                            (wchar_t*)(LPCTSTR)(outputPath)};
            HINSTANCE loadmyDll;
            loadmydll = LoadLibrary(myClassObject->thread1outPath);
            fp_Decompress callCompressAction = NULL;
            int getCompressResult = 0;
            myClassObject->MyCompressFunction(compressInputData, loadClient7zdll, callCompressAction,
                                              myClassObject->thread1outPath, getCompressResult,
                                              minIndex, myClassObject->firstThread, myClassObject);
            return 0;
        }
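
    A sketch of one detail worth checking, assuming the intent is to run all four workers concurrently: WaitForSingleObject on thread1 blocks only until the first worker exits, whereas a single WaitForMultipleObjects call over the whole handle array expresses "wait for all four" directly. (Whether threading helps at all also depends on the compression being CPU-bound rather than disk-bound.)

        HANDLE threadHandles[4];
        threadHandles[0] = myClassObject->thread1->m_hThread;
        threadHandles[1] = myClassObject->thread2->m_hThread;   // thread2..thread4 assumed to exist
        threadHandles[2] = myClassObject->thread3->m_hThread;
        threadHandles[3] = myClassObject->thread4->m_hThread;

        // Block until all four worker threads have finished (bWaitAll = TRUE).
        WaitForMultipleObjects(4, threadHandles, TRUE, INFINITE);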

  • Processing data from an AJAX request

    - by Josh K
    I have a PHP API I'm working with that outputs everything as JSON. I need to call one of the API methods and parse the result out using an AJAX request. I am using jQuery (though it shouldn't matter). When I make the request it errors out with "parsererror" as the textStatus and "Syntax Error: invalid label" as the error. Simplified code:

        $.ajax({
            type: "POST",
            url: "http://mydomain.com/api/get/userlist/" + mid,
            dataType: "json",
            dataFilter: function(data, type) {
                /* Here we assume and pray */
                users = eval(data);
                alert(users[1].id);
            },
            success: function(data, textStatus, XMLHttpRequest) {
                alert(data.length); // Should be an array, yet is undefined.
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus);
                alert(errorThrown);
            },
            complete: function(XMLHttpRequest, textStatus) {
                alert("Done");
            }
        });

    If I leave off the eval(data) then everything works fine, except that data is still undefined in success. Note that I'm taking an array of objects in PHP and then passing them out through json_encode. Would that make any difference? There has been no progress made on this, and I'm willing to put more code up if someone believes they can help. Here is the PHP side of things:

        private function _get_user_colors($id)
        {
            $u = new User();
            $u->get_where(array('id' => $id));
            $bar = array();
            $bar['user'] = $u->stored;
            foreach($user->colors as $color) {
                $bar['colors'][] = $color;
            }
            echo(json_encode($bar));
        }

    I have had zero issues using this with other PHP-based scripts. I don't know why JavaScript would take issue with it.
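
    A sketch for illustration, reusing the endpoint and the mid variable from the question: eval-ing a bare JSON object such as {"user": ...} trips over the braces being read as a block and the first key as a label, which is where "invalid label" usually comes from, and a dataFilter must return the (possibly transformed) data or success receives undefined. Letting jQuery parse the response avoids both problems:

        $.ajax({
            type: "POST",
            url: "http://mydomain.com/api/get/userlist/" + mid,
            dataType: "json",
            dataFilter: function(data, type) {
                // A dataFilter must return the data it was given (raw or transformed);
                // returning nothing is why `data` arrives as undefined in success.
                return data;
            },
            success: function(data, textStatus, XMLHttpRequest) {
                // With dataType "json", jQuery has already parsed the response,
                // so `data` is a plain object here and no eval is needed.
                // (Manual parsing, if ever required: $.parseJSON(data) in jQuery 1.4.1+.)
                alert(data.colors.length);   // assumes the PHP side produced {"user": ..., "colors": [...]}
            }
        });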

  • Git: HEAD has disappeared, want to merge it into master.

    - by samgoody
    The top image is the output of git reflog. The bottom is what gitk in Git GUI (msysgit) shows me when I look at all branch history. The last few commits do not show up in Git GUI.

    - Why do they not show in gitk (at least as a branch or something)?
    - How do I merge them into master? I gather this happened when I checked out tag 0.42. Why is that not the same as master? (I had tagged master in its latest state.)
    - When I click push, why does the remote repo claim to be up to date? Shouldn't it try to update these commits into whatever branch they are in?

    The first of the questions is the important one: I would like to begin to understand what Git is thinking. It's more oracle than logic at this point. If it makes a difference to see the earlier history, the project is a [pretty powerful] JS color picker that can be viewed here in its entirety.
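
    A sketch of the usual recovery for commits that only the reflog still knows about, assuming HEAD@{1} is the reflog entry that points at the newest of the stranded commits (substitute the real SHA or reflog reference):

        git branch rescue HEAD@{1}    # give the dangling commits a branch name so gitk shows them
        git checkout master
        git merge rescue              # bring them into master (fast-forward if master has not moved)
        git push origin master        # the remote only receives commits reachable from a pushed branch
        git branch -d rescue          # optional clean-up once merged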

  • Why doesn't my OracleParameter work?

    - by user1824356
    I'm a .NET developer and this is the first time I have worked with the Oracle provider (Oracle 10g and Framework 4.0). When I add parameters to my command this way:

        objCommand.Parameters.Add("pc_cod_id", OracleType.VarChar, 4000).Value = codId;
        objCommand.Parameters.Add("pc_num_id", OracleType.VarChar, 4000).Value = numId;
        objCommand.Parameters.Add("return_value", OracleType.Number).Direction = ParameterDirection.ReturnValue;
        objCommand.Parameters.Add("pc_email", OracleType.VarChar, 4000).Direction = ParameterDirection.Output;

    I have no problem with the result. But when I add parameters this way:

        objCommand.Parameters.Add(CreateParameter(PC_COD_ID, OracleType.VarChar, codId, ParameterDirection.Input));
        objCommand.Parameters.Add(CreateParameter(PC_NUM_ID, OracleType.VarChar, numId, ParameterDirection.Input));
        objCommand.Parameters.Add(CreateParameter(RETURN_VALUE, OracleType.Number, ParameterDirection.ReturnValue));
        objCommand.Parameters.Add(CreateParameter(PC_EMAIL, OracleType.VarChar, ParameterDirection.Output));

    The implementation of that function is:

        protected OracleParameter CreateParameter(string name, OracleType type, ParameterDirection direction)
        {
            OracleParameter objParametro = new OracleParameter(name, type);
            objParametro.Direction = direction;
            if (type == OracleType.VarChar)
            {
                objParametro.Size = 4000;
            }
            return objParametro;
        }

    all my results are empty strings. My question is: aren't these two ways of adding parameters the same? And if not, what is the difference? Thanks :)

    Added: Sorry, I forgot to mention that CreateParameter is a function with multiple overloads; the base is the function above, and the others use it.

        protected OracleParameter CreateParameter(string name, OracleType type, object value, ParameterDirection direction)
        {
            OracleParameter objParametro = CreateParameter(name, type, value);
            objParametro.Direction = direction;
            return objParametro;
        }

    The last parameters don't need a value because they are output parameters; they bring me data from the database.

  • Cookies not working for password-protected Pages on WordPress

    - by KaOSoFt
    Initially I had the issue reported in this question. Now, what I noticed is that some browsers accept the password and some don't. The difference? For some reason the cookie is generated when I log in to the Administration module, but it isn't when I enter the password to access the Page, which simply forces the page to reload. I can see the cookie created for the log-in, but I can see none for the password-protected Page.

    This happens on Internet Explorer, both versions 7 and 8, and only on some machines, though most of them fail. I already tried white-listing the URL, and even letting it accept ALL cookies, to no avail. What may be the cause? If it's got something to do with the question above, please help me! Thanks in advance.

    PS: If you know of another, cookie-free method for simple authentication, please link me to it. Thanks. Oh, and by the way, this is inside an intranet with static class C IPs.

  • How can I work out if a date is on or before today?

    - by Yvonne
    My web application is a library-type system where books have due dates. I have the current date displayed on my page, simply by using this:

        date_default_timezone_set('Europe/London');
        $date = date;
        print $date("d/m/Y");

    I have set 'date' as a variable because I'm not sure if it makes a difference when I use it in the IF statement you're about to see, on my library books page. On this page, I am simply outputting the due dates of the books; many have dates which have not yet reached today's date, and others have dates greater than today's date. Basically, all I want is for the due date to appear bold (or strong) if it has passed today's date (the date displayed by the system). This is what I have and thought would work:

        <? if ($duedate < $date) {
            echo '<td><strong>';
        } else {
            echo '<td>';
        } ?>
        <?php echo $date('d/m/Y', $timestamp);?></strong></td>

    I have declared $timestamp as a variable which converts the date from the default MySQL format to a UK version. Can anyone help me out? I thought this would have been very straightforward!
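
    A minimal sketch of comparing actual dates rather than formatted strings, assuming $duedate arrives from MySQL as 'YYYY-MM-DD' (adjust the parsing if it is already a timestamp): both values are converted to Unix timestamps before the comparison, and the formatted output is derived from the same timestamp.

        <?php
        date_default_timezone_set('Europe/London');

        $duedate = '2010-05-01';                  // example value as it would come from MySQL
        $due     = strtotime($duedate);           // due date as a timestamp
        $today   = strtotime(date('Y-m-d'));      // midnight today, so the time of day is ignored

        if ($due < $today) {
            echo '<td><strong>' . date('d/m/Y', $due) . '</strong></td>';   // overdue: bold
        } else {
            echo '<td>' . date('d/m/Y', $due) . '</td>';
        }
        ?>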
