Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • What are the original reasons for ToString() in Java and .NET?

    - by d.
    I've used ToString() modestly in the past and found it very useful in many circumstances. However, my usage of it would hardly justify putting it in none other than System.Object. My wild guess is that, at some point during the work carried out and the meetings held to come up with the initial design of the .NET Framework, it was decided that it was necessary - or at least extremely useful - to include a ToString() method that would be implemented by everything in the .NET Framework. Does anyone know what the exact reasons were? Am I missing a ton of situations where ToString() proves useful enough to be part of System.Object? What were the original reasons for ToString()? Thanks a lot!

    PS - Again: I'm not questioning the method or implying that it's not useful; I'm just curious to know what makes it SO useful as to be placed in System.Object.

    Side note - Imagine this:

        AnyDotNetNativeClass someInitialObject = new AnyDotNetNativeClass([some constructor parameters]);
        AnyDotNetNativeClass initialObjectFullCopy = AnyDotNetNativeClass.FromString(someInitialObject.ToString());

    Wouldn't this be cool?

    EDIT(1):

    (A) - Based on some answers, it seems that .NET languages inherited this from Java, so I'm adding "Java" to the subject and to the tags as well. If someone knows the reasons why this was implemented in Java, please shed some light!

    (B) - Static hypothetical FromString vs. serialization: sure, but that's quite a different story, right?
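    A minimal C# sketch of what having ToString() on System.Object buys in practice. The Point class here is hypothetical, purely for illustration; the point is that the default implementation (print the type name) and any override are picked up automatically by Console.WriteLine, string concatenation, String.Format, and debuggers:

        using System;

        class Point
        {
            private readonly int x, y;
            public Point(int x, int y) { this.x = x; this.y = y; }

            // Without this override, Console.WriteLine(p) would fall back to
            // Object.ToString() and print the type name, "Point".
            public override string ToString()
            {
                return String.Format("({0}, {1})", x, y);
            }
        }

        class Demo
        {
            static void Main()
            {
                Point p = new Point(3, 4);
                Console.WriteLine(p);            // (3, 4): WriteLine calls ToString()
                Console.WriteLine("p = " + p);   // string concatenation calls it too
            }
        }

    That ubiquity - any API that needs a textual form can call ToString() without knowing the concrete type - is the usual justification. A universal FromString would additionally require every type to define an unambiguous, lossless textual round trip, which is what serialization formalizes.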


  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you?

    I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.

    I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I've realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating that everything I've written in the last five years might have been fundamentally wrong?

    To sum it up: is there any value in knowing more than the API docs tell me?

    EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)


  • Implement threading to prevent UI blocking caused by a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function... precisely the getDirectoryListingAsync() of the File class. This method is supposed to return an object containing the list of files in a specified folder. I found that when calling this method on a directory with a lot of files (in my tests, more than 20k files), after a few seconds there is a block on the UI until the process is completed. I think this method is separated into two main blocks:

    1) get the list of files
    2) create the array with the details of the files

    Step 1 seems to be async (for a few seconds the UI is responsive); then, when the process passes from step 1 to step 2, the UI blocks until the complete event is dispatched. Here's some (simple) code:

        private function checkFiles(dir:File):void {
            if (dir.exists) {
                dir.addEventListener(FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale);
                dir.getDirectoryListingAsync();
                // after this point, for the first seconds the UI responds well (step 1),
                // a few seconds later (step 2) the UI is frozen
            }
        }

        private function listaImmaginiLocale(event:FileListEvent):void {
            // from this point on the UI is responsive again...
        }

    Actually, in my projects there are some functions that perform heavy CPU usage, and to prevent the UI from blocking I implemented a simple function that, after some iterations, waits a moment to give the UI time to refresh:

        private var maxIteration:int = 150000;

        private function sampleFunct(offset:int = 0):void {
            if (offset < maxIteration) {
                // do something

                // call the recursive function using a timeout:
                // if the offset is a multiple of 1000 the function waits 15 millisec,
                // otherwise it is called immediately.
                // 1000 is a random number for the purpose of this example; I usually change
                // the value based on how heavy the function itself is.
                setTimeout(function():void { sampleFunct(++offset); }, (offset % 1000 ? 0 : 15));
            }
        }

    Using this method I get a responsive UI without hurting performance. I'd like to implement it inside the getDirectoryListingAsync() method, but I don't know if that's possible, how I could do it, or where the file to edit or extend is. Any suggestions?


  • Disabling browser print options (headers, footers, margins) from page?

    - by Anthony
    I have seen this question asked in a couple of different ways on SO and on several other websites, but most of the answers are either too specific or out of date. I'm hoping someone can provide a definitive answer here without pandering to speculation.

    Is there a way, either with CSS or JavaScript, to change the default printer settings when someone prints from within their browser? And of course by "prints from their browser" I mean some form of HTML, not PDF or some other plug-in-reliant MIME type.

    Please note: if some browsers offer this and others don't (or if you only know how to do it for some browsers), I welcome browser-specific solutions. Similarly, if you know of a mainstream browser that has specific restrictions against EVER doing this, that is also helpful, but some fairly up-to-date documentation would be appreciated (simply saying "that goes against XYZ's security policy" isn't very convincing when XYZ has made significant changes to said policy in the last three years).

    Finally, when I say "change default print settings" I don't mean forever, just for my page, and I am referring specifically to print margins, headers, and footers. I am very aware that CSS offers the option of changing the page orientation as well as the page margins. One of the many struggles is with Firefox: if I set the page margins to 1 inch, it ADDS this to the half inch it already puts in place.

    I very much want to reduce the usage of PDFs on my client's site, but the infringement on presentation (as well as the lack of reliability) is their main concern.


  • Is there a better way to keep track of session variable creation/access throughout different pages?

    - by Brandon
    Here's what I am working on. On my website I have multiple processes, each one containing multiple steps. Now, in one of the processes, an error-checking routine is executed before proceeding to the next step of that process. A session var is set indicating the error status, and the page will either redirect back to the referrer or display the next page's contents. Now this kind of functionality, I believe, is common throughout web development.

    The issue that is occurring is that session vars are left around and are not being cleaned up properly. At times this introduces undesired behavior. My website is growing, and I find that I am requiring more and more session vars to keep track of different system and error states.

    So I was thinking about creating a kind of "session variable keeper" to keep track of session var usage. The idea is fairly simple. It will have the notion of a context (e.g. the registration process) and allow access to a predefined set of session vars within that context. In addition, the var and context will be paired with an action to proceed to some form of event handling.

    So, if you haven't noticed, I'm new to web development. Any thoughts or comments on the idea that I am proposing would be greatly appreciated. The back-end is written in PHP/MySQL.


  • Memory leakage when using DataTables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records into each of 2 DataTables. I need to do some manipulation and then insert the records into SQL Server in the Manipulate(dt1, dt2) function. I have to do this 15 times, as you can see in the for loop. Now I want to know which would be the more effective approach in terms of memory usage. I've used the first approach; please suggest the better one.

    (1)

        for (int i = 0; i < 15; i++)
        {
            DataTable dt1 = GetInfo(i);
            DataTable dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

    (OR)

    (2)

        DataTable dt1 = new DataTable();
        DataTable dt2 = new DataTable();
        for (int i = 0; i < 15; i++)
        {
            dt1 = null;
            dt2 = null;
            dt1 = GetInfo(i);
            dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

    Thanks, Vix.
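    For what it's worth, a hedged sketch of a third variant, assuming GetInfo/GetData return fresh, independent tables on each call: keep approach (1)'s loop-local scoping, but also release each table's row data explicitly before the next iteration (DataTable implements IDisposable via MarshalByValueComponent):

        // Sketch only; GetInfo, GetData and Manipulate are the question's own methods.
        for (int i = 0; i < 15; i++)
        {
            using (DataTable dt1 = GetInfo(i))
            using (DataTable dt2 = GetData(i))
            {
                Manipulate(dt1, dt2);
                dt1.Clear();  // drop the 30,000 rows eagerly instead of waiting for the GC
                dt2.Clear();
            }
        }

    Setting the variables to null inside the loop, as in (2), buys nothing by itself: the old tables become unreachable either way, and the memory only comes back when the garbage collector runs.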


  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform over Nucleus RTOS. It uses the Nucleus threading system, but it doesn't have support for explicit thread-local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs. The platform doesn't have user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How does the __thread keyword map to explicit thread-local storage? I've read many articles, but nothing clearly answers these basic questions:

    - Will a __thread variable be different for each thread?
    - How do I write to it and read from it?
    - Does each thread have exactly one copy of the variable?

    The following is the pthread-based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            T* value;
            void* owner;
        };

        inline ThreadSpecific() {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific() {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get() {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr) {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How can I make the above code use the __thread way to set and get the specific value? Where/when do the create and delete happen? If this is not possible, how do I write custom pthread_setspecific / pthread_getspecific style APIs? I tried using a C++ global map, indexed uniquely for each thread, and retrieving data from it, but it didn't work well.


  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves:

        QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications.

    For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPUs on the system. There are plenty of other functions in QtConcurrent, like filter(), filteredReduced(), etc. - the standard CompSci map/reduce functions and the like.

    I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework as Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do).

    I'm looking for a generic C++ framework that provides the same or similar high-level primitives that QtConcurrent does. AFAIK boost has nothing like this (I may be wrong though); boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions, so I know this isn't a Qt-only idea. What do you suggest I use?


  • Why is CoRegisterClassObject creating two extra threads?

    - by Stijn Sanders
    I'm trying to fix a problem that has only recently started happening on a number of machines on a VPN. They each run a client application I wrote that exposes a COM automation object. For some strange reason I haven't been able to discover yet, one thread in the application takes up all of the available CPU time, slowing other operations on the machine.

    In observing the application's strange behaviour, I've noticed it's the third thread started, and if I debug on my machine I notice the first call to CoRegisterClassObject creates two extra threads. If the second of these two threads is the one that gets into an infinite loop, I'm not at all sure how to fix this. Where could I check next for what's wrong? Could it have been started by one of the recent patches rolled out by Microsoft this last 'patch Tuesday'?

    I had a go with Process Explorer to extract a stack trace of the thread:

        ntoskrnl.exe!ExReleaseResourceLite+0x1a3
        ntoskrnl.exe!PsGetContextThread+0x329
        WLDAP32.dll!Ordinal325+0x1231
        WLDAP32.dll!Ordinal325+0x129e
        WLDAP32.dll!Ordinal325+0x1178
        ntdll.dll!LdrInitializeThunk+0x24
        ntdll.dll!LdrShutdownThread+0xe9
        kernel32.dll!ExitThread+0x3e
        kernel32.dll!FreeLibraryAndExitThread+0x1e
        ole32.dll!StringFromGUID2+0x65d
        kernel32.dll!GetModuleFileNameA+0x1ba


  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below, using method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by selecting only specific columns from each of the four tables. But the compiler is giving me the following error:

        'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' does not contain a definition for 'Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection<EventDoc>' could be found.

    I'm sure this is something easy, but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection.

        var qry = context.Event
            .Include("EventDoc.Doc.DocUsage")
            .Select(n => new
            {
                n.EventDate,
                n.EventDoc.Doc.Filename,  // <= COMPILER ERROR HERE
                n.EventDoc.Doc.DocUsage.Usage
            })
            .ToList();

        EventDoc ed;
        Doc d = ed.Doc;            // <= NO COMPILER ERROR, SO I KNOW MY MODEL'S CORRECT
        DocUsage du = d.DocUsage;

    Thx, Mark
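    The error text suggests EventDoc is a collection navigation property on Event (an EntityCollection<EventDoc>), so there is no single .Doc to project from it. A hedged sketch of flattening the collection with SelectMany instead, keeping the question's property names:

        var qry = context.Event
            .SelectMany(e => e.EventDoc, (e, ed) => new
            {
                e.EventDate,
                ed.Doc.Filename,
                ed.Doc.DocUsage.Usage
            })
            .ToList();

    This yields one flat row per Event/Doc pair. With an explicit projection like this, the Include call becomes redundant: EF only needs Include when materializing whole entities.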


  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures, with heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might consider using Fluent NHibernate or SubSonic, as I've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the usage of the temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables.

    Considering the 2 choices I want to decide between, which of the 2 is the better route to go, and why? If my choices are extremely idiotic, please provide alternatives.

    Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)
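    Whichever ORM wins, one hedged middle path is to leave the temp-table procs untouched and map their results to plain classes by hand, which sidesteps the designer's metadata generation (the part that chokes on temp tables). A sketch using the EF4-era ObjectContext.ExecuteStoreQuery; the proc, parameter, and result shape here are hypothetical:

        using System.Data.SqlClient;
        using System.Linq;

        // Hypothetical shape of one proc's result set.
        public class OrderSummary
        {
            public int OrderId { get; set; }
            public decimal Total { get; set; }
        }

        // Columns are mapped to properties by name, so no designer-generated
        // return type (and no temp-table metadata sniffing) is involved.
        var rows = context.ExecuteStoreQuery<OrderSummary>(
            "EXEC dbo.GetOrderSummaries @year",
            new SqlParameter("@year", 2010)).ToList();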


  • How to copy files without slowing down my app?

    - by Kevin Gebhardt
    I have a bunch of little files in my assets which need to be copied to the SD card on the first start of my app. The copy code I got from here, placed in an IntentService, works like a charm. However, when I start to copy many little files, the whole app gets incredibly slow (I'm not really sure why, by the way), which is a really bad experience for the user on first start. Since I saw other apps running normally during that time, I tried to start a child process for the service, which didn't work, as I can't access my assets from another process, as far as I understood.

    Has anybody out there an idea how to
    a) copy the files without blocking my app,
    b) get through to my assets from a private process (process=":myOtherProcess" in the Manifest), or
    c) solve the problem in a completely different way?

    Edit: To make this clearer: the copying already takes place in a separate thread (started automatically by IntentService). The problem is not to separate the task of copying, but that the copying in a dedicated thread somehow affects the rest of the app (e.g. blocking too many app-specific resources?) but not other apps (so it's not blocking the whole CPU or something).

    Edit 2: Problem solved. It turns out there wasn't really a problem; see my answer below.


  • Programs hang during socket interaction

    - by herrturtur
    I have two programs, sendfile.py and recvfile.py, that are supposed to interact to send a file across the network. They communicate over TCP sockets. The communication is supposed to go something like this:

        sender ===== filename =====> receiver
        sender <===== 'ok' ========= receiver
            or
        sender <===== 'no' ========= receiver

        if ok:
        sender ====== file ========> receiver

    The sender and receiver code is here:

    Sender:

        import sys
        from jmm_sockets import *

        if len(sys.argv) != 4:
            print "Usage:", sys.argv[0], "<host> <port> <filename>"
            sys.exit(1)

        s = getClientSocket(sys.argv[1], int(sys.argv[2]))

        try:
            f = open(sys.argv[3])
        except IOError, msg:
            print "couldn't open file"
            sys.exit(1)

        # send filename
        s.send(sys.argv[3])

        # receive 'ok'
        buffer = None
        response = str()
        while 1:
            buffer = s.recv(1)
            if buffer == '':
                break
            else:
                response = response + buffer

        if response == 'ok':
            print 'receiver acknowledged receipt of filename'
            # send file
            s.send(f.read())
        elif response == 'no':
            print "receiver doesn't want the file"

        # cleanup
        f.close()
        s.close()

    Receiver:

        from jmm_sockets import *

        s = getServerSocket(None, 16001)
        conn, addr = s.accept()

        buffer = None
        filename = str()

        # receive filename
        while 1:
            buffer = conn.recv(1)
            if buffer == '':
                break
            else:
                filename = filename + buffer

        print "sender wants to send", filename, "is that ok?"
        user_choice = raw_input("ok/no: ")

        if user_choice == 'ok':
            # send ok
            conn.send('ok')

            # receive file
            data = str()
            while 1:
                buffer = conn.recv(1)
                if buffer == '':
                    break
                else:
                    data = data + buffer
            print data
        else:
            conn.send('no')

        conn.close()

    I'm sure I'm missing something here, in the sense of a deadlock, but I don't know what it is.


  • Flash Player: Any remedy for the stale video image data problem (in a reused NetStream object)?

    - by amn
    Has anyone experienced stale stills of a previous playback in a reused NetStream object? If so, what are the workarounds, other than re-creating the object (which eats performance and time)?

    It is hard to reuse NetStream objects because of what is, in my opinion, a fundamental issue with them: when you 'close' a playing stream and at a later point issue a 'play' call on it again with a different name, the stream appears to still contain a stale image lingering from the previous playback, and this is of course displayed in the Video object for a moment - the moment, I assume, it takes for new stream data to become available from the server.

    Because of this behavior, to improve my users' visual experience, I simply discard a NetStream object after a playback session, assign a new NetStream object to the same variable, set it up, and play something else. It appears to work - no stale image - but what bugs me is that it's a workaround and costs performance (constructing and setting up the object again - event listeners and 'client' delegates and more memory usage - NetStream objects are not garbage collected immediately; it takes some time). It would be really nice to REALLY be able to reuse a stream. I am thinking of something akin to the Video.clear() method, but for the NetStream class. Am I missing something?


  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder - yes, silly me. After a while, I managed to get all the .jar files where they belong, and it comes to a whopping 12.5MB, and there are more than 30 of them! Which concerns me, though it probably (and hopefully) shouldn't.

    How does Java operate in terms of these JAR files? They do take up quite a bit of hard drive space, taking into account that it's compressed, compiled code, so it could really quickly fill up a lot of RAM, and in an instant. My questions are:

    1. Does Java load an entire .jar file into memory when, for instance, a class in that .jar is instantiated? What about stuff in the .jar that never gets used?

    2. Do .jars get cached somehow, for optimized application performance?

    3. When a single .jar is loaded, I understand the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request. Is this assumption correct?

    4. When using Spring - I'm thinking of all those fiddly .jars I had to include - wouldn't I be better off just using native Java, with at least an ORM solution like Hibernate? So far, Spring has just taken extra time configuring, extra hard drive space, and extra memory and CPU consumption, so I'm concerned that the framework will cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?


  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines belonging to people in our group. I was successful on two machines; the solution failed to build on my machine.

    The build on my machine encountered two problems:

    1. During a simple build, creation of the biggest static library (about 522MB in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".

    2. A full solution rebuild creates this library, but when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80MB of physical memory and 250MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500MB physical).

    There is plenty of hard drive space on my machine, and the file system is NTFS. All three of our systems are similar - Core2 Quad processors, 4GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard, and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", but I am not sure how I should fight this.

    The only idea I have left is reinstalling the OS, but I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks


  • Android - Memory leak when dynamically building UI with image resource backgrounds

    - by Rich
    I have an Activity that I swear is leaking memory. The app I'm working on does a lot with images, so I've had to be pretty stingy with memory when working directly with Bitmaps. I added an Activity, and now if you use this new Activity it basically puts me over the edge with memory usage and I end up throwing the "Bitmap exceeds VM budget" exception. If you never launch this Activity, everything is as smooth as it was previously.

    I started reading about memory leaks, and I think I have a situation similar to what is described in the article in the Android docs. I'm dynamically creating a bunch of ImageViews, setting a background Drawable from the resources on each, and adding an OnClickListener as well. I imagine I have to do some cleanup when the Activity hits onPause in its lifecycle, but I'd like to know specifically what the correct way is. Here is the code that should demonstrate the objects I'm working with:

        LinearLayout templateContainer;
        . . .
        ImageView imgTemplatePreview = (ImageView) item.findViewById(R.id.imgTemplatePreview);
        . . .
        imgTemplatePreview.setBackgroundDrawable(getResources().getDrawable(previewId));
        imgTemplatePreview.setOnClickListener(imgClick);
        templateContainer.addView(item);


  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely: what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal.

    The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared, 3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared, 3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared, 9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...)

    This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons (type independence and the resulting errors) of macros... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!


  • Namespace scoped aliases for generic types in C#

    - by TN
    Let's have the following example:

        public class X { }
        public class Y { }
        public class Z { }

        public delegate IDictionary<Y, IList<Z>> Bar(IList<X> x, int i);

        public interface IFoo
        {
            // ...
            Bar Bar { get; }
        }

        public class Foo : IFoo
        {
            // ...
            public Bar Bar
            {
                get
                {
                    return null; //...
                }
            }
        }

        void Main()
        {
            IFoo foo; //= ...
            IEnumerable<IList<X>> source; //= ...
            var results = source.Select(foo.Bar);
        }

    The compiler says:

        The type arguments for method 'System.Linq.Enumerable.Select<TSource, TResult>(System.Collections.Generic.IEnumerable<TSource>, System.Func<TSource, int, TResult>)' cannot be inferred from the usage. Try specifying the type arguments explicitly.

    That's because it cannot convert Bar to Func<IList<X>, int, IDictionary<Y, IList<Z>>>. It would be great if I could create namespace-scoped aliases for generic types in C#. Then I would define Bar not as a delegate; rather, I would define it as a namespace-scoped alias for Func<IList<X>, int, IDictionary<Y, IList<Z>>>:

        public alias Bar = Func<IList<X>, int, IDictionary<Y, IList<Z>>>;

    I could then also define a namespace-scoped alias for e.g. IDictionary<Y, IList<Z>>. And if used appropriately :), it would make the code more readable. Right now I have to inline the generic types, and the real code is not very readable :(

    Have you run into the same trouble? :) Is there any good reason why it is not in C# 3.0? Or is there no good reason - is it just a matter of money and/or time?

    EDIT: I know that I can use using, but it is not namespace-based - not so convenient for my case.
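    For comparison, a hedged sketch of the closest thing C# 3.0 does offer: a file-scoped using alias for the closed constructed type (it must be repeated per file and spelled with fully qualified names), plus a lambda so Select gets a System.Func it can infer from. The namespace Demo is assumed here:

        using System.Collections.Generic;
        using System.Linq;
        // File-scoped, not namespace-scoped: every file that wants it repeats this.
        using BarFunc = System.Func<
            System.Collections.Generic.IList<Demo.X>, int,
            System.Collections.Generic.IDictionary<Demo.Y, System.Collections.Generic.IList<Demo.Z>>>;

        namespace Demo
        {
            public class X { }
            public class Y { }
            public class Z { }

            static class Program
            {
                static void Main()
                {
                    IEnumerable<IList<X>> source = new List<IList<X>>();
                    BarFunc bar = (x, i) => null;  // stand-in for foo.Bar

                    // A lambda (or bar itself, since BarFunc IS the Func type)
                    // gives Select something it can infer TSource/TResult from.
                    var results = source.Select((x, i) => bar(x, i)).ToList();
                }
            }
        }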


  • 3 questions about JSON-P format.

    - by Tristan
    Hi, I have to write a script which will be hosted on different domains. This script has to get information from my server. So, a stackoverflow user told me that I have to use the JSON-P format, which, after research, is what I'm going to do. (The data provided in JSON-P is for displaying some information hosted on my server on other websites.)

    1. How do I output JSON-P from my server? Is it the same as the json_encode function from PHP?

    2. How do I design the tree pattern for the output JSON-P (you know, like: ({"name": "foo", "id": "xxxxx", "blog": "http://xxxxxx.com"});)? Can I steal this from my XML output? (http://bit.ly/9kzBDP)

    3. Each time a visitor browses a website on which my widget is, it'll make a request to my server, requesting the JSON-P data to display on the client side. It'll increase the CPU load dramatically (1 visitor on the website who has the script = 1 SQL request on my server to output data), so is there any way of 'caching' the JSON-P output to refresh it only once or twice a day and store it in a 'file' (with which extension?). BUT on the other hand I would say that requesting the JSON-P data directly (without caching it) is a plus, because websites which integrate the script only want to display THEIR information and not the whole data. So, making a script with something like that:

        jQuery.getJSON("http://www.something.com/json-p/outpout?filter=5&callback=?", function(data) {
            ................
        });

    where filter = the information the website wants to display.

    What do you think? Thank you very much ;)


  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already, and the I/O device (a local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed out. I have more cores available, and in the future will likely have even more cores.

    So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time goes). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel.

    Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing (other than liquid nitrogen cooled overclocking)? Or is it inherently non-parallelizable?
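    The generic construction that makes any block hash parallelizable is tree (Merkle-style) hashing: split the file into fixed-size chunks, hash the chunks on separate cores, then hash the concatenation of the chunk digests. A hedged C# sketch, with the caveat that the result is NOT a standard SHA-256 of the file, but a custom construction both sides must agree on:

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static class TreeHash
        {
            const int ChunkSize = 4 * 1024 * 1024;  // 4 MiB leaves

            public static byte[] HashFile(string path)
            {
                // Sketch: reads the whole file up front (capped at 2 GB) to keep the
                // code short. A production version would pipeline reads against hashing.
                byte[] data = File.ReadAllBytes(path);
                int count = (data.Length + ChunkSize - 1) / ChunkSize;

                // Hash each chunk independently; this is the part that scales across
                // cores, unlike a single sequential hashing pass over the file.
                byte[][] digests = new byte[count][];
                Parallel.For(0, count, i =>
                {
                    int offset = i * ChunkSize;
                    int len = Math.Min(ChunkSize, data.Length - offset);
                    using (SHA256 sha = SHA256.Create())
                        digests[i] = sha.ComputeHash(data, offset, len);
                });

                // Root digest = hash of the concatenated leaf digests.
                using (SHA256 sha = SHA256.Create())
                    return sha.ComputeHash(digests.SelectMany(d => d).ToArray());
            }
        }

    Skein, for instance, specifies an optional tree-hashing mode along these lines.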


  • Small OpenMP program freezes sometimes (gcc, C, Linux)

    - by osgx
    Hello. I just wrote a small OpenMP test, and it does not work correctly all the time:

        #include <omp.h>

        int main()
        {
            int i, j = 0;
            #pragma omp parallel
            for (i = 0; i < 1000; i++)
            {
                #pragma omp barrier
                j += j ^ i;
            }
            return j;
        }

    The usage of j for writing from all threads is incorrect in this example, BUT there should only be a nondeterministic value of j; instead, I get a freeze. Compiled with:

        gcc-4.3.1 -fopenmp a.c -o gcc -static

    Run on a 4-core x86 Core2 Linux server:

        $ ./gcc

    and got a freeze (sometimes; about 1 freeze per 4-5 fast runs). Strace:

        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] <... futex resumed> ) = 0
        [pid 13119] futex(0x80d3014, FUTEX_WAIT, 2, NULL <unfinished ...>
        [pid 13120] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13120] futex(0x80cd798, FUTEX_WAIT, 1, NULL <unfinished ...>
        [pid 13109] <... futex resumed> ) = 0
        [pid 13109] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13109] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13118] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3014, FUTEX_WAKE, 1) = 1
        [pid 13119] <... futex resumed> ) = 0
        [pid 13118] futex(0x80d3020, FUTEX_WAIT, 251, NULL <unfinished ...>
        [pid 13119] futex(0x80d3014, FUTEX_WAKE, 1) = 0
        [pid 13119] futex(0x80d3020, FUTEX_WAIT, 251, NULL <freeze>

    Why do I have a freeze (deadlock)?


  • Javascript inheritance: call super-constructor or use prototype chain?

    - by Jeremy S.
    Hi folks, quite recently I read about JavaScript call usage in MDC (https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/call), but there is one line of the example shown below that I still don't understand. Why are they using inheritance here like this:

        Prod_dept.prototype = new Product();

    Is this necessary? Because there is a call to the super-constructor in Prod_dept() anyway, like this:

        Product.call(this, name, value);

    Is this just common practice? When is it better to use call for the super-constructor, and when to use the prototype chain?

        function Product(name, value) {
            this.name = name;
            if (value >= 1000)
                this.value = 999;
            else
                this.value = value;
        }

        function Prod_dept(name, value, dept) {
            this.dept = dept;
            Product.call(this, name, value);
        }

        Prod_dept.prototype = new Product();

        // since 5 is less than 1000, value is set
        cheese = new Prod_dept("feta", 5, "food");

        // since 5000 is above 1000, value will be 999
        car = new Prod_dept("honda", 5000, "auto");

    Thanks for making things clearer.


  • Create a model that switches between two different states using Temporal Logic?

    - by NLed
    I'm trying to design a model that can manage different requests for different water sources.

    Platform: Mac OS X, using the latest Python with the TuLiP module installed.

    For example:

    Definitions:
    - Two water sources: w1 and w2
    - 3 different requests: r1, r2, and r3

    Specifications:
    - Water 1 (w1) is preferred, but w2 will be used if w1 is unavailable.
    - Water 2 is only used if w1 is depleted.
    - r1 has the maximum priority. If all entities request simultaneously, r1's supply must not fall below 50%.

    The water sources are not discrete but continuous, which will increase the difficulty of creating the model. I can do a crude discretization of the water levels, but I prefer finding a model for the continuous state first. So how do I start doing that?

    Some of my thoughts:
    - Create a matrix W where w1, w2 ∈ W
    - Create a matrix R where r1, r2, r3 ∈ R
    - or leave all variables singular without putting them in a matrix

    I'm not an expert in coding, so that's why I need help. I'm not sure what the best way to start tackling this problem is. I am only interested in the model, or a code sample of how this can be put together.

    Edit: Now imagine I do a crude discretization of the water sources to have w1 = [0...4] and w2 = [0...4] for 0, 25, 50, 75, 100 percent respectively. (== means implies)

    Usage of water sources:

        if w1[0] == w2[4]  -- meaning if water source 1 has 0%, then use 100% of water source 2, etc.
        if w1[1] == w2[3]
        if w1[2] == w2[2]
        if w1[3] == w2[1]
        if w1[4] == w2[0]

        r1 = r2 = r3 = [0,1]  -- 0 means request OFF and 1 means request ON

    Now what model can be designed that will give each request 100% water depending on the values of w1 and w2? (The w1 and w2 values are uncontrollable, so I cannot define specific values, but 0...4 is used for simplicity.)


  • jQuery UI dialog: open multiple dialog boxes using the same class on the button and the content div

    - by MichaelAntoni
    Hello there, I want to open multiple dialog boxes by using the same class on both the button and the content div. The below works, but only the first time:

        jQuery('.helpDialog').hide();
        jQuery('.helpButton').click(function() {
            jQuery(this).next('.helpDialog').dialog({
                autoOpen: true,
                title: 'Help',
                width: 500,
                height: 300,
                position: [180, 10],
                draggable: true,
                resizable: false,
                modal: false
            });
            return false;
        });

    We know the reason for this (http://blog.nemikor.com/2009/04/08/basic-usage-of-the-jquery-ui-dialog/): "the second call is ignored because the dialog has already been instantiated on that element."

    But when I try to fix that problem with the code below, the dialog box no longer opens. Can anyone help? Thanks in advance.

        jQuery('.helpDialog').hide();
        jQuery(function() {
            jQuery('.helpDialog').dialog({
                autoOpen: false,
                modal: true,
                title: 'Info',
                width: 600,
                height: 400,
                position: [200, 0],
                draggable: false
            });
        });
        jQuery('.helpButton').click(function() {
            jQuery(this).next('.helpDialog').dialog('open');
            return false;
        });

