Search Results

Search found 7891 results on 316 pages for 'multi layer perceptron'.

Page 273 of 316

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (like in trading systems). I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message off the queue and returns null if there are no messages; this method is marked as "synchronized" for the next step. I have created a class that knows how to process a message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" loop and tries to pop a message all the time. If a message was returned, the thread calls the MessageHandler class, and when it's done, it asks for another message. Now, I have configured the application to open 10 MessageListeners to allow multiple messages to be processed at once, so I have 10 threads that are looping all the time. Is that a good design? Can anyone point me to books or sites on how to handle such a scenario? Thanks, Ronny
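
    The busy-wait loop in that design can be avoided with a blocking queue, so idle listeners sleep instead of spinning. Below is a minimal sketch of that idea using java.util.concurrent; apart from the MessageHandler-style processing step, the class and method names are illustrative, not taken from the question.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class MessageProcessor {
            // Bounded queue: producers block when the system is overloaded (back-pressure).
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
            private final ExecutorService listeners = Executors.newFixedThreadPool(10);

            public void start() {
                for (int i = 0; i < 10; i++) {
                    listeners.submit(() -> {
                        try {
                            while (!Thread.currentThread().isInterrupted()) {
                                String message = queue.take(); // blocks instead of spinning
                                handle(message);               // the MessageHandler work goes here
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // exit cleanly on shutdown
                        }
                    });
                }
            }

            // Called by the receiving service for each incoming message.
            public void receive(String message) throws InterruptedException {
                queue.put(message);
            }

            private void handle(String message) { /* processing logic */ }

            public void shutdown() { listeners.shutdownNow(); }
        }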

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:
    1. Type one command to run the test suite for ANY project I find on Github.
    2. Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
    3. Build gems from the ground up with Autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your test environment setup, as an SO reader and Github project committer, using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop it if desired? What are the guys writing the Paperclip tests and the Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

  • How to call a method from another class that's been instantiated within the current class

    - by Pavan
    My screen has a few views: viewX is a video screen filling the top of the screen; viewY is a custom UIView I created, sitting in the top-right corner on top of viewX, and it contains a method I would like to call that toggles the hidden property of this view (when it hides, a little button is placed in its spot in the top-right corner on top of viewX, the video layer); viewZ is a view below, containing many square views (thumbnails). My question is: I don't know how to register for touch events so that a touch is recognised no matter which view of the screen the user touches. At the moment I'm handling the touch events for each view inside that view, so all works well. However, what I'm trying to do is that when the user taps anywhere else on the screen but on viewY, viewY should disappear by calling that method in the viewY class. This viewY class is instantiated and has no xib file attached to it; the UIView is created programmatically in the viewY class. This whole class for viewY behaviour is instantiated in viewX, the video view. My boss says add delegates, although I have no clue how to do that... any help? Is there any way I can just make it really simple and be able to say REMOVE VIEW no matter which class I'm calling from? Also I've seen other people achieve this by using those funky arrows (-> ... <- etc.), although I'm not sure if that's what I need or how to implement such a thing. Ah, I think I've made my question quite complicated, but I really mean it to be a simple one, and I know it can be done in an easy way!

  • Could someone explain gtk2hs drag and drop to me, the listDND.hs demo just isn't doing it for me?

    - by Tom Carstens
    As the title says, I just don't get DND (or rather, I understand the concept and I understand the order of callbacks, I just don't understand how to set up DND for actual usage). I'd like to say that I've done DND stuff before in C, but considering I never really got that working... So I'm trying (and mostly succeeding, save DND) to write a text editor (using gtksourceview, because it has built-in code highlighting). Reasons are below if you want them. Anyway, there's not really a good DND demo or tutorial available for gtk2hs (listDND.hs just doesn't translate well in my head). So what I'm asking for is code that demonstrates simple DND on a window widget (for example). Ideally, it should accept drops from other windows (such as Thunar) and print out the information in string form. I think I can take it from there... Reasons: I'm running a fairly lightweight setup, dwm and a few gtk+2 programs. I really don't want to have to pull in gtk+3 to get the current gedit from the repos (Arch Linux). Currently, I'm using geany for all of my text editing needs; however, geany is a bit heavy for editing config files. Further, geany doesn't care for my terminal of choice (st), so I don't even get the benefit of using it as an IDE. Meaning I'd like a lightweight text editor with syntax highlighting. I could configure emacs or vim or something, but that seems to me to be more of a hack than a proper solution. Thus my project was born. It's mostly working (aside from DND, all that's left is proper multi-tab support). Admittedly, I could probably work this out if I wrote it in C, but there isn't that much state in a text editor, so Haskell's been working fine with almost no need for mutable variables.

  • Autoscale Font in a TextBox Control so that it's as big as possible and still fits in the text area bounds

    - by blak3r
    I need a TextBox or some type of multi-line Label control which will automatically adjust the font size to make it as large as possible and yet have the entire message fit inside the bounds of the text area. I wanted to see if anyone had implemented a user control like this before developing my own. Example application: take a TextBox which will be half of the area on a Windows form. When a message comes in, which will be approximately 100-500 characters, it will put all the text in the control and set the font as large as possible. An implementation which uses Mono-supported .NET libraries would be a plus. Thanks in advance. If no one has implemented a control like this already: if someone knows how to test whether a given text completely fits inside the text area, that would be useful for when I roll my own control. Edit: I ended up writing an extension to RichTextBox. I will post my code shortly once I've verified that all the kinks are worked out.

  • Running Cargo From Maven antrun Plugin

    - by Frank
    I have a Maven (multi-module) project creating some WAR and EAR files for JBoss AS 7.1.x. For one purpose, I need to deploy one generated EAR file of one module to a fresh JBoss instance and run it, make some REST web service calls against it and stop JBoss. Then I need to package the results of these calls that were written to the database. Currently, I am trying to use Cargo and the Maven antrun plugin to perform this task. Unfortunately, I cannot get the three (Maven, antrun and Cargo) to play together. I don't have the uberjar that is used in the Ant examples of Cargo. How can I configure the antrun task so that the Cargo Ant tasks can create, start and deploy to my JBoss? Ideally the one unpacked and configured by the cargo-maven2-plugin in another phase? Or is there a better way to achieve my goal of creating the database? I cannot really use the integration-test phase, as it is executed after the package phase. So I plan to do it all in the compile phase using antrun.

  • Error when creating an image from a UIView

    - by Raphael Pinto
    Hi, I'm creating an image from a view with the following code:

        UIGraphicsBeginImageContext(myView.bounds.size);
        [myView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);

    The image is created in the user's album, but in the console I get this:

        2010-05-11 10:17:17.974 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database]
        2010-05-11 10:17:18.014 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c)
        2010-05-11 10:17:18.077 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database]
        2010-05-11 10:17:18.087 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c)
        2010-05-11 10:17:18.091 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database]
        2010-05-11 10:17:18.095 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c)
        2010-05-11 10:17:18.111 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database]
        2010-05-11 10:17:18.115 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c)
        2010-05-11 10:17:18.120 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database]
        2010-05-11 10:17:18.124 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c)
        2010-05-11 10:17:18.129 myApp[3875:1807] sqlite error 1 [cannot commit - no transaction is active]
        2010-05-11 10:17:18.133 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55

    I don't know where the problem comes from. Can you help me?

  • Threads are blocked in malloc and free, virtual size

    - by Albert Wang
    Hi, I'm running a 64-bit multi-threaded program on Windows Server 2003 (x64). It runs into a case where some of the threads seem to be blocked in the malloc or free function forever. The stack traces are as follows:

        ntdll.dll!NtWaitForSingleObject() + 0xa bytes
        ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes
        ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes
        ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes
        ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes
        ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes
        ntdll.dll!RtlAllocateHeap() - 0x1711a bytes
        MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C
        MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++

        ntdll.dll!NtWaitForSingleObject()
        ntdll.dll!RtlpWaitOnCriticalSection()
        ntdll.dll!RtlEnterCriticalSection()
        ntdll.dll!RtlpDebugPageHeapFree()
        ntdll.dll!RtlDebugFreeHeap()
        ntdll.dll!RtlFreeHeapSlowly()
        ntdll.dll!RtlFreeHeap()
        MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C

    BTW, the param values passed to the new operator are not correct here, maybe due to optimization. Also, at the same time, I found in Process Explorer that the virtual size of this program is 10GB, but the private bytes and working set are very small (<2GB). We do have some threads using VirtualAlloc, but in a way that commits the memory in the call, and these threads are not blocked:

        m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE);
        ......
        VirtualFree(m_pBuf, 0, MEM_RELEASE);

    This looks strange to me: it seems a lot of virtual space is reserved but not committed, and malloc/free is blocked on a lock. I'm wondering if there's any corruption in the memory/objects, so I plan to turn on gflags with pageheap to troubleshoot this. Does anyone have similar experience with this? Could you share it with me so I may get more hints? Thanks a lot!

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks; the output file has to be in the order o0o1o2... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
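
    Whatever the canonical name turns out to be, the structure itself is small. Below is a minimal sketch of such a reorder buffer in Java (class and method names are illustrative): worker threads submit finished blocks in any order, and a single choke point writes out whatever has become contiguous. In a memory-constrained real system you would additionally bound the pending map to apply back-pressure to the workers.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.HashMap;
        import java.util.Map;

        /** Accepts blocks in any order, writes them to the stream in sequence order. */
        public class ReorderingWriter {
            private final OutputStream out;
            private final Map<Long, byte[]> pending = new HashMap<Long, byte[]>();
            private long nextToWrite = 0;

            public ReorderingWriter(OutputStream out) {
                this.out = out;
            }

            /** Called by a worker thread when block number 'seq' is finished. */
            public synchronized void submit(long seq, byte[] block) throws IOException {
                pending.put(seq, block);
                // Drain every block that is now contiguous with what was already written.
                byte[] next;
                while ((next = pending.remove(nextToWrite)) != null) {
                    out.write(next);
                    nextToWrite++;
                }
            }
        }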

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algo... I do really need that file). Now that it has been computed once, that 800MB of data is read only. I cannot hold it in memory. As of now it is one big huge 800MB file, but splitting it into smaller files ain't a problem if it can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800MB of disk-based data. btw in case people wonder this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
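
    One NIO option that does not require mapping the whole file is a positional read on a FileChannel, which several threads can share because it never moves the channel's own position. A minimal sketch follows (class and method names are illustrative, not from the question); whether this beats a memory-mapped file in practice depends on the access pattern and on how much of the file the OS page cache can hold, so both are worth benchmarking.

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class RandomTableReader {
            private final FileChannel channel;

            public RandomTableReader(String path) throws IOException {
                this.channel = new RandomAccessFile(path, "r").getChannel();
            }

            /** Reads the 4-byte value stored at byte offset 'offset'. Positional reads do not
                touch the channel's position, so multiple threads can call this concurrently. */
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    int n = channel.read(buf, offset + buf.position());
                    if (n < 0) {
                        throw new IOException("Unexpected end of file at offset " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }

            public void close() throws IOException {
                channel.close();
            }
        }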

  • How to use interfaces in exception handling

    - by vikp
    Hi, I'm working on the exception handling layer for my application. I have read a few articles on interfaces and generics. I have used inheritance quite a lot before and I'm comfortable in that area. I have a very brief design that I'm going to implement:

        public interface IMyExceptionLogger
        {
            public void LogException();
            // Helper methods for writing into files, db, xml
        }

    I'm slightly confused about what I should be doing next.

        public class FooClass: IMyExceptionLogger
        {
            // Fields
            // Constructors
        }

    Should I implement the LogException() method within FooClass? If yes, then I'm struggling to see how I'm better off using an interface instead of the concrete class... I have a variety of classes that will make use of that interface, but I don't want to write an implementation of that interface within each class. At the same time, if I implement the interface in one class and then use that class in different layers of the application, I will still be using concrete classes instead of interfaces, which is bad OO design... I hope this makes sense. Any feedback and suggestions are welcome. Please notice that I'm not interested in using log4net or its competitors, because I'm doing this to learn. Thank you. Edit: Wrote some more code. So I will implement a variety of loggers with this interface, i.e. DBExceptionLogger, CSVExceptionLogger, XMLExceptionLogger etc. Then I will still end up with concrete classes that I will have to use in different layers of my application.
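
    The usual way out of "I still end up with concrete classes" is to let every layer depend only on the interface and choose the concrete logger once, at composition time, passing it in through the constructor. A minimal sketch of that shape, written here in Java with illustrative names:

        interface ExceptionLogger {
            void logException(Exception e);
        }

        class CsvExceptionLogger implements ExceptionLogger {
            public void logException(Exception e) {
                // append a line to a CSV file
            }
        }

        class DbExceptionLogger implements ExceptionLogger {
            public void logException(Exception e) {
                // insert a row into an error table
            }
        }

        /** A class in any layer: it never names a concrete logger type. */
        class OrderService {
            private final ExceptionLogger logger;

            OrderService(ExceptionLogger logger) {
                this.logger = logger; // injected once, at composition time
            }

            void placeOrder() {
                try {
                    // business logic
                } catch (Exception e) {
                    logger.logException(e);
                }
            }
        }

        // Composition root: the only place that mentions a concrete implementation.
        // OrderService service = new OrderService(new CsvExceptionLogger());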

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked with stress/load testing our company web site out of the blue and know nothing about doing so. Every search I make on Google for "how to load test a web site" just comes back with various companies and software to physically do the load testing. For now I'm more interested in how to actually go about setting up a load test: what I should take into account prior to load testing, which pages within my site I should be testing load against, and what things I'm going to want to monitor when doing the test. Our web site is on a multi-tier system complete with a separate database server (IIS 7 web server, SQL Server 2000 db). I imagine I'd want to monitor both the web server and the database server during testing; however, when setting up scenarios to load test the web server, I'd have to use pages that query the database to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously, or are they done as separate tests? As you can see I'm pretty clueless as to the whole operation, so any insight as to how to go about this would be very helpful. FYI I have been tinkering with Pylot and was able to create and run a scenario against our site, but I'm not sure what I should be looking for in the results or if the scenario I created is even a scenario worth measuring for our site. Thanks in advance.

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information. I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:
    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
    2) The page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3) The page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list.
    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
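
    For reference, the access decision described above lives entirely on the foreign server: it inspects the Origin request header and echoes back an Access-Control-Allow-Origin response header. Those header names come from the CORS spec; the rest of this sketch, written as a Java servlet filter, is purely illustrative of that server-side decision.

        import java.io.IOException;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CorsFilter implements Filter {
            // The foreign site's own policy: which origins may read its resources.
            private final Set<String> allowedOrigins =
                    new HashSet<String>(Arrays.asList("http://abc.com", "http://xyz.com"));

            public void init(FilterConfig config) {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                String origin = request.getHeader("Origin");
                if (origin != null && allowedOrigins.contains(origin)) {
                    // Without this header the browser refuses to expose the response to the page.
                    response.setHeader("Access-Control-Allow-Origin", origin);
                }
                chain.doFilter(req, res);
            }

            public void destroy() {}
        }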

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue with converting from a multi-byte string to a wide character string. The code needs to be cross-platform compatible, so I am using mbstowcs and wcstombs as much as possible. I am currently working on a Win32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <locale.h>
        #include <wchar.h>
        #include <windows.h>

        int main(int argc, char** argv)
        {
            wchar_t *wcsVal = NULL;
            char *mbsVal = NULL;

            /* Get the current code page, in my case 932, runs only on windows */
            TCHAR szCodePage[10];
            int cch = GetLocaleInfo(GetSystemDefaultLCID(),
                                    LOCALE_IDEFAULTANSICODEPAGE,
                                    szCodePage, sizeof(szCodePage));

            /* verify locale is set */
            if (setlocale(LC_CTYPE, "") == 0) {
                fprintf(stderr, "Failed to set locale\n");
                return 1;
            }

            mbsVal = argv[1];

            /* validate multibyte string and convert to wide character */
            int size = mbstowcs(NULL, mbsVal, 0);
            if (size == -1) {
                printf("Invalid multibyte\n");
                return 1;
            }

            wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1));
            if (wcsVal == NULL) {
                printf("memory issue \n");
                return 1;
            }

            mbstowcs(wcsVal, mbsVal, size + 1);
            wprintf(L"%ls \n", wcsVal);
            return 0;
        }

    At the end of execution, the wide character string does not contain the converted data. I believe that there is an issue with the code page settings, because when I use MultiByteToWideChar and have the current code page passed in, e.g.:

        MultiByteToWideChar(CP_ACP, 0, mbsVal, -1, wcsVal, size + 1);

    in place of the mbstowcs calls, the conversion succeeds. My question is, how do I use the generic mbstowcs call instead of the MultiByteToWideChar call?

  • Java generics question with wildcards

    - by Sean
    Just came across a place where I'd like to use generics and I'm not sure how to make it work the way I want. I have a method in my data layer that does a query and returns a list of objects. Here's the signature:

        public List getList(Class cls, Map query)

    This is what I'd like the calling code to look like:

        List<Whatever> list = getList(WhateverImpl.class, query);

    I'd like to make it so that I don't have to cast the List coming out, which leads me to this:

        public <T> List<T> getList(Class<T> cls, Map query)

    But now I have the problem that what I get out is always the concrete List<WhateverImpl> passed in, whereas I'd like it to be the Whatever interface. I tried to use the super keyword but couldn't figure it out. Any generics gurus out there know how this can be done?
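
    One way to decouple the element type from the class literal is to let the class token be any subtype of T, and pin T to the interface either through the assignment target or with an explicit type witness. A hedged sketch of that signature and call (the names follow the question; the Map parameterization and Dao type are illustrative):

        import java.util.List;
        import java.util.Map;

        public interface Dao {
            // The class token may be T itself or any subtype of T.
            <T> List<T> getList(Class<? extends T> cls, Map<String, Object> query);
        }

        class Example {
            void caller(Dao dao, Map<String, Object> query) {
                // The explicit type witness fixes T = Whatever while still passing the
                // implementation class, so no cast on the returned list is needed.
                List<Whatever> list = dao.<Whatever>getList(WhateverImpl.class, query);
            }
        }

        interface Whatever {}
        class WhateverImpl implements Whatever {}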

  • Processing a database queue across multiple threads - design advice

    - by rwmnau
    I have a SQL Server table full of orders that my program needs to "follow up" on (call a web service to see if something has been done with them). My application is multi-threaded, and could have instances running on multiple servers. Currently, every so often (on a Threading timer), the process selects 100 rows at random (ORDER BY NEWID()) from the list of "unconfirmed" orders and checks them, marking off any that come back successfully. The problem is that there's a lot of overlap between the threads, and between the different processes, and there's no guarantee that a new order will get checked any time soon. Also, some orders will never be "confirmed" and are dead, which means that they get in the way of orders that need to be confirmed, slowing the process down if I keep selecting them over and over. What I'd prefer is that all outstanding orders get checked, systematically. I can think of two easy ways to do this:
    1. The application fetches one order to check at a time, passing in the last order it checked as a parameter, and SQL Server hands back the next order that's unconfirmed. More database calls, but this ensures that every order is checked in a reasonable timeframe. However, different servers may re-check the same order in succession, needlessly.
    2. The SQL Server keeps track of the last order it asked a process to check up on, maybe in a table, and gives a unique order to every request, incrementing its counter. This involves storing the last order somewhere in SQL, which I wanted to avoid, but it also ensures that threads won't needlessly check the same orders at the same time.
    Are there any other ideas I'm missing? Does this even make sense? Let me know if I need some clarification.
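
    A third pattern worth considering is to let the database hand out work atomically: each worker claims the next unconfirmed order inside a short transaction using locking hints, so two workers can never pick the same row, and a "last checked" timestamp keeps dead orders from hogging the queue. Below is a rough sketch of that idea, written as JDBC purely for illustration; the table and column names are made up, and the UPDLOCK/READPAST/ROWLOCK hints should be checked against the SQL Server version actually in use.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class OrderClaimer {

            /** Claims the next unconfirmed order, or returns -1 if none are available.
                UPDLOCK keeps the selected row ours until commit; READPAST makes other
                workers skip rows that are already claimed instead of blocking on them. */
            public long claimNext(Connection con) throws SQLException {
                boolean oldAutoCommit = con.getAutoCommit();
                con.setAutoCommit(false);
                try {
                    long orderId = -1;
                    PreparedStatement select = con.prepareStatement(
                        "SELECT TOP 1 OrderId FROM Orders WITH (UPDLOCK, READPAST, ROWLOCK) " +
                        "WHERE Confirmed = 0 ORDER BY LastCheckedAt");
                    ResultSet rs = select.executeQuery();
                    if (rs.next()) {
                        orderId = rs.getLong(1);
                        PreparedStatement update = con.prepareStatement(
                            "UPDATE Orders SET LastCheckedAt = GETDATE() WHERE OrderId = ?");
                        update.setLong(1, orderId);
                        update.executeUpdate();
                        update.close();
                    }
                    rs.close();
                    select.close();
                    con.commit();
                    return orderId; // the web service call happens outside the transaction
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                } finally {
                    con.setAutoCommit(oldAutoCommit);
                }
            }
        }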

  • When is a try catch not a try catch?

    - by Dearmash
    I have a fun issue where, during application shutdown, try/catch blocks are seemingly being ignored in the stack. I don't have a working test project (yet, due to a deadline, otherwise I'd totally try to repro this), but consider the following code snippet:

        public static string RunAndPossiblyThrow(int index, bool doThrow)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e)
            {
                if(doThrow)
                    throw;
            }
            return "";
        }

        public static string Run(int index)
        {
            if(_store.Contains(index))
                return _store[index];
            throw new ApplicationException("index not found");
        }

        public static string RunAndIgnoreThrow(int index)
        {
            try
            {
                return Run(index);
            }
            catch(ApplicationException e)
            {
            }
            return "";
        }

    During runtime this pattern works famously. We get legacy support for code that relies on exceptions for program control (bad) and we get to move forward and slowly remove exceptions used for program control. However, when shutting down our UI, we see an exception thrown from "Run" even though "doThrow" is false for ALL current uses of "RunAndPossiblyThrow". I've even gone so far as to verify this by modifying the code to look like "RunAndIgnoreThrow" and I'll still get a crash post UI shutdown. Mr. Eric Lippert, I read your blog daily; I'd sure love to hear it's some known bug and I'm not going crazy. EDIT: This is multi-threaded, and I've verified all objects are not modified while being accessed.

  • How to detect whether an EventWaitHandle is waiting?

    - by AngryHacker
    I have a fairly heavily multi-threaded WinForms app that employs EventWaitHandle in a number of places to synchronize access. So I have code similar to this:

        List<int> _revTypes;
        EventWaitHandle _ewh = new EventWaitHandle(false, EventResetMode.ManualReset);

        void StartBackgroundTask()
        {
            _ewh.Reset();
            Thread t = new Thread(new ThreadStart(LoadStuff));
            t.Start();
        }

        void LoadStuff()
        {
            _revTypes = WebServiceCall.GetRevTypes();
            // ...bunch of other calls fetching data from all over the place
            // using the same pattern
            _ewh.Set();
        }

        List<int> RevTypes
        {
            get
            {
                _ewh.WaitOne();
                return _revTypes;
            }
        }

    Then I just call .RevTypes somewhere from the UI and it will return data to me when LoadStuff has finished executing. All this works perfectly correctly; however, RevTypes is just one property - there are actually several dozen of these, and one or more of them is holding up the UI from loading quickly. Short of placing benchmark code into each property, is there a way to see which property is keeping the UI from loading? Is there a way to see whether the EventWaitHandle is forced to actually wait?

  • Ajax and JSF 1.1 using hidden iframe with "proxy forms", what do you think about this development strategy?

    - by Steel Plume
    Hi, currently I am still on the 1.1 specs, so I am trying to make simple what is too complex for me :p, managing backing beans with conflicting navigation rules, external params breaking rules and so on... For example, when I need a backing bean used by other "views", I simply look it up using FacesContext inside other backing beans, but often it's too wired up to JSF navigation/initialization rules to be really usable, and of course the simpler things stay, the more useful FacesContext becomes. So with only a bit of cross-browser JavaScript (simply a form copy and a read-write on a "proxy" form), I create a sort of proxy form inside the main user page (totally disassociated from JSF navigation rules, but using the JSF taglibs). Ajax gives me flexibility in the user interaction, but data is always managed by JSF. Practically, I delegate all "fictitious" user actions to a hidden "iframe" which builds up all the needed forms according to JSF rules; then a JavaScript simply clones its form output and puts it into the user view level (CSS for showing/hiding the real command buttons and making it pretty). The user plays around, and when he clicks submit, a script copies all the "proxied" form values into the real JSF form inside the "iframe", which invokes the real submit of the form; what it returns is obviously dependent on your choice. Now JSF is really a pleasure :-p My real interest is to know your alternative strategies for using pure Ajax and JSF 1.1 without adopting a middle layer like Ajax4jsf and others, all good choices but more "plugins" than specs.

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like this, in C#:

        NetworkParallel.ForEach(myEnumerable, () =>
        {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:
    1. The fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application.
    2. The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network.
    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

  • How can I specify processor affinity?

    - by BenAlabaster
    I have an application that's having some trouble handling multi-processor systems. It's not an app that I have a particular affection for modifying, and I would like to avoid doing so if possible. However, I'm not above modifying the code if I have to. The application is written in VBA (hence my inclination to avoid touching it). We've noticed that the application seems to run pretty smoothly if we set the processor affinity to a single processor using Task Manager, only manifesting instability when processor affinity isn't set. I know that I can specify the processor affinity of a task using .NET, and as such there lies the possibility of writing a shell application that could be used to run legacy applications with a specified processor affinity. Does anyone have any experience with this, and can you throw out some ideas as to the headaches I'm likely to run into with this approach? The other question is: is it in fact possible to modify the core VBA product to handle its own processor affinity? I've never had to handle this with any of my applications natively, so this (at this point in time) is completely outside my realm of expertise. Thanks in advance

  • Why does my DIV clip its child DIV when jQuery moves it in IE?

    - by Ben Saufley
    I have two divs, both with position:absolute;, one inside the other. The parent isn't in a place where it can be set as position:relative without an extra layer of complexity (there are a lot of other elements around it that I'd have to account for to put it where it needs to be, which is at the very top of the page, over everything). The child element is made to stick off the bottom of the parent. In Chrome, Safari, Firefox, it all works splendidly. In IE, it works until jQuery moves the parent element - at which point the parent element clips the child, so you can barely see the top of the child. I feel like I've read about this, about IE clipping child elements, but I can't seem to find an answer that applies to my case. It's pretty simple, basically:

        <div id="parent" style="position:absolute;top:0;left:0;">
            [content]
            <div id="tab" style="position:absolute;bottom:-30px;left:0;width:64px;height:32px;background-image:(...);"></div>
        </div>
        <script>
            $(document).ready( function() {
                $("#tab").click(function() {
                    $("#parent").animate({"top":"-50px"},300);
                });
            });
        </script>

  • perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a field called sl. no. (serial number) in the CSV file for each row entered, but as I'm using threads, the sl. no. overlaps in most rows. Below is an idea of my code snippet:

        for ( my $count = 1 ; $count <= 10 ; $count++ ) {
            my $t = threads->new( \&sub1, $count );
            push( @threads, $t );
        }
        foreach (@threads) {
            my $num = $_->join;
        }

        sub sub1 {
            my $num   = shift;
            my $start = '...';    # distributing data based on an internal logic
            my $end   = '...';    # distributing data based on an internal logic
            my $next;
            for ( my $x = $start ; $x <= $end ; $x++ ) {
                my $count = $x + 1;
                # part of code from which I get @data which has name and age
                my $j = 0;
                if ( $x != 0 ) {
                    $count = $next;
                }
                foreach (@data) {
                    # j is required here for some extra code
                    flock( OUTPUT, LOCK_EX );
                    print OUTPUT $count . "," . $name . "," . $age . "\n";
                    flock( OUTPUT, LOCK_UN );
                    $j++;
                    $count++;
                }
                $next = $count;
            }
            return $num;
        }

    I need the count to be incremented, as it is the serial number for the rows that will be inserted in the CSV file. Any help would be appreciated.

  • Optimization in Common Declaration

    - by Pratik
    It's a 3-tier ASP.NET website project. In the data layer there is a class "CommonDeclartion" in which a lot of common things are mentioned. Something along these lines:

        public class CommonDeclartion
        {
            #region Common Messages

            public const string RECORD_INSERT_MSG = "Record Inserted Successfully ";
            public const string RECORD_UPDATE_MSG = "Record Updated Successfully";
            public const string RECORD_DELETE_MSG = "Record Deleted Successfully";
            public const string ERROR_MSG = "Error Ocuured while Perfoming This Action.";
            public const string UserID_Incorrect = "Please Enter The Correct User ID.";
            public const string RECORD_ALREADY_EXIT = "Record Already Exit";
            public const string NO_RECORD = "No Record found.";

            #endregion
        }

    Can this be more optimized in terms of:
    1. Performance
    2. Security (if any)
    3. Code readability or reusability
    I thought of using an enum but can't figure that out:

        enum CommonMessages
        {
            RECORD_INSERT_MSG "Record Inserted Successfully.",
            RECORD_UPDATE_MSG "Record Updated Successfully.",
            RECORD_DELETE_MSG "Record Deleted Successfully.",
            ERROR_MSG "Error Ocuured while Perfoming This Action.",
            UserID_Incorrect "Please Enter The Correct User ID.",
            RECORD_ALREADY_EXIT "Record Already Exit.",
            NO_RECORD "No Record found.",
        }

    Or should I keep them in some collection like a Dictionary/NameValueCollection, or keep them in XML as key/value pairs and retrieve them from there? What would be the better way, keeping in mind the same three points: performance, security (if any), and code readability or reusability?

  • Dealing with image upload on server

    - by user1073320
    I have got the following problem: I have got a multi-step form where, in one step, the user uploads an image to the server and then, a few steps further, supplies other information; when this information is invalid, no data should be committed - the image should be deleted as well. I was thinking about the PHP session, but I've read here (PHP - Store Images in SESSION data?) that it is an inefficient way. Every time you proceed a step in the form the image is reloaded (in the session), and as somebody mentioned "You will want it to only be as big as it needs to be and you need to delete it as soon as you don't need it because large pieces of information in the session will slow down the session startup." - here I have a question: will it slow down the session startup of the user who uploaded the file, or the sessions of all users? I have to mention that I'm looking for a solution that doesn't rely on operating system scripts (cron etc.) - I have no permission to run such scripts. The perfect solution for me would be: saving the image on disk (for example in some folder named after the session id), then after the last step of the form moving or deleting this image depending on form validation. If the user unexpectedly destroys the session (for example by closing the browser), of course the folder with the image should be deleted. In a nutshell, I need something like a callback for a "session destroyed" event.
