Search Results

Search found 1794 results on 72 pages for 'embarrassingly parallel'.

Page 66/72 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Use bug tracker to get things done and manage personal tasks?

    - by Frank
    This is slightly off-topic, but can only be answered by programmers and is useful to many programmers: Do you think it is useful to use a bug tracking system to keep track of personal todo items and to Get Things Done? I have not tried that; in fact, I don't have much experience with bug tracking systems. For my todo lists, I have played around with Google Tasks and Remember The Milk, but both of them have shortcomings: Google Tasks: I like that you can create todo lists easily, can reorder items in the list and easily create hierarchies. But it is way too simplistic and does not allow to tag tasks or move tasks from one list to another. Remember The Milk: It is nice and sleek, but you cannot create hierarchies of tasks, cannot arbitrarily reorder tasks and cannot set dependencies of tasks. That's where a bug tracking system should come in: Since I think (maybe too much?) like a programmer, my tasks have a natural hierarchy and a tree of dependencies, like in a Makefile. Here are two examples: The task of writing my thesis is done when several milestones are done. Some of these milestones can run in parallel (writing background chapter, running experiments A, running experiments B), others depend on each other (writing main chapter depends on first getting results from experiments A). The same is true for more personal goals: I want to host a dinner party, which requires finding a good date, finishing the guest list, making invitations, finding nice recipes, cooking, ... For me, all these tasks involve hierarchical dependencies and milestones that bug tracking systems should be able to handle? Here is an article that explains how to do advanced GTD with Remember The Milk, but he has to use several workarounds: (1) add a general tag 'wait' to tasks that are waiting for others to be completed but you cannot enter the IDs of the tasks that they are waiting for, (2) starting some special tasks with "." so that they are at the top of the alphabetically sorted list and signal that others are 'below' it as subgoals. Bug tracking systems should be able to handle these things much more naturally? Does anyone have experience and can recommend a lightweight bug tracking system that might be good for this? Other requirements: Should run as web app, should allow me to tag a task with several tags (like 'work', 'fun', 'short-task', 'errands', ...).

    Read the article

  • How to split HTML code with javascript or JQuery

    - by Dean
    Hi I'm making a website using JSP and servlets and I have to now break up a list of radio buttons to insert a textarea and a button. I have got the button and textarea to hide and show when you click on the radio button it shows the text area and button. But this only appears at the top and when there are hundreds on the page this will become awkward so i need a way for it to appear underneath. Here is what my HTML looks like when complied: <form action="addSpotlight" method="POST"> <table> <tr><td><input type="radio" value="29" name="publicationIDs" ></td><td>A System For Dynamic Server Allocation in Application Server Clusters, IEEE International Symposium on Parallel and Distributed Processsing with Applications, 2008</td> </tr> <tr><td><input type="radio" value="30" name="publicationIDs" ></td><td>Analysing BitTorrent's Seeding Strategies, 7th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-09), 2009</td> </tr> <tr><td><input type="radio" value="31" name="publicationIDs" ></td><td>The Effect of Server Reallocation Time in Dynamic Resource Allocation, UK Performance Engineering Workshop 2009, 2009</td> </tr> <tr><td><input type="radio" value="32" name="publicationIDs" ></td><td>idk, hello, 1992</td> </tr> <tr><td><input type="radio" value="33" name="publicationIDs" ></td><td>sad, safg, 1992</td> </tr> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Now here is what my JSP looks like: <form action="addSpotlight" method="POST"> <table> <%int i = 0; while(i<ids.size()){%> <tr><td><input type="radio" value="<%=ids.get(i)%>" name="publicationIDs" ></td><td><%=info.get(i)%></td> </tr> <%i++; }%> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Thanks in Advance Dean

    Read the article

  • Advice for Architecture Design Logic for software application

    - by Prasad
    Hi, I have a framework of basic to complex set of objects/classes (C++) running into around 500. With some rules and regulations - all these objects can communicate with each other and hence can cover most of the common queries in the domain. My Dream: I want to provide these objects as icons/glyphs (as I learnt recently) on a workspace. All these objects can be dragged/dropped into the workspace. They have to communicate only through their methods(interface) and in addition to few iterative and conditional statements. All these objects are arranged finally to execute a protocol/workflow/dataflow/process. After drawing the flow, the user clicks the Execute/run button. All the user interaction should be multi-touch enabled. The best way to show my dream is : Jeff Han's Multitouch Video. consider Jeff is playing with my objects instead of the google maps. :-) it should be like playing a jigsaw puzzle. Objective: how can I achieve the following while working on this final product: a) the development should be flexible to enable provision for web services b) the development should enable easy web application development c) The development should enable client-server architecture - d) further it should also enable mouse based drag/drop desktop application like Adobe programs etc. I mean to say: I want to economize on investments. Now I list my efforts till now in design : a) Created an Editor (VB) where the user writes (manually) the object / class code b) On Run/Execute, the code is copied into a main() function and passed to interpreter. c) Catch the output and show it in the console. The interpreter can be separated to become a server and the Editor can become the client. This needs lot of standard client-server architecture work. But some how I am not comfortable in the tightness of this system. Without interpreter is there much faster and better embeddable solution to this? - other than writing a special compiler for these objects. Recently learned about AXIS-C++ can help me - looks like - a friend suggested. Is that the way to go ? Here are my questions: (pl. consider me a self taught programmer and NOT my domain) a) From the stage of C++ objects to multi-touch product, how can I make sure I will develop the parallel product/service models as well.? What should be architecture aspects I should consider ? b) What technologies are best suited for this? c) If I am thinking of moving to Cloud Computing, how difficult/ how redundant / how unnecessary my efforts will be ? d) How much time in months would it take to get the first beta ? I take the liberty to ask if any of the experts here are interested in this project, please email me: [email protected] Thank you for any help. Looking forward.

    Read the article

  • How do I correct "Commit Failed. File xxx is out of date. xxx path not found."

    - by Ryan Taylor
    I have recently run into a particularly sticky issue regarding committing the result of a merge in Subversion. Our Subversion server is @ 1.5.0 and my TortoiseSVN client is now @ 1.6.1. I am trying to merge a feature branch back into my trunk. The merge appears to work okay; however, the commit fails with the following error message. Commit failed (details follow): File 'flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' is out of date '/svn/ibis/!svn/wrk/531d459d-80fa-ea46-bfb4-940d79ee6d2e/visualization/trunk/source/flex/src/com/penbay/invision/portal/services/http/soap/ReportServices/GetAllBldgsParamsByRegionBySiteResultEvent.as' path not found You have to update your working copy first. My working trunk is up to date. I have even checked out a new one into a different folder to make sure there wasn't any local cruft messing with the merge. I have done some more research into this and I think part of the problem is user error. I think our problems are: We had some developers committing work with a Subversion client before 1.5 and some after. I believe this has the potential to corrupt the merge info. In other branches we have performed partial merges. That is, we did not always perform merges at the root of the branch. This was to facilitate updating Flex and .NET efforts within the same branch. We performed cyclic (reflexive) merges on our branch. This was done because we had multiple parallel branches and we wanted to periodically update our branch with the latest code in trunk. All of these things are explicitly not recommended by the Subversion book/team. We have learned our lesson and now know the best practices. However, we first need to merge and commit our latest branch. What is the best way to correct the problems we are encountering? Would deleting all the merge info in the trunk and branch be a viable solution? No. I have done this but it does not resolve the error that I am getting above.

    Read the article

  • C++ Unlocking a std::mutex before calling std::unique_lock wait

    - by Sant Kadog
    I have a multithreaded application (using std::thread) with a manager (class Tree) that executes some piece of code on different subtrees (embedded struct SubTree) in parallel. The basic idea is that each instance of SubTree has a deque that stores objects. If the deque is empty, the thread waits until a new element is inserted in the deque or the termination criterion is reached. One subtree can generate objects and push them into the deque of another subtree. For convenience, all my std::mutex, lock and std::condition_variable objects are stored in a struct called "locks". The class Tree creates some threads that run the following method (first attempt): void Tree::launch(SubTree & st, Locks & locks ) { /* some code */ std::lock_guard<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem is that "deque_lock" is still locked while the thread is waiting. Hence no object can be added to the deque of the current thread by a concurrent one. So I turned the lock_guard into a unique_lock and managed the lock/unlock manually: void launch(SubTree & st, Locks & locks ) { /* some code */ std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { deque_lock.unlock() ; // unlock the access to the deque to enable the other threads to add objects // DATA RACE : nothing must happen to the unprotected deque here !!!!!! // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem now is that there is a data race, and I would like to make sure that the "wait" instruction is performed directly after the "deque_lock.unlock()" one. Would anyone know a way to create such a critical instruction sequence with the standard library? Thanks in advance.
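    A common fix for exactly this situation is to make the condition variable wait on the same mutex that protects the deque: std::condition_variable::wait() releases the lock and blocks as a single atomic step, then reacquires the lock before returning, so nothing can touch the deque between the "unlock" and the "wait". A minimal sketch of that idea, with the restart mutex folded into the deque mutex (the element type, array sizes and the notify side are my own simplifications, not the poster's real code):

        #include <condition_variable>
        #include <deque>
        #include <mutex>

        struct SubTree {
            std::deque<int> deque_;   // simplified element type
            int id_ = 0;
        };

        struct Locks {
            std::mutex deque_mutex_[8];                    // one mutex per subtree (assumed count)
            std::condition_variable restart_condition_[8]; // waits on the matching deque mutex
        };

        void launch(SubTree& st, Locks& locks) {
            std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]);
            // wait() atomically releases deque_lock while blocking and reacquires it
            // before returning, so no other thread can slip in between unlock and wait.
            locks.restart_condition_[st.id_].wait(deque_lock,
                [&] { return !st.deque_.empty(); });       // also guards against spurious wakeups
            // deque_lock is held again here; st.deque_ can be used safely.
        }

        void push(SubTree& st, Locks& locks, int value) {
            {
                std::lock_guard<std::mutex> guard(locks.deque_mutex_[st.id_]);
                st.deque_.push_back(value);
            }
            locks.restart_condition_[st.id_].notify_one(); // wake the waiting subtree
        }

    The predicate form of wait() also means a notification that arrives before the wait starts is not lost; the termination criterion from the question would simply become part of the same predicate.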

    Read the article

  • How to catch 'exceptions' for out of order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .NET / VS 2010 that needs to handle out-of-order execution gracefully (but not allow it - if that makes sense!?) For example, I have two receive activities, one called Initialize and the other called GetValue, inside a FlowChart. In most cases Initialize should be called first and GetValue after (as modeled in the flow chart). However, if GetValue is executed before Initialize I do not want to return an "out of order" exception (although when I look at the WCF test client, I can't actually see an exception), but instead a custom exception saying something like "you must initialize first". In theory I could model this with lots of parallel activities and conditions to check if Initialized / Running / Terminated etc. But the business process I am modelling is very similar to a state machine... except it must handle people executing things in the wrong order. Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted, and then handle it. I have done some research around enabling AllowBufferedReceive. However, I don't want to be able to execute out of order (I don't think), but instead give a detailed response if it does happen. I've looked at the new beta state machine template for WF 4 - but I'm not sure if it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated. [EDIT] To help clarify... Sorry, it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine, i.e. certain functions can only be executed in certain states. However, the standard specifies that if the calling client tries to execute a function that it is not meant to, then a warning should be issued... for example "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not have to use multiple if/elses to handle all the scenarios where something could be executed out of order. I'd like to catch an out-of-order exception (but I don't think there is such an exception - as it's not in the debugger) and rethrow it.

    Read the article

  • how to implement a "soft barrier" in multithreaded c++

    - by Jason
    I have some multithreaded c++ code with the following structure: do_thread_specific_work(); update_shared_variables(); //checkpoint A do_thread_specific_work_not_modifying_shared_variables(); //checkpoint B do_thread_specific_work_requiring_all_threads_have_updated_shared_variables(); What follows checkpoint B is work that could have started if all threads have reached only checkpoint A, hence my notion of a "soft barrier". Typically multithreading libraries only provide "hard barriers" in which all threads must reach some point before any can continue. Obviously a hard barrier could be used at checkpoint B. Using a soft barrier can lead to better execution time, especially since the work between checkpoints A and B may not be load-balanced between the threads (i.e. 1 slow thread who has reached checkpoint A but not B could be causing all the others to wait at the barrier just before checkpoint B). I've tried using atomics to synchronize things and I know with 100% certainty that is it NOT guaranteed to work. For example using openmp syntax, before the parallel section start with: shared_thread_counter = num_threads; //known at compile time #pragma omp flush Then at checkpoint A: #pragma omp atomic shared_thread_counter--; Then at checkpoint B (using polling): #pragma omp flush while (shared_thread_counter > 0) { usleep(1); //can be removed, but better to limit memory bandwidth #pragma omp flush } I've designed some experiments in which I use an atomic to indicate that some operation before it is finished. The experiment would work with 2 threads most of the time but consistently fail when I have lots of threads (like 20 or 30). I suspect this is because of the caching structure of modern CPUs. Even if one thread updates some other value before doing the atomic decrement, it is not guaranteed to be read by another thread in that order. Consider the case when the other value is a cache miss and the atomic decrement is a cache hit. So back to my question, how to CORRECTLY implement this "soft barrier"? Is there any built-in feature that guarantees such functionality? I'd prefer openmp but I'm familiar with most of the other common multithreading libraries. As a workaround right now, I'm using a hard barrier at checkpoint B and I've restructured my code to make the work between checkpoint A and B automatically load-balancing between the threads (which has been rather difficult at times). Thanks for any advice/insight :)
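    For what it's worth, the counter idea itself is sound; what the flush/atomic version lacks is an ordering guarantee between the shared-variable writes and the decrement. In C++11 terms that is exactly a release decrement at checkpoint A paired with an acquire load in the polling loop at checkpoint B. A sketch of the soft barrier with std::atomic rather than OpenMP (my own skeleton; the do_* calls stand in for the phases in the question):

        #include <atomic>
        #include <thread>
        #include <vector>

        std::atomic<int> checkpoint_a_remaining;   // threads that have not yet passed A

        void worker() {
            // do_thread_specific_work();
            // update_shared_variables();
            checkpoint_a_remaining.fetch_sub(1, std::memory_order_release);   // checkpoint A

            // do_thread_specific_work_not_modifying_shared_variables();

            // Checkpoint B: wait until every thread has passed A.  The acquire load
            // pairs with the release decrements, so all shared-variable writes made
            // before A are guaranteed visible once the counter reads zero.
            while (checkpoint_a_remaining.load(std::memory_order_acquire) != 0)
                std::this_thread::yield();

            // do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();
        }

        int main() {
            const int num_threads = 4;
            checkpoint_a_remaining.store(num_threads);
            std::vector<std::thread> pool;
            for (int i = 0; i < num_threads; ++i) pool.emplace_back(worker);
            for (auto& t : pool) t.join();
        }

    Staying in OpenMP, the rough equivalent would be flushing the shared variables immediately before the atomic decrement as well as inside the polling loop; the C++ version just makes the required acquire/release pairing explicit, which is what the intermittent failures with many threads point at.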

    Read the article

  • Help me understand this "Programming pearls" bitsort program

    - by ardsrk
    Jon Bentley in Column 1 of his book programming pearls introduces a technique for sorting a sequence of non-zero positive integers using bit vectors. I have taken the program bitsort.c from here and pasted it below: /* Copyright (C) 1999 Lucent Technologies */ /* From 'Programming Pearls' by Jon Bentley */ /* bitsort.c -- bitmap sort from Column 1 * Sort distinct integers in the range [0..N-1] */ #include <stdio.h> #define BITSPERWORD 32 #define SHIFT 5 #define MASK 0x1F #define N 10000000 int a[1 + N/BITSPERWORD]; void set(int i) { int sh = i>>SHIFT; a[i>>SHIFT] |= (1<<(i & MASK)); } void clr(int i) { a[i>>SHIFT] &= ~(1<<(i & MASK)); } int test(int i){ return a[i>>SHIFT] & (1<<(i & MASK)); } int main() { int i; for (i = 0; i < N; i++) clr(i); /*Replace above 2 lines with below 3 for word-parallel init int top = 1 + N/BITSPERWORD; for (i = 0; i < top; i++) a[i] = 0; */ while (scanf("%d", &i) != EOF) set(i); for (i = 0; i < N; i++) if (test(i)) printf("%d\n", i); return 0; } I understand what the functions clr, set and test are doing and explain them below: ( please correct me if I am wrong here ). clr clears the ith bit set sets the ith bit test returns the value at the ith bit Now, I don't understand how the functions do what they do. I am unable to figure out all the bit manipulation happening in those three functions. Please help.
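    In case it helps to see the arithmetic spelled out: a word holds 32 bits, so i >> SHIFT (i.e. i / 32) picks which word of a[] holds bit i, i & MASK (i.e. i % 32) picks the bit's position inside that word, and 1 << (i & MASK) is a word with only that bit set. OR-ing that mask in sets the bit, AND-ing with its complement clears it, and a plain AND tests it. A tiny trace with my own example value (not from the book):

        #include <cstdio>

        #define SHIFT 5      // 2^5 == 32, so  i >> SHIFT  ==  i / 32  (which word)
        #define MASK  0x1F   // 31,        so  i & MASK    ==  i % 32  (which bit)

        int main() {
            int i = 70;
            std::printf("word index: %d\n", i >> SHIFT);         // 70 / 32 == 2
            std::printf("bit index : %d\n", i & MASK);           // 70 % 32 == 6
            std::printf("bit mask  : 0x%x\n", 1 << (i & MASK));  // 0x40: only bit 6 set
            // set(70):   a[2] |=  0x40   turn bit 6 on, leave the other bits alone
            // clr(70):   a[2] &= ~0x40   turn bit 6 off, leave the other bits alone
            // test(70):  a[2] &   0x40   non-zero exactly when bit 6 is on
            return 0;
        }

    (The unused local sh in the book's set() is just leftover scratch code; the expression a[i>>SHIFT] recomputes the same word index.)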

    Read the article

  • multithreading with database

    - by Darsin
    I am looking out for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading so i will outline my scenario first. This synchronous operation right now is done for one set of data (portfolio) based on the the parameters provided. The (psudeo-code) implementation is given below: public DataSet DoTests(int fundId, DateTime portfolioDate) { // Get test results for the portfolio // Call the database adapter method, which in turn is a stored procedure, // which in turns runs a series of "rule" stored procs and fills a local temp table and returns it back. DataSet resultsDataSet = GetTestResults(fundId, portfolioDate); try { // Do some local processing on the results DoSomeProcessing(resultsDataSet); // Save the results in Test, TestResults and TestAllocations tables in a transaction. // Sets a global transaction which is provided to all the adapter methods called below // It is defined in the Base class StartTransaction("TestTransaction"); // Save Test and get a testId int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction // Update testId in the other tables in the dataset UpdateTestId(resultsDataSet, testId); // Update TestResults UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction // Update TestAllocations UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction // It is defined in the base class CommitTransaction("TestTransaction"); } catch { RollbackTransaction("TestTransaction"); } return resultsDataSet; } Now the requirement is to do it for multiple set of data. One way would be to call the above DoTests() method in a loop and get the data. I would prefer doing it in parallel. But there are certain catches: StartTransaction() method creates a connection (and transaction) every time it is called. All the underlying database tables, procedures are the same for each call of DoTests(). (obviously). Thus my question are: Will using multithreading anyway improve performance? What are the chances of deadlock especially when new TestId's are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocked be handled? Is there any other more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?

    Read the article

  • Log4j: Issues about the FallbackErrorHandler

    - by rdogpink
    I am working on a client-server application and wanted to implement a flexible logging framework, so I chose log4j, which doesn't really evolve anymore but is still a handy framework. Because the logging happens over the network, I wanted a solution for the case that the network drive isn't available, so the logger has to change its destination file(s). I wanted to use the FallbackErrorHandler (configured with an XML file) from the log4j library, and the implementation worked: when my network drive isn't available, it switches to a local log file, so no logging should be lost. But I have hit two problems since yesterday and couldn't figure out how to solve them. No return to the initial logging configuration: When the network drive is up again and the logger could write to the old destinations, log4j still logs to the local drive and I can't figure out how to notify the original (primary) logger to start again. I also tried to attach a second appender to the ErrorHandler, which should mirror the failing primary logger so that it keeps trying to write to the network destination and, once the network is up again, logs to both files, the local one and the one on the network drive. But unfortunately it didn't work out; I only got a failure message that the errorHandler content doesn't fit: log4j:WARN The content of element type "errorHandler" must match "(param*,root-ref?,logger-ref*,appender-ref?)". This is the responsible code. <appender name="TraceAppender" class="org.apache.log4j.DailyRollingFileAppender"> <!-- The second appender-ref "TestAppender" leads to the error. --> <errorHandler class="org.apache.log4j.varia.FallbackErrorHandler"> <logger-ref ref="com.idoh"/> <appender-ref ref="TraceFallbackAppender"/> <appender-ref ref="TestAppender"/> </errorHandler> <param name="datePattern" value=".yyyy-MM-dd" /> <param name="file" value="logs/Trace.txt" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%-6r %d{HH:mm:ss,SSS} [%t] %-5p - %m%n"/> </layout> </appender> So, how could I trigger log4j to reset to the initial configuration, or hold a second appender in parallel to the local logger? My application should work by itself and shouldn't have to be restarted often. The first error message is swallowed: I noticed that the first message that triggers the switch between the primary logger and the FallbackErrorHandler (for example a logging request to a read-only file) is swallowed, so neither does the primary logger log it (because it can't) nor does the backup logger know what it missed. Has anybody else run into this problem and solved it, or has any suggestions?

    Read the article

  • Several client waiting for the same event

    - by ff8mania
    I'm developing a communication API to be used by a lot of generic clients to communicate with a proprietary system. This proprietary system exposes an API, and I use a particular classes to send and wait messages from this system: obviously the system alert me that a message is ready using an event. The event is named OnMessageArrived. My idea is to expose a simple SendSyncMessage(message) method that helps the user/client to simply send a message and the method returns the response. The client: using ( Communicator c = new Communicator() ) { response = c.SendSync(message); } The communicator class is done in this way: public class Communicator : IDisposable { // Proprietary system object ExternalSystem c; String currentRespone; Guid currentGUID; private readonly ManualResetEvent _manualResetEvent; private ManualResetEvent _manualResetEvent2; String systemName = "system"; String ServerName = "server"; public Communicator() { _manualResetEvent = new ManualResetEvent(false); //This methods are from the proprietary system API c = SystemInstance.CreateInstance(); c.Connect(systemName , ServerName); } private void ConnectionStarter( object data ) { c.OnMessageArrivedEvent += c_OnMessageArrivedEvent; _manualResetEvent.WaitOne(); c.OnMessageArrivedEvent-= c_OnMessageArrivedEvent; } public String SendSync( String Message ) { Thread _internalThread = new Thread(ConnectionStarter); _internalThread.Start(c); _manualResetEvent2 = new ManualResetEvent(false); String toRet; int messageID; currentGUID = Guid.NewGuid(); c.SendMessage(Message, "Request", currentGUID.ToString()); _manualResetEvent2.WaitOne(); toRet = currentRespone; return toRet; } void c_OnMessageArrivedEvent( int Id, string root, string guid, int TimeOut, out int ReturnCode ) { if ( !guid.Equals(currentGUID.ToString()) ) { _manualResetEvent2.Set(); ReturnCode = 0; return; } object newMessage; c.FetchMessage(Id, 7, out newMessage); currentRespone = newMessage.ToString(); ReturnCode = 0; _manualResetEvent2.Set(); } } I'm really noob in using waithandle, but my idea was to create an instance that sends the message and waits for an event. As soon as the event arrived, checks if the message is the one I expect (checking the unique guid), otherwise continues to wait for the next event. This because could be (and usually is in this way) a lot of clients working concurrently, and I want them to work parallel. As I implemented my stuff, at the moment if I run client 1, client 2 and client 3, client 2 starts sending message as soon as client 1 has finished, and client 3 as client 2 has finished: not what I'm trying to do. Can you help me to fix my code and get my target? Thanks!
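    One way to let many clients wait at the same time is to give every outstanding request its own wait object, keyed by its GUID, instead of sharing a single pair of currentGUID/response fields: the OnMessageArrived handler then completes only the entry whose GUID matches and leaves every other caller blocked on its own handle. The shape of that idea, sketched in C++ with promise/future purely for illustration (in the poster's C# the analogue would be something like a dictionary of GUID to a per-request ManualResetEvent or TaskCompletionSource, which is an assumption about the framework version in use):

        #include <future>
        #include <map>
        #include <mutex>
        #include <string>

        // Correlates responses to requests by id: each sender registers its own
        // wait object, so concurrent callers never complete each other by mistake.
        class Correlator {
        public:
            std::future<std::string> Register(const std::string& guid) {
                std::lock_guard<std::mutex> lock(mutex_);
                return pending_[guid].get_future();
            }
            // Called from the "message arrived" callback.
            void Complete(const std::string& guid, const std::string& response) {
                std::lock_guard<std::mutex> lock(mutex_);
                auto it = pending_.find(guid);
                if (it == pending_.end()) return;          // not ours, ignore
                it->second.set_value(response);
                pending_.erase(it);
            }
        private:
            std::mutex mutex_;
            std::map<std::string, std::promise<std::string>> pending_;
        };

        // Usage sketch: register the GUID, send, then block on this request's future.
        //   auto reply = correlator.Register(guid);
        //   SendMessage(message, "Request", guid);
        //   std::string response = reply.get();   // woken only by the matching GUID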

    Read the article

  • objective C convert NSString to unsigned

    - by user1501354
    I have changed my question. I want to convert an NSString to an unsigned int. Why? Because I want to do parallel payment in PayPal. Below I have given my coding in which I want to convert the NSString to an unsigned int. My query is: //optional, set shippingEnabled to TRUE if you want to display shipping //options to the user, default: TRUE [PayPal getPayPalInst].shippingEnabled = TRUE; //optional, set dynamicAmountUpdateEnabled to TRUE if you want to compute //shipping and tax based on the user's address choice, default: FALSE [PayPal getPayPalInst].dynamicAmountUpdateEnabled = TRUE; //optional, choose who pays the fee, default: FEEPAYER_EACHRECEIVER [PayPal getPayPalInst].feePayer = FEEPAYER_EACHRECEIVER; //for a payment with multiple recipients, use a PayPalAdvancedPayment object PayPalAdvancedPayment *payment = [[PayPalAdvancedPayment alloc] init]; payment.paymentCurrency = @"USD"; // A payment note applied to all recipients. payment.memo = @"A Note applied to all recipients"; //receiverPaymentDetails is a list of PPReceiverPaymentDetails objects payment.receiverPaymentDetails = [NSMutableArray array]; NSArray *emailArray = [NSArray arrayWithObjects:@"[email protected]",@"[email protected]", nil]; for (int i = 1; i <= 2; i++) { PayPalReceiverPaymentDetails *details = [[PayPalReceiverPaymentDetails alloc] init]; // Customize the payment notes for one of the three recipient. if (i == 2) { details.description = [NSString stringWithFormat:@"Component %d", i]; } details.recipient = [NSString stringWithFormat:@"%@",[emailArray objectAtIndex:i-1]]; unsigned order; if (i==1) { order = [[feeArray objectAtIndex:0] unsignedIntValue]; } if (i==2) { order = [[amountArray objectAtIndex:0] unsignedIntValue]; } //subtotal of all items for this recipient, without tax and shipping details.subTotal = [NSDecimalNumber decimalNumberWithMantissa:order exponent:-4 isNegative:FALSE]; //invoiceData is a PayPalInvoiceData object which contains tax, shipping, and a list of PayPalInvoiceItem objects details.invoiceData = [[PayPalInvoiceData alloc] init]; //invoiceItems is a list of PayPalInvoiceItem objects //NOTE: sum of totalPrice for all items must equal details.subTotal //NOTE: example only shows a single item, but you can have more than one details.invoiceData.invoiceItems = [NSMutableArray array]; PayPalInvoiceItem *item = [[PayPalInvoiceItem alloc] init]; item.totalPrice = details.subTotal; [details.invoiceData.invoiceItems addObject:item]; [payment.receiverPaymentDetails addObject:details]; } [[PayPal getPayPalInst] advancedCheckoutWithPayment:payment]; Can anybody tell me how to do this conversion? Thanks and regards in advance.

    Read the article

  • Rendering of graphics different depending on position

    - by jedierikb
    When drawing parallel vertical lines with a fixed distance between them (1.75 pixels) with a non-integer x-value-offset to both lines, the lines are drawn differently based on the offset. In the picture below are two pairs of very close together vertical lines. As you can see, they look very different. This is frustrating, especially when animating the sprite. Any ideas how ensure that sprites-with-non-integer-positions' graphics will visually display the same? package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; public class tmpa extends Sprite { private var _sp1:Sprite; private var _sp2:Sprite; private var _num:Number; public function tmpa( ):void { stage.align = StageAlign.TOP_LEFT; stage.scaleMode = StageScaleMode.NO_SCALE; _sp1 = new Sprite( ); drawButt( _sp1, 0 ); _sp1.x = 100; _sp1.y = 100; _num = 0; _sp2 = new Sprite( ); drawButt( _sp2, _num ); _sp2.x = 100; _sp2.y = 200; addChild( _sp1 ); addChild( _sp2 ); addEventListener( Event.ENTER_FRAME, efCb, false, 0, true ); } private function efCb( evt:Event ):void { _num += .1; if (_num > 400) { _num = 0; } drawButt( _sp2, _num ); } private function drawButt( sp:Sprite, offset:Number ):void { var px1:Number = 1 + offset; var px2:Number = 2.75 + offset; sp.graphics.clear( ); sp.graphics.lineStyle( 1, 0, 1, true ); sp.graphics.moveTo( px1, 1 ); sp.graphics.lineTo( px1, 100 ); sp.graphics.lineStyle( 1, 0, 1, true ); sp.graphics.moveTo( px2, 1 ); sp.graphics.lineTo( px2, 100 ); } } } edit from original post which thought the problem was tied to the x-position of the sprite.

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business to business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this * All javascript files are compressed and loaded at the bottom of the page * All javascript files have far future cache expiration dates, so will remain (for most users) in the cache for a long time * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent at to the server at all. If this is correct, I would assume that having multiple tags is not causing any performance penalty, as I'm still not having any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is a better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article

  • Separation of domain and ui layer in a composite

    - by hansmaad
    Hi all, I'm wondering if there is a pattern for separating the domain logic of a class from the UI responsibilities of the objects in the domain layer. Example: // Domain classes interface MachinePart { CalculateX(in, out) // Where do we put these: // Draw(Screen) ?? // ShowProperties(View) ?? // ... } class Assembly : MachinePart { CalculateX(in, out) subParts } class Pipe : MachinePart { CalculateX(in, out) length, diameter... } There is an application that calculates the value X for machines assembled from many machine parts. The assembly is loaded from a file representation and is designed as a composite. Each concrete part class stores some data to implement the CalculateX(in,out) method to simulate the behaviour of the whole assembly. The application runs well but without a GUI. To increase the usability, a GUI should be developed on top of the existing implementation (changes to the existing code are allowed). The GUI should show a schematic graphical representation of the assembly and provide part-specific dialogs to edit several parameters. To achieve these goals the application needs new functionality for each machine part to draw a schematic representation on the screen, show a property dialog and other things not related to the domain of machine simulation. I can think of some different solutions to implement a Draw(Screen) functionality for each part, but I am not happy with any of them. First, I could add a Draw(Screen) method to the MachinePart interface, but this would mix up domain code with UI code and I would have to add a lot of functionality to each machine part class, which makes my domain model hard to read and hard to understand. Another "simple" solution is to make all parts visitable and implement the UI code in visitors, but Visitor is not one of my favorite patterns. I could derive UI variants from each machine part class to add the UI implementation there, but I would have to check whether each part class is suited for inheritance and would have to be careful about changes to the base classes. My current favorite design is to create a parallel composite hierarchy where each component stores data to define a machine part, implements the UI methods and has a factory method which creates instances of the corresponding domain classes, so that I can "convert" a UI assembly to a domain assembly. But it is hard to go back from the created domain hierarchy to the UI hierarchy, for example to show calculation results in the drawing (imagine some parts store values during the calculation that I want to show in the schematic representation after the simulation). Maybe there are some proven patterns for such problems?
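    Since Visitor is already on the list of candidates, here is roughly what that option looks like for this composite: the domain interface grows a single Accept() hook and all drawing/dialog code lives in UI-side visitors, so the domain classes never see a Screen or a View. A sketch only (names borrowed from the question, method bodies are placeholders):

        #include <vector>

        class Assembly; class Pipe;          // concrete domain parts

        // UI-side interface: domain classes only know this abstract visitor,
        // never any concrete drawing or dialog code.
        struct PartVisitor {
            virtual ~PartVisitor() = default;
            virtual void Visit(Assembly&) = 0;
            virtual void Visit(Pipe&) = 0;
        };

        struct MachinePart {
            virtual ~MachinePart() = default;
            virtual void CalculateX(double in, double& out) = 0;  // domain logic stays here
            virtual void Accept(PartVisitor& v) = 0;              // the only UI hook
        };

        class Pipe : public MachinePart {
        public:
            double length = 0, diameter = 0;
            void CalculateX(double in, double& out) override { out = in * length; }  // placeholder
            void Accept(PartVisitor& v) override { v.Visit(*this); }
        };

        class Assembly : public MachinePart {
        public:
            std::vector<MachinePart*> subParts;
            void CalculateX(double in, double& out) override { out = in; }            // placeholder
            void Accept(PartVisitor& v) override {
                v.Visit(*this);
                for (auto* p : subParts) p->Accept(v);   // composite: forward to children
            }
        };

        // All screen/dialog code lives in visitors in the UI layer:
        struct DrawVisitor : PartVisitor {
            void Visit(Assembly&) override { /* draw the assembly glyph */ }
            void Visit(Pipe&)     override { /* draw a pipe glyph from length/diameter */ }
        };

    The trade-off is the usual one: the visitor interface has to list every concrete part, so adding a new part touches the UI layer, whereas the parallel-hierarchy design avoids that but runs into the back-mapping problem described above.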

    Read the article

  • Collision Detection problem (intersection with plane)

    - by Demi
    I'm doing a scene using openGL (a house). I want to do some collision detection, mainly with the walls in the house. I have tried the following code: // a plane is represented with a normal and a position in space Vector planeNor(0,0,1); Vector position(0,0,-10); Plane p(planeNor,position); Vector vel(0,0,-1); double lamda; // this is the intersection point Vector pNormal; // the normal of the intersection // this method is from Nehe's Lesson 30 coll= p.TestIntersionPlane(vel,Z,lamda,pNormal); glPushMatrix(); glBegin(GL_QUADS); if(coll) glColor3f(1,0,0); else glColor3f(1,1,1); glVertex3d(0,0,-10); glVertex3d(3,0,-10); glVertex3d(3,3,-10); glVertex3d(0,3,-10); glEnd(); glPopMatrix(); Nehe's method: #define EPSILON 1.0e-8 #define ZERO EPSILON bool Plane::TestIntersionPlane(const Vector3 & position,const Vector3 & direction, double& lamda, Vector3 & pNormal) { double DotProduct=direction.scalarProduct(normal); // Dot Product Between Plane Normal And Ray Direction double l2; // Determine If Ray Parallel To Plane if ((DotProduct<ZERO)&&(DotProduct>-ZERO)) return false; l2=(normal.scalarProduct(position))/DotProduct; // Find Distance To Collision Point if (l2<-ZERO) // Test If Collision Behind Start return false; pNormal= normal; lamda=l2; return true; } Z is initially (0,0,0) and every time I move the camera towards the plane, I reduce its z component by 0.1 (i.e. Z.z-=0.1 ). I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
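    For reference, the usual ray/plane test needs the plane's position as well as its normal: the l2 computed above uses only the dot product of the normal with the ray's start, and the call site passes vel where the method's signature expects the ray's start position, so the plane offset at z = -10 never enters the computation. A sketch of the standard formula with my own throwaway types (not the question's Vector/Plane classes):

        #include <cmath>

        struct Vector3 { double x, y, z; };

        static double dot(const Vector3& a, const Vector3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }
        static Vector3 sub(const Vector3& a, const Vector3& b) {
            return { a.x - b.x, a.y - b.y, a.z - b.z };
        }

        // lambda is the distance along `direction` from `rayOrigin` to the plane
        // through `planePoint` with normal `planeNormal`.
        bool TestIntersectionPlane(const Vector3& planeNormal, const Vector3& planePoint,
                                   const Vector3& rayOrigin,   const Vector3& direction,
                                   double& lambda)
        {
            const double eps = 1.0e-8;
            double denom = dot(direction, planeNormal);
            if (std::fabs(denom) < eps) return false;                     // ray parallel to plane
            lambda = dot(planeNormal, sub(planePoint, rayOrigin)) / denom;
            return lambda >= -eps;                                        // plane not behind the ray
        }

    With the camera at Z moving along vel, the call would then look like TestIntersectionPlane(planeNor, position, Z, vel, lamda), and a hit only counts as a collision when lamda lies between 0 and the distance actually moved that frame.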

    Read the article

  • Asynchronous While Loop?

    - by o7th Web Design
    I have a pretty great SqlDataReader wrapper in which I can map the output into a strongly typed list. What I am finding now is that on larger datasets with larger numbers of columns, performance could probably be a bit better if I can optimize my mapping. In thinking about this there is one section in particular that I am concerned about as it seems to be the heaviest hitter: while (_Rdr.Read()) { T newObject = new T(); for (int i = 0; i <= _Rdr.FieldCount - 1; ++i) { PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()]; if ((info != null) && info.CanWrite) { info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null); } } _en.Add(newObject); } _Rdr.Close(); What I would really like to know, is if there is a way that I can make this loop asyncronous? I feel that will make all the difference in the world with this beast :) Here is the entire Map method in case anyone can see where I can make further improvements on it... IList<T> Map<T> // Map our datareader object to a strongly typed list private static IList<T> Map<T>(IDataReader _Rdr) where T : new() { try { Type _t = typeof(T); List<T> _en = new List<T>(); Hashtable _ht = new Hashtable(); PropertyInfo[] _props = _t.GetProperties(); Parallel.ForEach(_props, info => { _ht[info.Name.ToUpper()] = info; }); while (_Rdr.Read()) { T newObject = new T(); for (int i = 0; i <= _Rdr.FieldCount - 1; ++i) { PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()]; if ((info != null) && info.CanWrite) { info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null); } } _en.Add(newObject); } _Rdr.Close(); return _en; }catch(Exception ex){ _Msg += "Wrapper.Map Exception: " + ex.Message; ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg); return default(IList<T>); } }

    Read the article

  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
    The command and error message: gtwy ~ # ip rule add from 64.251.23.186 table t1 RTNETLINK answers: Operation not supported Older article of the same problem, but it did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html I have looked on google at great lengths to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is: 2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux. I am posting this on SuperUser since this system is used as a workstation and this problem is unrelated to specific tasks that are handled exclusively by servers. iproute2 is installed: gtwy etc # emerge --search iproute2 Searching... [ Results for search key : iproute2 ] [ Applications found : 1 ] * sys-apps/iproute2 Latest version available: 2.6.35-r2 Latest version installed: 2.6.35-r2 Size of files: 378 kB Homepage: http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2 Description: kernel routing and traffic control utilities License: GPL-2 A small snippet of my kernel .config (view entire .config): gtwy linux # cat .config | grep NETLINK CONFIG_NETFILTER_NETLINK=y CONFIG_NETFILTER_NETLINK_QUEUE=y CONFIG_NETFILTER_NETLINK_LOG=y CONFIG_NF_CT_NETLINK=y CONFIG_SCSI_NETLINK=y gtwy linux # cat .config | grep IP_ADVANCED_ROUTER CONFIG_IP_ADVANCED_ROUTER=y gtwy linux # cat .config | grep INGRESS CONFIG_NET_SCH_INGRESS=y gtwy linux # cat .config | grep NET_SCHED CONFIG_NET_SCHED=y emerge --info Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64) ================================================================= System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13 Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000 app-shells/bash: 4.0_p37 dev-java/java-config: 1.3.7-r1, 2.1.10 dev-lang/python: 2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3 sys-apps/baselayout: 1.12.13 sys-apps/sandbox: 1.6-r2 sys-devel/autoconf: 2.13, 2.65 sys-devel/automake: 1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1 sys-devel/binutils: 2.20.1-r1 sys-devel/gcc: 4.1.2, 4.3.4, 4.4.3-r2 sys-devel/gcc-config: 1.4.1 sys-devel/libtool: 2.2.6b sys-devel/make: 3.81 virtual/os-headers: 2.6.30-r1 (sys-kernel/linux-headers) ACCEPT_KEYWORDS="amd64" ACCEPT_LICENSE="*" CBUILD="x86_64-pc-linux-gnu" CFLAGS="-march=nocona -O2 -pipe" CHOST="x86_64-pc-linux-gnu" CONFIG_PROTECT="/etc /var/bind" CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo" CXXFLAGS="-march=nocona -O2 -pipe" DISTDIR="/usr/portage/distfiles" FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch" GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo" LC_ALL="en_US.UTF-8" LDFLAGS="-Wl,-O1 -Wl,--as-needed" LINGUAS="en" MAKEOPTS="-j5" PKGDIR="/usr/portage/packages" PORTAGE_CONFIGROOT="/" PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages" PORTAGE_TMPDIR="/var/tmp" PORTDIR="/usr/portage" PORTDIR_OVERLAY="/usr/local/portage" 
SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage" USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol" APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx" INPUT_DEVICES="keyboard mouse evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LINGUAS="en" PHP_TARGETS="php5-3" RUBY_TARGETS="ruby18" USERLAND="GNU" VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account" Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS
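    For context, "RTNETLINK answers: Operation not supported" on `ip rule` usually means the running kernel has no policy-routing (multiple routing tables) support; the NETLINK and IP_ADVANCED_ROUTER options grepped above are not enough on their own. Whether that is the case for this particular build is an assumption, but it is the first thing worth checking:

        # Policy routing ("ip rule", multiple tables) needs this option:
        gtwy linux # cat .config | grep IP_MULTIPLE_TABLES
        # Expected: CONFIG_IP_MULTIPLE_TABLES=y
        # If it is absent or =n, enable "IP: advanced router" -> "IP: policy routing"
        # in the kernel config, rebuild, and reboot into the new kernel.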

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using: Zend Framework 1.11.5 (complete MVC) PHP 5.3.6 Apache 2.2.19 CentOS 5.6 i686 virtuozzo on vps cPanel WHM 11.30.1 (build 4) Mysql 5.1.56-log Mysqli API 5.1.56 The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, php is giving me random segmentation-faults. [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11 [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php About extensions. When i compile php with "--enable-debug" flag, i have to disable this line: zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so" Otherwise, the server doesn't accept requests and i get a "The connection with the server was reset". It is possible that i have to disable eaccelerator too because of the same reason. I still don't get why apache gets running it some times and some others not: extension="eaccelerator.so" Anyway, after i get httpd running, seg-faults can occurr randomly. If i don't compile php with "--enable-debug" flag, i can get DETERMINISTICALLY a php crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $row = $db->fetchRow("SHOW CREATE TABLE 222AFI"); } } ?> BUT if i compile php with "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many paralell requests for a few seconds to get a crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $tableList = $db->listTables(); foreach ($tableList as $tableName){ $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName)); file_put_contents( DB_DEFINITIONS_PATH . '/' . $tableName . '.sql', $row['Create Table'] . ';' ); } } } ?> Please notice this is the same script, but creating DDL for all tables in database rather than for one. It seems that if php is heavy loaded (with extensions and me doing many paralell requests) it's when i get php to crash. About starting httpd with "-X": i've tried. The thing is, it is already hard to make php crash with --enable-debug. With "-X" option (which only enables one child process) i can't do parallel requests. So i haven't been able to create to proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php My concrete question is, what do i do to get a coredump? root@GWT4 [~]# httpd -V Server version: Apache/2.2.19 (Unix) Server built: Jul 20 2011 19:18:58 Cpanel::Easy::Apache v3.4.2 rev9999 Server's Module Magic Number: 20051115:28 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 32-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7 year old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch: "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there have been reports that this solution is prohibitively expensive, labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn’t find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." - Steve. Since the costs of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash). Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack Disconnect the laptop battery for an extended period of time. Doesn't work, laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for awhile. The poor thing. Clear CMOS by setting the proper jumper setting or by removing the CMOS (RTC) battery, or by short circuiting a (hidden?) jumper that looks like a pair of solder marks -various sources for various Satellite models: Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020 Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar Satellite A215: "Short the B500 solder pads on the system board." -fixya Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper. 
Here are pictures (click for full resolution) : Where is the jumper (or solder pads) to short circuit and wipe the CMOS on this board? Possibly related questions: Remove Toshiba laptop BIOS password? Password Problem Toshiba Satellite..

    Read the article

  • SharePoint Upgrade Global Nav Quirks?

    - by elorg
    We're working on a parallel install/upgrade of SharePoint. The client has WSS 2003 on some old hardware. We've installed MOSS 2007 in a medium farm environment. They want to use this as an opportunity to not just upgrade and use the new features, but to also better organize their content and categorize between different site collections. To accommodate, we've created a few site collections per their specifications in the new environment, and when we ran an upgrade test run we ran into a few .. quirks. We made a backup of the old content database, copied it over to the new environment and restored it as a new database. Created a new web app and attached the migrated data to do an in-place upgrade in this new "test" area. This seems pretty standard - no issues. We have to do a little bit of cleanup (e.g. reset pages to site definition, reset themes, and inherit the global nav / top link bar, etc.). Once that's done, we're using stsadm export/import to copy the individual sites over to their ultimate destinations in the various different site collections. So far so good. But then we ran into one particular site that has a link to an .aspx page in the top link bar in WSS 2003 that's not behaving properly after the upgrade. It's just a link to a "dashboard" .aspx page in a doc library - nothing special. It doesn't seem to matter what we do, or what order we do it (in the "test" web app, in the destination web app, or both). In the end, this ONE site will not allow us to create a link/tab in the global nav. It can inherit the global nav just fine. We can break the inheritance just fine. But if we want to manually add a link in the top link bar - we go through the steps that I've done 1,000x before and click OK - and the tab never appears. It doesn't matter if it's to a page within the site itself, or to Google. We can migrate over other sites into the same site collection and add a tab without issue. If we migrate this quirky site over to another site collection we run into the same issue. Yet, in the "test" web app that we're using to upgrade the data we can add a tab? If we add the tab before we export/import to the final destination, the tab is lost during the process? Has anyone run into anything like this? Any ideas? I've tried every combination of everything that I can think of and nothing works. Unless we can figure out how to get this to work, we're going to just add this tab to the global nav for the entire site collection and inherit it for this site (but that adds the link to all of the site that will inherit, which is both a pro & con for them).

    Read the article

  • what is the best mid/high-end class audio/music creation audio sound card?

    - by Chris
    Hello, I have a computer shop myself, and I repair computers. But one of the things I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the trees through the forest :-D (it's a Dutch expression). At one moment I thought the M-Audio - Delta 1010LT would be a good PCIe card; later on I read that this card was released years ago (but that could be false information). Also, any personal experience would be great, but not necessary. I have searched a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350. I know there are audio cards from $500 - $1850; this is just too expensive. The following specs are crucial: ASIO, MIDI, mic in, minimal 5.1, 7.1 recommended. It's not for airplay, but just to compose music at home, using Ableton and a MIDI keyboard. 1. M-Audio - Delta 1010LT: 8 x 8 analog I/O 2 mic preamps or line inputs S/PDIF digital I/O (coaxial) with 2-channel PCM SCMS copy protection control digital I/O supports surround-encoded AC-3 and DTS pass-through 1 x 1 MIDI I/O directly drive up to 7.1 surround (bass management software included) software controlled 36-bit internal DSP digital mixing/routing +4dBu/-10dBV operation individually switched in software word clock I/O for sample accurate device synchronization 2. RME HDSP 9632: * Stereo analog input and output, balanced, 24-bit/192 kHz, > 110 dB SNR * Optional expansion boards with 4 balanced inputs and outputs each * All analog I/Os fully 192 kHz capable, so no reduction of the channel count * 1 x ADAT digital in/out, 96 kHz capable (S/MUX) * 1 x SPDIF digital in/out, 192 kHz capable * 1 x breakout cable for coaxial SPDIF operation * So up to 16 inputs and outputs usable at the same time * 1 x stereo headphone output, parallel to the analog output but with its own level adjustment * 1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable * DIGICheck, RME's unique metering and analysis tool with spectral analyser, professional 2/8/16-channel level meters, vector audio scope and various other analysis functions * HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware * TotalMix: 512-channel mixer with 40-bit internal resolution 3. EMU 1212M (1212 M) PCIe: * Top-quality 24-bit/192kHz converters. * Hardware-driven effects. * DSP zero-latency hardware mixing and monitoring. * Analog and digital I/O plus MIDI. * EMU Production Tools Software Bundle - Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition **EMU 1212M PCI-e inputs/outputs:** * 2 balanced jack inputs. * 2 balanced jack outputs. * 24-bit/192kHz ADAT I/O. * 24-bit/192kHz coaxial S/PDIF I/O switchable to AES/EBU. * MIDI I/O. 4.
M-Audio Audiophile 192: - Up to 24-bit/192kHz audio - 2 balanced analog inputs (1/4” TRS) - 2 balanced analog outputs (1/4” TRS) - S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM - SCMS copy protection control - Digital I/O supports surround-encoded AC-3 and DTS pass-through - Direct hardware input monitoring via separate balanced 1/4” TRS monitor outputs - Software routing of inputs and outputs - Digital I/O can be routed to/from external effects - 16-channel MIDI I/O - ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications - 64-bit driver support for Windows - PCI 2.2 compatibility - Apple G5 compatible - Incompatible exceptions - Includes Ableton Live Lite music production software, so you can make music right away - Works with other Delta cards Technical Specifications: - Compatibility - ASIO - WDM - GSIF 2 - Core Audio

    Read the article

  • Why are USB 2.0 devices crashing my system?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family - which everyone loves.

    It has a generic baby-ATX motherboard with no identifying markings. CPU-Z identifies the motherboard model as VT8601 but doesn't provide me with any manufacturer name. There's one stamp on the motherboard that says "Rev.B". On board it has 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for KB and mouse, parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU housing seating a 1.3GHz Intel Celeron CPU, 3 x PCI, 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot, due to the physical layout of the board. It's got 768Mb PC133 SDRAM - 1 x 512Mb and 1 x 256Mb installed - as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 x external + 1 x internal USB connectors. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post this later if it proves to be helpful. The motherboard is running a version of Award BIOS; I don't have the version number to hand but can again post this later if it would be helpful.

    It has an 80Gb Western Digital hard drive freshly formatted and built with Windows XP Professional with Service Pack 3 and all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 crashes the machine as soon as it starts up). It's got a Daytek MV150 15" touch screen running through the VGA and COM1 with the most current drivers from the Daytek website and the most current version of the ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: If I hook any USB 2.0 devices to this machine, it hangs the whole machine and I have to hard boot it to get it back up.

    Workarounds found: I can plug the same devices into the on-board USB 1.0 connectors and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports the machine hangs within a couple of seconds of recognizing the devices were connected.

    Attempted solutions: I've tried disabling all the on-board devices that I'm not using - such as the on-board LAN, the second COM port, the AGP connector etc. - through the BIOS in a (perhaps misguided or futile) attempt to reduce the power consumption... I don't think it had any effect, but it didn't do any harm. I was wondering if the PSU wattage just isn't enough to drive the USB 2.0 devices; I've seen this suggested but haven't found any confirmation that this could really be an issue - nor have I found a way to work around it, if indeed it is one. Any ideas? The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots... although I'm not sure if this will make any difference. I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself.

    Other thoughts: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?

    Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store! Now, some people get all excited about the IDE and its new features, or about changes to WPF and Silverlight, and yes, those are all very fine and grand. But me, I get all excited about things that tend to affect my life on the back side of development. That's why, when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov's dog at the dinner bell.

    They seem so simple, really, that one could easily overlook them. Essentially they are implementations of containers (many mirror the generic collections, others are new) that have been optimized with very efficient, limited, or even no locking but are still completely thread safe - and I just had to see what kind of an improvement that would translate into.

    Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing. Of course, they also rolled out a whole parallel development framework which I won't get into in this post but will cover bits and pieces of as time goes by.

    This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism. So I set about running a processing test with a series of producers and consumers that would process either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

    Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer. The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

        internal class Producer
        {
            public int Iterations { get; set; }
            public Action<string> ProduceDelegate { get; set; }

            public void Produce()
            {
                for (int i = 0; i < Iterations; i++)
                {
                    ProduceDelegate("Hello");
                }
            }
        }

    Then likewise, I created a Consumer that takes a Func<string> that reads from whichever queue we're testing and returns either the string if data exists or null if not. If an item doesn't exist, it does a 10 ms wait before testing again. Once all the producers are done and join the main thread, a flag is set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

        internal class Consumer
        {
            public Func<string> ConsumeDelegate { get; set; }
            public bool HaltWhenEmpty { get; set; }

            public void Consume()
            {
                bool processing = true;

                while (processing)
                {
                    string result = ConsumeDelegate();

                    if (result == null)
                    {
                        if (HaltWhenEmpty)
                        {
                            processing = false;
                        }
                        else
                        {
                            Thread.Sleep(TimeSpan.FromMilliseconds(10));
                        }
                    }
                    else
                    {
                        DoWork(); // do something non-trivial so consumers lag behind a bit
                    }
                }
            }
        }

    Okay, now that we've done that, we can launch varying numbers of threads using lambdas for each different method of production/consumption.
    First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

        // lambda for putting to typical Queue with locking...
        Action<string> productionDelegate = s =>
        {
            lock (_mutex)
            {
                _mutexQueue.Enqueue(s);
            }
        };

        // and lambda for typical getting from Queue with locking...
        Func<string> consumptionDelegate = () =>
        {
            lock (_mutex)
            {
                if (_mutexQueue.Count > 0)
                {
                    return _mutexQueue.Dequeue();
                }
            }
            return null;
        };

    Nothing new or interesting here, just typical locks on an internal object instance. Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

        // lambda for putting to a ConcurrentQueue, notice it needs no locking!
        Action<string> productionDelegate = s =>
        {
            _concurrentQueue.Enqueue(s);
        };

        // lambda for getting from a ConcurrentQueue, once again, no locking required.
        Func<string> consumptionDelegate = () =>
        {
            string s;
            return _concurrentQueue.TryDequeue(out s) ? s : null;
        };

    So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results. Basically I'm timing from the point where all threads start and begin producing/consuming to the point where all threads rejoin. I won't bore you with the test code; basically it just creates the producers and consumers, launches each in its own thread, then waits for them all to rejoin (a rough sketch of what such a harness might look like follows the results below). The following are the timings from the start of all threads to the Join() on all threads completing. The producers create 10,000,000 items evenly divided between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

    These are the results in milliseconds from the ordinary Queue with locking:

        Consumers   Producers   Run 1    Run 2    Run 3    Avg Time (ms)
        ---------   ---------   ------   ------   ------   -------------
        1           1           4284     5153     4226     4554.33
        10          10          4044     3831     5010     4295.00
        100         100         5497     5378     5612     5495.67
        1000        1000        24234    25409    27160    25601.00

    And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

        Consumers   Producers   Run 1    Run 2    Run 3    Avg Time (ms)
        ---------   ---------   ------   ------   ------   -------------
        1           1           3647     3643     3718     3669.33
        10          10          2311     2136     2142     2196.33
        100         100         2480     2416     2190     2362.00
        1000        1000        7289     6897     7061     7082.33

    Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly.

    I love the new concurrent collections. They look so much simpler without littering your code with the locking logic, and they perform much better. All in all, a great new toy to add to your arsenal of multi-threaded processing!
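    Since the post omits the harness itself, here is a minimal sketch (my own reconstruction, not the author's code) of how such a harness might launch producer and consumer threads against the ConcurrentQueue and time the run; the thread counts and item total are simply the figures quoted above:

        // Minimal sketch of a timing harness for the ConcurrentQueue scenario described above.
        using System;
        using System.Collections.Concurrent;
        using System.Diagnostics;
        using System.Threading;

        internal static class QueueTimingHarness
        {
            private static readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
            private static volatile bool _halt;   // set once all producers have finished

            private static void Main()
            {
                const int producerCount = 10;
                const int consumerCount = 10;
                const int totalItems = 10000000;

                var producers = new Thread[producerCount];
                var consumers = new Thread[consumerCount];

                for (int i = 0; i < producerCount; i++)
                {
                    producers[i] = new Thread(() =>
                    {
                        // each producer enqueues an even share of the items
                        for (int n = 0; n < totalItems / producerCount; n++)
                        {
                            _queue.Enqueue("Hello");
                        }
                    });
                }

                for (int i = 0; i < consumerCount; i++)
                {
                    consumers[i] = new Thread(() =>
                    {
                        string item;
                        while (true)
                        {
                            if (_queue.TryDequeue(out item))
                            {
                                // the real test calls DoWork() here so consumers lag slightly behind
                            }
                            else if (_halt)
                            {
                                break;            // producers done and queue drained
                            }
                            else
                            {
                                Thread.Sleep(10); // queue momentarily empty, back off
                            }
                        }
                    });
                }

                Stopwatch sw = Stopwatch.StartNew();
                foreach (Thread t in producers) t.Start();
                foreach (Thread t in consumers) t.Start();

                foreach (Thread t in producers) t.Join();   // wait for all production to finish
                _halt = true;                               // tell consumers to stop once empty
                foreach (Thread t in consumers) t.Join();
                sw.Stop();

                Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    The author's actual harness also parameterizes the produce/consume delegates, so the same code drives both the locked Queue run and the ConcurrentQueue run.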

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements
    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This shortens the time needed to get the product set up, configured, and operational.

    Lifecycle Automation Enhancements
    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, again shortening the time needed to get set up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his or her own mappings. Users can use public mappings, but cannot edit or change their settings.
    - Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to use more criteria in the advanced package search (that is, Public, Last Updated by, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply, which supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic checking for superseded patches allows users to easily add them to the Patch Run, making Patch Runs more comprehensive and correct and removing many manual, laborious tasks, which frees up Apps DBAs for higher value-added work.
    - Automatic Primary Node Identification: Users can now specify which node is the "primary node" (that is, which node hosts the shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for an industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, including the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; there is no need to redefine the Concurrent Managers.

    User Experience Management Enhancements
    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques.
    This latest release of the management suite includes numerous improvements in real user monitoring support.
    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default), plus KPI numerator and denominator information, which can now be viewed and made available for export.
    - Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note that this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files but within the Reporter database (an alternative database can be configured for its storage), and access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items, which can yield a significant improvement for dashboards containing large numbers of items. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). Querying the all-sessions group is also faster.

    Technical Prerequisites, Download and Installation Instructions
    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article
