Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • JavaScript: Given an offset and substring length in an HTML string, what is the parent node?

    - by Bungle
    My current project requires locating an array of strings within an element's text content, then wrapping those matching strings in <a> elements using JavaScript (requirements simplified here for clarity). I need to avoid jQuery if at all possible, or at least avoid including the full library. For example, given this block of HTML:

        <div>
          <p>This is a paragraph of text used as an example in this Stack Overflow question.</p>
        </div>

    and this array of strings to match:

        ['paragraph', 'example']

    I would need to arrive at this:

        <div>
          <p>This is a <a href="http://www.example.com/">paragraph</a> of text used as an <a href="http://www.example.com/">example</a> in this Stack Overflow question.</p>
        </div>

    I've arrived at a solution to this by using innerHTML and some string manipulation: using the offsets (via indexOf()) and lengths of the strings in the array, I break the HTML string apart at the appropriate character offsets and insert <a href="http://www.example.com/"> and </a> tags where needed.

    However, an additional requirement has me stumped. I'm not allowed to wrap any matched string in an <a> element if it's already inside one, or if it's a descendant of a heading element (<h1> to <h6>). So, given the same array of strings above and this block of HTML (the term matching has to be case-insensitive, by the way):

        <div>
          <h1>Example</h1>
          <p>This is a <a href="http://www.example.com/">paragraph of text</a> used as an example in this Stack Overflow question.</p>
        </div>

    I would need to disregard both the occurrence of "Example" in the <h1> element and the "paragraph" in <a href="http://www.example.com/">paragraph of text</a>. This suggests to me that I have to determine which node each matched string is in, and then traverse its ancestors until I hit <body>, checking whether I encounter an <a> or heading node along the way.

    Firstly, does this sound reasonable? Is there a simpler or more obvious approach that I've failed to consider? Regular expressions or another string-based comparison to find bounding tags don't seem robust enough; I'm thinking of issues like self-closing elements, irregularly nested tags, etc.

    Secondly, is this possible, and if so, how would I approach it?
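
    A minimal sketch of the ancestor-check idea, written here in Java against org.w3c.dom for concreteness, since the same walk applies to the browser DOM; the isExcluded() helper and the sample markup are mine, not from the original post:

        import javax.xml.parsers.DocumentBuilderFactory;
        import java.io.ByteArrayInputStream;
        import org.w3c.dom.*;

        public class AncestorCheck {
            // Returns true if the node sits inside an <a> or <h1>..<h6> ancestor.
            static boolean isExcluded(Node node) {
                for (Node p = node.getParentNode(); p != null; p = p.getParentNode()) {
                    if (p.getNodeType() != Node.ELEMENT_NODE) continue;
                    String tag = p.getNodeName().toLowerCase();
                    if (tag.equals("a") || tag.matches("h[1-6]")) return true;
                    if (tag.equals("body")) break; // no need to climb past <body>
                }
                return false;
            }

            public static void main(String[] args) throws Exception {
                String html = "<body><h1>Example</h1><p>This is a <a href='#'>paragraph of text</a>"
                            + " used as an example.</p></body>";
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
                // Walk all text nodes and report which ones are eligible for wrapping.
                NodeList all = doc.getElementsByTagName("*");
                for (int i = 0; i < all.getLength(); i++) {
                    for (Node c = all.item(i).getFirstChild(); c != null; c = c.getNextSibling()) {
                        if (c.getNodeType() == Node.TEXT_NODE && c.getTextContent().trim().length() > 0) {
                            System.out.println((isExcluded(c) ? "skip: " : "wrap: ") + c.getTextContent().trim());
                        }
                    }
                }
            }
        }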

    Read the article

  • "Simple" sort a nested array using array_multisort or native PHP functions instead of my own foreach loop

    - by Ana Ban
    I have the following array of days of the week, with each day having hours of the day (the whole array represents the schedule of a part-time employee):

        Array
        (
            [7] => Array ( [0] => 15 [1] => 14 [2] => 13 [3] => 11 [4] => 12 [5] => 10 )
            [1] => Array ( [0] => 10 [1] => 13 [2] => 12 )
            [6] => Array ( [0] => 14 )
            [3] => Array ( [0] => 4 [1] => 5 [2] => 6 )
        )

    and I simply need to:

    1. sort each sub-array (2nd dimension) ascending; there is no need to maintain the numeric keys, and the values are integers
    2. sort the 1st dimension ascending, maintaining its numeric, integer keys

    i.e.:

        Array
        (
            [1] => Array ( [0] => 10 [1] => 12 [2] => 13 )
            [3] => Array ( [0] => 4 [1] => 5 [2] => 6 )
            [6] => Array ( [0] => 14 )
            [7] => Array ( [0] => 10 [1] => 11 [2] => 12 [3] => 13 [4] => 14 [5] => 15 )
        )

    Additional info:

    - only the keys of the 1st dimension and the values of the 2nd dimension (and of course their association) are meaningful to my use-case
    - the 1st dimension can have at most 7 values, ranging from 1-7 (days of the week), and will have at least 1 value (1 day)
    - the 2nd dimension can have at most 24 values, ranging from 0-23 (hours of each day), and will have at least 1 value (1 hour per day)

    I know I can do this with a foreach over the whole ksort'ed array, sorting each 2nd-dimension array:

        ksort($sched);
        foreach ($sched as &$array)
            sort($array);
        unset($array);

    but I was hoping I could achieve this with native PHP array function(s) instead. My search led me to try

        array_multisort(array_values($array), array_keys($array), $array)

    but I just can't make it work.

    Read the article

  • Correct way of using/testing event service in Eclipse E4 RCP

    - by Thorsten Beck
    Allow me to pose two coupled questions that might boil down to one about good application design ;-)

    1. What is the best practice for using event-based communication in an e4 RCP application?
    2. How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker?

    Let's be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker into a class that dispatches events:

        @Inject
        static IEventBroker broker;

        private void sendEvent() {
            broker.post(MyEventConstants.SOME_EVENT, payload);
        }

    On the receiver side, I have a method like:

        @Inject
        @Optional
        private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload)

    Now the questions:

    1. In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context); This approach works, but I find myself passing a lot of context around to wherever I use the event service. Is this really the correct approach to event-based communication across an e4 application?

    2. How can I easily write JUnit tests for a class that uses the event service (either as a sender or receiver)? Obviously, none of the above annotations work in isolation since there is no context available. I understand everyone's convinced that dependency injection simplifies testability, but does this also apply to injecting services like IEventBroker? One article I found describes creating your own IEclipseContext to include the process of DI in tests. I'm not sure whether this could resolve my second issue, and I also hesitate to run all my tests as JUnit plug-in tests, as it appears impractical to fire up the PDE for each unit test. Maybe I just misunderstand the approach. Another article speaks about "simply mocking IEventBroker". Yes, that would be great! Unfortunately, I couldn't find any information on how this can be achieved.

    All this makes me wonder whether I am still on a "good path" or if this is already a case of bad design. And if so, how would you go about redesigning? Move all event-related actions to dedicated event sender/receiver classes or a dedicated plugin?
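
    On the mocking point, a minimal sketch of what "simply mocking IEventBroker" could look like, assuming Mockito is on the test classpath; MyEventSender is a hypothetical class under test that takes the broker through its constructor precisely so the test needs no IEclipseContext:

        import static org.mockito.Mockito.*;

        import org.eclipse.e4.core.services.events.IEventBroker;
        import org.junit.Test;

        public class MyEventSenderTest {

            static final String SOME_EVENT = "some/event/topic"; // stand-in for MyEventConstants.SOME_EVENT

            // Hypothetical class under test: takes its collaborator explicitly,
            // so no IEclipseContext is needed to construct it in a plain JUnit test.
            static class MyEventSender {
                private final IEventBroker broker;
                MyEventSender(IEventBroker broker) { this.broker = broker; }
                void send(Object payload) { broker.post(SOME_EVENT, payload); }
            }

            @Test
            public void postsEventWithPayload() {
                IEventBroker broker = mock(IEventBroker.class);
                new MyEventSender(broker).send("payload");
                verify(broker).post(SOME_EVENT, "payload"); // assert the interaction, no PDE required
            }
        }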

    Read the article

  • ROW_NUMBER() VS. DISTINCT

    - by ramadan2050
    I have a problem using ROW_NUMBER() together with DISTINCT in the following query. There are two scenarios:

    1. Running the query as-is returns, for example, 400 records.
    2. Uncommenting the line marked --Uncomment1-- returns 700 records: some records (but not all) are duplicated.

    What I want is either to solve this problem or to find another way to show a row counter beside each row, so that I can filter with a [WHERE RowNum BETWEEN 1 AND 30] clause (the line marked --Uncomment2--). If I put the whole query into a table and then filter that, it works, but it is still very slow. I would appreciate any feedback. Thanks in advance.

        SELECT *
        FROM (SELECT DISTINCT
                     CRSTask.ID AS TaskID,
                     CRSTask.WFLTaskID,
                     --Uncomment1-- ROW_NUMBER() OVER (ORDER BY CRSTask.CreateDate ASC) AS RowNum,
                     CRSTask.WFLStatus AS Task_WFLStatus,
                     CRSTask.Name AS StepName,
                     CRSTask.ModifiedDate AS Task_ModifyDate,
                     CRSTask.SendingDate AS Task_SendingDate,
                     CRSTask.ReceiveDate AS Task_ReceiveDate,
                     CRSTask.CreateDate AS Task_CreateDate,
                     CRS_Task_Recipient_Vw.Task_CurrentSenderName,
                     CRS_Task_Recipient_Vw.Task_SenderName,
                     CRS_INFO.ID AS CRS_ID,
                     CRS_INFO.ReferenceNumber,
                     CRS_INFO.CRSBeneficiaries,
                     CRS_INFO.BarCodeNumber,
                     ISNULL(dbo.CRS_FNC_GetTaskReceiver(CRSTask.ID), '') + ' ' + ISNULL(CRS_Organization.ArName, '') AS OrgName,
                     CRS_Info.IncidentID,
                     COALESCE(CRS_Subject.ArSubject, '??? ????') AS ArSubject,
                     COALESCE(CRS_INFO.Subject, 'Blank Subject') AS CRS_Subject,
                     CRS_INFO.Mode,
                     CRS_Task_Recipient_Vw.ReceiverID,
                     CRS_Task_Recipient_Vw.ReceiverType,
                     CRS_Task_Recipient_Vw.CC,
                     Temp_Portal_Users_View.ID AS CRS_LockedByID,
                     Temp_Portal_Users_View.ArabicName AS CRS_LockedByName,
                     CRSDraft.ID AS DraftID,
                     CRSDraft.Type AS DraftType,
                     CASE WHEN CRS_Folder = 1 THEN Task_SenderName
                          WHEN CRS_Folder = 2 THEN Task_SenderName
                          WHEN CRS_Folder = 3 THEN Task_CurrentSenderName
                     END AS SenderName,
                     CRS_Task_Folder_Vw.CRS_Folder,
                     CRS_INFO.Status,
                     CRS_INFO.CRS_Type,
                     CRS_Type.arName AS CRS_Type_Name
              FROM CRS_Task_Folder_Vw
              LEFT OUTER JOIN CRSTask ON CRSTask.ID = CRS_Task_Folder_Vw.TaskID
              LEFT OUTER JOIN CRS_INFO ON CRS_INFO.ID = CRSTask.CRSID
              LEFT OUTER JOIN CRS_Subject ON COALESCE(SUBSTRING(CRS_INFO.Subject, CHARINDEX('_', CRS_INFO.Subject) + 1, LEN(CRS_INFO.Subject)), 'Blank Subject') = CRS_Subject.ID
              LEFT OUTER JOIN CRSInfoAttribute ON CRS_INFO.ID = CRSInfoAttribute.ID
              LEFT OUTER JOIN CRS_Organization ON CRS_Organization.ID = CRSInfoAttribute.SourceID
              LEFT OUTER JOIN CRS_Type ON CRS_INFO.CRS_Type = CRS_Type.ID
              LEFT OUTER JOIN CRS_Way ON CRS_INFO.CRS_Send_Way = CRS_Way.ID
              LEFT OUTER JOIN CRS_Priority ON CRS_INFO.CRS_Priority_ID = CRS_Priority.ID
              LEFT OUTER JOIN CRS_SecurityLevel ON CRS_INFO.SecurityLevelID = CRS_SecurityLevel.ID
              LEFT OUTER JOIN Portal_Users_View ON Portal_Users_View.ID = CRS_INFO.CRS_Initiator
              LEFT OUTER JOIN AD_DOC_TBL ON CRS_INFO.DocumentID = AD_DOC_TBL.ID
              LEFT OUTER JOIN CRSTask AS Temp_CRSTask ON CRSTask.ParentTask = Temp_CRSTask.ID
              LEFT OUTER JOIN Portal_Users_View AS Temp_Portal_Users_View ON Temp_Portal_Users_View.ID = AD_DOC_TBL.Lock_User_ID
              LEFT OUTER JOIN Portal_Users_View AS Temp1_Portal_Users_View ON Temp1_Portal_Users_View.ID = CRS_INFO.ClosedBy
              LEFT OUTER JOIN CRSDraft ON CRSTask.ID = CRSDraft.TaskID
              LEFT OUTER JOIN CRS_Task_Recipient_Vw ON CRSTask.ID = CRS_Task_Recipient_Vw.TaskID
              --LEFT OUTER JOIN CRSTaskReceiverUsers ON CRSTask.ID = CRSTaskReceiverUsers.CRSTaskID AND CRS_Task_Recipient_Vw.ReceiverID = CRSTaskReceiverUsers.ReceiverID
              LEFT OUTER JOIN CRSTaskReceiverUserProfile ON CRSTask.ID = CRSTaskReceiverUserProfile.TaskID
              WHERE Crs_Info.SUBJECT <> 'Blank Subject'
                AND (CRS_INFO.Subject NOT LIKE '%null%')
                AND CRS_Info.IsDeleted <> 1
                /* AND CRSTask.WFLStatus <> 6 AND CRSTask.WFLStatus <> 8 */
                AND ((CRS_Task_Recipient_Vw.ReceiverID IN (1, 29) AND CRS_Task_Recipient_Vw.ReceiverType IN (1, 3, 4)))
                AND 1 = 1
             ) Codes
        --Uncomment2-- WHERE Codes.RowNum BETWEEN 1 AND 30
        ORDER BY Codes.Task_CreateDate ASC

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search. This binary search allows duplicates, and I have to get all the index values that match my target. I've thought about doing it this way if a duplicate is found to be in the middle. Target = G; say there is the following sorted array:

        B, D, E, F, G, G, G, G, G, G, Q, R, S, S, Z

    I get the mid, which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get them all would be to check whether mid + 1 is the same value. If it is, keep moving mid to the right until it isn't. So, it would turn out like this:

        B, D, E, F, G, G, G, G, G, G (MID), Q, R, S, S, Z

    Then I would count from 0 to mid, counting up the target matches and storing their indexes in an array, and return it. That is how I was thinking of doing it if the mid is a match the first time and the duplicates happen to lie on both sides of it. Now, what if it isn't a match the first time? For example:

        B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z

    Then as normal, it would grab the mid, then call binary search from first to mid-1:

        B, D, E, F, G, G, J

    Since G is greater than F, call binary search from mid+1 to last:

        G, G, J

    The mid is a match. Since it is a match, search from mid+1 to last through a for loop, count up the number of matches, store the match indexes in an array, and return it.

    Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm, and I would welcome any hints/suggestions. The only problem I see is that if all the elements were my target, I would basically be searching the whole array; but then again, if that were the case I would still need to get all the duplicates. Thank you. BTW, my instructor said we cannot use Vectors, Hash or anything else. He wants us to stay at the array level and get used to using and manipulating arrays.
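
    For what it's worth, a common alternative to scanning outward from a matching mid is to run two binary searches, one for the first match and one for just past the last, so the whole range of duplicates is located in O(log n) before any indexes are collected. A minimal sketch (method names are mine):

        import java.util.Arrays;

        public class DuplicateSearch {
            // Index of the first element >= target (a "lower bound").
            static int lowerBound(char[] a, char target) {
                int lo = 0, hi = a.length;
                while (lo < hi) {
                    int mid = (lo + hi) >>> 1;
                    if (a[mid] < target) lo = mid + 1; else hi = mid;
                }
                return lo;
            }

            // Index of the first element > target (an "upper bound").
            static int upperBound(char[] a, char target) {
                int lo = 0, hi = a.length;
                while (lo < hi) {
                    int mid = (lo + hi) >>> 1;
                    if (a[mid] <= target) lo = mid + 1; else hi = mid;
                }
                return lo;
            }

            public static void main(String[] args) {
                char[] a = {'B','D','E','F','G','G','G','G','G','G','Q','R','S','S','Z'};
                int first = lowerBound(a, 'G'), last = upperBound(a, 'G'); // half-open range [first, last)
                int[] matches = new int[last - first];
                for (int i = first; i < last; i++) matches[i - first] = i;
                System.out.println(Arrays.toString(matches)); // [4, 5, 6, 7, 8, 9]
            }
        }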

    Read the article

  • AES decryption in Java - IvParameterSpec too big

    - by user1277269
    I'm going to decrypt a plaintext with two keys. As you can see in the picture, we have one encrypted file which contains KEY1 (128 bytes), KEYIV (128 bytes), and KEY2 (128 bytes, not used in this case), followed by the ciphertext. The error I get is: "Exception in thread "main" java.security.InvalidAlgorithmParameterException: Wrong IV length: must be 16 bytes long. but it is 64 bytes."

    Picture: http://i264.photobucket.com/albums/ii200/XeniuM05/bg_zps0a523659.png

        public class AES {
            public static void main(String[] args) throws Exception {
                byte[] encKey1 = new byte[128];
                byte[] EncIV = new byte[256];
                byte[] UnEncIV = new byte[128];
                byte[] unCrypKey = new byte[128];
                byte[] unCrypText = new byte[1424];

                File f = new File("C://ftp//ciphertext.enc");
                FileInputStream fis = new FileInputStream(f);
                byte[] EncText = new byte[(int) f.length()];
                fis.read(encKey1);
                fis.read(EncIV);
                fis.read(EncText);
                EncIV = Arrays.copyOfRange(EncIV, 128, 256);
                EncText = Arrays.copyOfRange(EncText, 384, EncText.length);
                System.out.println(EncText.length);

                KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                char[] password = "lab1StorePass".toCharArray();
                java.io.FileInputStream fos = new java.io.FileInputStream("C://ftp//lab1Store");
                ks.load(fos, password);
                char[] passwordkey1 = "lab1KeyPass".toCharArray();
                PrivateKey Lab1EncKey = (PrivateKey) ks.getKey("lab1EncKeys", passwordkey1);

                Cipher rsaDec = Cipher.getInstance("RSA");     // set cipher to RSA decryption
                rsaDec.init(Cipher.DECRYPT_MODE, Lab1EncKey);  // initialize cipher with lab1 key
                unCrypKey = rsaDec.doFinal(encKey1);           // decrypts first key
                UnEncIV = rsaDec.doFinal(EncIV);               // decrypts encrypted IV byte array -- OBS! Error: this is 64 bytes big, we want 16?
                System.out.println("lab1key " + unCrypKey + " IV " + UnEncIV);

                // ------- CIPHERTEXT decryption ---------
                Cipher AESDec = Cipher.getInstance("AES/CBC/PKCS5Padding");
                // convert decrypted byte arrays to actual keys
                SecretKeySpec unCrypKey1 = new SecretKeySpec(unCrypKey, "AES");
                IvParameterSpec ivSpec = new IvParameterSpec(UnEncIV);
                AESDec.init(Cipher.DECRYPT_MODE, unCrypKey1, ivSpec);
                unCrypText = AESDec.doFinal(EncText);

                // Convert decrypted cipher byte array to string
                String deCryptedString = new String(unCrypKey);
                System.out.println(deCryptedString);
            }
        }
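
    Assuming the file layout really is key1 (128 bytes), then the encrypted IV (128 bytes), then key2 (128 bytes), then ciphertext, a hedged sketch of reading the three blocks at those offsets and trimming the RSA-decrypted IV down to the 16 bytes AES/CBC expects; both the offsets and the "IV is in the first 16 bytes of the decrypted block" trim are assumptions, not something the post confirms:

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import java.io.DataInputStream;
        import java.util.Arrays;

        class IvFix {
            // rsaDec must already be initialized in DECRYPT_MODE with the private key.
            static IvParameterSpec readIv(DataInputStream in, Cipher rsaDec) throws Exception {
                byte[] encKey1 = new byte[128], encIv = new byte[128], encKey2 = new byte[128];
                in.readFully(encKey1); // bytes 0..127:   RSA-encrypted AES key 1
                in.readFully(encIv);   // bytes 128..255: RSA-encrypted IV
                in.readFully(encKey2); // bytes 256..383: RSA-encrypted key 2 (unused here)
                byte[] ivBytes = rsaDec.doFinal(encIv);
                // AES/CBC needs exactly one block (16 bytes) of IV -- assumption: it is the prefix.
                return new IvParameterSpec(Arrays.copyOf(ivBytes, 16));
            }
        }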

    Read the article

  • Fetching rows from a MySQL table and displaying them as jGrowl notifications

    - by jeansymolanza
    Hey guys, I am having problems implementing my PHP/MySQL code into jGrowl. If you are familiar with jGrowl, you will know it delivers notifications like Growl does for OS X. I am trying to get it to read all the records from my table, but at the moment it only displays one record as a notification, looping through it 4 times. Another problem is that if I have 5 rows in the table, then jGrowl will only display 4 notifications. How do I get it to show all the records in the table as notifications, displaying the total number of records (5) and accounting for the missing one? Thanking you guys in advance... God bless.

        <script type="text/javascript">
        // In case you don't have firebug...
        if (!window.console || !console.firebug) {
            var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml",
                         "group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"];
            window.console = {};
            for (var i = 0; i < names.length; ++i)
                window.console[names[i]] = function() {};
        }

        (function($){
            $(document).ready(function(){
                // This specifies how many messages can be pooled out at any given time.
                // If there are more notifications raised than the pool, the others are
                // placed into a queue and rendered after the others have disappeared.
                $.jGrowl.defaults.pool = 5;

                var i = 1;
                var y = 1;
                setInterval( function() {
                    if ( i < <?php echo $totalRows_comment; ?> ) {
                        <?php echo '$.jGrowl("'.$row_comment['comment'].'",'; ?> {
                            sticky: true,
                            log: function() {
                                console.log("Creating message " + i + "...");
                            },
                            beforeOpen: function() {
                                console.log("Rendering message " + y + "...");
                                y++;
                            }
                        });
                    }
                    i++;
                } , 1000 );
            });
        })(jQuery);
        </script>
        <p> </span> <p>

    Read the article

  • how to implement a "soft barrier" in multithreaded c++

    - by Jason
    I have some multithreaded C++ code with the following structure:

        do_thread_specific_work();
        update_shared_variables();
        //checkpoint A
        do_thread_specific_work_not_modifying_shared_variables();
        //checkpoint B
        do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();

    What follows checkpoint B is work that could have started as soon as all threads had reached checkpoint A, hence my notion of a "soft barrier". Typically, multithreading libraries only provide "hard barriers", in which all threads must reach some point before any can continue. Obviously a hard barrier could be used at checkpoint B. Using a soft barrier can lead to better execution time, especially since the work between checkpoints A and B may not be load-balanced between the threads (i.e. one slow thread that has reached checkpoint A but not B could be causing all the others to wait at a barrier just before checkpoint B).

    I've tried using atomics to synchronize things, and I know with 100% certainty that it is NOT guaranteed to work. For example, using OpenMP syntax, before the parallel section starts with:

        shared_thread_counter = num_threads;  //known at compile time
        #pragma omp flush

    Then at checkpoint A:

        #pragma omp atomic
        shared_thread_counter--;

    Then at checkpoint B (using polling):

        #pragma omp flush
        while (shared_thread_counter > 0) {
            usleep(1);  //can be removed, but better to limit memory bandwidth
            #pragma omp flush
        }

    I've designed some experiments in which I use an atomic to indicate that some operation before it is finished. The experiments would work with 2 threads most of the time, but consistently fail when I have lots of threads (like 20 or 30). I suspect this is because of the caching structure of modern CPUs: even if one thread updates some other value before doing the atomic decrement, it is not guaranteed to be read by another thread in that order. Consider the case where the other value is a cache miss and the atomic decrement is a cache hit.

    So back to my question: how do I CORRECTLY implement this "soft barrier"? Is there any built-in feature that guarantees such functionality? I'd prefer OpenMP, but I'm familiar with most of the other common multithreading libraries. As a workaround right now, I'm using a hard barrier at checkpoint B and I've restructured my code to make the work between checkpoints A and B automatically load-balance between the threads (which has been rather difficult at times). Thanks for any advice/insight :)
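
    As a language-neutral illustration of the same "soft barrier" shape (in C++ terms the corresponding fix would be acquire/release ordering on the counter, e.g. C++11 std::atomic), here is a hedged sketch in Java, whose AtomicInteger provides the needed ordering guarantees; the doWork* methods are placeholders:

        import java.util.concurrent.atomic.AtomicInteger;

        public class SoftBarrier {
            public static void main(String[] args) throws InterruptedException {
                final int numThreads = 8;
                final AtomicInteger remaining = new AtomicInteger(numThreads);
                Thread[] threads = new Thread[numThreads];
                for (int t = 0; t < numThreads; t++) {
                    threads[t] = new Thread(() -> {
                        doWorkUpdatingSharedVariables();
                        remaining.decrementAndGet();        // checkpoint A: publish "I'm done"
                        doWorkNotTouchingSharedVariables(); // proceeds without waiting
                        while (remaining.get() > 0) {       // checkpoint B: wait softly for stragglers
                            Thread.yield();                 // or sleep briefly to limit memory bandwidth
                        }
                        doWorkRequiringAllUpdates();
                    });
                    threads[t].start();
                }
                for (Thread th : threads) th.join();
            }

            static void doWorkUpdatingSharedVariables() {}
            static void doWorkNotTouchingSharedVariables() {}
            static void doWorkRequiringAllUpdates() {}
        }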

    Read the article

  • get_or_create generic relations in Django & python debugging in general

    - by rabidpebble
    I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/

    Everything is good initially:

        >>> bacon.tags.create(tag="fatty")
        <TaggedItem: fatty>
        >>> tag, newtag = bacon.tags.get_or_create(tag="fatty")
        >>> tag
        <TaggedItem: fatty>
        >>> newtag
        False

    But then, the use case that I'm interested in for my app:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome")
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create
            return self.get_query_set().get_or_create(**kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create
            raise e
        IntegrityError: app_taggeditem.content_type_id may not be NULL

    I tried a bunch of random things after looking at other code:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem)
        ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance.

    or:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type)
        InterfaceError: Error binding parameter 3 - probably unsupported type.

    etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python, and I find it really difficult to follow what is going on when things like this break. In the languages I mentioned it's fairly straightforward to figure things like this out: check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost, and looking at the source for get_or_create(self, **kwargs) didn't help either, since there is no method signature and the code appears very generic. The next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme... I seem to be missing some fundamental operating principle here. What is it? How do I resolve issues like this on my own in the future?

    Read the article

  • Changing the itemsSource of a treeview makes its children invisible, when they were already displayed

    - by Marnix Kraus
    I found a strange problem in WPF when using the itemsSource of a treeview. I hope I can make this specific problem clear for you.

    First, a story. There is a treeview. It has a list of treeviewitems as its itemsSource; this list is called _roots. There is another list, called _leafs. As in any treeview, the _roots contain the _leafs in some hierarchical way. For example:

        <TreeviewItem Header="Jungle">
            <TreeviewItem>
                <SpecialTreeviewItem Header="Monkey"/>
                <SpecialTreeviewItem Header="Apple"/>
            </TreeviewItem>
        </TreeviewItem>

    Now I am trying to switch between these two lists as the itemsSource. It seemed to work fine, but it doesn't:

    1. When the Jungle item is un-expanded and I change the itemsSource to _leafs, then change it back to _roots, everything works fine and all items can be expanded and shown.
    2. But when the Jungle item is expanded (and the special items are already visible) and I change to the _leafs itemsSource, then change the itemsSource back to _roots, all special items have disappeared!
    3. Also, when I do the same as case 2 but first un-expand the Jungle item again, the special items also disappear.

    I did a lot of debugging before posting this question here, and have come to the following conclusion: tracing the visibility-changed event shows that visibility is set to false for all items that were already visible (that is, when _roots becomes visible, the special items become invisible, because they were already visible). So IsVisible is false for the items, but Visibility = Visible, which is a bit strange.

    The problem seems to depend on the use of the _roots list, which in a certain way contains the _leafs. When I change the itemsSource between different lists with special items in them, everything works fine. The hierarchical structure of the _roots is what breaks it. I hope this is a complete overview of my problem. Help would be appreciated.

    Read the article

  • Richfaces modal panel and a4j:keepAlive

    - by mykola
    Hello! I've got an unexpected problem with a RichFaces (3.3.2) modal panel. When I try to open it, the browser opens two panels instead of one: one in the center, another in the upper left corner. Besides, no fading happens. I also have three modes: view, edit, new. When I open my panel it should show either "Create new..." or "Edit..." in the header; the proper mode is set in the action before the panel opens, but the header is not rendered at all, even though it should be. This works fine on all other pages I've made, and there are tens of such pages in my application, so I can't understand what's wrong here. The only way to fix it is to remove <a4j:keepAlive/> from the page, which seems very strange, IMHO. I'm not sure code will be useful here, as the same pattern works fine everywhere else in my application; if you put it on your own page it will probably work without problems.

    My only question is: are there any hidden or rare problems in the interaction of these two elements (<rich:modalPanel> and <a4j:keepAlive>)? Or shall I spend another two or three days searching for some wrong comma, parenthesis or whatever in my code? :)

    For the most curious, the panel itself:

        <!-- there's no outer form -->
        <rich:modalPanel id="panel" autosized="true" minWidth="300" minHeight="200">
            <f:facet name="header">
                <h:panelGroup id="panelHeader">
                    <h:outputText value="#{msg.new_smth}" rendered="#{MbSmth.newMode}"/>
                    <h:outputText value="#{msg.edit_smth}" rendered="#{MbSmth.editMode}"/>
                </h:panelGroup>
            </f:facet>
            <h:panelGroup id="panelDiv">
                <h:form>
                    <!-- fields and buttons -->
                </h:form>
            </h:panelGroup>
        </rich:modalPanel>

    One of the buttons that opens the panel:

        <a4j:commandButton id="addBtn" reRender="panelHeader, panelDiv" value="#{form.add}"
                           oncomplete="#{rich:component('panel')}.show()"
                           action="#{MbSmth.add}" image="create.gif"/>

    The action invoked on button click:

        public void add() {
            curMode = NEW_MODE;  // initial mode is VIEW_MODE
            newSmth = new Smth();
        }

    The mode checks:

        public boolean isNewMode() {
            return curMode == NEW_MODE;
        }

        public boolean isEditMode() {
            return curMode == EDIT_MODE;
        }

    Read the article

  • Perl newbie: get a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning Perl in a "head-first" manner. I am an absolute newbie in this language. I am trying to have a debug_mode switch on the CLI which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        # which is depending on a CLI switch flag
        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_SETPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if(!$debug_mode)
            {
                print "debug_mode is OFF\n";
            }
            elsif($debug_mode)
            {
                print "debug_mode is ON\n";
            }
            else
            {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv #()
        {
            print ("Number of ARGV : ".(1 + $#ARGV)."\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS))
            {
                print "You can only see me if -debug_mode is NOT set".
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE))
            {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, at least it works: the -debug_mode switch doesn't interfere with the normal ARGV handling, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug_modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work": I see both DEBUG_VERBOSE and DEBUG_SUPPRESS_ERROR_MSGS apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debug_modes". Also, I am not certain whether my approach is good enough for Perl practitioners, and I hope I am getting my feet in the right direction. One big problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module beyond the defaults, and I cannot control the environment where this script will be executed... Many thanks for any suggestions in advance.

    Read the article

  • UIImageView, warning: check_safe_call: could not restore current frame

    - by lukya
    Hi, I am changing the image in a UIImageView based on accelerometer input. The images are stored in an array. The application runs fine for a while and then crashes with:

        warning: check_safe_call: could not restore current frame

    I am not using the [UIImage imageNamed:] method when populating the array. The total size of all images is around 12 MB, but individual images are very light (< 100 KB) and I am using only one image at a time from the array. Just in case there are any autoreleases, I have allocated an NSAutoreleasePool in viewDidLoad and am draining it in the didReceiveMemoryWarning method (which does get called 2 or 3 times before the app crashes?). The following is the code that creates the images array:

        //create autorelease pool as we are creating many autoreleased objects
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSMutableArray *finalarr = [[NSMutableArray alloc] initWithCapacity:9];
        NSLog(@"start loading");
        for(int y = 0; y < 100; y += 10)
        {
            NSMutableArray *arr = [[NSMutableArray alloc] initWithCapacity:10];
            for(int x = 0; x < 10; x++)
            {
                NSString *imageName = [[NSString alloc] initWithFormat:@"0%d", y + x];
                UIImage *img = [[UIImage alloc] initWithContentsOfFile:
                                   [[NSBundle mainBundle] pathForResource:imageName ofType:@"png"]];
                [arr addObject:img];
                [imageName release];
                [img release];
            }
            [finalarr addObject:arr];
            [arr release];
        }
        NSLog(@"done loading");

        // Override point for customization after app launch
        viewController.imagesArray = finalarr;
        [finalarr release];

        //retain the array of images
        [viewController.imagesArray retain];

        //release the autorelease pool to free memory taken up by objects enqueued for release
        [pool release];

    As the app runs smoothly for a while, array creation is definitely not the problem. After this, the following method is called from the accelerometer's didAccelerate:

        -(BOOL)setImageForX:(int)x andY:(int)y
        {
            [self.imageView setImage:(UIImage*)[(NSArray*)[self.imagesArray objectAtIndex:y] objectAtIndex:x]];
            return TRUE;
        }

    So the only code being executed is UIImageView's setImage, and no objects are being created there. Please let me know if the code seems to have any leaks or if I am doing something wrong. Thanks, Swapnil

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
              Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score that fit his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
                                                               QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------
         Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
           Sort Key: score
           Sort Method:  external merge  Disk: 5712kB
           ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                 Filter: ((title)::text ~~* 's%'::text)
         Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow for use with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
                                                                        QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------------
         Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
           ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                 Sort Key: score
                 Sort Method:  top-N heapsort  Memory: 17kB
                 ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                       Filter: (lower((title)::text) ~~ 's%'::text)
                       ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                             Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
         Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can this be improved further? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I instead try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?

    Read the article

  • Referencing both an old version and new version of the same DLL (VB.Net)

    - by ckittel
    Consider the following situation: WidgetCompany produced a .NET DLL in 2006 called Widget.dll, version 1.0. I consumed this Widget.dll file throughout my VB.Net application. Over time, WidgetCompany has been updating Widget.dll; I never bothered to keep up, continuing to ship version 1.0 of Widget.dll with my software. It's now 2010, my project is a VB.Net 3.5 application, and WidgetCompany has come out with Widget.dll version 3.0. It looks and functions almost identically to Widget.dll version 1.0, using all the same namespaces and type names as before. However, Widget.dll version 3.0 has many run-time breaking changes since 1.0 and I cannot simply cut over to the new version; at the same time, I don't want to continue developing against the 1.0 version and thereby keep digging myself deeper into the hole.

    What I want to do is do all new development in my project with Widget.dll version 3.0, while keeping Widget.dll version 1.0 around until I find time to convert all of my 1.0 consumption to the newer 3.0 code. Now, for starters, I obviously cannot simply reference both Widget.dll (Ver 1.0) and Widget.dll (Ver 3.0) in Visual Studio. Doing so gives me the following message:

        "A reference to 'Widget.dll' could not be added. A reference to the component 'Widget' already exists in the project."

    To work around that, I can simply rename version 3.0 of Widget.dll to Widget.3.dll. But this is where I'm stuck: any attempt to reference types found in "the DLL" leads to ambiguity, and the compiler obviously doesn't have any clue as to what I really want in this or that case.

    Is there something I can do that gives a DLL a new "root" namespace or something? For example, if I could say "Widget.dll has a new root namespace of Legacy", then I could update existing code to reference the types found in the Legacy.<RootNamespace> namespace, while all new code could simply reference types from the <RootNamespace> namespace. Pipe dream or reality? Are there other solutions to situations like this (besides "don't get into this situation in the first place")?

    Read the article

  • Algorithm to Render a Horizontal Binary-ish Tree in Text/ASCII form

    - by Justin L.
    It's a pretty normal binary tree, except for the fact that one of the nodes may be empty. I'd like to find a way to output it in a horizontal way (that is, the root node is on the left and the tree expands to the right). I've had some experience expanding trees vertically (root node at the top, expanding downwards), but I'm not sure where to start in this case. Preferably, it would follow these rules:

    - If a node has only one child, it can be skipped as redundant (an "end node", with no children, is always displayed).
    - All nodes of the same depth must be aligned vertically; all nodes must be to the right of all less-deep nodes and to the left of all deeper nodes.
    - Nodes have a string representation which includes their depth.
    - Each "end node" has its own unique line; that is, the number of lines is the number of end nodes in the tree, and when an end node is on a line, there may be nothing else on that line after that end node.
    - As a consequence of the last rule, the root node should be in either the top left or the bottom left corner; top left is preferred.

    For example, this is a valid tree, with six end nodes (each node is represented by a name and its depth):

        [a0]------------[b3]------[c5]------[d8]
            \               \         \----------[e9]
             \               \----[f5]
              \--[g1]--------[h4]------[i6]
                     \            \--------------------[j10]
                      \-[k3]

    which represents the horizontal, explicit binary tree:

        0               a
                       / \
        1             g   *
                     / \   \
        2           *   *   *
                   /     \   \
        3         k       *   b
                         /   / \
        4               h   *   *
                       / \   \   \
        5             *   *   f   c
                     /     \     / \
        6           *       i   *   *
                   /           /     \
        7         *           *       *
                 /           /         \
        8       *           *           d
               /           /
        9     *           e
             /
        10  j

    (branches folded for compactness; * represents a redundant, one-child node; note that the *'s are actual nodes, storing one child each, just with names omitted here for presentation's sake; also, to clarify, I'd like to generate the first, horizontal tree, not this vertical tree)

    I say language-agnostic because I'm just looking for an algorithm; I say ruby because I'm eventually going to have to implement it in Ruby anyway. Assume that each Node data structure stores only its id, a left node, and a right node. A master Tree class keeps track of all nodes and has adequate algorithms to find:

    - a node's nth ancestor
    - a node's nth descendant
    - the generation of a node
    - the lowest common ancestor of two given nodes

    Anyone have any ideas of where I could start? Should I go for the recursive approach? Iterative?
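
    For a starting point, here is a hedged sketch in Java of the depth-column idea: a DFS keeps one line buffer per root-to-leaf path, writes each non-redundant node's label at a column proportional to its depth, and emits one line per end node. It repeats the shared prefix on every line instead of drawing the \ connectors, so it is a simplification of the target format, and all names are mine:

        import java.util.ArrayList;
        import java.util.List;

        public class HTreeSketch {
            static class Node {
                final String name;
                final Node left, right;
                Node(String name, Node left, Node right) { this.name = name; this.left = left; this.right = right; }
                Node(String name) { this(name, null, null); }
            }

            static final int COL = 6; // characters reserved per depth level; labels must stay shorter

            // DFS with a copy-on-branch line buffer: non-redundant nodes print their
            // label at column depth * COL (dash-padded); end nodes emit the finished line.
            static void render(Node n, int depth, StringBuilder line, List<String> out) {
                boolean end = (n.left == null && n.right == null);
                boolean redundant = !end && (n.left == null || n.right == null); // one-child node: skip label
                if (!redundant) {
                    while (line.length() < depth * COL) line.append('-');
                    line.append('[').append(n.name).append(depth).append(']');
                }
                if (end) { out.add(line.toString()); return; }
                if (n.left != null)  render(n.left,  depth + 1, new StringBuilder(line), out);
                if (n.right != null) render(n.right, depth + 1, new StringBuilder(line), out);
            }

            public static void main(String[] args) {
                // A small instance of the example: a -> (g, * -> b), b -> (c, f)
                Node tree = new Node("a",
                        new Node("g"),
                        new Node("*", new Node("b", new Node("c"), new Node("f")), null));
                List<String> lines = new ArrayList<>();
                render(tree, 0, new StringBuilder(), lines);
                lines.forEach(System.out::println);
                // prints:
                // [a0]--[g1]
                // [a0]--------[b2]--[c3]
                // [a0]--------[b2]--------[f3]
            }
        }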

    Read the article

  • _heapwalk reports _HEAPBADNODE, causes breakpoint or loops endlessly

    - by Stefan Hubert
    I use _heapwalk to gather statistics about the process' standard heap. Under certain circumstances I observe unexpected behaviors, such as:

    - _HEAPBADNODE is returned
    - some breakpoint is triggered inside _heapwalk, telling me the heap might be corrupted
    - an access violation inside _heapwalk

    I saw different behaviors on different computers. On one Windows XP 32-bit machine everything looked fine, whereas on two Windows XP 64-bit machines I saw the mentioned symptoms. I saw this behavior only if the LowFragmentationHeap was enabled.

    I played around a bit. I walked the heap several times, one run right after another, inside my program. The first time, doing nothing in between the subsequent calls to _heapwalk, everything was fine. Then again, this time doing some stuff (gathering statistics) in between two subsequent calls to _heapwalk: depending on what I did there, I sometimes got the described symptoms.

    Here, finally, a question: what exactly is safe, and what is not safe, to do in between two subsequent calls to _heapwalk during a complete heap-walk run? Naturally, I shall not manipulate the heap, and I therefore double-checked that I don't call new and delete. However, my observation is that function calls with some parameter passing already cause my heap-walk run to fail. I subsequently added function calls with an increasing number of parameters passed to them. My feeling was that two function calls with two parameters being passed did not work anymore; however, I would like to know why. Any ideas why this does not happen on some machines? Any ideas why this only happens if the LowFragmentationHeap is enabled?

    Sample code, finally:

        #include <malloc.h>

        void staticMethodB( int a, int b )
        {
        }

        void staticMethodA( int a, int b, int c)
        {
            staticMethodB( 3, 6);
            return;
        }

        ...
        _HEAPINFO hinfo;
        hinfo._pentry = NULL;
        while( ( heapstatus = _heapwalk( &hinfo ) ) == _HEAPOK )
        {
            //doing nothing here works fine
            //however if i call functions here with parameters, this causes
            //_HEAPBADNODE or something else
            staticMethodA( 3,4,5);
        }
        switch( heapstatus )
        {
            ...
            case _HEAPBADNODE:
                assert( false ); /*ERROR - bad node in heap */
                break;
            ...

    Read the article

  • Is there a way to handle the dynamic change of a dropdown for a single row in a grid-based datawindow?

    - by TomatoSandwich
    Is there a way to handle the dynamic change of a dropdown for a single row in a grid-based datawindow? Example:

        NAME      LIKABILITY           PURCHASED IN   COLOUR
        (Text)    (DropDown*)          (Text)         (Text)
        Bananas   [Good]               Hands          Yellow
                  [Bad]
                  [Bananas are good]
        Apples    [Good]               Bags           Red
                  [Bad]

    Given the above is a grid-based datawindow, the fields NAME, PURCHASED IN and COLOUR are text fields, whereas the LIKABILITY field is a dropdown*. I say dropdown* because the same visual representation can be created by using a DropDownList (hardcoded within the datawindow element at design time) or a DropDownDW (a DDDW, i.e. a select statement that can be based on other elements in the datawindow). However, there is no way I can get 'Bananas' having its 3 dropdown options while 'Apples' has only 2. If I enter multiple 'Bananas' rows, then all rows have 3 options, but as soon as I add an 'Apples' row, all dropdowns revert to 2 selections.

    To attempt to achieve this functionality, I have tried the following options:

    1) dw_1.Object.likability.values("Good~tG/Bad~tB/Bananas are good~tDRWHO") on ue_itemchange when editing NAME.
       FAILS: edits all instances of LIKABILITY instead of the current row.

    2) Duplicate dropdowns, having one filtered and one unfiltered selection list per row, with visibility based on the NAME selection.
       FAILS: can't set visibility/overlapping columns on a grid-based datawindow. (Source)

    3) Hard-code the display value as the database value, or vice versa: have 'GOOD', 'BAD', 'BANANASAREGOOD' as the display and database values, and change the handling of options from G, B, DRWHO to these new values.
       FAILS: the 3rd option appears for all rows, still selectable on 'Apples' rows, which is wrong.

    4) Have a DDDW retrieve the list of options for the dropdown: create a DDDW that uses the value of NAME to determine which selections it should have.
       FAILS: edits all instances of the dropdown, not just the current row.

    5) Have a DDDW retrieve a counter of the options available (if B then 3 else 2), then have duplicate dropdown columns that protect/unprotect based on the DDDW counter.
       FAILS: can't autoselect the DDDW value to populate the column so as to trigger protect on the other two columns; an ugly solution in any case.

    There is now a bounty on this question for anyone who can give me a solution that will enable me to edit a dropdown column for a single row on a grid-based datawindow in PB 10.5.

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service, and I have something of a dilemma as far as using asynchronous vs synchronous vs threading goes. Here's the scenario: say you have three options to drill down into, each one having its own REST-based resource. I can lazily load each one with a synchronous request, but that will block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen; because of that reason alone, I can't see any reason to use synchronous HTTP requests over asynchronous ones. The only time a synchronous request makes sense is when a worker thread makes it and notifies the main thread when the request is done, which prevents the block. The question then is one of benchmarking your code and seeing which has more overhead: a threaded synchronous request or an asynchronous request.

    The problem with asynchronous requests is that you need to set up a smart notification or delegate system, as you can have multiple requests for multiple resources happening at any given time. The other problem with them is that if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go:

        - (NSArray *)users {
            if(users == nil)
                users = do_async_request; // NO GOOD
            return users;
        }

    whereas the following is fine:

        - (NSArray *)users {
            if(users == nil)
                users = do_sync_request; // OK.
            return users;
        }

    You also might have priority. What I mean by priority is that if you look at Apple's Mail application on the iPhone, you'll notice it first sucks down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your messages.

    I suppose my question to you experts is this: when are you using asynchronous, synchronous, threads -- and when do you use either async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all-too-common problem. It's simple to hack something out. The problem is, I don't want to hack, and I want something that's simple and easy to maintain.
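
    The same trade-off in a language-neutral form: a hedged sketch in Java of a blocking getter next to an asynchronous variant using a background worker and a completion callback (all names are illustrative; a real implementation would also guard against duplicate in-flight fetches and marshal the callback back to the UI thread):

        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Consumer;

        public class UserStore {
            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private volatile List<String> users; // cached result

            // Synchronous getter: simple, but blocks the calling (UI) thread.
            public List<String> getUsersBlocking() {
                if (users == null) users = fetchFromRest();
                return users;
            }

            // Asynchronous variant: returns immediately; the callback runs when done.
            public void getUsersAsync(Consumer<List<String>> callback) {
                if (users != null) { callback.accept(users); return; }
                worker.submit(() -> {
                    users = fetchFromRest();
                    callback.accept(users); // in a UI app, marshal this back to the UI thread
                });
            }

            private List<String> fetchFromRest() {
                // stand-in for the real REST call
                return Arrays.asList("alice", "bob");
            }
        }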

    Read the article

  • Purpose of Explicit Default Constructors

    - by Dennis Zickefoose
    I recently noticed a class in C++0x that calls for an explicit default constructor. However, I'm failing to come up with a scenario in which a default constructor can be called implicitly. It seems like a rather pointless specifier. I thought maybe it would disallow Class c; in favor of Class c = Class();, but that does not appear to be the case.

    Some relevant quotes from the C++0x FCD, since it is easier for me to navigate (similar text exists in C++03, if not in the same places):

        12.3.1.3 [class.conv.ctor]
        A default constructor may be an explicit constructor; such a constructor will be used to perform
        default-initialization or value-initialization (8.5).

    It goes on to provide an example of an explicit default constructor, but it simply mimics the example I provided above.

        8.5.6 [decl.init]
        To default-initialize an object of type T means:
        - if T is a (possibly cv-qualified) class type (Clause 9), the default constructor for T is called (and the
          initialization is ill-formed if T has no accessible default constructor);

        8.5.7 [decl.init]
        To value-initialize an object of type T means:
        - if T is a (possibly cv-qualified) class type (Clause 9) with a user-provided constructor (12.1), then the
          default constructor for T is called (and the initialization is ill-formed if T has no accessible default
          constructor);

    In both cases, the standard calls for the default constructor to be called. But that is what would happen if the default constructor were non-explicit. For completeness' sake:

        8.5.11 [decl.init]
        If no initializer is specified for an object, the object is default-initialized;

    From what I can tell, this just leaves conversion from no data, which doesn't make sense. The best I can come up with would be the following:

        void function(Class c);

        int main() {
            function(); // implicitly convert from no parameter to a single parameter?
        }

    But obviously that isn't the way C++ handles default arguments. What else is there that would make explicit Class(); behave differently from Class();?

    The specific example that generated this question was std::function [20.8.14.2 func.wrap.func]. It requires several converting constructors, none of which are marked explicit, but the default constructor is.

    Read the article

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code, we wish to check that our code makes correct use of the second-level cache. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter {_appender = (CountToContextItemsAppender) existingAppender};
                        }
                    }

                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter {_appender = newAppender};
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset()
                {
                    _count = 0;
                }
            }
        }

    The intended usage is:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for the query count; no SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL... does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?

    Read the article

  • Mindblowing jQuery Weirdness

    - by Jason
    ...at least to me. This code used to work fine. I'm pretty sure nothing has changed, but now all of a sudden it behaves oddly. Basically I'm trying to create inline editing functionality. When the user clicks on the link, it dynamically generates a textbox and a confirm and cancel link. I'm having problems with the cancel link not removing everything in the cell.

    HTML:

        ...
        <td class="bid">
          <a href="javascript:" class="102093" title="Click to modify bid">$0.45</a>
        </td>
        ...

    Binding jQuery (in $(function())):

        $('.bid a').live('click', renderBidChange);
        ....
        $('.report_table .cancel').live('click', cancelUpdate);

    renderBidChange (this function creates the dynamic elements):

        function renderBidChange(){
            var cpc = $(this);
            var value = cpc.text().replace('$', '');
            var cell = cpc.parent('.bid');
            cpc.hide();

            var input = document.createElement('input');
            $(input).attr({type:'text', class:'dynamic cpc-input'}).val(value);
            cell.append(input);

            var accept = document.createElement('a');
            $(accept).addClass('accept').attr({'href':'javascript:', 'title':'Accept Changes'}).text('Accept Changes');
            cell.append(accept);

            var cancel = document.createElement('a');
            $(cancel).addClass('cancel').attr({'href':'javascript:', 'title':'Cancel Changes'}).text('Cancel Changes');
            cell.append(cancel);

            $(input).focus();
            input.select();
        }

    cancelUpdate (this function just removes everything visible in the cell -- all the dynamic junk, in this case -- and shows what used to be there):

        function cancelUpdate(){
            var cell = $(this).parent();
            cell.find(':visible').remove();
            cell.find(':hidden').show();
        }

    However, for some reason, the cancel link remains after it is clicked! Everything else is removed except that. W T F. Thanks for any insight you're able to provide! I'm sure it's just some stupid little detail I'm over[caffeinatedly]looking...

    UPDATE: Immediately after posting this I epiphanied that it may be a CSS issue, but after double checking my code, it is not.

    Read the article

  • Saxon XSLT-Transformation: How to change serialization of an empty tag from <x/> to <x></x>?

    - by Ben
    Hello folks! I do some XSLT-Transformation using Saxon HE 9.2 with the output later being unmarshalled by Castor 1.3.1. The whole thing runs with Java at the JDK 6. My XSLT-Transformation looks like this: <xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ns="http://my/own/custom/namespace/for/the/target/document"> <xsl:output method="xml" encoding="UTF-8" indent="no" /> <xsl:template match="/"> <ns:item> <ns:property name="id"> <xsl:value-of select="/some/complicated/xpath" /> </ns:property> <!-- ... more ... --> </xsl:template> So the thing is: if the XPath-expression /some/complicated/xpath evaluates to an empty sequence, the Saxon serializer writes <ns:property/> instead of <ns:property></ns:property>. This, however, confuses the Castor unmarshaller, which is next in the pipeline and which unmarshals the output of the transformation to instances of XSD-generated Java-code. So my question is: How can I tell the Saxon-serializer to output empty tags not as standalone tags? Here is what I am proximately currently doing to execute the transformation: import net.sf.saxon.s9api.*; import javax.xml.transform.*; import javax.xml.transform.sax.SAXSource; // ... // read data XMLReader xmlReader = XMLReaderFactory.createXMLReader(); // ... there is some more setting up the xmlReader here ... InputStream xsltStream = new FileInputStream(xsltFile); InputStream inputStream = new FileInputStream(inputFile); Source xsltSource = new SAXSource(xmlReader, new InputSource(xsltStream)); Source inputSource = new SAXSource(xmlReader, new InputSource(inputStream)); XdmNode input = processor.newDocumentBuilder().build(inputSource); // initialize transformation configuration Processor processor = new Processor(false); XsltCompiler compiler = processor.newXsltCompiler(); compiler.setErrorListener(this); XsltExecutable executable = compiler.compile(xsltSource); Serializer serializer = new Serializer(); serializer.setOutputProperty(Serializer.Property.METHOD, "xml"); serializer.setOutputProperty(Serializer.Property.INDENT, "no"); serializer.setOutputStream(output); // execute transformation XsltTransformer transformer = executable.load(); transformer.setInitialContextNode(input); transformer.setErrorListener(this); transformer.setDestination(serializer); transformer.setSchemaValidationMode(ValidationMode.STRIP); transformer.transform(); I'd appreciate any hint pointing in the direction of a solution. :-) In case of any unclarity I'd be happy to give more details. Nightly greetings from Germany, Benjamin

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having a weird problem with some Hibernate data classes. In one very specific case, deleting an object should fail due to existing, non-nullable relations; however, it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled.

    The class whose instances are being deleted is:

        @Entity( name = "users.Credentials" )
        @Table( name = "credentials" , schema = "users" )
        public class Credentials extends ModelBase {
            private static final long serialVersionUID = 1L;

            /* Some basic fields here */

            /** Administrator credentials, if any */
            @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY )
            public AdminCredentials adminCredentials;

            /** Active account data */
            @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY )
            public Account activeAccount;

            /* Some more reverse relations here */
        }

    (ModelBase is a class that simply declares a Long field named "id" as being automatically generated.)

    The Account class, for which the constraints work, looks like this:

        @Entity( name = "users.Account" )
        @Table( name = "accounts" , schema = "users" )
        public class Account extends ModelBase {
            private static final long serialVersionUID = 1L;

            /** Credentials the account is linked to */
            @OneToOne( optional = false )
            @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false )
            public Credentials credentials;

            /* Some more fields here */
        }

    And here is the AdminCredentials class, for which the constraints are ignored:

        @Entity( name = "admin.Credentials" )
        @Table( name = "admin_credentials" , schema = "admin" )
        public class AdminCredentials extends ModelBase {
            private static final long serialVersionUID = 1L;

            /** Credentials linked with an administrative account */
            @OneToOne( optional = false )
            @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false )
            public Credentials credentials;

            /* Some more fields here */
        }

    The code that attempts to delete the Credentials instances is:

        try {
            if ( account.validationKey != null ) {
                this.hTemplate.delete( account.validationKey );
            }
            this.hTemplate.delete( account.languageSetting );
            this.hTemplate.delete( account );
        } catch ( DataIntegrityViolationException e ) {
            return false;
        }

    where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER.

    In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored: the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists).

    I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code; it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours, and I must admit I'm just as clueless now as I was when I started.

    Read the article

  • Agilent E4426B signal generator locks up during multiple GPIB *SAV operations

    - by aspiehler
    I have a test fixture with an Agilent E4426B RF signal generator connected to a PC via a National Instruments Ethernet-to-GPIB bridge. My software is attempting to sanitize the instrument by presetting it and then saving the current state to all of the writable memory locations via the standard SCPI command "*SAV x,y". The loop works to a point, but eventually the instrument responds with an error, continuously displays the "L" icon on the front display together with a "Remote preset" message at the bottom, and from that point on won't respond to any more remote commands. I then have to either cycle power or press LOCAL, then PRESET, at which point it takes about 3 minutes to finish presetting. After that, the "L" icon is still present, and the next GPIB command sent to the instrument causes it to report a -113 error (undefined header) in the instrument error queue.

    I fired up NI Spy to see what was happening, and found that the error was happening at the same point in the loop -- "*SAV 6,2" in this case. From NI Spy:

        Send (0, 0x0017, "*SAV 6,2", 8 (0x8), NLend (0,0x01))
        Process ID: 0x00000520  Thread ID: 0x00000518
        ibsta: 0xc168  iberr: 6  ibcntl: 2 (0x2)

    And here's the code from the instrument driver:

        int CHP_E4426b::Erase()
        {
            if ((m_StatusCode = Initialize()) != GPIB_SUCCESS) // basically just sends "*RST"
                return m_StatusCode;
            m_SaveState = "*SAV %d, %d";
            for (int i=0; i < 10; i++)
                for (int j=0; j < 100; j++)
                {
                    sprintf(m_CmdString, m_SaveState, j, i);
                    if ((m_StatusCode = Send(m_CmdString, strlen(m_CmdString))) != GPIB_SUCCESS)
                        return m_StatusCode;
                }
            return GPIB_SUCCESS;
        }

    I tried putting a small Sleep() delay (10-20 ms) at the end of the inner loop, and to my surprise it caused the error to show up earlier rather than later: 10 ms caused the loop to error out at "*SAV 44,1", and 20 ms was even sooner. I've already eliminated faulty cabling and the instrument itself as culprits. This same sequence works without any error on a higher-end signal generator, so I'm tempted to chalk this up to a bug in the instrument's firmware.

    Read the article
