Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 711/916 | < Previous Page | 707 708 709 710 711 712 713 714 715 716 717 718  | Next Page >

  • What is the best way to translate this recursive python method into Java?

    - by Simucal
    In another question I was provided with a great answer involving generating certain sets for the Chinese Postman Problem. The answer provided was:

        def get_pairs(s):
            if not s:
                yield []
            else:
                i = min(s)
                for j in s - set([i]):
                    for r in get_pairs(s - set([i, j])):
                        yield [(i, j)] + r

        for x in get_pairs(set([1,2,3,4,5,6])):
            print x

    This will output the desired result of:

        [(1, 2), (3, 4), (5, 6)]
        [(1, 2), (3, 5), (4, 6)]
        [(1, 2), (3, 6), (4, 5)]
        [(1, 3), (2, 4), (5, 6)]
        [(1, 3), (2, 5), (4, 6)]
        [(1, 3), (2, 6), (4, 5)]
        [(1, 4), (2, 3), (5, 6)]
        [(1, 4), (2, 5), (3, 6)]
        [(1, 4), (2, 6), (3, 5)]
        [(1, 5), (2, 3), (4, 6)]
        [(1, 5), (2, 4), (3, 6)]
        [(1, 5), (2, 6), (3, 4)]
        [(1, 6), (2, 3), (4, 5)]
        [(1, 6), (2, 4), (3, 5)]
        [(1, 6), (2, 5), (3, 4)]

    This really shows off the expressiveness of Python, because this is almost exactly how I would write the pseudo-code for the algorithm. I especially like the usage of yield and the way that sets are treated as first-class citizens. However, therein lies my problem. What would be the best way to: 1. Duplicate the functionality of the yield return construct in Java? Would it instead be best to maintain a list and append my partial results to this list? How would you handle the yield keyword? 2. Handle the sets? I know that I could probably use one of the Java collections that implements the Set interface and then use things like removeAll() to give me a set difference. Is this what you would do in that case? Ultimately, I'm looking to reduce this method into as concise and straightforward a form as possible in Java. I'm thinking the return type of the Java version of this method will likely be a list of int arrays or something similar. How would you handle the situations above when converting this method into Java?
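
    For comparison, a minimal Java sketch of one way this is often handled without yield: build the pairings eagerly into a List, and use a Set copy with remove() for the set difference. The class and method names below are illustrative only, not from any existing answer.

        import java.util.*;

        public class Pairings {

            // Returns every way to split s into unordered pairs.
            // Each pairing is a list of int[2] arrays, mirroring the Python tuples.
            static List<List<int[]>> getPairs(Set<Integer> s) {
                List<List<int[]>> results = new ArrayList<>();
                if (s.isEmpty()) {
                    results.add(new ArrayList<>()); // one empty pairing, mirrors "yield []"
                    return results;
                }
                int i = Collections.min(s);
                for (int j : s) {
                    if (j == i) continue;
                    Set<Integer> rest = new TreeSet<>(s);
                    rest.remove(i);
                    rest.remove(j); // set difference: s - {i, j}
                    for (List<int[]> r : getPairs(rest)) {
                        List<int[]> pairing = new ArrayList<>();
                        pairing.add(new int[] { i, j });
                        pairing.addAll(r);
                        results.add(pairing);
                    }
                }
                return results;
            }

            public static void main(String[] args) {
                for (List<int[]> p : getPairs(new TreeSet<>(Arrays.asList(1, 2, 3, 4, 5, 6)))) {
                    StringBuilder sb = new StringBuilder();
                    for (int[] pair : p) sb.append(Arrays.toString(pair)).append(' ');
                    System.out.println(sb.toString().trim());
                }
            }
        }

    Collecting into a List trades the laziness of yield for simplicity; if the lazy behaviour matters, the usual Java alternative is an Iterator that keeps explicit state.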

    Read the article

  • jQuery Colorbox Iframe and Selector

    - by Joker
    I have this: $("a.money").colorbox({href:$("a.money").attr("href")+" div#content"}); Which opens an overlay but only of the content within the #content selector. If I add iframe:true, this will no longer work. Is there a way to get it to work with an iframe? Edit: The closest I could get it to work was by doing this: $("a.money").colorbox({iframe:true, width:700, height:425, onComplete: function() { alert('test'); $test = $("#cboxLoadedContent iframe").contents().find("#content").html(); $("#cboxLoadedContent iframe").contents().find("#container").html($test); } }); Although without the alert it doesn't appear to work, I looked into that and I think I need to use .live(), which led me to trying this: $('a.money').live('click', function() { url = this.href; // this is the url of the element event is triggered from $.fn.colorbox({href: url, iframe:true,width:700, height:425, onComplete: function() { $test = $("#cboxLoadedContent iframe").contents().find("#content").html(); $("#cboxLoadedContent iframe").contents().find("#container").html($test); } }); return false; }); Didn't work, I still needed to add an alert to make it work. In case you might be wondering what I'm trying to do. The webpage is loaded in the iframe, with all the elements in the #container, so that includes #header, #sidebars, but I only want to show the #content inside the iframe. However, this led me to another problem I realized. Assuming I got this to work without the alert, it will only work for that initial page. For example, if I loaded up a form in the iframe, after submitting the form, I would once again see the whole layout instead of just the #content portion. Is it possible to continue only showing the #content portion of the page upon further navigation?

    Read the article

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingsDefinition, with fields ID, GroupID, Name, typeID, DefaultValue. Then I have tblSettingtypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if the user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings; if it has records for the given user, it grabs them, and if not, it retrieves the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved so I know the setting's Type, Name, etc. (which are stored in those other tables). I'm writing a query something like this:

        SELECT *
        FROM tblSettingDefinition SD
        LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID
        JOIN tblSettingTypes ST ON SD.TypeID = ST.ID
        WHERE US.UserID = @UserID
           OR ((SD.GroupID IS NULL)
            OR (SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID)))

    but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, still all user settings are retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.

    Read the article

  • Algorithm to see if keywords exist inside a string

    - by rksprst
    Let's say I have a set of keywords in an array: {"olympics", "sports tennis best", "tennis", "tennis rules"}. I then have a large list (up to 50 at a time) of strings (actually tweets), so they are a max of 140 characters. I want to look at each string and see which keywords are present in it. In the case where a keyword is composed of multiple words, like "sports tennis best", the words don't have to be together in the string, but all of them have to show up. I'm having trouble figuring out an algorithm that does this efficiently. Do you have suggestions on a way to do this? Thanks! Edit: To explain a bit better, each keyword has an id associated with it, so {1:"olympics", 2:"sports tennis best", 3:"tennis", 4:"tennis rules"}. I want to go through the list of strings/tweets and see which keywords match. The output should be: this tweet belongs with keyword #4 (multiple matches may be made, so anything that matches keyword 2 would also match 3, since they both contain "tennis"). When there are multiple words in the keyword, e.g. "sports tennis best", they don't have to appear together but all have to appear. E.g. this will correctly match: "i just played tennis, i love sports, its the best"... since this string contains "sports tennis best" it will match and be associated with the keyword id (which is 2 for this example).
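
    For illustration, a minimal Java sketch of one straightforward approach: tokenize each tweet once into a word set, then test every word of every keyword against that set. The map and tweet below reuse the example data from the question; the class and method names are made up.

        import java.util.*;

        public class KeywordMatcher {

            // Returns the ids of all keywords whose words ALL appear in the text.
            static List<Integer> matchingKeywords(Map<Integer, String> keywords, String text) {
                // Tokenize the tweet once into a set of lowercase words.
                Set<String> words = new HashSet<>(
                        Arrays.asList(text.toLowerCase().split("\\W+")));

                List<Integer> matches = new ArrayList<>();
                for (Map.Entry<Integer, String> e : keywords.entrySet()) {
                    boolean allPresent = true;
                    for (String w : e.getValue().toLowerCase().split("\\s+")) {
                        if (!words.contains(w)) { allPresent = false; break; }
                    }
                    if (allPresent) matches.add(e.getKey());
                }
                return matches;
            }

            public static void main(String[] args) {
                Map<Integer, String> keywords = new HashMap<>();
                keywords.put(1, "olympics");
                keywords.put(2, "sports tennis best");
                keywords.put(3, "tennis");
                keywords.put(4, "tennis rules");

                String tweet = "i just played tennis, i love sports, its the best";
                System.out.println(matchingKeywords(keywords, tweet)); // e.g. [2, 3]
            }
        }

    With only 50 short tweets per batch this is already cheap; the set lookup keeps each keyword check proportional to the keyword's word count rather than rescanning the whole tweet.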

    Read the article

  • Associations and the Grails webflow

    - by callie16
    Hi, it's my first time using webflows in Grails and I can't seem to solve this. I have 3 domain classes with associations that look something like this: class A { ... static hasMany = [ b : B ] ... } class B { ... static belongsTo = [ a : A ] static hasMany = [ c : C ] ... } class C { ... static belongsTo = [ b : B ] ... } Now, the GSP communicates with the controller via Javascript (due to my use of Dojo). When I try to remoteFunction a normal action, I can do something like this: def action1 = { def anId = params.id def currA = A.get(anId) def sample = currA.b?.c // I can get all the way to 'c' without any problems ... } However, I have a webflow and the contents of that action is in the webflow... It looks something like this: def someFlow = { ... someState { on("next") { def anId = params.id // this does NOT return a null value def currA = A.get(anId) // this does NOT return a null value def sample = currA.b // error already occurs here and I need to get 'c'! }.("somePage") ... } ... } In this case, it tells me that b doesn't exist... so I can't even get to 'c'. Any suggestions on what to do??? Thanks... getting real desperate...

    Read the article

  • jQuery DataTable hidden column shows after calling another script. How can I hide the specified column permanently?

    - by Sreenath Plakkat
    My table is: <table id="EmployeesTable" style="width: 100%;" class="grid-table06 border-one"> <thead> <tr> <th width="80%" align="left" valign="middle">Name</th> <th width="20%" align="left" valign="middle">Department</th> <th>Id</th> </tr> </thead> </table> My script is as follows: $(function () { $(".switchDate").click(function () { var id = $(this).attr("rel"); fetchEmployeedetails(id); }); fetchEmployeedetails(@model.Id); //on load function fetchEmployeedetails(id) { $("#EmployeesTable").dataTable({ "bProcessing": true, "bServerSide": true, "sAjaxSource": "/Employees/FetchDetails?Deptid=" + id + "&thresholdLow=4&threshold=100", "sPaginationType": "full_numbers", "bDestroy": true, "aaSorting": [[1, 'desc']], "asStripClasses": ['color01', 'color03'], "aoColumnDefs": [{ "aTargets": [2], "bVisible": false }, { "aTargets": [1], "fnRender": function (oObj) { return "<a href='#showemployees' rel='" + oObj.aData[2] + "'></a>"; } }] }); } }); On load it works fine and does not show the hidden "Id" column, but when I choose the id via the switchDate click handler, the hidden column becomes visible for a second. How can I hide the column permanently?

    Read the article

  • Nested Groups in Regex

    - by cryptic-star
    I'm constructing a regex that is looking for dates. I would like to return the date found and the sentence it was found in. In the code below, the strings on either side of date_string should check for the conditions of a sentence. For your sake, I've omitted the regex for date_string - suffice it to say, it works for picking out dates. While the inside of date_string isn't important, it is grouped as one entire regex. "((?:[^.|?|!]*)"+date_string+"(?:[^.|?|!]*[.|?|!]\s*))" The problem is that date_string is only matching the last number of any given date, presumably because the regex in front of date_string is matching too far and overrunning the date regex. For example, if I say "Independence Day is July 4.", I will get the sentence and 4, even though it should match 'July 4'. In case you're wondering, the regexes inside date_string are ordered in such a way that 'July 4' should match first. Is there any way to do this all in one regex? Or do I need to split it up somehow (i.e. split up all text into sentences, and then check each sentence)?
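
    For what it's worth, the nested-group structure itself (outer group = sentence, inner group = date) does work in one regex when the whole date alternation sits inside a single group; a minimal sketch in Java follows. Since the real date_string is omitted in the question, the month/day pattern here is purely illustrative.

        import java.util.regex.*;

        public class DateSentences {
            public static void main(String[] args) {
                // Stand-in for date_string: "<Month> <day>". Purely illustrative.
                String dateString = "(?:January|February|March|April|May|June|July|August|"
                                  + "September|October|November|December)\\s+\\d{1,2}";

                // Group 1 = whole sentence, group 2 = the date inside it.
                Pattern p = Pattern.compile("([^.!?]*\\b(" + dateString + ")\\b[^.!?]*[.!?])");

                Matcher m = p.matcher("Some other text. Independence Day is July 4. More text.");
                while (m.find()) {
                    System.out.println("sentence: " + m.group(1).trim());
                    System.out.println("date:     " + m.group(2));
                }
            }
        }

    Because group 2 wraps the entire date expression, it returns "July 4" rather than just the trailing number; reading a group that covers only part of the date is one way to end up with just the "4".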

    Read the article

  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into this error while trying to modify a Pinax database model. I am using Eclipse PyDev, and I have this error in PyDev: Exception Type: TemplateSyntaxError at / Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist") Please, how do I correct this? UPDATE: I asked this question and left it unresolved some months back, and what do you know, I ran into the bug again this week, typed the error message into Google, and hit this page with the question still unanswered. So I think I have to answer it myself and hope it helps someone who has the same problem in the future. So, the problem is that the sqlite path is out of place, so Django (or in this case Pinax) cannot find it. To resolve that, change to an absolute path to sqlite, like this:

        DATABASE_ENGINE = 'sqlite3'             # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # Or path to database file if using sqlite3.
        DATABASE_USER = ''                      # Not used with sqlite3.
        DATABASE_PASSWORD = ''                  # Not used with sqlite3.
        DATABASE_HOST = ''                      # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT = ''                      # Set to empty string for default. Not used with sqlite3.

    I hope that helps.

    Read the article

  • Find actual value of PHP variable

    - by Simon S
    Hi all. I am having a real headache with reading in a tab delimited text file and inserting it into a MySQL Database. The tab delimited text file was generated (I think) from a MS SQL Database, and I have written a simple script to read in the file and insert it into an existing table in my MySQL database. However, there seems to be some problem with the data in the txt file. When my PHP script parses the file and I output the INSERT statements, the values in each of the fields are longer than they should be. For example, the first field should be a simple two character alphanumeric value. If I echo out the INSERT statements, using Firebug (in Firefox), between each of the characters is a question mark in a black diamond. If I var_dump the values, I get the following: string(5) "A1" Now, this clearly shows a two character string, but var_dump tells me it is five characters long!! If I trim() the value, all I get is the first character (in this case "A"). How can I get at the other characters, even if it is only to remove them? Additionally, this appears to be forcing MySQL to insert the value as a BLOB, not as a varchar as it should. Simon

    Read the article

  • Use external datasource with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database? i.e. have a .csv file with one row of data per test case and pass that data to NUnit one row at a time. Here's the specific situation that I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy and paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e. tightly coupled with the database and other code.) Taking the time to make the code testable isn't really possible since it's a big mess, I'm on a tight schedule and the entire feature is scheduled to be re-written from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing. The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.

    Read the article

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data and it's currently working but I want to re-design the architecture to be more robust. My main goals are: sequentially present the time-series data to the expression trees. allow expression trees to access previous data rows when needed. to optimize performance of the data access while evaluating the expression trees. keep a common interface so various types of data can be used. Here are the possible approaches I've thought about: I can evaluate the expression tree by passing in a data row into the root node and let each child node use the same data row. I can evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data). Hybrid: an immutable data set is accessible by all of the expression trees and each expression tree is evaluated by passing in a data row. The benefit of the first approach is that the data row is being passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows). The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one. The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows. Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
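
    As one way to picture the hybrid option, here is a small interface sketch: every tree sees only a read-only context exposing the current row plus indexed access to earlier rows, while the data set itself is loaded once and never mutated. It is written in Java purely for illustration (the question's code is C#), and all of the names are my own invention.

        import java.util.Arrays;
        import java.util.List;

        // Read-only view handed to an expression tree for one evaluation step.
        interface EvaluationContext {
            double[] currentRow();        // the row being evaluated right now
            double[] rowAt(int offset);   // offset 0 = current row, 1 = previous row, ...
            int rowsAvailable();          // how far back a node may safely look
        }

        // A node only ever sees the context, never a mutable data set.
        interface ExpressionNode {
            double evaluate(EvaluationContext ctx);
        }

        // Immutable time series shared by every tree; rows are presented
        // sequentially by constructing a cheap context per index.
        final class TimeSeriesContext implements EvaluationContext {
            private final List<double[]> rows;  // loaded once, never modified
            private final int index;            // position of the "current" row

            TimeSeriesContext(List<double[]> rows, int index) {
                this.rows = rows;
                this.index = index;
            }

            public double[] currentRow()      { return rows.get(index); }
            public double[] rowAt(int offset) { return rows.get(index - offset); }
            public int rowsAvailable()        { return index + 1; }
        }

        // Example node: average of column 0 over the last n rows (needs history).
        final class MovingAverage implements ExpressionNode {
            private final int n;
            MovingAverage(int n) { this.n = n; }
            public double evaluate(EvaluationContext ctx) {
                int lookback = Math.min(n, ctx.rowsAvailable());
                double sum = 0;
                for (int k = 0; k < lookback; k++) sum += ctx.rowAt(k)[0];
                return sum / lookback;
            }
        }

        public class HybridDemo {
            public static void main(String[] args) {
                List<double[]> data = Arrays.asList(
                        new double[] { 1.0 }, new double[] { 2.0 }, new double[] { 3.0 });
                ExpressionNode ma = new MovingAverage(2);
                for (int i = 0; i < data.size(); i++) {
                    System.out.println(ma.evaluate(new TimeSeriesContext(data, i)));
                }
            }
        }

    Because the underlying rows are immutable and the per-row context is throwaway, threads evaluating different trees (or different rows) never contend on shared mutable state.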

    Read the article

  • Display an input type=file over another input type=file

    - by Kevin Sedgley
    WARNING: Lengthy description coming up! I have written an uploader based upon the APC progress uploader for PHP. This works fine and dandy, but the script as a whole (APC etc.) is intended to be used only for those with Javascript. To achieve this, I have searched for any input type=file and replaced these with an absolutely positioned form that appears over the original area where the old file input was. The reasons for this are that the new uploader can submit to a hidden in-page IFrame, has to be in a separate <form> in order to submit to the APC receiver to display the progress upload bar, and can be used within any form with an input type=file throughout the site. I have used jQuery to do this, with the following code. Original HTML form code: <div><input type="file" name="media" id="media" /></div> Find position of div block code: // get the parent div, and properties thereof parentDiv = $(this).closest('div'); w = $(parentDiv).width(); h = $(parentDiv).height(); loc = $(parentDiv).offset(); Locate new block over old block: $('#_sender').appendTo('body').css({left:loc.left,top:loc.top,position:'absolute',zIndex:400,height:h,width:w}).show(); This works fine, and shows over the old block OK. The problem: when other elements in the DOM before or above it change (in this case a "tree view" selector is pushing the old block down), the new upload form ends up positioned over other elements. Is there a jQuery (or JS) method for reacting to this upon DOM change? Some kind of .onchange for the page?! Or an .onmove for the original block? Thanks in advance, you lovely people. (Before/after screenshots omitted.)

    Read the article

  • Can I spread out a long running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole process, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData where cardid = @cardId order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
            set @decryptedCardData = ''

            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards

    Read the article

  • JScrollPane without scrollbars

    - by Erik Itland
    I'm trying to use a JScrollPane to display a JPanel that might be too big for the containing JPanel. I don't want to show the scrollbars (yes, this is questionable UI design, but it is my best guess of what the customer wants. We use the same idea in other places in the application, and I feel this case has given me enough time to ponder whether I can do it in a better way, but if you have a better idea I might accept it as an answer.)
    First attempt: set verticalScrollBarPolicy to NEVER. Result: scrolling using the mouse wheel doesn't work.
    Second attempt: set the scrollbars to null. Result: scrolling using the mouse wheel doesn't work.
    Third attempt: set the scrollbars' visible property to false. Result: it is immediately set visible again by Swing.
    Fourth attempt: inject a scrollbar where setVisible is overridden to do nothing when called with true. Result: can't remember exactly, but I think it just didn't work.
    Fifth attempt: inject a scrollbar where setBounds is overridden. Result: just didn't look nice. (I might have missed something here, though.)
    Sixth attempt: ask Stack Overflow. Result: pending.
    Scrolling works once the scrollbars are back.
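
    For reference, a minimal sketch (my own, not from the question) of a sixth-attempt style workaround: keep the scrollbar policies at NEVER but translate mouse-wheel events into viewport moves by hand. The 16-pixel step per wheel unit is an arbitrary choice.

        import java.awt.Dimension;
        import java.awt.Point;
        import javax.swing.*;

        public class HiddenScrollBarDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JPanel bigPanel = new JPanel();
                    bigPanel.setPreferredSize(new Dimension(400, 2000)); // taller than the window

                    JScrollPane scroll = new JScrollPane(bigPanel,
                            ScrollPaneConstants.VERTICAL_SCROLLBAR_NEVER,
                            ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);

                    // The question reports wheel scrolling stops with the NEVER policy,
                    // so move the viewport ourselves.
                    scroll.setWheelScrollingEnabled(false);
                    scroll.addMouseWheelListener(e -> {
                        JViewport vp = scroll.getViewport();
                        Point p = vp.getViewPosition();
                        int newY = p.y + e.getWheelRotation() * e.getScrollAmount() * 16;
                        int maxY = Math.max(0, bigPanel.getPreferredSize().height - vp.getHeight());
                        vp.setViewPosition(new Point(p.x, Math.min(Math.max(0, newY), maxY)));
                    });

                    JFrame frame = new JFrame("Hidden scrollbars");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.add(scroll);
                    frame.setSize(420, 300);
                    frame.setVisible(true);
                });
            }
        }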

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into two parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you. EDIT: Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is part of a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • Flex List ItemRenderer with image loses BitmapData when scrolling

    - by Dominik
    Hi, I have an mx:List with a dataProvider. This dataProvider is an ArrayCollection of FotoItems: public class FotoItem extends EventDispatcher { [Bindable] public var data:Bitmap; [Bindable] public var id:int; [Bindable] public var duration:Number; public function FotoItem(data:Bitmap, id:int, duration:Number, target:IEventDispatcher=null) { super(target); this.data = data; this.id = id; this.duration = duration; } } My itemRenderer looks like this: <?xml version="1.0" encoding="utf-8"?> <mx:VBox xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" > <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; ]]> </fx:Script> <s:Label text="index"/> <mx:Image source="{data.data}" maxHeight="100" maxWidth="100"/> <s:Label text="Duration: {data.duration}ms"/> <s:Label text="ID: {data.id}"/> </mx:VBox> Now when I scroll, all images that leave the screen disappear :( When I take a look at the ArrayCollection, every item's BitmapData is null. Why is this the case?

    Read the article

  • Ibator didn't generate Oracle varchar2 field

    - by bugbug
    I have table APP_REQ_APPROVE_COMPARE with following fields: "ID" NUMBER NOT NULL ENABLE, "TRACK_NO" VARCHAR2(20 BYTE) NOT NULL ENABLE, "REQ_DATE" DATE NOT NULL ENABLE, "OFFCODE" CHAR(6 BYTE) NOT NULL ENABLE, "COMPARE_CASE_ID" NUMBER NOT NULL ENABLE, "VEHICLE_NAME" VARCHAR2(100 BYTE), "ENGINE_NO" VARCHAR2(100 BYTE), "BODY_NO" VARCHAR2(100 BYTE), "HOLD_SHIP" NUMBER, "OWNERSHIP" VARCHAR2(200 BYTE), "RENT_NAME" VARCHAR2(200 BYTE), "CONTRACT" VARCHAR2(100 BYTE), "CONTRACT_NO" VARCHAR2(100 BYTE), "CONTRACT_DATE" DATE, "ISLAWBREAKERRENT" CHAR(1 BYTE) NOT NULL ENABLE, "MISTAKE_DETAIL" VARCHAR2(4000 BYTE), "COMPARE_REASON" VARCHAR2(4000 BYTE), "CREATE_BY" NUMBER NOT NULL ENABLE, "CREATE_ON" DATE DEFAULT SYSDATE NOT NULL ENABLE, "UPDATE_BY" NUMBER, "UPDATE_ON" DATE, When I generate a java bean using Ibator , I didn't find trackNo, VehicalName, ... (all fields defined as varchar2). What is the problem in my case? Here is my Ibator configuration file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE ibatorConfiguration PUBLIC "-//Apache Software Foundation//DTD Apache iBATIS Ibator Configuration 1.0//EN" "http://ibatis.apache.org/dtd/ibator-config_1_0.dtd"> <ibatorConfiguration> <classPathEntry location="/dos/connector/oracle_jdbc.jar"/> <ibatorContext id="autoPerson" defaultModelType="flat" targetRuntime="Ibatis2Java2"> <jdbcConnection connectionURL="jdbc:oracle:thin:@192.168.42.144:1521:orcl" driverClass="oracle.jdbc.driver.OracleDriver" userId="user" password="password"/> <javaModelGenerator targetPackage="com.ko.model" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> <property name="trimStrings" value="true"/> </javaModelGenerator> <sqlMapGenerator targetPackage="com.ko.map" targetProject="FormConfig"> <property name="enableSubPackages" value="true"/> </sqlMapGenerator> <daoGenerator targetPackage="com.ko.model.dao" type="SPRING" targetProject="FormConfig" implementationPackage="com.ko.model.dao.impl" > <property name="enableSubPackges" value="true"/> <property name="methodNameCalculator" value="extended"/> </daoGenerator> <table tableName="APP_REQ_APPROVE_COMPARE" domainObjectName="AppReqApproveCompare"/> <ibatorConfiguration>

    Read the article

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issue with my datatables. In this particular case I am using it as holder of data, it is never used in GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable. protected myclass(DataTable dataTable, DataColumn idColumn) { this.dataTable = dataTable; IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); int id = 1; foreach (DataRow r in this.dataTable.Rows) { r[JobIdColumn] = id++; r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0; } Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no usage for in my scenario. So my solution to this was to open editing on all of the rows and never close them again. _dt.BeginLoadData(); foreach (DataRow row in _dt.Rows) row.BeginEdit(); Is there a better solution? This feels too much like a big giant hack that will eventually come and bite me. You might suggest that I dont use DataTable at all, but I have already considered that and rejected it due to the amount of effort that would be required to reimplement with a custom class. The main reason it is a datatable is that it is ancient code (.net 1.1 time) and I dont want to spend that much time changing it, and it is also because the original table comes out of a third party component.

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question, is it worth caching the value of row["value"] or does the compiler optimize the indexer away anyway? eg. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup) I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials): row["value"] == DBNull.Value: 00:00:01.5478995 row["value"] is DBNull: 00:00:01.6306578 row["value"].GetType() == typeof(DBNull): 00:00:02.0138757 Object.ReferenceEquals has the same performance as "==" The most interesting result? If you mismatch the name of the column by case (eg. "Value" instead of "value", it takes roughly ten times longer (for a string): row["Value"] == DBNull.Value: 00:00:12.2792374 The moral of the story seems to be that if you can't look up a column by it's index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast: No Caching: 00:00:03.0996622 With Caching: 00:00:01.5659920 So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"]) { variable = temp.ToString(); } This was a good learning experience.

    Read the article

  • How to draw multiple rectangles using C#

    - by Nivas
    I have drawn and saved the Rectangle on the image i loaded in the picture box. How i like to draw multiple rectangles for that i tried array in the rectangle but it gives error ("Object reference not set to an instance of an object." (Null reference Exception was unhandled). private void pictureBox1_MouseDown(object sender, MouseEventArgs e) { if (mybitmap == null) { mybitmap = new Bitmap(sz.Width, sz.Height); } rect[count] = new Rectangle(e.X, e.Y, 0, 0); this.Invalidate(); } private void pictureBox1_MouseMove(object sender, MouseEventArgs e) { if (stayToolStripMenuItem.Checked == true) { switch (e.Button) { case MouseButtons.Left: { rect[count] = new Rectangle(rect[count].Left, rect[count].Top, e.X - rect[count].Left, e.Y - rect[count].Top); pictureBox1.Invalidate(); count++: break; } } } } private void pictureBox1_Paint(object sender, PaintEventArgs e) { if (stayToolStripMenuItem.Checked == true) { button1.Visible = true; button2.Visible = true; if (mybitmap == null) { return; } using (g = Graphics.FromImage(mybitmap)) { using (Pen pen = new Pen(Color.Red, 2)) { //g.Clear(Color.Transparent); e.Graphics.DrawRectangle(pen, rect); label1.Top = rect[count].Top; label1[count].Left = rect[count].Left; label1.Width = rect[count].Width; label1.Height = rect[count].Height; if (label1.TextAlign == ContentAlignment.TopLeft) { e.Graphics.DrawString(label1.Text, label1.Font, new SolidBrush(label1.ForeColor), rect); g.DrawString(label1.Text, label1.Font, new SolidBrush(label1.ForeColor), rect); g.DrawRectangle(pen, rect[count]); } } } } } How can i do this.....

    Read the article

  • Inserting a ContentControl after another ContentControl

    - by Markus Roth
    In our VSTO Word 2010 Addin, we are trying to insert a RichTextControl after a given other ContentControl. We have tried this: public ContentControl AddContentControl(WdContentControlType type, int position) { Paragraph paragraphBefore = null; if (position == 0) { if (WordDocument.Paragraphs.Count == 0) { WordDocument.Paragraphs.Add(); } paragraphBefore = WordDocument.Paragraphs.First; } else { paragraphBefore = Controls.ElementAt(position - 1).Range.Paragraphs.Last; } object start = paragraphBefore.Range.End; object end = paragraphBefore.Range.End + 1; paragraphBefore.Range.InsertParagraphAfter(); Range rangeToUse = WordDocument.Range(ref start, ref end); ContentControl newControl = _ContentControl = _WordDocument.ContentControls.Add(type, rangeToInsert); Controls.Insert(position, newControl); OnNewContentControl(newControl, position); return newControl.ContentControl; } which works fine, unless the control that is before the one we want to insert has an empty paragraph at the end. If that is the case, the new ContentControl is inserted within the last control. How can we avoid this?

    Read the article

  • In Java, how do I iterate through lines in a text file from back to front?

    - by rogue780
    Basically I need to take a text file such as: Fred Bernie Henry and be able to read the names from the file in the order Henry Bernie Fred. The actual file I'm reading from is 30MB, and it would be a less than perfect solution to read the whole file, split it into an array, reverse the array and then go from there. It takes way too long. My specific goal is to find the first occurrence of a string (in this case it's "InitGame") and then return the position of the beginning of that line. I did something like this in Python before. My method was to seek to the end of the file - 1024, then read lines until I got to the end, then seek another 1024 back from my previous starting point and, by using tell(), stop when I got to the previous starting point. So I would read those blocks backwards from the end of the file until I found the text I was looking for. So far, I'm having a heck of a time doing this in Java. Any help would be greatly appreciated, and if you live near Baltimore it may even end up with you getting some fresh baked cookies. Thanks!
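
    For reference, a minimal sketch of that block-wise backward scan in Java using RandomAccessFile. The 1024-byte block size and the "InitGame" needle mirror the question; the file name, method name, charset and overlap size are my own assumptions.

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.charset.StandardCharsets;

        public class ReverseSearch {

            // Scans the file in 1024-byte blocks from the end and returns the byte offset
            // of the start of the last line containing `needle`, or -1 if never found.
            static long findFromEnd(String path, String needle) throws IOException {
                final int blockSize = 1024;
                try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                    long end = raf.length();
                    while (end > 0) {
                        long start = Math.max(0, end - blockSize);
                        // Read a little past the block boundary so a needle split across
                        // two blocks is still seen in one piece.
                        int len = (int) Math.min(end - start + 256, raf.length() - start);
                        byte[] buf = new byte[len];
                        raf.seek(start);
                        raf.readFully(buf);
                        // ISO-8859-1 maps bytes 1:1 to chars, so string indexes equal byte offsets.
                        String chunk = new String(buf, StandardCharsets.ISO_8859_1);

                        int idx = chunk.lastIndexOf(needle);
                        if (idx >= 0) {
                            // Falls back to the chunk start if the newline sits in an earlier block.
                            int lineStart = chunk.lastIndexOf('\n', idx) + 1;
                            return start + lineStart;
                        }
                        end = start;
                    }
                }
                return -1;
            }

            public static void main(String[] args) throws IOException {
                System.out.println(findFromEnd("games.log", "InitGame"));
            }
        }

    Reading fixed-size blocks from the end keeps memory flat regardless of file size, which is the point of the Python seek/tell approach being described.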

    Read the article

  • PHP OOP: Providing Domain Entities with "Identity"

    - by sunwukung
    Bit of an abstract problem here. I'm experimenting with the Domain Model pattern, and barring my other tussles with dependencies - I need some advice on generating Identity for use in an Identity Map. In most examples for the Data Mapper pattern I've seen (including the one outlined in this book: http://apress.com/book/view/9781590599099) - the user appears to manually set the identity for a given Domain Object using a setter: $UserMapper = new UserMapper; //returns a fully formed user object from record sets $User = $UserMapper->find(1); //returns an empty object with appropriate properties for completion $UserBlank = $UserMapper->get(); $UserBlank->setId(); $UserBlank->setOtherProperties(); Now, I don't know if I'm reading the examples wrong - but in the first $User object, the $id property is retrieved from the data store (I'm assuming $id represents a row id). In the latter case, however, how can you set the $id for an object if it has not yet acquired one from the data store? The problem is generating a valid "identity" for the object so that it can be maintained via an Identity Map - so generating an arbitrary integer doesn't solve it. My current thinking is to nominate different fields for identity (i.e. email) and demanding their presence in generating blank Domain Objects. Alternatively, demanding all objects be fully formed, and using all properties as their identity...hardly efficient. (Or alternatively, dump the Domain Model concept and return to DBAL/DAO/Transaction Scripts...which is seeming increasingly elegant compared to the ORM implementations I've seen...)

    Read the article

  • Multithreaded IOCP Client Issue

    - by Carl
    I am writing a multithreaded client that uses an IO Completion Port. I create and connect the socket that has the WSA_FLAG_OVERLAPPED attribute set. if ((m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET) { throw std::exception("Failed to create socket."); } if (WSAConnectByName(m_socket, L"server.com", L"80", &localAddressLength, reinterpret_cast<sockaddr*>(&localAddress), &remoteAddressLength, &remoteAddress, NULL, NULL) == FALSE) { throw std::exception("Failed to connect."); } I associate the IO Completion Port with the socket. if ((m_hIOCP = CreateIoCompletionPort(reinterpret_cast<HANDLE>(m_socket), m_hIOCP, NULL, 8)) == NULL) { throw std::exception("Failed to create IOCP object."); } All appears to go well until I try to send some data over the socket. SocketData* socketData = new SocketData; socketData->hEvent = 0; DWORD bytesSent = 0; if (WSASend(m_socket, socketData->SetBuffer(socketData->GenerateLoginRequestHeader()), 1, &bytesSent, NULL, reinterpret_cast<OVERLAPPED*>(socketData), NULL) == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) { throw std::exception("Failed to send data."); } Instead of returning SOCKET_ERROR with the last error set to WSA_IO_PENDING, WSASend returns immediately. I need the IO to pend and for it's completion to be handled in my thread function which is also my worker thread. unsigned int __stdcall MyClass::WorkerThread(void* lpThis) { } I've done this before but I don't know what is going wrong in this case, I'd greatly appreciate any efforts in helping me fix this problem.

    Read the article
