Search Results

Search found 22839 results on 914 pages for 'decimal point'.


  • SQL SERVER – 3 Simple Puzzles – Need Your Suggestions

    - by pinaldave
    Last month I posted three simple puzzles and got a very good response. I think there can be many interesting answers to them. I would like to request all of you to take part in the puzzles and provide your answers. I plan to consolidate the answers and publish all the valid ones on this blog with due credit. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint; SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal; SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists. I am also thinking that, after such a long time, we should have the term Database Developer (DBD) in our dictionary, just like Database Administrator (DBA). I have also created a poll where I talk about this subject: SQL SERVER – Are you a Database Administrator or a Database Developer? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Play music on iPhone through computer

    - by Kyle Cronin
    Now that I've had my iPhone for a few months, I'm trying an experiment to see if I can't replace the laptop I carry around with my iPhone + internet connected computer. To this end, I've been trying to find a program that will let me play the music on my iPhone through the hardware and software on the host computer. If I recall correctly this was possible a few years ago with the iPod - Linux software like Rhythmbox and Banshee was able to read the music off an iPod and play it through the speakers. I even thought I recalled iTunes itself being capable of this at one time. Now, however, iTunes greys out/disables the music on my iPhone and I can't find any documented support for the iPhone in any other music program. Is this really no longer possible? Am I limited to using the headphone jack to get music to play? (note: I am using an iPhone 3G with the 3.0 software. I am attempting to play music on computers other than the one I sync with) Several replies mention that I should check "manually manage" to do this. I just tried this on a computer that I don't sync my iPhone to and it asked me to erase and sync, which is obviously something I don't want to do. update: OK, I checked the "Manually manage music and videos" box on a computer that I didn't sync to (now known as "Computer A"), and it told me that I needed to erase & sync to cause the changes to have effect, so I did. At this point I'm guessing that my iPhone thinks that it's syncing with that computer. I copied over a few songs using the autofill feature. At this point, Computer A sees the maybe 10 or so songs I've copied using autofill. I then plug my iPhone into my Macbook ("Computer B") which I've been syncing with. At this point, I'm pretty sure that it still thought that all my synced content was still on my iPhone. The "manually manage music and videos" checkbox isn't checked, so I check it and go through a similar process where iTunes erases the synced content and I copy over a playlist. At this point, there's no trace of the songs that I copied over from Computer A. So I plug my iPhone into Computer A - in the Music section are the handful of songs that I had copied over earlier, greyed out and unplayable. To make sure that this wasn't some sort of caching issue, I plugged my iPhone into my sister's Macbook ("Computer C") and it lists the same few, greyed out songs that I had copied over from Computer A. Plugging into Computer B doesn't reveal these songs at all, only the songs that it copied over (these are playable). A few things: This inconsistent behavior is driving me insane. Why would my iPhone report two versions of its contents to different computers? Is there a way to get a computer to completely forget about an iPhone and just resync everything to get everything into a consistent state? Even if I get the phone into a consistent state, I still can't play the files on my phone anywhere but the computer I sync with, which was my original goal. What am I doing wrong? maybe I should read the fine print before I mess with my iPhone So going over this thread with a fine-toothed comb again yields this lovely tidbit in the Apple docs: Note: Even when manually managing, some content may only be available from one library at time. This includes all content on iPhone and video content on iPods. OK, so manually managing is a dead end on the iPhone. Are there any other options? Any unofficial third-party programs or drivers that will work?

    Read the article

  • Regular expression in Umbraco for number validation.

    - by Vizioz Limited
    This evening I was looking for a way to validate an Umbraco node that could hold either text or a numeric value - in my case a salary that could be an hourly amount, an annual figure or a comment. Where the node contained a numeric value I wanted the XSLT to output a pound sign (£), and for anything containing text it should just output the text, which could be something like "Contact Us" or "Negotiable". I thought someone else might find this useful, so here are the XSLT and the regular expression.
    First, if you are using Umbraco, don't forget to include the reference to the EXSLT regular expression library at the top of your XSLT:
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxml="urn:schemas-microsoft-com:xslt" xmlns:umbraco.library="urn:umbraco.library" xmlns:Exslt.ExsltRegularExpressions="urn:Exslt.ExsltRegularExpressions" exclude-result-prefixes="msxml umbraco.library Exslt.ExsltRegularExpressions">
    Then the code I used was:
        <xsl:if test="Exslt.ExsltRegularExpressions:match($currentPage/data [@alias='Salary'], '^[0-9]*\,?[0-9]*\.?[0-9]+$') != ''">
            <xsl:text>£</xsl:text>
        </xsl:if>
    This regular expression allows any number of digits, an optional comma, more digits, an optional decimal point and finally more digits, so all of the following are valid: 12,000; 14.43; 334,342.03
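    For anyone doing the same check outside XSLT, here is a minimal C# sketch of the equivalent test (the class and method names are purely illustrative, and the comma is left unescaped since it needs no escaping in .NET regular expressions):

        using System;
        using System.Text.RegularExpressions;

        class SalaryFormatter
        {
            // Same idea as the XSLT test: digits, an optional comma, more digits,
            // an optional decimal point and more digits (e.g. "12,000", "14.43", "334,342.03").
            static readonly Regex Numeric = new Regex(@"^[0-9]*,?[0-9]*\.?[0-9]+$");

            static string Format(string salary)
            {
                // Prefix a pound sign only when the value is purely numeric.
                return Numeric.IsMatch(salary) ? "£" + salary : salary;
            }

            static void Main()
            {
                Console.WriteLine(Format("334,342.03")); // £334,342.03
                Console.WriteLine(Format("Negotiable")); // Negotiable
            }
        }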

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL has been released.  This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.4.6 version of MySQL Connector/Net brings the following fixes: - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64409, Oracle bug #13790123). - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL Bug #66578 Orabug #14593547). - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). 
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • MySQL Connector/Net 6.5.5 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.5.5, a new maintenance release of our 6.5 series, has been released.  This release is GA quality and is appropriate for use in production environments.  Please note that 6.6 is our latest driver series and is the recommended product for development. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.5.5 version of MySQL Connector/Net brings the following fixes: - Fix for ArgumentNull exception when using Take().Count() in a LINQ to Entities query (bug MySql #64749, Oracle bug #13913047). - Fix for type varchar changed to bit when saving in Table Designer (Oracle bug #13916560). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix and code contribution for bug Timed out sessions are removed without notification which allow to enable the Expired CallBack when Session Provider times out any session (bug MySql #62266 Oracle bug # 13354935) - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64470, Oracle bug #13790123) - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error". The issue was due to a missing reading of Max_allowed_packet server property when CacheServerProperties is in true, since the value was read only in the first connection but the following pooled connections had a wrong value causing a Packet too large error. Including also a unit test for this scenario. All unit test passed. MySQL Bug #66578 Orabug #14593547. - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. 
Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). - Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Added ANTLR attribution notice (Oracle bug #14379162). - Fixed Entity Framework + mysql connector/net in partial trust throws exceptions (MySql bug #65036, Oracle bug #14668820). - Added support in Parser for Datetime and Time types with precision when using Server 5.6 (No bug Number). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.5.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 

    Read the article

  • Monitoring Database disk space

    - by Michael Freidgeim
    The article Data files: To Autogrow Or Not To Autogrow? recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log) and, if they are close to maximum capacity, manually increase their size. However, the article doesn't give references on how to monitor the free space inside databases, so I tried to work out how to do it. It can be done manually by executing sp_spaceused for the database in question, or sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run a SQL command as suggested in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359 by Michael Valentine Jones:
        select [FREE_SPACE_MB] = convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2))
        from dbo.sysfiles a
    A more useful article, Monitor database file sizes with SQL Server Jobs, describes how to set up monitoring. Finally I found the excellent article Managing Database Data Usage With Custom Space Alerts, which can be followed even by support personnel without much DBA experience.
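    If you would rather collect the same figure from application code than from a SQL Server job, a rough C# sketch (the connection string and the 100 MB threshold below are placeholders, not anything taken from the articles above) could run that query like this:

        using System;
        using System.Data.SqlClient;

        class FreeSpaceCheck
        {
            static void Main()
            {
                // Placeholder connection string - point it at the database you want to watch.
                const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
                const string sql = @"
                    select a.name,
                           convert(decimal(12,2),
                               round((a.size - fileproperty(a.name,'SpaceUsed')) / 128.000, 2)) as free_space_mb
                    from dbo.sysfiles a";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Flag files with less than 100 MB free (arbitrary threshold).
                            decimal freeMb = reader.GetDecimal(1);
                            Console.WriteLine("{0}: {1} MB free{2}",
                                reader.GetString(0), freeMb, freeMb < 100 ? "  <-- low" : "");
                        }
                    }
                }
            }
        }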

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article

  • A Quantity class with units

    - by Ryan Ohs
    Goals: Create a class that associates a numeric quantity with a unit of measurement, and provide support for simple arithmetic and comparison operations.
    Implementation: An immutable class (it could have been a struct, but I may try inheritance later); the unit is stored in an enumeration. Supported operations: addition with like units, subtraction with like units, multiplication by a scalar, division by a scalar, modulus by a scalar, Equals(), >, >=, <, <=, ==, IComparable, ToString(), and an implicit cast to Decimal (a rough sketch of this shape follows below).
    The Source: The source can be downloaded from Github.
    Notes: This class does not support any arithmetic that would modify the unit, and it is not suitable for manipulating currencies.
    Future Ideas: Have a CompositeQuantity class that would allow quantities with unlike units to be combined; a similar currency class with support for allocations/distributions; conversion between units (actually I think this would be best placed in an external service - many situations I deal with require some sort of dynamic conversion ratio).
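    The sketch below is a stripped-down illustration only; the enumeration values and member names are mine, not the actual source on Github, and most operators are omitted for brevity:

        using System;

        public enum Unit { Each, Kilogram, Metre }

        public sealed class Quantity : IComparable<Quantity>
        {
            public decimal Amount { get; private set; }
            public Unit Unit { get; private set; }

            public Quantity(decimal amount, Unit unit)
            {
                Amount = amount;
                Unit = unit;
            }

            // Arithmetic is only defined for like units; mixing units is treated as a bug.
            public static Quantity operator +(Quantity a, Quantity b)
            {
                if (a.Unit != b.Unit) throw new InvalidOperationException("Units must match.");
                return new Quantity(a.Amount + b.Amount, a.Unit);
            }

            // Scalar operations leave the unit untouched.
            public static Quantity operator *(Quantity a, decimal scalar)
            {
                return new Quantity(a.Amount * scalar, a.Unit);
            }

            public static implicit operator decimal(Quantity q)
            {
                return q.Amount;
            }

            public int CompareTo(Quantity other)
            {
                if (other == null || other.Unit != Unit) throw new InvalidOperationException("Units must match.");
                return Amount.CompareTo(other.Amount);
            }

            public override string ToString()
            {
                return Amount + " " + Unit;
            }
        }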

    Read the article

  • How to round currency values [migrated]

    - by Kenny
    I already have several ways to solve this, but I am interested in whether there is a better solution to this problem. Please only respond with a pure numeric algorithm; string manipulation is not acceptable. I am looking for an elegant and efficient solution. Given a currency value (e.g. $251.03), split the value into two halves rounded to two decimal places. The key is that the first half should round up and the second should round down, so the outcome in this scenario should be $125.52 and $125.51.
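    One purely numeric way to meet the requirement (a sketch, not necessarily the most elegant answer being asked for) is to round the first half up at the cent and derive the second half by subtraction, so the two always sum back to the original amount:

        using System;

        class SplitCurrency
        {
            // Splits an amount into two halves that sum back to the original:
            // the first half rounds up to the next cent, the second takes the remainder.
            static void Split(decimal amount, out decimal first, out decimal second)
            {
                first = Math.Ceiling(amount / 2m * 100m) / 100m; // round up to 2 decimal places
                second = amount - first;                         // what is left rounds down by construction
            }

            static void Main()
            {
                decimal first, second;
                Split(251.03m, out first, out second);
                Console.WriteLine("{0} / {1}", first, second);   // 125.52 / 125.51
            }
        }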

    Read the article

  • English as a system language but Russian regional settings

    - by mbaitoff
    I usually choose English as the installation language since I believe the original is better than a translation. However, the environment I'm working in is mostly Russian, so I have to deal with locale specifics. Even worse, selecting English brings in the imperial measurement system - feet, inches, and the damned Letter paper size. Whatever I do, I can't manage to get rid of Letter paper size: here and there I stumble upon Letter as a hidden default, and that spoils my prints. How can I select and use English as my language, but use the metric system and A4 paper size everywhere, along with Russian regional settings (date, time, decimal separator, etc.)?

    Read the article

  • MySQL Connector/Net 6.6.3 Beta 2 has been released

    - by fernando
    MySQL Connector/Net 6.6.3, a new version of the all-managed .NET driver for MySQL has been released.  This is the second of two beta releases intended to introduce users to the new features in the release. This release is feature complete it should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.6 version of MySQL Connector/Net brings the following new features:   * Stored routine debugging   * Entity Framework 4.3 Code First support   * Pluggable authentication (now third parties can plug new authentications mechanisms into the driver).   * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense&   the Stored Routine debugger. Stored Procedure Debugging ------------------------------------------- We are very excited to introduce stored procedure debugging into our Visual Studio integration.  It works in a very intuitive manner by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions&   triggers. These release contains fixes specific of the debugger as well as other fixes specific of other areas of Connector/NET:   * Added feature to define initial values for InOut stored procedure arguments.   * Debugger: Fixed Visual Studio locked connection after debugging a routine.   * Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202).   * Fix for bug "CacheServerProperties can cause 'Packet too large' error". MySQL Bug #66578 Orabug #14593547.   * Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549).   * Fixed end of line issue when debugging a routine.   * Added validation to avoid overwriting a routine backup file when it hasn't changed.   * Fixed inheritance on Entity Framework Code First scenarios. (MySql bug #63920 and Oracle bug #13582335).   * Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).   * Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292).   * Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960).   * Fixed "Decimal type should have digits at right of decimal point", now default is 2, and user's changes in     EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).   * Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).   * Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).   * Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204).   * Added ANTLR attribution notice (Oracle bug #14379162).   * Fix for debugger failing when having a routine with an if-elseif-else.  
 * Also the programming interface for authentication plugins has been redefined. Some limitations remains, due to the current debugger architecture:   * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level)..   * Only one debug session may be active on a given server. The Debugger is feature complete at this point. We look forward to your feedback. Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Error during GENERAL_REQUEST_ENTITY for POST results in ASP .NET session state never getting unlocked

    - by Jesse
    I have been trying to chase down the root cause of a condition where ASP .NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider because we have several servers in a web farm. This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub-status of 0). These findings led me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should.
    I enabled Failed Request Tracing in IIS for the website in question for status code 500, with all available providers selected, each with the 'Verbose' setting for verbosity. I've since gathered several failed traces that have caused permanently locked ASP .NET sessions. They all share the same characteristics: They are all 'POST' requests where the browser is posting data to be processed/saved. They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as being "locked". This is normal and expected. They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation, as these events get repeated over and over with each one reading in some subset of the posted data. At some point during the 'read entity' related events an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)". Once the error has been encountered, they all jump directly to the END_REQUEST events.
    The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that allows the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled failed request tracing for the '200' status code as well and generated several traces of successful requests that do have the RELEASE_REQUEST_STATE event being handled by the Session module. My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted because of either a thread exit or an application request' errors, but I don't understand why this seems to be causing the request handling to skip over the RELEASE_REQUEST_STATE event. If the request went through REQUEST_ACQUIRE_STATE it seems like it should also hit RELEASE_REQUEST_STATE. I'm loath to say that this is a bug in IIS or ASP .NET, but it certainly appears that way to me at this point. Are there any configuration changes I could make to help ensure that 'RELEASE_REQUEST_STATE' is fired under all error conditions?

    Read the article

  • Loading files during run time

    - by NDraskovic
    I made a content pipeline extension (using this tutorial) in an XNA 4.0 game. I altered some aspects so it serves my needs better, but the basic idea still applies. Now I want to go a step further and enable my game to be changed at run time. The file I am loading through my content pipeline extension is very simple - it only contains decimal numbers - so I want to enable the user to change that file at will and reload it while the game is running (without recompiling, as I have had to do so far). This file is a very simplified version of a level editor, meaning that it contains rows like: 1 1,5 1,78 -3,6 Here, the first number determines the object that will be drawn to the scene, and the other 3 numbers are the coordinates where that object will be placed. So, how can I change the file that contains these numbers so that the game reloads it and redraws the scene accordingly? Thanks
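    One simple way to get this behaviour without going back through the content pipeline (a sketch based on the row format above; the struct name and the comma-decimal culture are assumptions) is to parse the raw text file yourself and re-run that parse whenever the player presses a 'reload' key or a FileSystemWatcher reports a change:

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.IO;

        // One level entry: an object id plus a position read from the text file.
        struct LevelEntry
        {
            public int ObjectId;
            public float X, Y, Z;
        }

        static class LevelFile
        {
            // The sample rows use a comma as the decimal separator, so parse with a
            // matching culture instead of the invariant one.
            static readonly CultureInfo Culture = CultureInfo.GetCultureInfo("de-DE");

            public static List<LevelEntry> Load(string path)
            {
                var entries = new List<LevelEntry>();
                foreach (string line in File.ReadAllLines(path))
                {
                    string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    if (parts.Length < 4) continue; // skip blank or malformed rows

                    entries.Add(new LevelEntry
                    {
                        ObjectId = int.Parse(parts[0], Culture),
                        X = float.Parse(parts[1], Culture),
                        Y = float.Parse(parts[2], Culture),
                        Z = float.Parse(parts[3], Culture),
                    });
                }
                return entries;
            }
        }

    Calling LevelFile.Load again from Update() and rebuilding the scene from the returned list would give the reload-at-will behaviour without recompiling.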

    Read the article

  • Understanding binary numbers in terms of real world objects

    - by Kaushik
    When I represent a number in the decimal system, I have an intuitive sense of what it amounts to. For example, take the number '10': I understand that it means 10 apples or 10 people - i.e. I can count in the real world. But as soon as the number is converted to any other system, this understanding no longer applies. For example, 10 converted to binary is 1010 - now what does this represent? Is there a way to understand the number 1010 in terms of counting objects in the real world?
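    One way to keep the real-world intuition is to read each binary digit as counting groups whose sizes are powers of two: 1010 is one group of eight apples plus one group of two apples, i.e. the same ten apples. A small C# sketch that expands the place values makes this concrete:

        using System;

        class PlaceValues
        {
            static void Main()
            {
                string binary = "1010";
                int total = 0;

                // Walk the digits from left to right; each position is worth a power of two,
                // just as decimal positions are worth powers of ten.
                for (int i = 0; i < binary.Length; i++)
                {
                    int digit = binary[i] - '0';
                    int weight = 1 << (binary.Length - 1 - i); // 8, 4, 2, 1 for a 4-digit number
                    Console.WriteLine("{0} x {1,2} = {2,2}", digit, weight, digit * weight);
                    total += digit * weight;
                }

                Console.WriteLine("Total: {0} apples", total); // 10 apples - same pile, different notation
            }
        }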

    Read the article

  • Are RSS feeds used? [closed]

    - by acidzombie24
    I was thinking about implementing RSS feeds on my site. The one thing that came to mind is: are RSS feeds ever used?! I don't use them, nor do I know anyone who does. I know that to use them it must be through an RSS feed reader or something built into the browser. I like my browser clean, and I know many people don't know how to use or configure their browser. So I deduced that RSS is not used by 99.9999% of people (that's 4 decimal places, which is really saying something). Twitter, on the other hand, or even email may be used, but I have doubts about RSS feeds. Can anyone give me statistics or change my mind about implementing it? I suspect it would be simple, but I don't think I should bother if no one will ever use it. I don't even use SE/SO RSS feeds.

    Read the article

  • Which numeral systems are useful in computer science?

    - by authchir
    I am wondering which numeral systems different programmers are using, or would use if their language supported them. As an example, in C++ we can use: octal by prefixing with 0 (e.g. 0377), decimal by default (e.g. 255), and hexadecimal by prefixing with 0x (e.g. 0xff). When working with bitmasks I use hexadecimal, but I would sometimes like to be able to express binary numbers directly. I know some programming languages support this with the 0b syntax (e.g. 0b11111111). Are there any other numeral systems useful in some computer science domain (e.g. cryptography, codecs, 3D graphics, etc.)?
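    As a point of comparison (not part of the original question), C# covers the same ground except octal: 0x for hexadecimal and, since C# 7.0, 0b for binary, which is handy for bitmasks:

        using System;

        class Literals
        {
            // The same bitmask expressed three ways; all three constants hold the same value.
            const int HexMask    = 0xFF;          // hexadecimal
            const int BinaryMask = 0b1111_1111;   // binary with digit separators (C# 7.0+)
            const int DecMask    = 255;           // plain decimal

            static void Main()
            {
                Console.WriteLine(HexMask == BinaryMask && BinaryMask == DecMask); // True

                // Hex lines up with nibbles, binary with individual flag bits.
                int readable = 0b0010_0100;       // bits 2 and 5 set
                Console.WriteLine(Convert.ToString(readable, 2).PadLeft(8, '0'));  // 00100100
            }
        }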

    Read the article

  • Replaceable parameter syntax meaning

    - by Alexander N.
    This is about the replaceable parameter syntax of the Console object in C#. I am taking the O'Reilly C# Course 1, and it asks for "replaceable parameter syntax" without being very clear about what that means. Currently I have this: double trouble = 99999.0009; double bubble = 11111.0001; Console.WriteLine(trouble * bubble); Am I missing the meaning of replaceable parameter syntax? Can someone provide an example of what I am looking for? Original question from the quiz: "Create two variables, both doubles, assign them numbers greater than 10,000, and include a decimal component. Output the result of multiplying the numbers together, but use replaceable parameter syntax of the Console object, and multiply the numbers within the call to the Console.WriteLine() method."
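    "Replaceable parameter syntax" here almost certainly refers to .NET composite formatting: indexed placeholders such as {0} and {1} in the format string that Console.WriteLine() replaces with the arguments that follow. A sketch of what the quiz appears to be asking for:

        using System;

        class Program
        {
            static void Main()
            {
                double trouble = 99999.0009;
                double bubble = 11111.0001;

                // {0}, {1} and {2} are the replaceable parameters: they are filled in,
                // by position, from the arguments that follow the format string.
                Console.WriteLine("{0} * {1} = {2}", trouble, bubble, trouble * bubble);
            }
        }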

    Read the article

  • SSMS Tools Pack 1.9.4 is out! Now with SQL Server 2011 (Denali) CTP1 support.

    - by Mladen Prajdic
    To end the year on a good note, this release adds support for SQL Server 2011 (Denali) CTP1 and fixes a few bugs. Because of the new SSMS shell in SQL 2011 CTP1, SSMS Tools Pack 1.9.4 doesn't have the regions and debug sections functionality for now. The fixed bugs are: a bug that prevented creating insert statements for a database; a bug that didn't script commas as decimal points correctly for non-US settings; a bug with searching through grid results; a threading bug that sometimes happened when saving Window Content History; a bug with Window Connection Coloring throwing an error on startup if a server color was undefined; and a bug with changing shortcuts in SSMS for various features. You can download the new version 1.9.4 here. Enjoy it!

    Read the article

  • Algorithm to generate N random numbers between A and B which sum up to X

    - by Shaamaan
    This problem seemed like something that should be solvable with but a few lines of code. Unfortunately, once I actually started to write the thing, I realized it's not as simple as it sounds. What I need is a set of N random numbers, each of which is between A and B, and which all add up to X. The exact variables for the problem I'm facing seem even simpler: I need 5 numbers between -1 and 1 (note: these are decimal numbers) which add up to 1. My initial "few lines of code, should be easy" approach was to randomize 4 numbers between -1 and 1 (which is simple enough) and then make the last one 1 - (sum of the previous numbers). This quickly proved wrong, as the last number could just as well be larger than 1 or smaller than -1. What would be the best way to approach this problem? PS. Just for reference: I'm using C#, but I don't think it matters. I'm actually having trouble creating a good enough solution to the problem in my head.
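    One straightforward approach that keeps every value inside [A, B] is rejection sampling: draw all but the last number freely, derive the last one from the target sum, and redraw the whole set whenever that derived value falls out of range. A C# sketch (note that the resulting sets are not uniformly distributed over all valid combinations, which may or may not matter here):

        using System;

        class ConstrainedRandom
        {
            static readonly Random Rng = new Random();

            // Returns `count` values in [min, max] that add up to `target`,
            // by redrawing until the derived last value is also in range.
            static double[] Generate(int count, double min, double max, double target)
            {
                var values = new double[count];
                while (true)
                {
                    double sum = 0;
                    for (int i = 0; i < count - 1; i++)
                    {
                        values[i] = min + Rng.NextDouble() * (max - min);
                        sum += values[i];
                    }
                    double last = target - sum;
                    if (last >= min && last <= max)
                    {
                        values[count - 1] = last;
                        return values;
                    }
                }
            }

            static void Main()
            {
                foreach (double v in Generate(5, -1.0, 1.0, 1.0))
                    Console.Write("{0:F3} ", v);   // five values in [-1, 1] summing to 1
            }
        }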

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the “Extensions for SharePoint Products and Technologies”. We want out upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There a number of goodies above and beyond a solution file that requires the install, with the main one being the TFS2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010 with the level of integration dependant on the version of SharePoint that you are running. There are three levels of integration with “SharePoint Services 3.0” or “SharePoint Foundation 2010” being the lowest. This level only offers reporting services framed integration for reporting along with Work Item Integration and document management. The highest is Microsoft Office SharePoint Services (MOSS) Enterprise with Excel Services integration providing some lovely dashboards. Figure: Dashboards take the guessing out of Project Planning and estimation. Plus writing these reports would be boring!   The Extensions that you need are on the same installation media as the main TFS install and the only difference is the options you pick during the install. Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010   Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solution and Templates that are needed for SharePoint to work properly with TFS. Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.   Figure: All done, you have everything installed, but you still need to configure it Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server we need to configure them both so that they will talk happily to each other. Configuring the SharePoint 2010 Managed path for Team Foundation Server 2010 In order for TFS to automatically create your project portals you need a wildcard managed path setup. This is where TFS will create the portal during the creation of a new Team project. To find the managed paths page for any application you need to first select the “Managed web applications”  link from the SharePoint 2010 Central Administration screen. Figure: Find the “Manage web applications” link under the “Application Management” section. On you are there you will see that the “Managed Paths” are there, they are just greyed out and selecting one of the applications will enable it to be clicked. Figure: You need to select an application for the SharePoint 2010 ribbon to activate.   Figure: You need to select an application before you can get to the Managed Paths for that application. Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path “TFS02” as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first. 
This links the location to the server name, and as you can’t have two projects of the same name in two separate project collections there is unlikely to be any conflicts. Figure: Add a “tfs02” wildcard inclusion path to your SharePoint site. Configure the Team Foundation Server 2010 connection to SharePoint 2010 In order to have you new TFS 2010 Server talk to and create sites in SharePoint 2010 you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the “Team Foundation Server Administration Console” and navigate to the “SharePoint Web Applications” section. Here you click “Add” and enter the details for the Managed path we just created. Figure: If you have special permissions on your SharePoint you may need to add accounts to the “Service Accounts” section.    Before we can se this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection we need to configure SharePoint to take instructions from our TFS server. Configure SharePoint 2010 to connect to Team Foundation Server 2010 On your SharePoint 2010 server open the Team Foundation Server Administration Console and select the “Extensions for SharePoint Products and Technologies” node. Here we need to “grant access” for our TFS 2010 server to create sites. Click the “Grant access” link and  fill out the full URL to the  TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when the users create a new team project they can change the default and point it anywhere they like as long as it is an authorised SharePoint location. Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010 Now that we have an authorised location for our team project portals to be created we need to tell our Team Project Collection that this is where it should stick sites by default for any new Team Projects created. Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010 Back on out TFS 2010 server we need to setup the defaults for our upgraded Team Project Collection to the new SharePoint 2010 integration we have just set up. On the TFS 2010 server open up the “Team Foundation Server Administration Console” again and navigate to the “Team Project Collections” node. Once you are there you will see a list of all of your TPC’s and in our case we have a DefaultCollection as well as out named and Upgraded collection for TFS 2008. If you select the “SharePoint Site” tab we can see that it is not currently configured. Figure: Our new Upgrade TFS2008 Team Project Collection does not have SharePoint configured Select to “Edit Default Site Location” and select the new integration point that we just set up for SharePoint 2010. Once you have selected the “SharePoint Web Application” (the thing we just configured) then it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring. Figure: Set the default location for new Team Project Portals to be created for this Team Project Collection This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent. 
TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first. Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one. This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project Creation wizard. Figure: Just a basic parent site to host all of your new Team Project Portals as sub sites. You will need to add all of the users that will be creating Team Projects as Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to have lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at. You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other Collections.   Technorati Tags: TFS 2010,Sharepoint 2010,VS ALM

    Read the article

  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC pattern using Unity and so I decided to share my learnings (I know that’s not an English word, but you get the point). Ok, so I have an ASP.net project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product: 1: public class Product 2: { 3: public string Name { get; set; } 4: public string Description { get; set; } 5:  6: public Product() 7: { 8: Name = "iPad"; 9: Description = "Not just a reader!"; 10: } 11:  12: public string WriteProductDetails() 13: { 14: return string.Format("Name: {0} Description: {1}", Name, Description); 15: } 16: } In the Page_Load event of the default.aspx, I’ll need something like: 1: Product product = new Product(); 2: productDetailsLabel.Text = product.WriteProductDetails(); Now, let’s go ‘Unity’fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of Unity pattern from the above link. This image might not make much sense to you now, but as we proceed, things will get better. The first step to implement the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project. 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } Let’s make our Product class to implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project: Microsoft.Practices.Unity Microsoft.Practices.Unity.Configuration Microsoft.Practices.Unity.StaticFactory Microsoft.Practices.ObjectBuilder2 We need to add a few lines to the web.config file. The line below tells what version of Unity pattern we’ll be using. 1: <configSections> 2: <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> 3: </configSections> Add another block with the same name as the section name declared above – ‘unity’. 1: <unity> 2: <typeAliases> 3: <!--Custom object types--> 4: <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/> 5: <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/> 6: </typeAliases> 7: <containers> 8: <container name="unityContainer"> 9: <types> 10: <type type="IProduct" mapTo="Product"/> 11: </types> 12: </container> 13: </containers> 14: </unity> From the Unity Configuration schematic shown above, you see that the ‘unity’ block has a ‘typeAliases’ and a ‘containers’ segment. The typeAlias element gives a ‘short-name’ for a type. This ‘short-name’ can be used to point to this type any where in the configuration file (web.config in our case, but all this information could be coming from an external xml file as well). The container element holds all the mapping information. This container is referenced through its name attribute in the code and you can have multiple of these container elements in the containers segment. The ‘type’ element in line 10 basically says: ‘When Unity requests to resolve the alias IProduct, return an instance of whatever the short-name of Product points to’. This is the most basic piece of Unity pattern and all of this is accomplished purely through configuration. 
So, if in future you have a change in your model, all you need to do is: implement IProduct on the new model class, and either add a typeAlias for the new type and point the mapTo attribute to the new alias declared, or modify the mapTo attribute of the type element to point to the new alias (as the case may be). Now for the calling code. It's a good idea to store your unity container details in the Application cache, as this is rarely bound to change and it also makes for better performance. The Global.asax.cs file comes to our rescue: 1: protected void Application_Start(object sender, EventArgs e) 2: { 3: // create and populate a new Unity container from configuration 4: IUnityContainer unityContainer = new UnityContainer(); 5: UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); 6: section.Containers["unityContainer"].Configure(unityContainer); 7: Application["UnityContainer"] = unityContainer; 8: } 9:  10: protected void Application_End(object sender, EventArgs e) 11: { 12: Application["UnityContainer"] = null; 13: } All this says is: create an instance of UnityContainer() and read the 'unity' section from the configSections segment of the web.config file. Then get the container named 'unityContainer' and store it in the Application cache. In my code-behind file, I'll make use of this UnityContainer to create an instance of the Product type. 1: public partial class _Default : Page 2: { 3: private IUnityContainer unityContainer; 4: protected void Page_Load(object sender, EventArgs e) 5: { 6: unityContainer = Application["UnityContainer"] as IUnityContainer; 7: if (unityContainer == null) 8: { 9: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 10: } 11: else 12: { 13: IProduct productInstance = unityContainer.Resolve<IProduct>(); 14: productDetailsLabel.Text = productInstance.WriteProductDetails(); 15: } 16: } 17: } Looking at the 'else' block, I'm asking the unityContainer object to resolve the IProduct type. All this does is look at the matching type in the container, read its mapTo attribute value, get the full name from the alias and create an instance of the Product class. Fabulous!! I'll go into more detail in the next blog. The code for this blog can be found here.
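As an aside, the same IProduct-to-Product mapping can also be registered in code rather than in web.config. That is not what the post above does (it deliberately keeps the mapping in configuration), but a minimal sketch of the code-based alternative, using Unity's registration API, might look like this (the ContainerFactory class name is purely illustrative):

using Microsoft.Practices.Unity;
using ProductModel;

public static class ContainerFactory
{
    public static IUnityContainer Create()
    {
        IUnityContainer container = new UnityContainer();

        // Equivalent of the <type type="IProduct" mapTo="Product"/> element:
        // whenever IProduct is resolved, hand back a Product instance.
        container.RegisterType<IProduct, Product>();

        return container;
    }
}

The calling code stays the same: container.Resolve<IProduct>() still returns a Product, exactly as in the Page_Load above. The trade-off is that a code-based registration needs a recompile to swap implementations, whereas the config-based approach in the post does not.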

    Read the article

  • Easily use google maps, openstreet maps etc offline.

    - by samkea
I did it and I am going to explain step by step. The explanation may appear long but it's simple if you follow. Note: all the software I have used is the latest, and I have packaged it and provided it in the link below. I use a Nokia N96.
1) Root-sign smartComGPS and install it on your phone (I haven't provided the signer so that you would do some little work; I used Secman's rootsign).
2) Install Universal Maps Downloader, SmartCom OGF2 converter and OziExplorer 3.95.4s on your PC.
a) UMD is used to download map tiles from any map source like Google Maps, OpenStreetMap etc. and also to combine the tiles into an image file like png, jpg, bmp etc.
b) SmartCom OGF2 converter is used to convert the image file into a format usable on your mobile phone.
c) OziExplorer will help you to calibrate the usable map file so that it can be used with GPS on your mobile phone without the use of the internet.
3) Go to Google Maps (or wherever you pick your maps) and pan to the area of your interest. Zoom the map to at least zoom level 15 or 16, where you can see your area and the streets clearly.
4) Copy this script into a notepad file and save it on your desktop: javascript:void(prompt('',gApplication.getMap().getCenter()));
5) Open the Universal Maps Downloader. You will notice that you are required to add the left longitude, right longitude, top latitude and bottom latitude.
6) On your map in Google Maps, double-click on your preferred top-most middle point. You will notice that the map will centre on that area.
7) Copy the script and paste it in the address bar, then press enter. You will notice that a dialog with your top latitude and longitude pops up.
8) Copy the top latitude ONLY and paste it in the corresponding textbox in the UMD.
9) Repeat steps 6-7 for the bottom latitude.
10) Repeat steps 6-7 for the left longitude and right longitude too, but you have to copy the longitudes here. (By the way, record these points in a text file as they may be needed later in calibration.)
11) Set the zoom level to the same zoom level that you preferred in Google Maps.
12) Don't forget to choose a path to save your files, and under options set the proxy connection settings in UMD if you are using one.
13) Click on start and bingo! There you have your image tiles, and a file with the extension .umd will be saved in the same folder.
14) In the UMD, go to tools, click on MapViewer and choose the .umd file. You will now see your map in one piece... and you will smile!
15) Still in tools, click on map combiner. A dialog will pop up for you to choose the .umd file and to enter the IMAGE file name. You can use another extension for the image file like png, jpg etc.; I usually use png.
16) Combine... bingo! There you go, you have an IMAGE file for your map. (I suggest you create both a .bmp file and a .png file.)
17) Close UMD and open SmartCom OGF2 converter.
18) Choose your .png image and create an ogf2 file.
19) Connect your phone to your PC in Mass Memory mode and transfer the file to the smartComGPS\Maps folder.
20) Now disconnect your phone and load smartComGPS. It will load the map and prompt you to add a calibration point. Go ahead and add one calibration point with dummy coordinates. You will notice that it adds another file with the extension .map in the smartComGPS\Maps folder.
21) Connect your phone, then copy that file and paste it in your working folder on your PC. Delete that .map file from the phone too, because you are going to edit it from your PC and put it back.
22) Now open OziExplorer and go to File --> Load and Calibrate Map Image.
23) Choose the .bmp image and bingo! It will load with your map at the same zoom level.
24) Now you are going to calibrate. Use the MapView window and take the small box locator to all four corners of the map. You will notice that the map in the background moves to that area too.
25) On the right side, select the Point1 tab. Now you are in calibration mode. Move the red box in MapView to the upper-left corner to calibrate point 1.
26) Out of MapView, go to the upper-left corner of the background map and choose point (0,0) as your 1st calibration point. You will notice that these X,Y coordinates are reflected in the Point1 image coordinates.
27) Now go back to the text file where you saved your coordinates and enter the top latitude and the left longitude in the corresponding places.
28) Repeat steps 25-27 for point 2, point 3 and point 4, and click on save. That's it; you have calibrated your image and you are about to finish.
29) Go to save, and a dialog which prompts you to save a .map file will pop up. Save the map file in your working folder.
30) Right-click that .map file and edit the filename in the .map file to remove the PC's directory structure, e.g. change C:\OziExplorer\data\Kampala.bmp to Kampala.ogf2.
31) Save the .map file in the smartComGPS\Maps folder on your phone.
32) Now open smartComGPS on your phone and bingo! There is your map with GPS capability, at the same zoom level.
33) In smartComGPS options, choose connect and simulate. By now you should be smiling.
Whoa! Hope I was of help. In case you get a problem, please inform me. Below is the link to the software. Regards.
http://rapidshare.com/files/230296037/Utilities_Used.rar.html
Ok, the Rapidshare files I posted are gone, so you will have to download as described in the solution. If you need more help, go here: http://www.dotsis.com/mobile_phone/sitemap/t-160491.html
Some months later, someone else gave almost the same kind of solution here: http://www.dotsis.com/mobile_phone/sitemap/t-180123.html
Note: the solutions were meant to help view maps on Symbian phones, but I think they can now even work for Windows Phones, iPhones and others, so read, extract what you want and use it. Hope it helps. Sam Kea

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such its clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :) Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nut shell the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values. [BrowserTheory] [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")] public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider) {     //Arrange     _browser = seleniumProvider.GetBrowser();     _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                  //Act     _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");     //Assert     var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);         Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to execute these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up. 
Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately,  the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices: new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com"); This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page. Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this: [Test] public void Purchase1UserLicenceNoSupport() {    //Arrange    ISelenium browser = BrowserHelpers.GetBrowser();    var db = DbHelpers.GetWebsiteDBDataContext();    browser.Start();    browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                 //Act     browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");    var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);    //Assert     Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed,  the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test. 
With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in the AssemblyInfo.cs using the following: [assembly: DegreeOfParallelism(3)] [assembly: Parallelizable(TestScope.All)] The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes, to 55 minutes, that's an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that either inefficiencies in the hub code, my test environment or the test runner code (or some combination of all three most likely) contributes to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RC's with the hub, and up the DegreeOfParallelism. *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.
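For reference, the BrowserHelpers.GetBrowser() call in the MbUnit test above is not shown in the post, so the following is only a guessed-at sketch of what such a helper might look like when pointed at a Grid hub. The host name "gridhub" and port 4444 are assumptions; because the hub exposes the same interface as a plain Remote Control, the client code is identical either way.

using Selenium;

public static class BrowserHelpers
{
    // Hypothetical helper: the real implementation is not shown in the post.
    public static ISelenium GetBrowser()
    {
        // The hub hands this session to whichever registered RC provides
        // the "*firefox" environment; the test itself calls Start() later.
        return new DefaultSelenium("gridhub", 4444, "*firefox", "http://www.red-gate.com");
    }
}

The test then calls browser.Start() and browser.Open(...) exactly as shown, and the hub takes care of routing the commands to a free Remote Control.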

    Read the article

  • You are probably NOT a SharePoint Development Expert if&hellip;

    - by Mark Rackley
So, all you aspiring SharePoint experts out there (especially those of you who put “expert” in your resumes), it’s time for a cool splash of reality. More than likely you are NOT an expert (I know I’m not). Yes, you may have some expertise in certain aspects of SharePoint (it’s questionable if I have THAT some days), but make sure you’ve got the basics down before you start throwing that word “expert” around. I know that it becomes frustrating for those looking to hire SharePoint people to have to sift through all the resumes of those who think very highly of themselves and their skills, only to find those gaping holes in common best practices. I’m much more willing to hire a decent dev who KNOWS they are not an expert than to hire a decent+ dev who THINKS they are an expert. So… I’ve compiled a small reality check for you SharePoint devs, and a “red flag” check for those of you wishing to hire a SharePoint developer. If any of these apply to you, you are probably not a SharePoint Development Expert.
You are not a SharePoint Development Expert if you manually copy your DLLs
Seriously, I don’t care if you write the best code in the world. If you are manually copying files to each web front end you are NOT a SharePoint Development expert. Yes, I realize the admins are generally the ones who do the actual deployments, but if you don’t know how to create solution packages for your admins, you are going to end up doing more damage than good some day. There are TONS of tools out there to help generate deployable solutions for you. You have ZERO excuse.
You are not a SharePoint Development expert if you can’t tell me the main artifacts of a solution package
Directly related to the first one. If you don’t know what the Manifest, DDF, WSP, and Feature files are and how they are used in a solution package, you are NOT a SharePoint development expert. I’m not asking you to be able to write them all from scratch (heck, I can’t even do that), but you MUST know what they are and how to tweak them if necessary.
You are not a SharePoint Development expert if you don’t know what a Content Type or a Site Column is
You would be absolutely amazed at how many “Expert” SharePoint Developers have NEVER EVER created a Content Type or Site Column or even know what they are. I mean, why would you ever want to create those when you can just do everything as a custom list or custom field? Right???? (That’s sarcasm.) You also need to know how to package a Content Type and a Site Column into a deployable package, by the way.
You are not a SharePoint Development expert if you have not created at least one Web Part, Workflow, Timer Job, and Event Handler
If you haven’t written at least one of each, you don’t fully understand what they do or their limitations. Again, I expect NO ONE to be able to write these things blind. I think the last time I wrote an application from scratch without copying and pasting from another project I had done before was back in 1994? Seriously, coding is like a sourdough starter: you get it from someone else and keep adding to it.
You are not a SharePoint Development expert if you don’t know how to properly dispose of objects
Another biggie with zero excuse for getting it wrong. It is so well known that you must dispose of your SPWeb and SPSite objects that if you aren’t doing it then you are not an expert.
Heck, if you utilize “using” when handling SPWeb and SPSite objects and don’t realize that it disposes of those objects for you, then you are not a SharePoint Development expert.
You are not a SharePoint Development expert if you do not know how to properly elevate privileges
Just one of those development basics that any decent SharePoint Developer has got to have down, along with understanding how and why it’s used (there’s a short code sketch of both the disposal and elevation basics at the end of this post).
You are not a SharePoint Development expert if you don’t know all of the development options available to SharePoint and when they should be used
Okay… so all you hard core .NET SharePoint dev geeks take a moment to listen. You may be the most top notch SharePoint .NET developer in the world, but if you are opening Visual Studio to solve every problem in SharePoint, then you are NOT a SharePoint development expert. The SharePoint developer’s toolkit is growing every day with tools like Visual Studio, Data View Web Parts, XSL, jQuery, SPServices, etc. etc… If you don’t have the ability to at least recognize that “hey, you can basically do the same thing here by just dropping in Easy Tabs instead of writing some weird web part” then you are NOT a SharePoint Development expert AND you are doing a huge disservice to your clients and customers.
You are probably NOT a SharePoint Development expert if you call yourself an Expert
So, truth telling time. I’m not an expert. There, I said it. I feel so much better. Now, I realize the word “expert” has been used with my name before, but I am quick to point out that I KNOW the experts and know that they will help me if I need it, but I’m not an expert in all things SharePoint. The minute you take on that moniker you are setting yourself up for a fall. It’s too big, there’s too much to know, and there’s WAY too much you can do wrong.
You are not a SharePoint Development expert if you are not involved in the community
I expect to get the most flack for this one, but it’s always a huge red flag for me when someone says they are an expert and has ZERO knowledge of the SharePoint community. The SharePoint community is ABSOLUTELY CRITICAL to being an effective SharePoint developer, admin, architect, power user or whatever the heck you are!! The community keeps you sane, tells you when you are NOT using a best practice, recommends the best practice, and even knows when Microsoft is giving you the wrong information (*gasp* it does happen). If you can’t tell me who you are following on Twitter, whose blog you read, what conferences you attend, or name the experts who you monitor to make sure you are not doing something stupid, then you are probably doing something stupid. Again, I’m not asking you to be a speaker, blogger, or the least bit extroverted, but you should be at LEAST stalking the experts.
So… what’s the point?
So… yeah… what’s my point in all this? Well, first of all let me point out that this is by far not a finished list and I could come up with a LOT more specific “deep dive” questions, but these should be high enough level that even non-experts can recognize and ask them. If you have some common ones you run into, let me know and add them in the comments below. Also, keep in mind I’m not saying you as a developer HAVE to know EVERYTHING, but you DO need to know what you don’t know and proudly and honestly state “I don’t know, but I’ll learn and find out”. Those of us hiring SharePoint developers who know and have a passion for SharePoint are not looking for that elusive “expert” who knows everything.
We are looking for someone who “gets it”, has a similar passion, great attitude, an understanding that they DON’T know everything, and a desire to do it right.  I would bet money that most SharePoint development disasters happen because of “experts” who think they know everything rather than the developer who is cautious and knows he doesn’t. Lastly, I know there’s a raging debate over what a “SharePoint Developer” is (I should know, as I keep bringing it up). So, obviously this blog post is more closely tied to the .NET side of SharePoint development and less towards the client side, middle tier, or whatever you want to call it. So, let’s please not get that argument going here as well…  Thanks
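As promised above, here is a minimal sketch that puts the disposal and elevated-privileges basics together. The site URL and list title are placeholders for illustration only, not anything from the post.

using Microsoft.SharePoint;

public static class ExpertBasics
{
    public static int CountListItems(string siteUrl, string listTitle)
    {
        int count = 0;

        // Code inside this delegate runs as the application pool account,
        // so only wrap the work that genuinely needs elevation.
        SPSecurity.RunWithElevatedPrivileges(delegate
        {
            // We created these SPSite/SPWeb objects ourselves, so we must
            // dispose of them; the using blocks do that even on exceptions.
            // (Never dispose objects you didn't create, e.g. SPContext.Current.Web.)
            using (SPSite site = new SPSite(siteUrl))
            using (SPWeb web = site.OpenWeb())
            {
                count = web.Lists[listTitle].ItemCount;
            }
        });

        return count;
    }
}

Note that the elevation only applies to objects created inside the delegate; new-ing up the SPSite outside RunWithElevatedPrivileges and using it inside would still run with the original user's permissions.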

    Read the article

  • SWT Layout for absolute positioning with minimal-spanning composites

    - by pure.equal
    Hi, I'm writing a DND-editor where I can position elemtents (like buttons, images ...) freely via absolute positioning. Every element has a parent composite. These composites should span/grasp/embrace every element they contain. There can be two or more elements in the same composite and a composite can contain another composite. This image shows how it should look like. To achive this I wrote a custom layoutmanager: import org.eclipse.swt.SWT; import org.eclipse.swt.graphics.Point; import org.eclipse.swt.widgets.Composite; import org.eclipse.swt.widgets.Control; import org.eclipse.swt.widgets.Layout; public class SpanLayout extends Layout { Point[] sizes; int calcedHeight, calcedWidth, calcedX, calcedY; Point[] positions; /* * (non-Javadoc) * * @see * org.eclipse.swt.widgets.Layout#computeSize(org.eclipse.swt.widgets.Composite * , int, int, boolean) * * A composite calls computeSize() on its associated layout to determine the * minimum size it should occupy, while still holding all its child controls * at their minimum sizes. */ @Override protected Point computeSize(Composite composite, int wHint, int hHint, boolean flushCache) { int width = wHint, height = hHint; if (wHint == SWT.DEFAULT) width = composite.getBounds().width; if (hHint == SWT.DEFAULT) height = composite.getBounds().height; return new Point(width, height); } /* * (non-Javadoc) * * @see * org.eclipse.swt.widgets.Layout#layout(org.eclipse.swt.widgets.Composite, * boolean) * * Calculates the positions and sizes for the children of the passed * Composite, then places them accordingly by calling setBounds() on each * one. */ @Override protected void layout(Composite composite, boolean flushCache) { Control children[] = composite.getChildren(); for (int i = 0; i < children.length; i++) { calcedX = calcX(children[i]); calcedY = calcY(children[i]); calcedHeight = calcHeight(children[i]) - calcedY; calcedWidth = calcWidth(children[i]) - calcedX; if (composite instanceof Composite) { calcedX = calcedX - composite.getLocation().x; calcedY = calcedY - composite.getLocation().y; } children[i].setBounds(calcedX, calcedY, calcedWidth, calcedHeight); } } private int calcHeight(Control control) { int maximum = 0; if (control instanceof Composite) { if (((Composite) control).getChildren().length > 0) { for (Control child : ((Composite) control).getChildren()) { int calculatedHeight = calcHeight(child); if (calculatedHeight > maximum) { maximum = calculatedHeight; } } return maximum; } } return control.computeSize(SWT.DEFAULT, SWT.DEFAULT, true).y + control.getLocation().y; } private int calcWidth(Control control) { int maximum = 0; if (control instanceof Composite) { if (((Composite) control).getChildren().length > 0) { for (Control child : ((Composite) control).getChildren()) { int calculatedWidth = calcWidth(child); if (calculatedWidth > maximum) { maximum = calculatedWidth; } } return maximum; } } return control.computeSize(SWT.DEFAULT, SWT.DEFAULT, true).x + control.getLocation().x; } private int calcX(Control control) { int minimum = Integer.MAX_VALUE; if (control instanceof Composite) { if (((Composite) control).getChildren().length > 0) { for (Control child : ((Composite) control).getChildren()) { int calculatedX = calcX(child); if (calculatedX < minimum) { minimum = calculatedX; } } return minimum; } } return control.getLocation().x; } private int calcY(Control control) { int minimum = Integer.MAX_VALUE; if (control instanceof Composite) { if (((Composite) control).getChildren().length > 0) { for (Control child : ((Composite) 
control).getChildren()) { int calculatedY = calcY(child); if (calculatedY < minimum) { minimum = calculatedY; } } return minimum; } } return control.getLocation().y; } } The problem with it is that it always positions the composite at the position (0,0). This is because it tries to change the absolute positioning into a relative one. Let's say I position an image at position (100,100) and one at (200,200). Then it has to calculate the location of the composite to be at (100,100) and spanning the one at (200,200). But as all child positions are relative to their parents, I have to change the positions of the children to remove the 100px offset of the parent. When the layout gets updated it moves everything to the top-left corner (as seen in the image), because the position of the image is not (100,100) but (0,0), since I tried to remove the 100px offset of the parent. Where is my error in reasoning? Is this maybe a totally wrong approach? Is there maybe another way to achieve the desired behavior? Thanks in advance! Best regards, Ed

    Read the article
