Search Results

Search found 18135 results on 726 pages for 'shared objects'.


  • Nesting Linq-to-Objects query within Linq-to-Entities query – what is happening under the covers?

    - by carewithl
    var numbers = new int[] { 1, 2, 3, 4, 5 };
    var contacts = from c in context.Contacts
                   where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault()
                   select c;
    foreach (var item in contacts)
        Console.WriteLine(item.ContactID);

    A Linq-to-Entities query is first translated into a Linq expression tree, which is then converted by Object Services into a command tree. And if a Linq-to-Entities query nests a Linq-to-Objects query, then this nested query also gets translated into an expression tree.

    a) I assume none of the operators of the nested Linq-to-Objects query actually get executed, but instead the data provider for the particular DB (or perhaps Object Services) knows how to transform the logic of the Linq-to-Objects operators into appropriate SQL statements?
    b) Does the data provider know how to create equivalent SQL statements for only some of the Linq-to-Objects operators?
    c) Similarly, does the data provider know how to create equivalent SQL statements for only some of the non-Linq methods in the .NET Framework class library?

    EDIT: I know only some SQL, so I can't be completely sure, but reading the SQL query generated for the above code, it seems the data provider didn't actually execute the numbers.Max method, but instead somehow figured out that numbers.Max should return the maximum value and then included a call to T-SQL's built-in MAX function in the generated SQL query. It also put all the values held by the numbers array into the SQL query.

        SELECT
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1],
        [Extent1].[ContactID] AS [ContactID],
        [Extent1].[FirstName] AS [FirstName],
        [Extent1].[LastName] AS [LastName],
        [Extent1].[Title] AS [Title],
        [Extent1].[AddDate] AS [AddDate],
        [Extent1].[ModifiedDate] AS [ModifiedDate],
        [Extent1].[RowVersion] AS [RowVersion],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13]
        FROM [dbo].[Contact] AS [Extent1]
        LEFT OUTER JOIN (SELECT [Extent2].[ContactID] AS [ContactID], [Extent2].[BirthDate] AS [BirthDate], [Extent2].[HeightInches] AS [HeightInches], [Extent2].[WeightPounds] AS [WeightPounds], [Extent2].[DietaryRestrictions] AS [DietaryRestrictions], [Extent3].[CustomerTypeID] AS [CustomerTypeID], [Extent3].[InitialDate] AS [InitialDate], [Extent3].[PrimaryDesintation] AS [PrimaryDesintation], [Extent3].[SecondaryDestination] AS [SecondaryDestination], [Extent3].[PrimaryActivity] AS [PrimaryActivity], [Extent3].[SecondaryActivity] AS [SecondaryActivity], [Extent3].[Notes] AS [Notes], [Extent3].[RowVersion] AS [RowVersion], cast(1 as bit) AS [C1] FROM [dbo].[ContactPersonalInfo] AS [Extent2] INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID]
        LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll3].[C1] AS [C1] FROM (SELECT [UnionAll2].[C1] AS [C1] FROM (SELECT [UnionAll1].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]) AS [Limit1] ON 1 = 1
        LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll7].[C1] AS [C1] FROM (SELECT [UnionAll6].[C1] AS [C1] FROM (SELECT [UnionAll5].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]) AS [Limit2] ON 1 = 1
        CROSS JOIN (SELECT MAX([UnionAll12].[C1]) AS [A1] FROM (SELECT [UnionAll11].[C1] AS [C1] FROM (SELECT [UnionAll10].[C1] AS [C1] FROM (SELECT [UnionAll9].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]) AS [GroupBy1]
        WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END))

    Based on this, is it possible that the Linq-to-Entities provider indeed doesn't execute non-Linq and Linq-to-Objects methods, but instead creates equivalent SQL statements for some of them (and throws an exception for the others)? Thank you in advance.
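
    For concreteness on (a)-(c), the translation-versus-execution behaviour can be probed with a small variation of the query above. This is a sketch (it assumes the same ObjectContext named context as in the question; string.Format is used purely as an example of a method with no SQL translation):

        var numbers = new int[] { 1, 2, 3, 4, 5 };

        // numbers.Max() is not run as CLR code here: as the generated SQL above
        // shows, the provider inlines the array values as UNION ALL rows and
        // emits T-SQL's built-in MAX over them.
        var translatable = from c in context.Contacts
                           where c.ContactID == numbers.Max()
                           select c;

        // string.Format references a database column, so it cannot be
        // translated; enumerating this query throws NotSupportedException
        // rather than executing the method in memory.
        var untranslatable = from c in context.Contacts
                             where c.LastName == string.Format("{0}X", c.FirstName)
                             select c;

    In other words, the nested operators are not executed as CLR code; they are either mapped to equivalent SQL during command-tree translation or the query fails at translation time.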

    Read the article

  • How do engines avoid "Phase Lock" (multiple objects in same location) in a Physics Engine?

    - by C0M37
    Let me explain Phase Lock first: two objects of non-zero mass occupy the same space but have zero energy (no velocity). Do they bump forever with zero-velocity resolution vectors, or do they just stay locked together until an outside force interacts? In my home-brewed engine, I realized that if I loaded a character into a tree and moved them, they would signal a collision and hop back to their original spot. I suppose I could fix this by implementing impulses in the event of a collision instead of just jumping back to the last spot I was in (my implementation kind of sucks). But while I make my engine more robust, I'm curious how most other physics engines handle this case. Do objects that start in the same spot with no movement speed just shoot out from each other in a random direction? Or do they sit there until something happens? Which option is generally the best approach?
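
    One widely used answer is positional correction: rather than relying on impulses (which do nothing at zero velocity) or snapping back to the previous position, each solver step pushes the overlapping pair apart along the contact normal by a fraction of the penetration depth. Below is a minimal sketch of that idea with illustrative Body/Contact types; the percent and slop constants are typical Box2D-style tuning values, not taken from any particular engine:

        using System;
        using System.Numerics;

        class Body
        {
            public Vector2 Position;
            public float InvMass;   // 0 for static bodies (infinite mass)
        }

        struct Contact
        {
            public Vector2 Normal;      // from a towards b
            public float Penetration;   // overlap depth along the normal
        }

        static class Solver
        {
            // Linear projection: separate overlapping bodies in proportion to
            // inverse mass. It works at zero velocity, so two objects spawned
            // inside each other drift apart over a few frames instead of
            // "bumping forever" or teleporting to their last valid spot.
            public static void PositionalCorrection(Body a, Body b, Contact c)
            {
                const float percent = 0.8f; // fraction of penetration resolved per step
                const float slop = 0.01f;   // tolerated overlap, suppresses jitter at rest

                float invMassSum = a.InvMass + b.InvMass;
                if (invMassSum == 0f) return; // both static: nothing to move

                float depth = Math.Max(c.Penetration - slop, 0f);
                Vector2 correction = c.Normal * (depth / invMassSum * percent);

                a.Position -= correction * a.InvMass;
                b.Position += correction * b.InvMass;
            }
        }

    If two objects are exactly coincident the contact normal is degenerate, and engines typically fall back to an arbitrary or random axis, which is where the "shoot out in a random direction" behaviour comes from; the slop term is what lets resting contacts sit still instead of jittering.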

    Read the article

  • SQL SERVER – ERROR: FIX using Compatibility Level – Database diagram support objects cannot be installed because this database does not have a valid owner – Part 2

    - by pinaldave
    Earlier I wrote a blog post about how to resolve the error with database diagrams. Today I faced the same error when I was dealing with a database which had been upgraded from SQL Server 2005 to SQL Server 2008 R2. When I searched for the solution online I ended up on my own earlier solution, SQL SERVER – ERROR: FIX – Database diagram support objects cannot be installed because this database does not have a valid owner. I found it really interesting that I ended up on my own solution. However, the solution to the problem this time was a bit different. Let us see how we can resolve it.

    Error: Database diagram support objects cannot be installed because this database does not have a valid owner. To continue, first use the Files page of the Database Properties dialog box or the ALTER AUTHORIZATION statement to set the database owner to a valid login, then add the database diagram support objects.

    Workaround / Fix / Solution: Follow the steps listed below and they should solve your problem. (NOTE: Try this for databases upgraded from a previous version. For everybody else, just follow the steps mentioned here.)

    1. Select your database >> Right Click >> Select Properties
    2. Go to the Options page
    3. In the dropdown at right labeled "Compatibility Level" choose "SQL Server 2005 (90)"
    4. Select Files on the left side of the page
    5. In the Owner box, click the button with three dots (…) in it
    6. Now select the user 'sa' or NT AUTHORITY\SYSTEM and click OK

    This will solve your problem. However, there is one very important note you must consider: whenever you change a database owner there are security-related implications, so I suggest you check your security policies before changing the authorization. I did this to quickly solve the problem on my development server; if you are on a production server, you may open yourself up to a potential security compromise.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
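
    The steps above have a scripted equivalent. The sketch below assumes a placeholder database name ("MyUpgradedDb"); the same warning about the security implications of changing the owner applies:

        using System.Data.SqlClient;

        class FixDiagramSupport
        {
            static void Main()
            {
                using (var conn = new SqlConnection(
                    "Server=.;Database=master;Integrated Security=true"))
                {
                    conn.Open();
                    // Same two changes as the GUI steps: compatibility level
                    // back to SQL Server 2005 (90), then a valid owner.
                    foreach (var sql in new[]
                    {
                        "ALTER DATABASE [MyUpgradedDb] SET COMPATIBILITY_LEVEL = 90",
                        "ALTER AUTHORIZATION ON DATABASE::MyUpgradedDb TO sa"
                    })
                    {
                        using (var cmd = new SqlCommand(sql, conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }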

    Read the article

  • Should concrete classes avoid calling other concrete classes, except for data objects?

    - by Kazark
    In Appendix A to The Art of Unit Testing, Roy Osherove, speaking about ways to write testable code from the start, says, "An abstract class shouldn't call concrete classes, and concrete classes shouldn't call concrete classes either, unless they're data objects (objects holding data, with no behavior)" (259). The first half of the sentence is simply Dependency Inversion from SOLID. The second half seems rather extreme to me. That means that every time I'm going to write a class that isn't a simple data structure, which is most classes, I should write an interface or abstract class first, right? Is it really worthwhile to go that far in defining abstract classes and interfaces? Can anyone explain why in more detail, or refute it in spite of its benefit for testability?
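
    To make the quoted rule concrete, here is a small hypothetical example of what it asks for: the data object is used directly, while the behavioural dependency is hidden behind an interface so a test can substitute it:

        // Data object: per the rule, fine for concrete classes to use directly.
        public class Invoice
        {
            public decimal Amount { get; set; }
            public string Customer { get; set; }
        }

        // A behavioural dependency goes behind an abstraction...
        public interface ITaxCalculator
        {
            decimal TaxFor(Invoice invoice);
        }

        public class DefaultTaxCalculator : ITaxCalculator
        {
            public decimal TaxFor(Invoice invoice) => invoice.Amount * 0.2m;
        }

        // ...so BillingService never names DefaultTaxCalculator, and a unit
        // test can hand it a stub ITaxCalculator instead.
        public class BillingService
        {
            private readonly ITaxCalculator _calculator;

            public BillingService(ITaxCalculator calculator) => _calculator = calculator;

            public decimal Total(Invoice invoice) => invoice.Amount + _calculator.TaxFor(invoice);
        }

    The cost the question is weighing is exactly the ITaxCalculator declaration: one extra type per behavioural collaborator, in exchange for being able to test BillingService in isolation.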

    Read the article

  • Does shared hosting hold some benefits over a VPS? [closed]

    - by John Nevermore
    I was looking for a Windows host for my ASP.NET MVC app, and the prices at softsyshosting looked very decent. However, I fail to understand why they offer the codename "Enterprise" shared hosting at the same price point as the codename "Economy" VPS.

    Enterprise Shared: http://www.softsyshosting.com/windows.aspx
    The First Economy VPS: http://www.softsyshosting.com/windows-vps.aspx

    Why would someone be willing to pay the same amount of money for 350 GB less bandwidth, less database storage, less disk space, and no RDP control?

    Read the article

  • Qt Linking Error

    - by Wallah
    Hi, I configure qt-x11 with following options ./configure -prefix /iTalk/qtx11 -prefix-install -bindir /iTalk/qtx11-install/bin -libdir /iTalk/qtx11-install/lib -docdir /iTalk/qtx11-install/doc -headerdir /iTalk/qtx11-install/include -datadir /iTalk/qtx11-install/data -examplesdir /iTalk/qtx11-install/examples -demosdir /iTalk/qtx11-install/demos -debug. Now I am getting following errors in Fedora Core 6. Can you please tell me where the problem is? obj/debug-shared/qapplication_x11.o: In function `qt_init(QApplicationPrivate*, int, _XDisplay*, unsigned long, unsigned long)': /iTalk/QT4/qt/src/gui/kernel/qapplication_x11.cpp:1713: undefined reference to `FcInit' .obj/debug-shared/qfontdatabase.o: In function `queryFont': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1727: undefined reference to `FcFreeTypeQuery' .obj/debug-shared/qfontdatabase.o: In function `registerFont': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1959: undefined reference to `FcConfigGetCurrent' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1963: undefined reference to `FcConfigGetFonts' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1965: undefined reference to `FcConfigAppFontAddFile' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1966: undefined reference to `FcConfigGetFonts' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1985: undefined reference to `FcConfigGetBlanks' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1997: undefined reference to `FcPatternDel' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1998: undefined reference to `FcPatternAddString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2001: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2006: undefined reference to `FcFontSetAdd' .obj/debug-shared/qfontdatabase.o: In function `qt_FcPatternToQFontDef(_FcPattern*, QFontDef const&)': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:746: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:751: undefined reference to `FcPatternGetDouble' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:759: undefined reference to `FcPatternGetDouble' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:771: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:776: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:786: undefined reference to `FcPatternGetBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:793: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:800: undefined reference to `FcPatternGetInteger' .obj/debug-shared/qfontdatabase.o: In function `FcFontSetRemove': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1573: undefined reference to `FcPatternDestroy' .obj/debug-shared/qfontdatabase.o: In function `qt_fontSetForPattern(_FcPattern*, QFontDef const&)': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1657: undefined reference to `FcFontSort' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1671: undefined reference to `FcPatternGetBool' .obj/debug-shared/qfontdatabase.o: In function `qt_addPatternProps(_FcPattern*, int, int, QFontDef const&)': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1449: undefined reference to `FcPatternAddInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1456: undefined reference to `FcPatternAddInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1459: undefined reference to `FcPatternAddDouble' 
/iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1464: undefined reference to `FcPatternAddInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1468: undefined reference to `FcPatternAddBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1471: undefined reference to `FcPatternAddBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1476: undefined reference to `FcLangSetCreate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1477: undefined reference to `FcLangSetAdd' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1478: undefined reference to `FcPatternAddLangSet' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1479: undefined reference to `FcLangSetDestroy' .obj/debug-shared/qfontdatabase.o: In function `tryPatternLoad': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1588: undefined reference to `FcPatternDuplicate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1593: undefined reference to `FcConfigSubstitute' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1594: undefined reference to `FcDefaultSubstitute' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1596: undefined reference to `FcFontMatch' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1606: undefined reference to `FcPatternDuplicate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1613: undefined reference to `FcPatternGetCharSet' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1615: undefined reference to `FcCharSetHasChar' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1619: undefined reference to `FcPatternGetLangSet' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1621: undefined reference to `FcLangSetHasLang' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1628: undefined reference to `FcPatternDel' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1629: undefined reference to `FcPatternAddBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1646: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1648: undefined reference to `FcPatternDestroy' .obj/debug-shared/qfontdatabase.o: In function `loadFontConfig': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1023: undefined reference to `FcObjectSetCreate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1024: undefined reference to `FcPatternCreate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1037: undefined reference to `FcObjectSetAdd' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1040: undefined reference to `FcFontList' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1041: undefined reference to `FcObjectSetDestroy' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1042: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1046: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1057: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1059: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1061: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1063: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1065: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1067: undefined reference to `FcPatternGetBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1069: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1074: undefined reference to `FcPatternGetLangSet' 
/iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1081: undefined reference to `FcLangSetHasLang' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1100: undefined reference to `FcPatternGetCharSet' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1107: undefined reference to `FcCharSetHasChar' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1116: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1136: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1153: undefined reference to `FcPatternGetDouble' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1161: undefined reference to `FcFontSetDestroy' .obj/debug-shared/qfontdatabase.o: In function `getFcPattern': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1494: undefined reference to `FcPatternCreate' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1509: undefined reference to `FcPatternAdd' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1516: undefined reference to `FcPatternAddWeak' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1524: undefined reference to `FcPatternAddWeak' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1531: undefined reference to `FcPatternAddInteger' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1533: undefined reference to `FcPatternAddBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1535: undefined reference to `FcPatternAddBool' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1539: undefined reference to `FcDefaultSubstitute' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1540: undefined reference to `FcConfigSubstitute' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1541: undefined reference to `FcConfigSubstitute' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1550: undefined reference to `FcPatternAddWeak' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1557: undefined reference to `FcPatternAddWeak' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1564: undefined reference to `FcPatternAddWeak' .obj/debug-shared/qfontdatabase.o: In function `loadFc': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1707: undefined reference to `FcFontSetDestroy' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1716: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1718: undefined reference to `FcPatternDestroy' .obj/debug-shared/qfontdatabase.o: In function `QFontDatabase::removeAllApplicationFonts()': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2048: undefined reference to `FcConfigAppFontClear' .obj/debug-shared/qfontdatabase.o: In function `QFontDatabase::removeApplicationFont(int)': /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2027: undefined reference to `FcConfigAppFontClear' .obj/debug-shared/qfontengine_x11.o: In function `qt_x11ft_convert_pattern(_FcPattern*, QByteArray*, int*, bool*)': /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:970: undefined reference to `FcPatternGetString' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:972: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:975: undefined reference to `FcPatternGetBool' .obj/debug-shared/qfontengine_x11.o: In function `QFontEngineX11FT': /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:999: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1016: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1041: undefined reference to `FcPatternGetBool' 
/iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1077: undefined reference to `FcPatternGetBool' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1106: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1112: undefined reference to `FcPatternGetCharSet' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1113: undefined reference to `FcCharSetCopy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1115: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:999: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1016: undefined reference to `FcPatternGetInteger' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1041: undefined reference to `FcPatternGetBool' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1077: undefined reference to `FcPatternGetBool' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1106: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1112: undefined reference to `FcPatternGetCharSet' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1113: undefined reference to `FcCharSetCopy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1115: undefined reference to `FcPatternDestroy' .obj/debug-shared/qfontengine_x11.o: In function `engineForPattern': /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:868: undefined reference to `FcFontMatch' .obj/debug-shared/qfontengine_x11.o: In function `QFontEngineMultiFT::loadEngine(int)': /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:929: undefined reference to `FcPatternEqual' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:932: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:941: undefined reference to `FcPatternDuplicate' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:951: undefined reference to `FcConfigSubstitute' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:952: undefined reference to `FcDefaultSubstitute' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:956: undefined reference to `FcPatternDestroy' .obj/debug-shared/qfontengine_x11.o: In function `~QFontEngineMultiFT': /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy' /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy' .obj/debug-shared/qfontengine_ft.o: In function `QFontEngineFT::stringToCMap(QChar const*, int, QGlyphLayout*, int*, QFlags) const': /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:1546: undefined reference to `FcCharSetHasChar' /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:1581: undefined reference to `FcCharSetHasChar' .obj/debug-shared/qfontengine_ft.o: In function `QFreetypeFace::release(QFontEngine::FaceId const&)': /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:308: undefined reference to `FcCharSetDestroy' collect2: ld returned 1 exit status make[1]: *** 
[../../lib/libQtGui.so.4.5.3] Error 1 make[1]: Leaving directory `/iTalk/QT4/qt/src/gui' make: *** [sub-gui-make_default-ordered] Error 2

    Read the article

  • git: Is it possible to save the packed objects of a dry run and push them later?

    - by shovavnik
    I'm trying to push a bunch of commits that contain a lot of code and a few thousand MP3 and PDF files besides (ranging from 5-40 MB each). Git successfully packs the objects:

        C:\MyProject> git push
        Counting objects: 7582, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (7510/7510), done.

    But it fails to send the push for some as yet unknown reason. The problem is that it takes a very long time to repack the files (I'm on a battery-powered laptop and it took about 20 minutes to pack). So I guess my question can be phrased thus: Is it possible to save the packed objects created in a dry run? Once saved, is it possible to push those packed objects and avoid repacking? I looked it up in the git manual and elsewhere and couldn't find anything conclusive. Any help or pointers are appreciated.

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high-performance and highly scalable system, not only on a cloud platform but in on-premise environments as well. In March 2011 Windows Azure AppFabric Caching was launched into production, providing an in-memory, distributed caching service over the cloud. And now, in the June 2012 update, the cache team announces a brand new caching solution on Windows Azure, called Windows Azure Caching (Preview). The original Windows Azure AppFabric Caching was renamed Windows Azure Shared Caching.

    What's Caching (Preview)

    If you have been using Shared Caching, you know that it is constructed from a bunch of cache servers. When you want to use it, you first create a cache account from the developer portal and specify the size you want, which means how much memory you can use to store the data you want cached. Then you can add, get and remove items through your code via the cache URL. Shared Caching is a multi-tenant system which hosts all cached items across all users, so you don't know which server your data is located on. This caching mode works well and covers most cases, but it has some problems. The first one is performance. Since Shared Caching is a multi-tenant system, all cache operations go through the Shared Caching gateway and are then routed to the server that has the data you are looking for. Even though there are some caches in the Shared Caching system, it still takes time to get from your cloud services to the cache service. Secondly, the Shared Caching service works as a black box to the developer. The only thing we know is our cache endpoint, and that's all. Some people may be satisfied with that, since they don't want to care about anything underlying. But if you need to know more and want more control, that's impossible in Shared Caching. The last problem is price and cost-efficiency. You pay the bill based on how much cache you requested per month, but when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). With Shared Caching we have to pay for the cache service while wasting some of our memory and CPU locally. Because of the issues above, Microsoft now offers a new caching mode, Caching (Preview). Instead of being a separate cache service, Caching (Preview) leverages the memory and CPU of our own cloud services (web roles and worker roles) as the cache cluster. Hence Caching (Preview) runs on the virtual machines that host, or are near, our cloud applications. Without any gateway or routing, and located in the same data center and the same racks, it provides much higher performance than Shared Caching. Caching (Preview) works side by side with our application, initialized and run as a Windows service in the virtual machines and invoked by the startup tasks of our roles, so we get more information about it and more control over it. And since Caching (Preview) utilizes the memory and CPU of our existing cloud services, it's free: what we pay for is the original compute price, and the resources on each machine are used more efficiently.

    Enable Caching (Preview)

    It's very simple to enable Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project in Visual Studio and add an ASP.NET web role. Then open the role settings and select the Caching page. This is where we enable and configure Caching (Preview) on a role. To enable Caching (Preview), just tick the "Enable Caching (Preview Release)" check box. Then we need to specify which cache cluster mode we want to use. There are two kinds of caching mode: co-located and dedicated. Co-located mode means we use the memory of the instances that run our cloud services (web role or worker role). When using this mode we must specify what percentage of the memory will be used as the cache; the default value is 30%, so make sure it will not affect the role's business execution. Dedicated mode uses all the memory in the virtual machine as the cache. (In fact it reserves some for the operating system, Azure hosting, etc., but it tries to use as much of the available memory as possible.) As you can see, Caching (Preview) is defined per role, which means all instances of the role apply the same setting and act as a single cache pool, and you consume it by specifying the name of the role, as I will demonstrate later. In a Windows Azure project we can have more than one role with Caching (Preview) enabled, and then we will have more caches. For example, let's say I have a web role with 30% co-located caching and a worker role with dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As the figure above shows, cache 1 is contributed by the three web role instances while cache 2 is contributed by the two worker role instances. We can then add items to cache 1 and retrieve them from web role code and worker role code, but items stored in cache 1 cannot be retrieved from cache 2, since the two caches are isolated. Back in Visual Studio, we specify a 30% co-located cache and use the local storage emulator to store the cache cluster's runtime status. At the bottom we can specify named caches; for now we just use the default one. Now we have enabled Caching (Preview) in our web role settings. Next, let's have a look at how to consume our cache.

    Consume Caching (Preview)

    Caching (Preview) can only be consumed by roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected to from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) cache from other cloud services. This is different from Shared Caching, which is open to all services that have the connection URL and authentication token. To consume Caching (Preview) we need to add some references to our project as well as some configuration in the Web.config, and NuGet makes our life easy. Right-click on the web role project, select "Manage NuGet Packages", and search for the package named "WindowsAzure.Caching". In the package list, install "Windows Azure Caching Preview". It downloads all the necessary references from the NuGet repository and updates our Web.config as well. Open the Web.config of the web role and find the "dataCacheClients" node. Under this node we can specify the cache clients we are going to use; each cache client uses a role name to identify and find the cache. Since we only have this one web role with Caching (Preview) enabled, I pasted the current role name into the configuration. Then, in the default page, I add some code to show how to use the cache. I have a textbox on the page where the user can input his or her name, then press a button to generate an email address. In the backend code I check whether this name has been added to the cache. If yes, I return the email back immediately; otherwise I sleep the thread for 2 seconds to simulate latency, then add it to the cache and return it to the page.

        protected void btnGenerate_Click(object sender, EventArgs e)
        {
            // check if name is specified
            var name = txtName.Text;
            if (string.IsNullOrWhiteSpace(name))
            {
                lblResult.Text = "Error. Please specify name.";
                return;
            }

            bool cached;
            var sw = new Stopwatch();
            sw.Start();

            // create the cache factory and cache
            var factory = new DataCacheFactory();
            var cache = factory.GetDefaultCache();

            // check if the name specified is in cache
            var email = cache.Get(name) as string;
            if (email != null)
            {
                cached = true;
                sw.Stop();
            }
            else
            {
                cached = false;
                // simulate the latency
                Thread.Sleep(2000);
                email = string.Format("{0}@igt.com", name);
                // add to cache
                cache.Add(name, email);
            }

            sw.Stop();
            lblResult.Text = string.Format(
                "Cached = {0}. Duration: {1}s. {2} => {3}",
                cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
        }

    Caching (Preview) can be used on the local emulator, so we just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it is not in the cache; if I re-enter my name it comes back at once from the cache. Since Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances, tweak some code to show the current instance ID on the page, and try again. We can see that the cache can be retrieved even though it was added by another instance.

    Consume Caching (Preview) Across Roles

    As I mentioned, Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to our cloud solution and add the same code to its default page. In its Web.config we point the cache client at the role enabled earlier by specifying that role's name. Then we start the solution locally and go to web role 1, specify the name and let it generate the email. Since there's no cache entry for this name it takes about 2 seconds, but the email is saved into the cache. Then we go to web role 2 and specify the same name, and it retrieves the email saved by web role 1 and returns it very quickly. Finally, we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to a real Azure account.
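
    The code above consumes the default cache. A named cache, defined on the role's Caching page, is fetched by name instead of via GetDefaultCache. A minimal sketch (assuming a named cache called "NamedCache1" was added in the role settings; the key and value are placeholders):

        using Microsoft.ApplicationServer.Caching;

        // DataCacheFactory reads the dataCacheClients section of Web.config,
        // where the client is pointed at the role hosting the cache cluster.
        var factory = new DataCacheFactory();

        // Retrieve the named cache by the name configured in the role settings.
        DataCache named = factory.GetCache("NamedCache1");

        named.Put("greeting", "hello");              // Put overwrites; Add throws if the key already exists
        var value = named.Get("greeting") as string;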
    More Awesome Features

    As an in-memory distributed caching solution, Caching (Preview) has some fancy features I would like to highlight here. The first one is high availability support. This is the first time I have heard of a distributed cache supporting high availability: in the distributed cache world, if a cache cluster fails, the data it stored is lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache should be backed up automatically. If so, the data belonging to that named cache is replicated on another instance of the role; then, if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup, open the Caching page in Visual Studio and, for the named cache you want to back up, change the Backup Copies value from 0 to 1. The only valid values for Backup Copies are 0 and 1: "0" means no backup and no high availability, while "1" means high availability with the data backed up to another instance. When using the high availability feature there are some things to keep in mind. Firstly, high availability does NOT mean the data in the cache can never be lost in any kind of failure. For example, if we have a cache-enabled role with 10 instances and 9 of them fail, most of the cached data will be lost, since the primary and backup instances may fail together. Normally this will not happen, since Microsoft guarantees that it will use an instance in a different fault domain for the backup cache. Secondly, enabling the backup means you store two copies of your data: if you think 100 MB of memory is enough for your cache, you need at least 200 MB once backup is enabled. Besides high availability, Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than Windows Azure Shared Caching does. It supports local cache with notifications, and it supports both absolute and sliding-window expiration types. Caching (Preview) also supports the Memcached protocol, which means that if you have an application based on Memcached, you can use Caching (Preview) without any code changes; all you need to change is the configuration of how you connect to the cache. Similar to Windows Azure Shared Caching, Microsoft also offers out-of-the-box ASP.NET session state and output cache providers on top of Caching (Preview).

    Summary

    Caching is a very important component when building a cloud-based application. In the June 2012 update Microsoft provides a new cache solution named Caching (Preview). Unlike the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud, which gives more control, more performance and more cost-effectiveness. So now we have two caching solutions in Windows Azure: Shared Caching and Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, you have to use Shared Caching. But if you only need a fast, near, distributed cache, you'd be better off with Caching (Preview).

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I – Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments are now using SOA clusters. So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post covers the settings required to prepare the cluster for installation and configuration.

    Software Required

    The following software is required for an 11.1.1.3 SOA/BPM install.

    Software | Version | Notes
    Oracle Database | Certified databases are listed here | SOA & BPM Suites require a working database installation.
    Repository Creation Utility (RCU) | 11.1.1.3 | If upgrading an 11.1.1.2 repository then a separate script is available.
    Web Tier Utilities | 11.1.1.2 | Provides the web server. 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    Web Tier Utilities | 11.1.1.3 | Web server 11.1.1.3 patch. You can use the 11.1.1.2 version without problems.
    Oracle WebLogic Server 11gR1 | 10.3.3 | This is the host platform for the 11.1.1.3 SOA/BPM Suites.
    SOA Suite | 11.1.1.2 | SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    SOA Suite | 11.1.1.3 | SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed.

    My installation was performed on Oracle Enterprise Linux 5.4 64-bit.

    Database

    I will not cover setting up the database in this series other than to identify the database requirements. If setting up a SOA cluster then ideally we would also be using a RAC database; I assume that this is running on separate machines to the SOA cluster. Section 2.1, "Database", of the EDG covers the database configuration in detail.

    Settings: the database should have processes set to at least 400 if running SOA/BPM and BAM.

        alter system set processes=400 scope=spfile

    Run RCU

    The Repository Creation Utility creates the necessary database tables for the SOA Suite. The RCU can be run from any machine that can access the target database. In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix, which allows you to have multiple 11g installations in the same database. After running the RCU you need to grant some additional privileges to the soainfra user, which should have privileges on the transaction tables.

        grant select on sys.dba_pending_transactions to prefix_soainfra
        grant force any transaction to prefix_soainfra

    Machines

    The cluster will be built on the following machines. EDG Name is the name used for the machine in the EDG; Notes describe the purpose of the machine.

    EDG Name | Notes
    LB | External load balancer to distribute load across, and fail over between, web servers.
    WEBHOST1 | Hosts a web server.
    WEBHOST2 | Hosts a web server.
    SOAHOST1 | Hosts SOA components.
    SOAHOST2 | Hosts SOA components.
    BAMHOST1 | Hosts BAM components.
    BAMHOST2 | Hosts BAM components.

    Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers, in which case we add SOAHOST3, SOAHOST4, etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine.

    Software Components

    The cluster will use the following software components. EDG Name is the name used for the component in the EDG; Type is the type of component, generally a WebLogic component; Notes describe the purpose of the component.

    EDG Name | Type | Notes
    AdminServer | Admin Server | Domain Admin Server
    WLS_WSM1 | Managed Server | Web Services Manager Policy Manager Server
    WLS_WSM2 | Managed Server | Web Services Manager Policy Manager Server
    WLS_SOA1 | Managed Server | SOA/BPM Managed Server
    WLS_SOA2 | Managed Server | SOA/BPM Managed Server
    WLS_BAM1 | Managed Server | BAM Managed Server running Active Data Cache
    WLS_BAM2 | Managed Server | BAM Managed Server without Active Data Cache
    Node Manager | | Will run on all hosts with WLS servers
    OHS1 | Web Server | Oracle HTTP Server
    OHS2 | Web Server | Oracle HTTP Server
    LB | Load Balancer | Load balancer, not part of SOA Suite

    The above assumes a 2-node cluster.

    Network Configuration

    The SOA cluster requires an extensive amount of network configuration. I would recommend assigning a private subnet (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.16.x.x) to the cluster for use by addresses that only need to be accessible to the load balancer or other cluster members. Section 2.2, "Network", of the EDG covers the network configuration in detail.

    In the table below, EDG Name is the hostname used in the EDG; IP Name is the IP address name used in the EDG; Type is the type of IP address (Fixed is fixed to a single machine, Floating is assigned to one of several machines to allow for server migration, Virtual is assigned to a load balancer and used to distribute load across several machines); Host is the host where the IP address is active (for floating IP addresses a range of hosts is given); Bound By identifies which software component will use the IP address; Scope shows where the IP address needs to be resolved (Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section, and are only used for inter-cluster communication or for access by the load balancer; Internal scope addresses must be resolvable on the internal DNS servers; Public scope addresses must be resolvable on external DNS servers as well); Notes are comments on why that type of IP is used.

    EDG Name | IP Name | Type | Host | Bound By | Scope | Notes
    ADMINVHN | VIP1 | Floating | SOAHOST1-SOAHOSTn | AdminServer | Cluster | Admin server, must be able to migrate between SOA server machines.
    SOAHOST1 | IP1 | Fixed | SOAHOST1 | NodeManager, WLS_WSM1 | Cluster | WSM server 1 does not require server migration.
    SOAHOST2 | IP2 | Fixed | SOAHOST2 | NodeManager, WLS_WSM2 | Cluster | WSM server 2 does not require server migration.
    SOAHOST1VHN | VIP2 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA1 | Cluster | SOA server 1, must be able to migrate between SOA server machines.
    SOAHOST2VHN | VIP3 | Floating | SOAHOST1-SOAHOSTn | WLS_SOA2 | Cluster | SOA server 2, must be able to migrate between SOA server machines.
    BAMHOST1 | IP4 | Fixed | BAMHOST1 | NodeManager | Cluster |
    BAMHOST1VHN | VIP4 | Floating | BAMHOST1-BAMHOSTn | WLS_BAM1 | Cluster | BAM server 1, must be able to migrate between BAM server machines.
    BAMHOST2 | IP3 | Fixed | BAMHOST2 | NodeManager, WLS_BAM2 | Cluster | BAM server 2 does not require server migration.
    WEBHOST1 | IP5 | Fixed | WEBHOST1 | OHS1 | Cluster |
    WEBHOST2 | IP6 | Fixed | WEBHOST2 | OHS2 | Cluster |
    soa.mycompany.com | VIP5 | Virtual | LB | LB | Public | External access point to SOA cluster.
    admin.mycompany.com | VIP6 | Virtual | LB | LB | Internal | Internal access to the WLS console and EM.
    soainternal.mycompany.com | VIP7 | Virtual | LB | LB | Internal | Internal access point to SOA cluster.

    Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster. For example, in the event of failure of SOAHOST1, WLS_SOA1 will need to be migrated to another server; in this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine. Once set up, the node manager will manage registration and removal of the floating IP addresses, with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers. The hostnames don't have to be the ones given in the EDG, but they must be distinct in the same way as the EDG names are distinct. If the type is a fixed IP and the addresses are the same, you can use the same hostname; for example, if you collapse soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address. However, SOAHOST1VHN can never be the same as BAMHOST1VHN, because these are floating IP addresses.

    Notes on DNS

    IP addresses that are of scope "Cluster" just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer. IP addresses that are of scope "Internal" need to be available on the internal DNS servers, whilst IP addresses of scope "Public" need to be available on external and internal DNS servers.

    Shared File System

    At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores. It is also possible to place the software itself on a shared server. I strongly recommend that all machines have the same file structure for their SOA installation, otherwise you will experience pain! Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail.

    The following shorthand is used for locations:

    ORACLE_BASE is the root of the file system used for software and configuration files.
    MW_HOME is the location used by the installed SOA/BPM Suite installation. This is also used by the web server installation. In my installation it is set to <ORACLE_BASE>/SOA11gPS2.
    ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components. This directory is installed under the MW_HOME but the name is decided by the user at installation; default values are Oracle_SOA1 and Oracle_Web1. In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle_WEB.
    ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory. This is always <MW_HOME>/oracle_common.
    ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache. It is recommended to create it under <ORACLE_BASE>/admin. In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1.
    WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3.

    Key file locations are shown below.

    Directory | Notes
    <ORACLE_BASE>/admin/domain_name/aserver/domain_name | Shared location for the domain, used to allow the admin server to manually fail over between machines. When creating domain_name, provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain, as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/aserver/applications | Shared location for deployed applications; needs to be provided when creating the domain. In my install this is <ORACLE_BASE>/admin/aserver/applications, as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/mserver/domain_name | Either a unique location for each machine, or shared between machines to simplify the task of packing and unpacking the domain. This acts as the managed server configuration location. Keeping it separate from the admin server helps to avoid problems with the managed servers messing up the admin server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain, as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/mserver/applications | Either a unique location for each machine, or shared between machines; holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications, as I only have one domain on the box.
    <ORACLE_BASE>/admin/domain_name/soa_cluster_name | Shared directory to hold the following: dd (deployment descriptors), jms (shared JMS file stores), fadapter (shared file adapter co-ordination files) and tlogs (shared transaction log files). In my install this is <ORACLE_BASE>/admin/soa_cluster.
    <ORACLE_BASE>/admin/instance_name | Local folder for a web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2; I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer.
    <ORACLE_BASE>/product/fmw | This can be a shared or local folder for the SOA/BPM Suite software. I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2.

    All the shared files need to be put onto shared storage media. I am using NFS, but the recommendation for production would be a SAN, with mirrored disks for resilience.

    Collapsing Environments

    To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine. This will require more memory, but memory is a lot cheaper than additional machines. For environments that require higher security, stay with a separate WEBHOST tier as per the EDG; similarly, for high-volume environments, keep a separate set of machines for BAM and/or the web tier as per the EDG.

    Notes on Dev Environments

    In a dev environment it is acceptable to use a single-node (non-RAC) database, but be aware that the configuration of the data sources is different (no need to use multi data sources in WLS). Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer. To test a cluster properly we will need at least 2 machines. For my test environment I used Oracle Web Cache as a load balancer: I ran it on one of the SOA Suite machines and it load balanced across the web servers on both machines. This was easy for me to set up and I could administer it from a web-based console.

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of occluded objects within walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be grouped into render batches, and also a lot of objects that can be drawn with instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion. If you batch too many objects into the same VBO, it's difficult to cull the objects on the CPU in order to skip rendering that batch. At the same time, if you skip the culling on the CPU, a lot of objects will be processed by the GPU while they are not visible. If you skip batching completely in order to cull more easily on the CPU, there will be an unwanted high number of render calls. I have done some research into existing techniques and theories as to how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or within a radius of n meters. This could be simplified and optimized through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
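
    The locality idea in the last paragraph can be sketched directly: merge geometry into one batch per spatial cell, keep the cell's bounding box, and cull whole batches against the frustum before drawing. The code below is an illustrative sketch (the Aabb/Frustum types and the draw callback are simplified stand-ins, not from any particular engine); cell size is the knob that trades culling granularity against draw-call count:

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        struct Aabb { public Vector3 Min, Max; }

        class RenderBatch
        {
            public Aabb Bounds;  // world-space bounds of one spatial cell's contents
            public int Vbo;      // merged vertex buffer for every object in the cell
        }

        class Frustum
        {
            // Plane normals point inside the frustum: Dot(Normal, p) + D >= 0 is inside.
            public (Vector3 Normal, float D)[] Planes = new (Vector3, float)[6];

            public bool Intersects(Aabb box)
            {
                foreach (var (n, d) in Planes)
                {
                    // Pick the box corner furthest along the plane normal;
                    // if even that corner is outside, the whole box is outside.
                    var p = new Vector3(
                        n.X >= 0 ? box.Max.X : box.Min.X,
                        n.Y >= 0 ? box.Max.Y : box.Min.Y,
                        n.Z >= 0 ? box.Max.Z : box.Min.Z);
                    if (Vector3.Dot(n, p) + d < 0) return false;
                }
                return true;
            }
        }

        static class BatchRenderer
        {
            // One cheap CPU test per batch instead of per object; one draw call
            // per visible batch keeps the draw-call count bounded by cell count.
            public static void Draw(List<RenderBatch> batches, Frustum frustum, Action<int> drawVbo)
            {
                foreach (var b in batches)
                    if (frustum.Intersects(b.Bounds))
                        drawVbo(b.Vbo);
            }
        }

    An octree fits naturally on top of this: interior nodes can reject many cells with a single test before the per-batch check ever runs.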

    Read the article

  • I'm using a shared server, and as such Gmail marks my email as spam (all from headers are different from the same IP)

    - by chipperyman573
    I have a shared server, meaning many people share the same IP. When I send an email, the @website.com part is different from that of someone else who shares the same IP with me, and therefore Gmail marks it as spam. For example:

    My website's IP is 1.2.3.4. My website is mywebsite.com.
    Person 2's website is hosted by the same host, and as such their IP is also 1.2.3.4. Person 2's website is person2.com.
    When they send an email, it gets sent from [email protected]
    When I send an email, it gets sent from [email protected]

    According to Gmail's bulk sender guidelines: "Use the same address in the 'From:' header on every bulk mail you send." Again, the only similarity between our websites is the IP, but this causes Gmail to mark both our mail as spam. Is there a way to sort this out with Gmail?

    Read the article

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project: for example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work. (cough C++ WSDL clients...) So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API would likely extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services such as Git for Windows maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.
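
    One common way to keep the public contract stable while the internal model moves is to give each public version its own DTOs and concentrate the duplication in a thin mapping layer. A hypothetical sketch (names invented for illustration):

        // Internal model: free to evolve with the API base.
        class Order
        {
            public int Id;
            public decimal Total;
            public string Currency = "USD"; // added later; must not break v1 clients
        }

        namespace PublicApi.V1
        {
            // Frozen wire contract: v1 clients can rely on this shape.
            public class OrderDto { public int Id; public decimal Total; }
        }

        namespace PublicApi.V2
        {
            // v2 exposes the new field; v1 keeps serving the old shape.
            public class OrderDto { public int Id; public decimal Total; public string Currency; }
        }

        static class OrderMapping
        {
            public static PublicApi.V1.OrderDto ToV1(Order o) =>
                new PublicApi.V1.OrderDto { Id = o.Id, Total = o.Total };

            public static PublicApi.V2.OrderDto ToV2(Order o) =>
                new PublicApi.V2.OrderDto { Id = o.Id, Total = o.Total, Currency = o.Currency };
        }

    The duplication doesn't disappear, but it is confined to the DTOs and mappers, so nothing in the private model leaks into a published contract by accident.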

    Read the article

  • How to access shared folders in an Ubuntu VM (Oracle VirtualBox) and link them to the home folder (Answered)

    - by Njihia
    I have configured a shared folder between the Windows host and the Ubuntu guest. The folder mounts at startup but it's empty (it also has a padlock sign). I have to run the command below to access its content (the padlock sign disappears):

        sudo mount.vboxsf media ~/media

    How can I configure this to run automatically at startup? I've tried adding it to the startup programs but nothing happens. I'm new to Linux, so try to put your answer in layman's language. Thanks.

    Read the article

  • EPM 11.1.2 - Receive Anonymous Level Security token message in IE8 when trying to access Shared Services or Workspace URL

    - by Ahmed A
    If you get a "Receive Anonymous Level Security token" message in IE8 when trying to access the Shared Services or Workspace URL, the workaround is:
    a. Go to Start > Run and enter dcomcnfg.
    b. Expand Component Services, expand Computers, then right-click on My Computer and select Properties.
    c. Click on the Default Properties tab. Change the Default Authentication Level to Connect. Click Apply and then OK.
    d. Launch the IE browser again and you will be able to access the URL.

    Read the article

  • Hosting Rails sites: VPS or shared, and how much RAM? [closed]

    - by raphael_turtle
    Possible Duplicate: How to find web hosting that meets my requirements? I have 3 Rails sites to launch, all fairly small and consisting of a custom CMS (one with an online store), plus 2 Sinatra sites which are mainly static portfolio sites. What would be the best way to host these sites? (I've deployed on DreamHost shared hosting before, and on some VPSs.) Is it best to manage them together under one VPS, e.g. Linode at $20/m (for the cheapest option, 512 MB; would that even be enough RAM?), or to keep each Rails site separate and host each one on a small VPS, e.g. $4/m (there are often lots of deals like this on WebHostingTalk)? I'm currently hosting the Sinatra sites for free on Heroku but find it a bit slow sometimes.

    Read the article

  • Excellence of Shared Service (ESS): After J-SOX

    - by ???02
    (Japanese article; the body text was lost to character-encoding corruption and only its outline survives: Part 1: After J-SOX; Part 2: title unrecoverable; Part 3: the Excellence of Shared Service (ESS) framework for shared service centers (SSC). The author bio references IT consulting work dating from April 2005.)

    Read the article

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match, as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

        def self.sort_events(events)
          event_sorters = Array.new(events.length) {|i| EventSorter.new(i, events[i])}
          event_sorters.sort!
          event_sorters.collect {|es| events[es.index]}
        end

        private

        # Class used by sort_events
        class EventSorter
          attr_reader :sqn
          attr_reader :time
          attr_reader :index

          def initialize(index, event)
            @index = index
            @sqn = event.sqn
            @time = event.time
          end

          def <=>(b)
            @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
          end
        end

    Read the article

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list:

        <class name="Author" table="Authors">
          <id name="Id" column="AuthorId">
            <generator class="identity" />
          </id>
          <property name="Name" />
          <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join">
            <key column="AuthorId" />
            <many-to-many class="Book" column="BookId" where="IsDeleted=0" />
          </bag>
        </class>

    If I return all author objects using something like the following, I will get what initially appeared to be duplicate Author records:

        Session.Query<Author>().List<Author>()

    The extra author objects are created when an author is mapped to Book objects that have IsDeleted = 1 and IsDeleted = 0. Rather than creating one Author object with an enumerable that contains only the books with IsDeleted = 0, it will create two author objects. The first author object has a Books enumerable that contains books with IsDeleted = 0. The second author object will contain an enumerable of null book objects. Similarly, if an author only has one book mapping, and that mapping points to a book with IsDeleted = 1, then an author object is returned with a Books collection holding one null object. I'm thinking part of the problem stems from the map-table rows satisfying the where condition on the bag while failing the many-to-many where condition. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else?

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0. It is divided into the following sections: Introduction; Dynamic Lookup; Named and Optional Arguments; Features for COM interop; Variance; Relationship with Visual Basic; Resources. Other interesting reading: 22 New Features of Visual Studio 2008 for .NET Professionals; 50 New Features of SQL Server 2008; IIS 7.0 New Features.

    Introduction

    It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0

    The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior are not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include: a. objects from dynamic programming languages, such as Python or Ruby; b. COM objects accessed through IDispatch; c. ordinary .NET types accessed through reflection; d. objects with changing structure, such as HTML DOM objects. While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups:

    Dynamic lookup: Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.

    Named and optional parameters: Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM-specific interop features: Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

    Variance: It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type-safe “co- and contravariance”, and common BCL types are updated to take advantage of that.

    Dynamic Lookup

    Dynamic lookup gives you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type

    C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime:

        dynamic d = GetDynamicObject(…);
        d.M(7);

    The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

        dynamic d = 7; // implicit conversion
        int i = d;     // assignment conversion

    Dynamic operations

    Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

        dynamic d = GetDynamicObject(…);
        d.M(7);               // calling methods
        d.f = d.P;            // getting and setting fields and properties
        d["one"] = d["two"];  // getting and setting through indexers
        int i = d + 3;        // calling operators
        string s = d(5,7);    // invoking as a delegate

    The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup

    At runtime a dynamic operation is dispatched according to the nature of its target object d:

    COM objects: If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects: If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus, by implementing IDynamicObject, a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object’s properties using property syntax.

    Plain objects: Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler.

    Example

    Assume the following code:

        dynamic d1 = new Foo();
        dynamic d2 = new Bar();
        string s;
        d1.M(s, d2, 3, null);

    Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string, 2. a dynamic, 3. a literal int 3, 4. a literal object null.” At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder steps in to finish the overload resolution job based on runtime type information, proceeding as follows:

    1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
    2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
    3. If the method is found it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments

    Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

        Foo foo = new Foo();
        dynamic d = new Bar();
        var result = foo.M(d);

    The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime

    An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
    For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain.

    Open issues

    There are a few limitations and things that might work differently than you would expect.
    · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this.
    · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
    · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to.

    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

        dynamic collection = …;
        var result = collection.Select(e => e + 5);

    If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments

    Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters

    A parameter is declared optional simply by providing a default value for it:

        public void M(int x, int y = 5, int z = 7);

    Here y and z are optional parameters and can be omitted in calls:

        M(1, 2, 3); // ordinary call of M
        M(1, 2);    // omitting z – equivalent to M(1, 2, 7)
        M(1);       // omitting both y and z – equivalent to M(1, 5, 7)

    Named and optional arguments

    C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

        M(1, z: 3);    // passing z by name

    or

        M(x: 1, z: 3); // passing both x and z by name

    or even

        M(z: 3, x: 1); // reversing the order of arguments

    All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
    Overload resolution

    Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred.

        M(string s, int i = 1);
        M(object o);
        M(int i, string s = "Hello");
        M(int i);

        M(5);

    Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn’t convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

    Features for COM interop

    Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

    Dynamic import

    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

        excel.Cells[1, 1].Value = "Hello";

    instead of

        ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

    and

        Excel.Range range = excel.Cells[1, 1];

    instead of

        Excel.Range range = (Excel.Range)excel.Cells[1, 1];

    Compiling without PIAs

    Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

    Omitting ref

    Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
    It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

    Open issues

    A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

    Variance

    An aspect of generics that often comes across as surprising is that the following is illegal:

        IList<string> strings = new List<string>();
        IList<object> objects = strings;

    The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

        objects[0] = 5;
        string s = strings[0];

    allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

        IEnumerable<object> objects = strings;

    there is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

    Covariance

    In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

        public interface IEnumerable<out T> : IEnumerable
        {
            IEnumerator<T> GetEnumerator();
        }

        public interface IEnumerator<out T> : IEnumerator
        {
            bool MoveNext();
            T Current { get; }
        }

    The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above:

        var result = strings.Union(objects); // succeeds with an IEnumerable<object>

    This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

    Contravariance

    Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>:

        public interface IComparer<in T>
        {
            int Compare(T left, T right);
        }

    The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
    A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

        public delegate TResult Func<in TArg, out TResult>(TArg arg);

    Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

    Limitations

    Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

    COM Example

    Here is a larger Office automation example that shows many of the new C# features in action.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using Excel = Microsoft.Office.Interop.Excel;
        using Word = Microsoft.Office.Interop.Word;

        class Program
        {
            static void Main(string[] args)
            {
                var excel = new Excel.Application();
                excel.Visible = true;
                excel.Workbooks.Add();                    // optional arguments omitted
                excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically accessed
                excel.Cells[1, 2].Value = "Memory Usage";
                var processes = Process.GetProcesses()
                    .OrderByDescending(p => p.WorkingSet)
                    .Take(10);
                int i = 2;
                foreach (var p in processes)
                {
                    excel.Cells[i, 1].Value = p.ProcessName; // no casts
                    excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
                    i++;
                }
                Excel.Range range = excel.Cells[1, 1];       // no casts
                Excel.Chart chart = excel.ActiveWorkbook.Charts.
                    Add(After: excel.ActiveSheet);           // named and optional arguments
                chart.ChartWizard(
                    Source: range.CurrentRegion,
                    Title: "Memory Usage in " + Environment.MachineName); // named+optional
                chart.ChartStyle = 45;
                chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
                    Excel.XlCopyPictureFormat.xlBitmap,
                    Excel.XlPictureAppearance.xlScreen);
                var word = new Word.Application();
                word.Visible = true;
                word.Documents.Add();                        // optional arguments
                word.Selection.Paste();
            }
        }

    The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

    Relationship with Visual Basic

    A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:
    · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
    · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
    · NoPIA and variance are both being introduced to VB and C# at the same time.

    VB in turn is adding a number of features that have hitherto been a mainstay of C#.
    As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

    Resources

    All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • F# Objects – Integration with the other .Net Languages – Part 2

    - by MarkPearl
    So in part one of my posting I covered the real basics of object creation. Today I will hopefully dig a little deeper… My Expert F# book brings up an interesting point – properties in F# are just syntactic sugar for method calls. This makes sense… for instance, assume I had the following object with the exposed property called Firstname:

        type Person(Firstname : string, Lastname : string) =
            member v.Firstname = Firstname

    I could extend the Firstname property with the following code and everything would be hunky dory…

        type Person(Firstname : string, Lastname : string) =
            member v.Firstname =
                Console.WriteLine("Side Effect")
                Firstname

    All that this would do is each time I use the property Firstname, I would see the side effect printed to the screen saying “Side Effect”. Member methods have a very similar look & feel to properties; in fact the only difference really is that you declare that parameters are being passed in.

        type Person(Firstname : string, Lastname : string) =
            member v.FullName(middleName) = Firstname + " " + middleName + " " + Lastname

    In the code above, FullName requires the parameter middleName, and if viewed from another project in C# it would show as a method and not a property.

    Precomputation Optimizations

    Okay, so something that is obvious once you think of it, but that is an interesting side effect of mutable value holders, is pre-computation of results. It is only a slight difference in code but can result in quite a huge saving in performance. Basically, pre-computation means you do not need to compute a value every time a method is called; you can instead perform the computation once, at the creation of the object (I hope I have got it right). In a way I battle to differentiate this from lazy evaluation, but I will show an example to illustrate the principle… assume the following F# module:

        namespace myNamespace

        open System

        module myMod =
            let Add val1 val2 =
                Console.WriteLine("Compute")
                val1 + val2

            type MathPrecompute(val1 : int, val2 : int) =
                let precomputedsum = Add val1 val2
                member v.Sum = precomputedsum

            type MathNormalCompute(val1 : int, val2 : int) =
                member v.Sum = Add val1 val2

    Now assume you have a C# console app that makes use of the objects with code similar to the following…

        using System;
        using myNamespace;

        namespace CSharpTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("Constructing Objects");
                    var myObj1 = new myMod.MathNormalCompute(10, 11);
                    var myObj2 = new myMod.MathPrecompute(10, 11);
                    Console.WriteLine("");

                    Console.WriteLine("Normal Compute Sum...");
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine("");

                    Console.WriteLine("Pre Compute Sum...");
                    Console.WriteLine(myObj2.Sum);
                    Console.WriteLine(myObj2.Sum);
                    Console.WriteLine(myObj2.Sum);
                    Console.ReadKey();
                }
            }
        }

    The output when running the console application would be as follows…. You will notice with the normal compute object that the system calls the Add function every time the method is called. With the Precompute object it only calls the compute method when the object is created. Subtle, but something that could lead to major performance benefits. So… this post has gone off on a slight tangent, but it is still related to F# objects.

    Read the article

  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager, also has administrative privileges for the shared mailbox. They decide they want to create sub-folders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the subfolders he needs and the other reps have... nothing! Because Cyrus created the subfolders giving Tom "Full Access" permissions, and everyone else gets no access. So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes, but as far as I can tell, there's no option in Thunderbird for granting mailbox permissions. An IT staff member should not have to receive a support request every time someone wants to add a subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too. Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation? And related, what about Sieve configuration? Is there any way that can be done from the client machine too? Thanks, Nick

    Read the article

  • Collision Resolution

    - by CiscoIPPhone
    I know quite well how to check for collisions, but I don't know how to handle collisions in a good way. Simplified, if two objects collide I use some calculations to change the velocity direction. If I don't move the two objects they will still overlap, and if the velocity is not big enough they will still collide after the next update. This can cause objects to get stuck in each other. But what if I try to move the two objects so they do not overlap? This sounds like a good idea, but I have realised that if there are more than two objects this becomes very complicated. What if I move the two objects and one of them collides with other objects, so I have to move them too, and they may collide with walls etc.? I have a top-down 2D game in mind, but I don't think that has much to do with it. How are collisions usually handled? This question is asked on behalf of Wooh
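    One standard way out of the cascade described above is iterative positional correction (relaxation): instead of resolving each pair perfectly, separate every overlapping pair by a fraction of its penetration and repeat a few passes, letting the system converge. A minimal C# sketch, assuming hypothetical circle colliders; the pass count and correction factor are tuning knobs, not canonical values:

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class Body
        {
            public Vector2 Position;
            public float Radius;
        }

        static class CollisionResolver
        {
            // Run several relaxation passes; each pass pushes overlapping pairs
            // apart by a fraction of their penetration depth. Chains of contacts
            // (A pushes B into C) sort themselves out over the passes.
            public static void ResolveOverlaps(List<Body> bodies, int passes = 4, float factor = 0.5f)
            {
                for (int pass = 0; pass < passes; pass++)
                {
                    for (int i = 0; i < bodies.Count; i++)
                    {
                        for (int j = i + 1; j < bodies.Count; j++)
                        {
                            var a = bodies[i];
                            var b = bodies[j];
                            Vector2 delta = b.Position - a.Position;
                            float dist = delta.Length();
                            float overlap = a.Radius + b.Radius - dist;
                            if (overlap <= 0f || dist == 0f) continue;

                            // Move each body half of the (damped) penetration apart.
                            Vector2 correction = delta / dist * (overlap * factor * 0.5f);
                            a.Position -= correction;
                            b.Position += correction;
                        }
                    }
                }
            }
        }

    Static geometry such as walls can be handled in the same loop by treating the wall as immovable (apply the full correction to the dynamic body only). Because no single pass has to be exact, the dreaded "I moved it into something else" case simply becomes input to the next pass.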

    Read the article

  • How do I create statistics to make ‘small’ objects appear ‘large’ to the Optimizer?

    - by Maria Colgan
    I recently spoke with a customer who has a development environment that is a tiny fraction of the size of their production environment. His team has been tasked with identifying problem SQL statements in this development environment before new code is released into production. The problem is that the objects in the development environment are so small, the execution plans selected in the development environment rarely reflect what actually happens in production. To ensure the development environment accurately reflects production, in the eyes of the Optimizer, the statistics used in the development environment must be the same as the statistics used in production. This can be achieved by exporting the statistics from production and importing them into the development environment. Even though the underlying objects are a fraction of the size of production, the Optimizer will see them as the same size and treat them the same way as it would in production. Below are the necessary steps to achieve this in their environment. I am using the SH sample schema as the application schema whose statistics we want to move from production to development.

    Step 1. Create a staging table, in the production environment, where the statistics can be stored.
    Step 2. Export the statistics for the application schema, from the data dictionary in production, into the staging table.
    Step 3. Create an Oracle directory on the production system where the export of the staging table will reside and grant the SH user the necessary privileges on it.
    Step 4. Export the staging table from production using Data Pump export.
    Step 5. Copy the dump file containing the staging table from production to development.
    Step 6. Create an Oracle directory on the development system where the export of the staging table resides and grant the SH user the necessary privileges on it.
    Step 7. Import the staging table into the development environment using Data Pump import.
    Step 8. Import the statistics from the staging table into the dictionary in the development environment.

    You can get a copy of the script I used to generate this post here. +Maria Colgan

    Read the article

  • How can I author objects with perspective that fit into a tile-based map but span multiple tiles?

    - by Growler
    I'm creating a tilemap city and trying to figure out the most efficient way to create unique building scenes. The trick is, I need to maintain a sort of 2D, almost-top-down perspective, which is hard to do with buildings or large objects that span multiple tiles. I've tried doing three buildings at a time, and mixing and matching the base layer and colors, like this: This creates a weird overlapping effect, and also doesn't seem that efficient from a production standpoint. But it was the best way to have shadows appear correctly on the neighboring buildings. I'm wondering if modular buildings would be the way to go? That way I can mix and match any set of buildings together as tiles: I guess I would have to risk some perspective and shadowing to get the buildings to align correctly. What sort of authoring process could I use to create a variety of buildings (or other objects) that maintain this perspective while spanning multiple tiles' worth of screen space? Would you recommend creating blank buildings, and then affixing art overlays as necessary to make the buildings unique? Or should they be directly part of the building tile (for example, creating a separate tileset of building signs and colorings)?
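    On the runtime side, one pattern that pairs well with modular art, sketched here in C# with a hypothetical Prop type (none of these names come from the question): author each multi-tile building as a single sprite anchored at its base (bottom) tile, then draw props in painter's order by anchor row, so a building spanning several tiles occludes whatever stands behind it without per-tile slicing.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical multi-tile prop: drawn once, anchored at its bottom tile row.
        public record Prop(string SpriteId, int TileX, int TileY, int WidthTiles, int HeightTiles);

        public static class TileSceneRenderer
        {
            // Painter's algorithm for an almost-top-down view: sort by the
            // anchor row (back to front), then left to right for stability.
            public static IEnumerable<Prop> InDrawOrder(IEnumerable<Prop> props) =>
                props.OrderBy(p => p.TileY).ThenBy(p => p.TileX);
        }

    The authoring rule that falls out of this is small: every building sprite's bottom edge sits on its anchor row, which is what lets blank buildings plus overlay art (signs, colour variants) be composited without re-drawing the perspective each time.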

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >