Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

    SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

    TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table into the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of object population pre-filtering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as for any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB were being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronizing the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal of the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating.

    Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

    SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);

    TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));

    What will happen if we use the dependency algorithm above on the source and target databases? Well, on the source, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that table will be included in the source database population. On the target, SchemaA.Table1 has no such reference, so SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE              TARGET
        SchemaA.Table1  ->  SchemaA.Table1
        SchemaB.Table1  ->  (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist on the target, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try to create objects that already exist. All good?

    Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

    SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);

    TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find the initial dependencies of the schemas the user has selected to compare, on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies
       - SchemaA.Table1 -> SchemaB.Table1 found on source
       - No objects found on target
    2. Include objects in both source and target
       - SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects
       - No objects to start with on source
       - SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target
       - SchemaC.Table1 included in source and target
    5. Run dependency query on found objects
       - No objects found in source
       - No objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that some of the clauses fetching dependency information we required were querying system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – not querying this and a couple of similar clauses drastically speeds up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
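    For readers curious what the client-side traversal might look like, here is a rough C# sketch of the approach described above: a breadth-first walk over dependency edges that skips blacklisted schemas and only loads a schema's edges the first time the walk reaches it. The type and member names are invented for illustration; this is not the actual Schema Compare engine code, and for brevity it only follows edges in one direction, whereas the real process also follows edges pointing into the populated schemas and passes newly found objects back and forth between source and target until nothing new turns up.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public sealed class DependencyEdge
        {
            public string FromOwner, FromName, ToOwner, ToName;
        }

        public interface IDependencySource
        {
            // e.g. backed by queries on all_constraints and all_dependencies for a single schema
            IEnumerable<DependencyEdge> LoadEdgesForSchema(string owner);
        }

        public sealed class DependencyWalker
        {
            private static readonly HashSet<string> Blacklist =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "SYS", "PUBLIC", "MDSYS", "ORDSYS" };

            private readonly IDependencySource source;
            private readonly Dictionary<string, List<DependencyEdge>> edgesByOwner =
                new Dictionary<string, List<DependencyEdge>>(StringComparer.OrdinalIgnoreCase);

            public DependencyWalker(IDependencySource source) { this.source = source; }

            // Returns every (owner, name) pair reachable from the seed objects.
            public HashSet<Tuple<string, string>> FindDependents(IEnumerable<Tuple<string, string>> seeds)
            {
                var seen = new HashSet<Tuple<string, string>>(seeds);
                var queue = new Queue<Tuple<string, string>>(seen);

                while (queue.Count > 0)
                {
                    var current = queue.Dequeue();
                    if (Blacklist.Contains(current.Item1)) continue;   // don't follow chains through SYS, PUBLIC, etc.

                    foreach (var edge in EdgesFor(current.Item1).Where(e =>
                                 string.Equals(e.FromOwner, current.Item1, StringComparison.OrdinalIgnoreCase) &&
                                 string.Equals(e.FromName, current.Item2, StringComparison.OrdinalIgnoreCase)))
                    {
                        var next = Tuple.Create(edge.ToOwner, edge.ToName);
                        if (seen.Add(next)) queue.Enqueue(next);       // newly discovered dependency
                    }
                }
                return seen;
            }

            // The "thunk": a schema's edges are only pulled from the database the first
            // time the traversal reaches an object in that schema.
            private List<DependencyEdge> EdgesFor(string owner)
            {
                List<DependencyEdge> edges;
                if (!edgesByOwner.TryGetValue(owner, out edges))
                {
                    edges = source.LoadEdgesForSchema(owner).ToList();
                    edgesByOwner[owner] = edges;
                }
                return edges;
            }
        }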

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a little experience with vector graphics too (basic 3ds Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets that the player can interact with, and a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are:

    1. Simply create bitmaps of space-scenery background that can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it.
    2. Use vector graphics. I would have a solid-colour background, some random background objects, and gradients here and there.

    My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to stay in memory for the whole game session? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

        <div class="footer"> <!-- snip --> <!-- Creative Commons License --> <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>. <!-- /Creative Commons License --> </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top ten most relevant keywords (including the site name) are: [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]. I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?

    Read the article

  • Experience the Oracle Support Stars Bar

    - by Oracle OpenWorld Blog Team
    By Gina Wolf

    Don't miss the opportunity to meet with the stars of Oracle Support, live and in person at the Moscone West Level 2 lobby. Ask our experts your toughest questions about the Oracle hardware, software, and engineered systems you use to run your business. Explore new Oracle Support innovations including Oracle Platinum Services, My Oracle Support Mobile, and the Oracle Enterprise Manager Ops Center Everywhere program. Learn the latest best practices for problem prevention, rapid resolution, and product upgrades. In addition, discover how Oracle Advanced Customer Support Services can help you maximize the performance of all mission-critical Oracle systems. Come meet the stars behind your support: our trusted experts are ready to assist!

    The Oracle Support Stars Bar at the Moscone West Level 2 lobby is open all conference week at the following times:

    - Sunday, September 30, 12:00 p.m. – 4:00 p.m.
    - Monday, October 1, 10:00 a.m. – 6:00 p.m.
    - Tuesday, October 2, 10:00 a.m. – 6:00 p.m.
    - Wednesday, October 3, 9:00 a.m. – 5:00 p.m.
    - Thursday, October 4, 9:00 a.m. – 1:00 p.m.

    Attend one or more of the 27 Oracle Customer Support Services sessions during Oracle OpenWorld to learn how Oracle Support enables you to gain maximum value from your Oracle hardware and software investments.

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why:

    - 3,000 ISVs and OEMs rely on MySQL as their embedded database
    - 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings
    - Leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster

    Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start time of the still-running timers is to be saved to an XML file, so that when the game is started again each still-running timer can be checked to see whether it has since finished, in which case the game will change the materials appropriately. I am still trying to figure out what type of timer to use, and I would also welcome any suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
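    Not an authoritative answer to the question above, but one common way to handle timers this long is to avoid timer objects entirely and instead store each timer's UTC start time and duration, computing the remaining time whenever it is needed and serializing the list to XML on save. A rough C# sketch, using plain .NET with no Unity API assumed and invented class and property names:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        // One entry per in-game timer; it survives save/load because only absolute times are stored.
        public class GameTimer
        {
            public string TargetObjectId { get; set; }   // which object the timer belongs to
            public DateTime StartUtc { get; set; }       // recorded when the timer starts
            public double DurationSeconds { get; set; }  // anywhere from 5 minutes to 4 days

            public double SecondsRemaining
            {
                get { return DurationSeconds - (DateTime.UtcNow - StartUtc).TotalSeconds; }
            }

            public bool IsFinished { get { return SecondsRemaining <= 0; } }
        }

        public static class TimerStore
        {
            private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(List<GameTimer>));

            public static void Save(List<GameTimer> timers, string path)
            {
                using (var stream = File.Create(path))
                    Serializer.Serialize(stream, timers);
            }

            public static List<GameTimer> Load(string path)
            {
                if (!File.Exists(path)) return new List<GameTimer>();
                using (var stream = File.OpenRead(path))
                    return (List<GameTimer>)Serializer.Deserialize(stream);
            }
        }

    With this shape, a single low-frequency polling loop (say, once per second) can walk the list, fire the intermediate material changes and sound effects when thresholds are crossed, and the "did it finish while the game was closed?" check after loading a saved game falls out for free.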

    Read the article

  • Have you used nDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as it achieves. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does so in some pretty amazing ways. One of the most impressive features is something they call Code Query Language. It allows you to write queries, very similar to SQL, to track various software metrics and use this to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as:

        SELECT METHODS WHERE IsPublic AND CouldBePrivate

    You can also set up such queries to provide warnings if any records are returned. You can incorporate this into your daily build and compare build against build. There are over 82 metrics included to allow you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!
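    To make the example query concrete, here is the kind of C# that, roughly speaking, SELECT METHODS WHERE IsPublic AND CouldBePrivate would flag: a method declared public even though nothing outside its declaring type ever calls it. The class below is purely illustrative:

        public class OrderProcessor
        {
            // Called from outside the class, so it genuinely needs to be public.
            public void Process(int orderId)
            {
                Validate(orderId);
                // ... persist, notify, and so on
            }

            // Declared public, but only ever called from inside OrderProcessor,
            // so the query reports it as a method that could be private.
            public void Validate(int orderId)
            {
                if (orderId <= 0) throw new System.ArgumentException("orderId");
            }
        }

    Tightening up members like this is exactly the kind of drift that the build-against-build comparison mentioned above is meant to catch.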

    Read the article

  • Ubuntu unstable, showing awkward behavior

    - by Christophe De Troyer
    Let me start off by saying that this problem is hard to describe in a way that would let me find related topics, which is why I created this question. In case it has been asked before, I apologize. So what is the problem: my computer (Intel Core i5 2500K with HD3000 graphics, 6 GB DDR, 1 SSD, 3 HDDs and an Asus P8Z68 mobo) runs Windows from the SSD. But I decided to give Ubuntu a chance to be my daily OS for basic needs, since it's open source and I consider it a handicap not to be able to work with it. I ran the Windows installer and installed Ubuntu 12.04 to my 320 GB HDD, which was not being used in my computer. After installing and booting it, it worked great! I spent the rest of the day and night using it, and falling for it. Then today I booted Ubuntu (I had the choice in the bootloader as I expected). It asked me for my login and started logging in, and after literally a few minutes of letting it "boot" I tried to determine the cause. What I've figured out so far:

    - When I left-click on my desktop it freezes completely for a few seconds
    - I get something like tearing (as in games) in the left-side menu when I move my mouse around
    - It runs well when I'm just hovering around with the mouse, but from the point I click on something it freezes

    What have I tried? I ran an HD Tune diagnostic on the HDD, but the performance seems to be very close to the stock values, so I'm taking it as a good HDD. I'm trying to get to the drivers update panel for Ubuntu, but with the state it's in, it's taking a lot of time. Could anyone point me in a direction for troubleshooting this? I'm not really a noob at all, just when using Linux. :) Thanks in advance! Christophe

    Read the article

  • Prevent Looping and Inefficient Rule Executions by C2B2

    - by JuergenKress
    This recipe, taken from the recently published Oracle SOA Suite 11g Performance Cookbook, gives guidance on how to avoid rule executions that loop, potentially indefinitely! We'll use an inbound XML fact and a local RL fact as an example.

    Getting ready: You'll need access to a SOA composite containing an Oracle Business Rules component in JDeveloper to apply this recipe. We'll assume you have an XSD schema with an input type RequestInput containing input and bonus String fields, and an output String value called output in a type ResponseOutput. These aren't efficient, but they serve as an example. We'll step through adding a rule to a composite and creating an RL fact.

    How to do it:

    1. Open a SOA composite.
    2. Right-click on the Project and select Business Rules (Service Components); use the search box if it is not immediately available.
    3. Give the rule a name and click the green plus icon to add RequestInput to the input types and ResponseOutput to the output types.

    Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: C2B2, looping, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -arch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine, so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice.

    It is also of interest to know which processors support the various architectures. The original post shows this as a Venn diagram; a textual description is as follows:

    - The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, support V9b, or sparcvis2.
    - The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply accumulate instruction in the sparcfmaf architecture.
    - Support for this instruction also appeared in the T3 – this is called sparcvis3.
    - Later SPARC64 processors added the integer multiply accumulate instruction; this architecture is sparcima.
    - Finally, the T4 includes support for both the integer and floating point multiply accumulate instructions in the sparc4 architecture.

    So the conclusion should be:

    - Floating point multiply accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it.
    - The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article

  • Java SE 6 Update 23 is released!

    - by [email protected]
    Java SE 6 Update 23 includes the following key changes:

    - Java VisualVM 1.3.1 introduces the following features and enhancements:
      o Added Java version and vendor information to the application Overview view
      o Built on NetBeans Platform and profiler 6.9.1
    - Swing JMenuItem corrections for right-to-left languages:
      o Non-default alignment and text orientation for the menu items in Swing have been fixed
      o The position of the icon and the text, where text used to overlap the icon in a menu item, was corrected
      o All platform Look and Feel configurations will now handle menu items in right-to-left language situations
    - Updated HotSpot 19 JVM engine with improvements to overall performance and reliability.
    - Additional languages support in SuSE Linux Enterprise Server 10 and 11 on Chinese (Simplified), Chinese (Traditional), Japanese, and Korean locales.

    For supported system configurations, check http://java.sun.com/javase/6/webnotes/install/system-configurations.html. See the Release Notes for additional details.

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud-enabling their mainframe applications.

    - Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    - Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    - Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud-enabling their mainframe applications

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events.  The latest beta has this feature available.  If you’re interested in this I’d like to get some feedback. I handle the SP:StmtCompleted and the SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use with caution as this can make your traces much larger. If there are statement level events in the trace file they will be processed.  However the query screen displays batch level *OR* statement level events.  If it did both we’d be double counting. I don’t have very many traces with statement completed events in them.  That means I only did limited testing of how it parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems the ClearTrace support forum is the best place for those.

    Read the article

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond. Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality. Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers. Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free. Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome. Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors. If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

    Read the article

  • Auto-hydrate your objects with ADO.NET

    - by Jake Rutherford
    Recently, while writing the monotonous code for pulling data out of a DataReader to hydrate some objects in an application, I suddenly wondered "is this really necessary?" You've probably asked yourself the same question, and many of you have:

    - Used a code generator
    - Used an ORM such as Entity Framework
    - Written the code anyway because you like busy work

    In most of the cases I've dealt with, when making a call to a stored procedure the column names match up with the properties of the object I am hydrating. Sure, that isn't always the case, but most of the time it's a 1-to-1 mapping. Given that fact, I whipped up the following method of hydrating my objects without having to write all of the code. First I'll show the code, and then explain what it is doing.

        /// <summary>
        /// Abstract base class for all Shared objects.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        [Serializable, DataContract(Name = "{0}SharedBase")]
        public abstract class SharedBase<T> where T : SharedBase<T>
        {
            private static List<PropertyInfo> cachedProperties;

            /// <summary>
            /// Hydrates derived class with values from record.
            /// </summary>
            /// <param name="dataRecord"></param>
            /// <param name="instance"></param>
            public static void Hydrate(IDataRecord dataRecord, T instance)
            {
                var instanceType = instance.GetType();

                // Caching properties to avoid repeated calls to GetProperties.
                // Noticeable performance gains when processing same types repeatedly.
                if (cachedProperties == null)
                {
                    cachedProperties = instanceType.GetProperties().ToList();
                }

                foreach (var property in cachedProperties)
                {
                    if (!dataRecord.ColumnExists(property.Name)) continue;
                    var ordinal = dataRecord.GetOrdinal(property.Name);
                    var isNullable = property.PropertyType.IsGenericType &&
                                     property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
                    var isNull = dataRecord.IsDBNull(ordinal);
                    var propertyType = property.PropertyType;
                    if (isNullable)
                    {
                        if (!string.IsNullOrEmpty(propertyType.FullName))
                        {
                            var nullableType = Type.GetType(propertyType.FullName);
                            propertyType = nullableType != null ? nullableType.GetGenericArguments()[0] : propertyType;
                        }
                    }
                    switch (Type.GetTypeCode(propertyType))
                    {
                        case TypeCode.Int32:
                            property.SetValue(instance, (isNullable && isNull) ? (int?)null : dataRecord.GetInt32(ordinal), null);
                            break;
                        case TypeCode.Double:
                            property.SetValue(instance, (isNullable && isNull) ? (double?)null : dataRecord.GetDouble(ordinal), null);
                            break;
                        case TypeCode.Boolean:
                            property.SetValue(instance, (isNullable && isNull) ? (bool?)null : dataRecord.GetBoolean(ordinal), null);
                            break;
                        case TypeCode.String:
                            property.SetValue(instance, (isNullable && isNull) ? null : isNull ? null : dataRecord.GetString(ordinal), null);
                            break;
                        case TypeCode.Int16:
                            property.SetValue(instance, (isNullable && isNull) ? (int?)null : dataRecord.GetInt16(ordinal), null);
                            break;
                        case TypeCode.DateTime:
                            property.SetValue(instance, (isNullable && isNull) ? (DateTime?)null : dataRecord.GetDateTime(ordinal), null);
                            break;
                    }
                }
            }
        }

    Here is a class which utilizes the above:

        [Serializable]
        [DataContract]
        public class foo : SharedBase<foo>
        {
            [DataMember]
            public int? ID { get; set; }
            [DataMember]
            public string Name { get; set; }
            [DataMember]
            public string Description { get; set; }
            [DataMember]
            public string Subject { get; set; }
            [DataMember]
            public string Body { get; set; }

            public foo(IDataRecord record)
            {
                Hydrate(record, this);
            }

            public foo() {}
        }

    Explanation:

    - Class foo inherits from SharedBase, specifying itself as the type. (NOTE: SharedBase is abstract here in the event we want to provide additional methods which could be overridden by the instance class.)

            public class foo : SharedBase<foo>

    - One of the foo class constructors accepts a data record and then calls the Hydrate method on SharedBase, passing in the record and itself.

            public foo(IDataRecord record)
            {
                Hydrate(record, this);
            }

    - The Hydrate method on SharedBase uses reflection on the object passed in to determine its properties. At the same time, it effectively caches these properties to avoid repeated expensive reflection calls.

            public static void Hydrate(IDataRecord dataRecord, T instance)
            {
                var instanceType = instance.GetType();
                // Caching properties to avoid repeated calls to GetProperties.
                // Noticeable performance gains when processing same types repeatedly.
                if (cachedProperties == null)
                {
                    cachedProperties = instanceType.GetProperties().ToList();
                }
                . . .

    - The Hydrate method iterates each property on the object and determines whether a column with a matching name exists in the data record.

            foreach (var property in cachedProperties)
            {
                if (!dataRecord.ColumnExists(property.Name)) continue;
                var ordinal = dataRecord.GetOrdinal(property.Name);
                . . .

      NOTE: ColumnExists is an extension method I put on IDataRecord which I'll include at the end of this post.

    - The Hydrate method determines whether the property is nullable and whether the value in the corresponding column of the data record is null.

            var isNullable = property.PropertyType.IsGenericType && property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
            var isNull = dataRecord.IsDBNull(ordinal);
            var propertyType = property.PropertyType;
            . . .

    - If the Hydrate method determines the property is nullable, it determines the underlying type and sets propertyType accordingly.
    - The Hydrate method then sets the value of the property based upon the propertyType.

    That's it!!!

    The magic here is in a few places. First, you may have noticed the following:

        public abstract class SharedBase<T> where T : SharedBase<T>

    This says that SharedBase can be created with any type and that for each type it will have its own instance. This is important because of the static members within SharedBase. We want this behavior because we are caching the properties for each type. If we did not handle things in this way, only one type could be cached at a time, or we'd need to create a collection that allows us to cache the properties for each type: not very elegant.

    Second, in the constructor for foo you may have noticed this (literally):

        public foo(IDataRecord record)
        {
            Hydrate(record, this);
        }

    I wanted the code for auto-hydrating to be as simple as possible. At first I wasn't quite sure how I could call Hydrate on SharedBase within an instance of the class and pass in the instance itself. Fortunately, simply passing in "this" does the trick. I wasn't sure it would work until I tried it out, and fortunately it did.

    So, to actually use this feature when utilizing ADO.NET you'd do something like the following:

        public List<foo> GetFoo(int? fooId)
        {
            List<foo> fooList;
            const string uspName = "usp_GetFoo";
            using (var conn = new SqlConnection(_dbConnection))
            using (var cmd = new SqlCommand(uspName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("@FooID", SqlDbType.Int)
                                       { Direction = ParameterDirection.Input, Value = fooId });
                conn.Open();
                using (var dr = cmd.ExecuteReader())
                {
                    fooList = (from row in dr.Cast<DbDataRecord>()
                               select new foo(row)).ToList();
                }
            }
            return fooList;
        }

    Nice! Instead of having line after line manually assigning values from the data record to an object, you simply create a new instance and pass in the data record. Note that there are certainly instances where the columns returned from a stored procedure do not match up with property names. In that scenario you can still use the above method and simply do your manual assignments afterward.
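    The ColumnExists extension method referenced above is not included in this excerpt. A minimal version might look like the following; this is my own guess at it, not necessarily the author's exact implementation:

        using System;
        using System.Data;

        public static class DataRecordExtensions
        {
            // Returns true if the record contains a column with the given name (case-insensitive).
            // GetOrdinal throws for a missing column, so we scan the fields instead.
            public static bool ColumnExists(this IDataRecord dataRecord, string columnName)
            {
                for (var i = 0; i < dataRecord.FieldCount; i++)
                {
                    if (string.Equals(dataRecord.GetName(i), columnName, StringComparison.OrdinalIgnoreCase))
                        return true;
                }
                return false;
            }
        }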

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle Open World presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison's comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution.

    Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry's early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated "well, it works here" situation with the support desk. And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate it because when a problem happens, and let's face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce the issue, which is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all-too-frequent occurrence.

    I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make "well, it works here" a well-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • AMD FX CPU drivers/patch

    - by Mubeen Shahid
    I have been using Ubuntu since 2009 on notebooks with Intel CPUs. However, now that I am using AMD's FX 6300, I am interested in knowing whether there exists anything from Ubuntu (specifically any kernel enhancements/drivers/patches) for AMD's FX "family 15h" Piledrivers. Reason: I would like to have a kernel which uses the hardware to its full capacity and is able to use the latest instruction sets, for maximum performance. I did some tests: I started by compiling stable 3.9.7 on my 12.04 LTS box, and during compilation I chose processor vendor AMD (unchecked Intel/VIA/etc.); when I started Ubuntu with this compiled kernel, in the section "System Settings - Additional Drivers" I found that, in addition to the graphics card's drivers, there were AMD family 15h drivers also. However, I would prefer something in this regard tested/signed by Ubuntu developers.

    P.S.:
    1. The kernel that I compiled has some issues with Nvidia graphics drivers, so I deleted kernel 3.9.7 and installed signed 3.8.xx from the Ubuntu repositories.
    2. In case somebody is planning to advise me to install "AMD64": I am not talking about AMD64 (which is in fact for the 64-bit platform).

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so the conversion to GDI/EMF is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the codepage/emulation. Also, most computers running my software have low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device. Most of these don't do a good job of extracting text from XPS, as they don't follow the standard but rather how Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings and make it impossible to send a paper-cut command after the sign. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and do replaced text. I know there is StringTemplate, which could handle the generation from the template, but the problem is that I would then somehow have to parse the output and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready?
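    Not a full answer to the question above, but for reference, going back to GDI from .NET usually means System.Drawing.Printing, where the spooler and driver take care of the EMF side. Below is a stripped-down, hypothetical sketch of printing an already-rendered receipt line by line; the template handling and report model are left out, and the class and member names are invented:

        using System.Drawing;
        using System.Drawing.Printing;

        public class ReceiptPrinter
        {
            private readonly string[] lines;
            private int nextLine;

            public ReceiptPrinter(string[] renderedLines)
            {
                lines = renderedLines;          // e.g. the template after placeholder replacement
            }

            public void Print(string printerName)
            {
                var doc = new PrintDocument();
                doc.PrinterSettings.PrinterName = printerName;
                doc.PrintPage += OnPrintPage;   // GDI drawing happens here; the system spools it for the driver
                doc.Print();
            }

            private void OnPrintPage(object sender, PrintPageEventArgs e)
            {
                using (var font = new Font("Courier New", 9f))
                {
                    float y = e.MarginBounds.Top;
                    while (nextLine < lines.Length && y < e.MarginBounds.Bottom)
                    {
                        e.Graphics.DrawString(lines[nextLine++], font, Brushes.Black, e.MarginBounds.Left, y);
                        y += font.GetHeight(e.Graphics);
                    }
                    e.HasMorePages = nextLine < lines.Length;   // continue on the next page/ticket if needed
                }
            }
        }

    Tabular data can be drawn the same way with measured column offsets (Graphics.MeasureString) and images with Graphics.DrawImage, which keeps the text-only and EMF report paths on the same simple model.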

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of the Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment, but with the release of OJWC (CLDC) and OJEC (CDC) last year that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor requiring a slow startup time and seamless upgrades, in a cost-sensitive hardware environment, anywhere from 3.5 MB to 8 MB. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VOIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of the Connected Device Configuration (CDC v1.1, JSR-218), Foundation Profile (FP v1.1, JSR-219), and Personal Basis Profile (PBP v1.1, JSR-217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR-927). New to this release are support for the XML API (JSR 280) and a number of bug fixes and performance enhancements, including improved Just-in-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past I would encourage you to look into the updated version of OJEC 1.1.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and found myself wondering: why don't they know this? Such as this article here – in the past, customers (see the forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

    - improves the out-of-the-box experience – just build the mapping and the appropriate KM is used
    - improves out-of-the-box performance for file-to-file data movement

    This improvement to the out-of-the-box handling of the file-to-file data integration case (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration; plus, and this is a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM seems to come out to be about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHZ CPU with 768MB of RAM. I think the minimum requirements for Server 2008 is 1GHZ CPU with 512MB of RAM. Also, I think the minimum requirements for SQL Server Express is 1GHZ CPU with 256 MB of RAM and that the minimum requirements for Team Foundation Server Express 11 Beta is 2.2 GHZ CPU with 1 Gig of RAM (this 2.2 GHZ part could be a problem for my 1 GHZ XS VM...). Given the performance of the XS Azure instance, would I be able to install: a very basic MVC web site; a free instance of SQL Server Express; a free single user instance of Team Foundation Server Express 11 Beta and run the XS VM instance without serious crashing? I know there are other Shared WebHost providers that can provide these features for me, but those hosting providers have the following disadvantages: They sometimes cost a lot of money after all of the "addons" are in place They probably don't provide the level of security and employee integrity that Microsoft can provide They don't provide the total control that an Azure VM seems to provide

    Read the article

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            ... // methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities. There was also a form containing a couple of dropdown lists. With the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about 6 IRepositories, plus a few others to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
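    One refactoring that is often suggested for this situation (a sketch only, not the one right answer) is to stop injecting one repository per entity and instead inject a single provider or unit-of-work style abstraction that hands out repositories on demand, so the constructor stays flat no matter how many entities the dashboard reads. The interface and view model names below are invented, and it assumes IRepository<T> exposes something like a GetAll method:

        // Resolves repositories lazily from the same DbContext / DI scope,
        // so the controller only ever asks for one data dependency.
        public interface IRepositoryProvider
        {
            IRepository<T> Repository<T>() where T : class;
        }

        public class HomeController : Controller
        {
            private readonly IRepositoryProvider repositories;
            private readonly ILogError logError;
            private readonly ICurrentUser currentUser;

            public HomeController(IRepositoryProvider repositories, ILogError logError, ICurrentUser currentUser)
            {
                this.repositories = repositories;
                this.logError = logError;
                this.currentUser = currentUser;
            }

            public ActionResult Dashboard()
            {
                var model = new DashboardViewModel
                {
                    EntityOnes = repositories.Repository<EntityOne>().GetAll(),
                    EntityTwos = repositories.Repository<EntityTwo>().GetAll()
                    // ...adding another entity no longer changes the constructor
                };
                return View(model);
            }
        }

    The other common answer is to push the dashboard's multi-entity reads behind a single purpose-built query service, which keeps the controller down to one dependency and hides how many tables are involved.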

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
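    As a concrete illustration of the "collect and aggregate" boundary described above, here is a small C# example that publishes an immutable snapshot without a lock by using a compare-and-swap loop. It is only a sketch, but it shows where the safety zone ends: the objects themselves never change, yet the atomic exchange on the one shared reference is still doing the coordination a lock would otherwise do.

        using System.Threading;

        // An immutable running total: "modifying" it means building a new instance.
        public sealed class Tally
        {
            public static readonly Tally Empty = new Tally(0, 0);
            public readonly long Count;
            public readonly long Sum;

            private Tally(long count, long sum) { Count = count; Sum = sum; }

            public Tally Add(long value) { return new Tally(Count + 1, Sum + value); }
        }

        public sealed class Aggregator
        {
            private Tally current = Tally.Empty;

            public Tally Current { get { return current; } }

            // No lock: workers race to swap in a new snapshot; losers retry against the latest one.
            public void Record(long value)
            {
                while (true)
                {
                    var snapshot = current;
                    var updated = snapshot.Add(value);
                    if (Interlocked.CompareExchange(ref current, updated, snapshot) == snapshot)
                        return;   // our snapshot was still the latest, so the swap took effect
                }
            }
        }

    Immutability removes data races on the Tally instances themselves, but the single mutable reference that makes the latest result visible to other threads still needs either a lock or an atomic operation like this one.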

    Read the article

  • invite: WEBLOGIC 12c HANDS-ON WORKSHOP IN PARIS

    - by mseika
    Oracle WebLogic 12c Innovation Workshop
    April 24-26, 2012: Colombes, France

    Workshop Description: Oracle Fusion Middleware is the #1 application infrastructure foundation and WebLogic Server is the #1 application server across conventional and cloud environments. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Do you want to learn more about the innovative features, capabilities and roadmap of WebLogic Server 12c? Then this technical hands-on workshop is for you.

    Agenda Outline: WebLogic introduction, WebLogic topology, WebLogic clustering and high availability, Coherence, troubleshooting, enterprise messaging, development tools & productivity, performance, Exalogic introduction, Enterprise Manager Grid Control, Oracle Public Cloud, Oracle Traffic Director.

    Lab Outline: WebLogic installation & configuration, WebLogic clustering & HA, Coherence use cases & monitoring, WebLogic Active GridLink for RAC integration, Messaging: JMS.

    Audience: WebLogic consultants & architects.

    Prerequisites: Basic knowledge of Java and Java EE; understanding of the application server concept; basic knowledge of older releases of WebLogic Server would be beneficial.

    Equipment Requirements: This workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements: minimum 4 GB RAM and 30 GB free disk space; Internet Explorer 7 or Firefox 3 or higher; download and install Oracle VM VirtualBox 4.1.8.

    Agenda: This workshop is 3 days. 8:30 am: sign-in and technical set-up; 9:00 am: workshop starts; 5:00 pm: workshop ends.

    This workshop is free but space is limited. Register now! Register Here!

    Read the article

  • What are the common mistakes in 'tailored Scrum approaches'?

    - by Clark Gable
    I have seen this before. Management wants to be agile and be scrummified, but does not want to step out of the status quo. My latest observation is no different; here, Scrum is 'tailored' to the organization, specifically into a weird many-people process (there is a diagram showing the different participants, not reproduced in this excerpt). I am putting together a document listing why this will not work. Here are the obvious ones:

    1. There are product owner agents (an obvious WTF) who report to the product owner: this dilutes decision-making capability.
    2. There is a role that looks similar to a manager in the traditional approach, a "development manager": an obvious attempt at a command-and-control model.
    3. The ScrumMaster's role includes collecting timesheets, which are used to track progress instead of burndown charts: detrimental to agile's efforts to build teams of motivated individuals.

    Leaving aside the question "how would you convince the management?", my question is more: what else do you see as failures in this or similar 'tailored Scrum approaches'?

    EDIT: The diagram could use a few more details:

    1. The development manager is not part of the development team and has not very clearly defined responsibilities, except developer performance assessment, recruitment, etc.
    2. There are more than two teams (each with ScrumMaster + development manager + dev team) with the same product owner for all teams!

    Read the article
