Search Results

Search found 64621 results on 2585 pages for 'asp net performance'.

Page 217 of 2585

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But the BPEL performance statistics mentioned in that posting have some limitations: the statistics are kept in memory instead of being persisted, and to avoid memory overflow they are stored in a buffer of limited size. When the number of entries exceeds that limit, old data is flushed out to make way for new statistics, so only the last X entries are kept and the statistics from five hours ago may not be there anymore. The in-memory BPEL statistics also only include latencies; they do not provide throughput. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it has its own strength: it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11g to acquire performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.
    1. Incoming rate of asynchronous/one-way messages. The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:
      select composite_name composite, state, count(*) Count
      from dlv_message
      where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;
    2. Throughput of BPEL process instances. The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones, and the results include all composites across all SOA partitions:
      select state, count(*) Count, composite_name composite, component_name, componenttype
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype
      order by count(*) desc;
    3. Throughput and latencies of BPEL process instances. This query builds on the previous one and provides more comprehensive information: not only the throughput but also the maximum, minimum and average elapsed time of BPEL process instances.
      select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
        trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
        trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
        trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;
    4. Combine all together. Now let's combine all three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying the database:
      accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
      accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'
      Prompt "==== Rejected Messages ====";
      REM 2012-10-24 21:00:00
      REM 2012-10-24 21:59:59
      select count(*), composite_dn
      from rejected_message
      where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
        and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_dn;
      Prompt " ";
      Prompt "==== Throughput of one-way/asynchronous messages ====";
      select state, count(*) Count, composite_name composite
      from dlv_message
      where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;
      Prompt " ";
      Prompt "==== Throughput and latency of BPEL process instances ===="
      select state, count(*) Count,
        trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime,
        trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime,
        trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime,
        composite_name Composite, component_name Process, componenttype
      from cube_instance
      where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;
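    If you would rather run the throughput query from a .NET client than from SQL*Plus, a minimal sketch along the same lines is shown below. It assumes the ODP.NET managed driver (Oracle.ManagedDataAccess) and a placeholder connection string; the query is the dlv_message throughput query from section 1 with the time window bound as parameters.
      // Hedged sketch: the dlv_message throughput query from section 1, run from C# with
      // bound start/end timestamps. Oracle.ManagedDataAccess is assumed to be referenced;
      // the connection string and column handling are illustrative only.
      using System;
      using Oracle.ManagedDataAccess.Client;

      public static class SoaThroughput
      {
          public static void Print(string connectionString, DateTime from, DateTime to)
          {
              const string sql =
                  "select composite_name, state, count(*) cnt " +
                  "from dlv_message " +
                  "where receive_date >= :startTime and receive_date <= :endTime " +
                  "group by composite_name, state order by cnt";

              using (var connection = new OracleConnection(connectionString))
              using (var command = new OracleCommand(sql, connection))
              {
                  command.BindByName = true;   // bind :startTime/:endTime by name
                  command.Parameters.Add("startTime", OracleDbType.TimeStamp).Value = from;
                  command.Parameters.Add("endTime", OracleDbType.TimeStamp).Value = to;
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())    // composite, state, count
                          Console.WriteLine("{0}  state={1}  count={2}",
                              reader.GetValue(0), reader.GetValue(1), reader.GetValue(2));
                  }
              }
          }
      }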

    Read the article

  • Performance tuning of tabular data models in Analysis Services

    - by Greg Low
    More and more practical information around working with tabular data models is starting to appear as more and more sites get deployed. At SQL Down Under, we've already helped quite a few customers move to tabular data models in Analysis Services and have started to collect quite a bit of information on what works well (and what doesn't) in terms of performance of these models. We've also been running a lot of training on tabular data models. It was great to see a whitepaper on the performance of these models released today. Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services was written by John Sirmon, Greg Galloway, Cindy Gross and Karan Gulati. You'll find it here: http://msdn.microsoft.com/en-us/library/dn393915.aspx

    Read the article

  • .NET Reflector Pro to the rescue

    Almost all applications have to interface with components or modules written by somebody else, for which you don't have the source code. This is fine until things go wrong, but when you need to refactor your code and you keep getting strange exceptions, you'll start to wish you could place breakpoints in someone else's code and step through it. Now, of course, you can, as Geoffrey Braaf discovered.

    Read the article

  • Is .NET WCF worth the effort?

    - by SnOrfus
    Maybe it's me, but I've never encountered nearly as many problems, annoying challenges, indirect error messages and general frustrations with any other technology as I have with WCF. What are the ACTUAL benefits? Not the MS Press Release benefits of 'a unified architecture for something something that's not going to work anyway.' And are those benefits worth the annoying frustrations? I'm normally a big MS fanboy over here, but I just can't get behind WCF.

    Read the article

  • Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Hyperion clearly leads the pack again in Gartner’s analysis of the CPM/EPM market, which says: “Oracle is a Leader in CPM suites, with one of the most widely distributed solutions in the market. Oracle Hyperion Enterprise Performance Management is recognized by CFOs worldwide. The vendor has a well-established partner channel, with both large and smaller CPM SI specialists. Hyperion skills are also plentiful among the independent consultant community, given the well-established products.” “Oracle continues to innovate, bringing incremental improvements across the portfolio as well as new financial close management, disclosure management and predictive planning additions. Furthermore, Oracle has improved integration of Hyperion with the Oracle BI platform, and has improved planning performance, enabling Hyperion Planning to use Oracle Exalytics In-Memory Machine.” For the full article see here: Gartner: Magic Quadrant for Corporate Performance Management Suites, 2012. And if you missed it, here is also the MQ for BI: Gartner: Magic Quadrant for Business Intelligence Platforms, 2012

    Read the article

  • Examples of permission-based authorization systems in .Net?

    - by Rachel
    I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are of each method. (I'm sure there are other ways than the ones I've listed.) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me and I'm having trouble figuring out what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic like boolean or numeric values, or they can be more complex such as a range of values or a list of database items to be checked/unchecked. Permissions can be set at the group level, user level, branch level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded.
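    As a minimal illustration of the simplest approach mentioned above (a permission lookup keyed by user or group), here is a sketch; the type and member names (PermissionChecker, Grant, HasPermission) are invented for the example and do not come from any framework:
      // Minimal sketch of a dictionary-based permission check where permissions can be
      // granted to users, groups or branches; all names here are illustrative.
      using System;
      using System.Collections.Generic;
      using System.Linq;

      public sealed class PermissionChecker
      {
          // permission name -> identifiers (users, groups, branches) that hold it
          private readonly Dictionary<string, HashSet<string>> _grants =
              new Dictionary<string, HashSet<string>>(StringComparer.OrdinalIgnoreCase);

          public void Grant(string permission, string principal)
          {
              HashSet<string> holders;
              if (!_grants.TryGetValue(permission, out holders))
              {
                  holders = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                  _grants[permission] = holders;
              }
              holders.Add(principal);
          }

          // The user has the permission if any of their principals (user id, groups, branch) holds it.
          public bool HasPermission(string permission, IEnumerable<string> principals)
          {
              HashSet<string> holders;
              return _grants.TryGetValue(permission, out holders) && principals.Any(holders.Contains);
          }
      }

      // Usage: grant to a group, then check with the user's id plus group memberships.
      // var checker = new PermissionChecker();
      // checker.Grant("Reports.View", "BranchManagers");
      // checker.HasPermission("Reports.View", new[] { "jsmith", "BranchManagers" });  // true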

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:
      static void Main()
      {
          // Load parameters from app.config.
          // Get documents from queue.
          var files = someInterface.GetFiles(someFilterOrRegexPattern);
          foreach (var file in files)
          {
              // Extract metadata from the file.
              // Validate some attributes of the file; add any validation errors to an in-memory
              // structure (e.g. List<ValidationErrors>).
              if (isValid)
              {
                  var fileData = File.ReadAllBytes(file);
                  // Upload using some wrapper for an ORM or DAL
                  someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
              }
              else
              {
                  // Bounce the file
              }
          }
          // Report any validation errors (via message bus or SMTP or some such).
      }
    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
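    A hedged sketch of the "Worker" wrapper described above, with the queue, metadata-extraction and upload steps behind constructor-injected interfaces so that Main shrinks to wiring plus a single Run() call; all interface and member names here are invented for illustration:
      // Illustrative worker: dependencies come in through the constructor, validation
      // failures are collected and returned so the caller can report them.
      using System.Collections.Generic;
      using System.IO;

      public interface IDocumentQueue   { IEnumerable<string> GetFiles(string pattern); }
      public interface IMetadataReader  { IDictionary<string, string> Extract(string path); }
      public interface IContentUploader { void Upload(byte[] content, IDictionary<string, string> metadata); }

      public class EtlWorker
      {
          private readonly IDocumentQueue _queue;
          private readonly IMetadataReader _reader;
          private readonly IContentUploader _uploader;

          public EtlWorker(IDocumentQueue queue, IMetadataReader reader, IContentUploader uploader)
          {
              _queue = queue; _reader = reader; _uploader = uploader;
          }

          public IList<string> Run(string pattern)
          {
              var errors = new List<string>();
              foreach (var file in _queue.GetFiles(pattern))
              {
                  var metadata = _reader.Extract(file);
                  if (!metadata.ContainsKey("CustomerId"))        // illustrative validation rule
                  {
                      errors.Add(file + ": missing CustomerId");  // "bounce" the file
                      continue;
                  }
                  _uploader.Upload(File.ReadAllBytes(file), metadata);
              }
              return errors;                                      // caller reports via SMTP/bus
          }
      }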

    Read the article

  • 302: this blog will be closed

    - by preishuber
    After nearly 7 years I will discontinue blogging on this site. My resources are limited. You can reach my German blog, which is used to support my customers. Looking back on a long and interesting journey:
    ASP.NET by ScottGu - That was the reason to join this site and support Microsoft as much as I can. For that I was honored as ASP.NET MVP - thanks again. Met Scott several times. Great guy!
    Forums - I left the NNTP forums a few years ago and now Microsoft has closed them - it was my idea ;-)
    AJAX - Was the wrong way - jQuery won the game.
    IIS7 - That is really a great platform and the IIS team rules. I am sad that it is so silent around that topic.
    ASP.NET after 2.0 - Is no longer my world. I love ASP.NET and ASP.NET server controls. I hate the discussion about how to follow the holy rules of MVC. Microsoft has dropped the goal to bring ASP.NET to #1 and accepted that PHP is it.
    Facebook & Twittering - Microblogging takes over a part of the blogging business. Shorter, faster, cheaper - or as SteveB mentioned, do more with less.
    Google - Google is taking over the web. I use Bing every time I can, but Google has more options. Sorry Microsoft, you will lose that game.
    Apple - That is not the biggest problem of Microsoft. The Ixxx takes over a small part but big money of the market, but the customers are not strongly linked. New wave, new hype - game over, Apple.
    Silverlight - My new home. I can reuse a lot of my skills and love the possibilities. Silverlight will surpass WPF and strike at Flash.
    Windows Phone 7 - Also my skills fit. I will just use it for fun. I am not really satisfied with what I have heard from MIX. Guys from Redmond, I am sad to say you had the best smartphone OS and lost everything.
    The ADO vNext Story - That will be the next mystic point. WCF, REST, JSON, ATOM and now OData. Nothing about SQL commands. LINQ and ORM are also not the final solution for multilayered disconnected async scenarios. Personally I prefer the OData idea and dislike the Swiss Army Knife (German: Eierlegende Wollmilchsau) WCF.
    I am still on the INETA speakers board and I am glad to come to your user group. In all other cases you can hire me over ppedv AG. Goodbye and have a good life.

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored.
    Encoding table information. As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: flags specifying various properties of the class, including visibility; the name of the type; the namespace of the type; what type this type extends; the field list of this type; and the method list of this type. How is all this data actually represented?
    Offset & RID encoding. Most assemblies don't need to use a 4-byte value to specify heap offsets and RIDs everywhere; however, we can't hard-code every offset and RID to be 2 bytes long, as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; the header information at the top of the #~ stream contains 3 bits indicating whether the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535, then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used.
    Coded tokens. Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0 for TypeDef, 0x1 for TypeRef, 0x2 for TypeSpec. We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, leaving only 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the bit pattern 0000 0011 0010 0001. The two least significant bits specify the table (0x1, TypeRef); the other bits specify the RID. Because the bottom two bits are used for the table, we shift the remaining bits right by two, giving 0000 1100 1000, which is a RID of 0xc8.
    If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables.
    Lists. The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef rows belonging to that type. If we were to specify every FieldDef and MethodDef individually, then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item: if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred from the field list and method list RIDs of the next row in the TypeDef table.
    Going back to the TypeDef. If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags (always 4 bytes); Name (a #Strings heap offset); Namespace (a #Strings heap offset); Extends (a TypeDefOrRef coded token); FieldList (a single RID into the FieldDef table); MethodList (a single RID into the MethodDef table). So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now we've had a look at how information is encoded within the metadata tables; in the next post we can see how they are arranged on disk.
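    As a small illustration of the decoding described above, the sketch below unpacks the 2-byte TypeDefOrRef coded token 0x0321 from the worked example; the code and table-name array are illustrative and not taken from any metadata-reading library:
      // Decoding the 2-byte TypeDefOrRef coded token 0x0321 from the example above:
      // the 2 least-significant bits select the table, the remaining bits are the RID.
      using System;

      public static class CodedTokenDemo
      {
          public static void Main()
          {
              ushort codedToken = 0x0321;
              string[] tables = { "TypeDef", "TypeRef", "TypeSpec" };  // tags 0x0, 0x1, 0x2
              int tag = codedToken & 0x3;    // bottom two bits -> table
              int rid = codedToken >> 2;     // remaining bits  -> row id
              Console.WriteLine("{0}, RID 0x{1:X}", tables[tag], rid); // prints "TypeRef, RID 0xC8"
          }
      }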

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow. Author: Irem Radzik, Senior Principal Product Director, Oracle. Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability. Oracle's data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. As a more advanced offering than 3rd-party products, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real-time to a dedicated operational reporting environment. This solution allows the app users to offload the resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open system platforms. With real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management and JD Edwards for supporting zero downtime upgrades to the latest app version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
Oracle GoldenGate is also certified for minimal downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate’s solution also minimizes the risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate’s bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers. For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users identify report-to-source capabilities. ODI is integrated with Oracle GoldenGate and provides Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c version of the Oracle Data Integration products in our upcoming launch webcast and access the app-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Common Javascript mistakes that severely affect performance?

    - by melee
    At a recent UI/UX MeetUp that I attended, I gave some feedback on a website that used Javascript (jQuery) for its interaction and UI - it was fairly simple animations and manipulation, but the performance on a decent computer was horrific. It actually reminded me of a lot of sites/programs that I've seen with the same issue, where certain actions just absolutely destroy performance. It is mostly in (or at least more noticeable in) situations where Javascript is almost serving as a Flash replacement. This is in stark contrast to some of the webapps that I have used that have far more Javascript and functionality but run very smoothly (COGNOS by IBM is one I can think of off the top of my head). I'd love to know some of the common issues that aren't considered when developing JS that will kill the performance of the site.

    Read the article

  • .NET Libraries Cost More Than Windows?

    - by Kevin Mark
    When looking into libraries to make my programming life a little bit easier, I've (almost) always been disappointed by the prices offered. For instance, Actipro's WPF Studio is $650. I suppose that's worth it if you plan to make money from the use of those controls. But take a look at, say, Windows. Windows 7 Ultimate is just about $220. I consider Windows to be a far more complex and "worth-it" product/purchase than a library that runs on it. Why the significant difference in pricing? Do libraries really need to be so expensive, or do they need to charge more in order to make a decent sum of money?

    Read the article

  • The convergence of Risk and Performance Management

    Historically, the market has viewed Enterprise Performance Management (EPM) and Governance, Risk and Compliance (GRC) as separate processes and solutions. But these two worlds are coming together – in fact industry analyst firms such as AMR Research believe that by the end of 2009, risk management will be part of every EPM discussion. Tune into this conversation with John O'Rourke, VP of Product Marketing for Oracle Enterprise Performance Management Solutions, and Karen dela Torre, Senior Director of Product Marketing for Financial Applications to learn how EPM and GRC are converging, what the integration points are, and what Oracle is doing to help customers perform more effective risk and performance management.

    Read the article

  • What .NET objects should I use to create a cookie based session in MVC?

    - by makerofthings7
    I'm writing a custom password reset application that uses a validation technique that doesn't fit cleanly with ASP.NET Membership Provider's challenge questions. Namely, I need to invoke a workflow and collect information from the end user (backup phone number, email address) after the user logs in using a custom form. The only way I know to create a cookie-based session (without too much "innovation" on my part) is to use WIF. What other standard objects can I use with ASP.NET MVC to create an authenticated session that works with non-Windows user stores? Ideally I can store "role" or claim information in the session object, such as "admin", "departmentXadmin", "normalUser", or "restrictedUser". The workflow would look like this:
    1. The user logs in with username and password.
    2. If the username and password are correct, a (stateless) cookie-based session is created.
    3. The user gets redirected to an HTML form that allows them to enter their backup phone number (for SMS dual factor), or validate it if already set.
    4. The user can then change their password using the form provided.
    The "forgot password" flow would look like this:
    1. The user requests an OTP code to be sent to their phone.
    2. The user logs in using username and OTP.
    3. If the OTP is valid and not expired, create a cookie-based session and redirect to a form that allows password reset.
    4. Show the password reset form, and process the results.
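    One standard, non-WIF option that fits this kind of flow is a forms-authentication ticket issued after the custom login check, with role/claim-style values carried in the ticket's UserData. The sketch below assumes ASP.NET MVC with System.Web.Security; ValidateUser, the action names, and the role strings are placeholders, not part of any framework contract:
      // Hedged sketch: issue a forms-authentication cookie from a custom login check,
      // carrying role-like values in UserData for later authorization decisions.
      using System;
      using System.Web;
      using System.Web.Mvc;
      using System.Web.Security;

      public class AccountController : Controller
      {
          [HttpPost]
          public ActionResult Login(string userName, string password)
          {
              if (!ValidateUser(userName, password))            // custom, non-Windows user store
                  return View("Login");

              var ticket = new FormsAuthenticationTicket(
                  1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
                  false, "normalUser;departmentXadmin");        // roles/claims carried in UserData

              var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                          FormsAuthentication.Encrypt(ticket)) { HttpOnly = true };
              Response.Cookies.Add(cookie);

              return RedirectToAction("VerifyPhone");           // continue the password-reset workflow
          }

          private bool ValidateUser(string userName, string password)
          {
              /* look up the custom user store here */
              return true;
          }
      }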

    Read the article

  • How to recognize my performance plateau?

    - by Dat Chu
    A performance plateau happens right after one becomes "adequately" proficient at a certain task. For example, you learn a new language/framework/technology and become progressively better at it. Then all of a sudden you realize that you have spent quite some time on this technology and you are not getting better at it. As a programmer who is conscious of my performance/knowledge/skill, how do I detect when I am in a performance plateau? What can I do to jump out of it (and keep going upward)?

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still. In general, when people don't fully understand a technology or a concept, they tend to find some shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful still. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops in my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which could be considered legitimate at the time, because after all caches help to reduce latency on cached data access, and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented as the Capacity (or Throughput), with the 2 important dimensions Speed (response time) and Volume (load): Capacity (TPS) = Volume (T) / Speed (S). For example, 100 in-flight transactions at an average response time of 0.5 seconds gives a capacity of 200 TPS. To increase the Capacity, we can either reduce the Speed (in terms of response time), or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, the reality proves that IT is never as ideal as we assume... The challenge with the Speed improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. The performance picture of any traditional system looks like this: the increase of Volume (transactions) will, at some point, also increase the Speed (response time). The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
If you need to "Speed-up" the execution, you can only upgrade your hardware (scale-up) with faster CPU and/or network to reduce network latency. It is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is about designing applications which can scale-out for increasing the Volume, by applying coding techniques to keep the execution-time as constant as possible, independently of the number of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response-time and linearly scale, so we can easily increase throughput by adding more hardwares in parallel. It is in general combined with the Low Latency Programming model, where we tried to optimize the network usage as much as possible, either from the programmatic angle (less network-hoops to complete a task), and/or from a hardware angle (faster network equipments). In this picture, Oracle Coherence can be considered as software-level XTP enabler, via the Distributed-Cache because it can guarantee: - Constant Data Objects access time, independently from the number of Objects and the Coherence Cluster size - Data Objects Distribution by Affinity for in-memory data grouping - In-place Data Processing for parallel executionTo summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity with Extreme Load while keeping consistant Speed. In the future I will keep adding new blog entries around this topic, with some sample codes experiences sharing that I capture in the last few years. In the meanwhile if you want to know more how Oracle Coherence, I strongly suggest you to start with checking how our worldwide customers are using Oracle Coherence first, then you can start playing with the product through our tutorial.Have Fun !

    Read the article

  • Google I/O 2010 - Architecting for performance with GWT

    Google I/O 2010 - Architecting for performance with GWT (GWT 201), presented by Joel Webber and Adam Schuck. Modern web applications are quickly evolving to an architecture that has to account for the performance characteristics of the client, the server, and the global network connecting them. Should you render HTML on the server or build DOM structures with JS in the browser, or both? This session discusses this, as well as several other key architectural considerations to keep in mind when building your Next Big Thing. For all I/O 2010 sessions, please go to code.google.com (From: GoogleDevelopers, 01:01:09).

    Read the article

  • .NET DAL and arhitecture

    - by Parhs
    I have seen lots of articles but none really helps me, because I want to use Dapper as a DAL. Should I create repositories with specific functions, like getStaffActive()? If I use repositories, I can implement a generic CRUD layer with Dapper-Extensions. I have no idea how to handle the database connection: where should it be opened? If I open it in every function, then how am I supposed to use a transaction scope? Somehow the repositories I work with should share a connection in order for transactions to work. But how do I do this? By opening the connection in the BLL? If I build queries and execute them directly, the same question still applies.
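    One common way to handle the connection question is to let the calling code own the connection (and the transaction) and pass it into each repository, so every repository in the unit of work shares it. The sketch below assumes Dapper's Query<T> extension and System.Transactions; the Staff/StaffRepository types and the table layout are illustrative only:
      // Hedged sketch: repositories receive an already-created IDbConnection, so several
      // repositories can share one connection inside a single TransactionScope.
      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;
      using System.Linq;
      using System.Transactions;
      using Dapper;

      public class Staff { public int Id { get; set; } public string Name { get; set; } public bool Active { get; set; } }

      public class StaffRepository
      {
          private readonly IDbConnection _connection;        // shared, injected connection
          public StaffRepository(IDbConnection connection) { _connection = connection; }

          public IEnumerable<Staff> GetStaffActive()
          {
              return _connection.Query<Staff>("select Id, Name, Active from Staff where Active = 1");
          }
      }

      public static class Example
      {
          public static void Run(string connectionString)
          {
              using (var scope = new TransactionScope())
              using (var connection = new SqlConnection(connectionString))
              {
                  connection.Open();                          // opened once, by the calling (BLL) code
                  var repository = new StaffRepository(connection);
                  var active = repository.GetStaffActive().ToList();
                  // ... other repositories reuse the same connection here ...
                  scope.Complete();                           // commit the ambient transaction
              }
          }
      }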

    Read the article

  • Microsoft Small Basic for .NET

    Microsoft Small Basic is intended to be fun to use. It is that, and more besides. It has great potential as a way of flinging together quick and cheerful applications, just like those happy days of childhood. Tetris, anyone?

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be speaking at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija, it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very solid, and the sessions I attended were actually pretty hardcore. Now, two years later, I will be back, but this time as a speaker. Here are my three almost-finished sessions for Sinergija12:
    ASP.NET MVC 4 Overview - The session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features - agent-based output, new application templates, Web API and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or who plans to start building modern web sites.
    Building SharePoint Online applications using Napa Office 365 - The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.
    Cloud-enabling ASP.NET MVC applications - The cloud era is here, and over the next years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and on Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.
    All developers who are coming to Sinergija12 are welcome to my sessions. See you there! :)

    Read the article

  • .NET Developer Basics – Recursive Algorithms

    Recursion can be a powerful programming technique when used wisely. Some data structures such as tree structures lend themselves far more easily to manipulation by recursive techniques. As it is also a classic Computer Science problem, it is often used in technical interviews to probe a candidate's grounding in basic programming techniques. Whatever the reason, it is well worth brushing up one's understanding with Damon's introduction to Recursion.
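    As a reminder of the kind of problem the article has in mind, here is a minimal sketch of a recursive walk over a tree structure; the TreeNode type and Print method are invented for the example:
      // Depth-first traversal: the method processes the current node, then calls itself
      // on each child, which is the classic recursive fit for tree-shaped data.
      using System;
      using System.Collections.Generic;

      public class TreeNode
      {
          public string Name;
          public List<TreeNode> Children = new List<TreeNode>();
      }

      public static class TreeWalker
      {
          public static void Print(TreeNode node, int depth = 0)
          {
              Console.WriteLine(new string(' ', depth * 2) + node.Name);  // indent by depth
              foreach (var child in node.Children)
                  Print(child, depth + 1);                                // recurse into children
          }
      }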

    Read the article

  • Confused about my future. Doubt about .Net or Java way.

    - by dotNET
    I'm very confused about choosing the programming language to follow in the next step of my life. I'm currently quite familiar with C++, VB.NET and PHP, but to jump to a higher level I must choose between JEE (JSP, Servlets, JSF, Spring, EJB, Struts, Hibernate, ...) and .NET (ASP.NET, C#), because I can't learn them at the same time. And you realize that, when I mention JEE, a lot of things come to mind. In my personal experience I prefer .NET, but Java seems to be a better choice. Believe me, I'm not trying to start a subjective topic; I just want to know what I should follow to be successful. The question here is: are there things that can be done with Java and cannot be done with .NET? Is there any chance that I can keep up with the uncounted number of frameworks that are always in development? ... (also something not said) ?

    Read the article
