Search Results

Search found 27151 results on 1087 pages for 'end to end'.

Page 293 of 1087

  • Oracle SQL Developer is for Oracle Database

    - by thatjeffsmith
    What is Oracle SQL Developer? Well, according to this document on OTN… What is SQL Developer? Date: May 2014 Oracle SQL Developer is the Oracle Database IDE. A free graphical user interface, Oracle SQL Developer allows database users and administrators to do their database tasks in fewer clicks and keystrokes. A productivity tool, SQL Developer’s main objective is to help the end user save time and maximize the return on investment in the Oracle Database technology stack. Ok, sounds pretty straightforward. Where does the confusion lie then? Some People Use SQL Developer to Connect to 3rd Party Databases SQL Developer allows you to register 3rd party database JDBC drivers. The 3rd party being a company OTHER than Oracle that makes a database product. You know who they are (SAP, MSFT, IBM, etc.) Registering 3rd party JDBC drivers in SQL Developer But maybe you don’t understand why we support these types of connections? It’s for one driving reason. To Help You Migrate to Oracle Database Yes, you get a worksheet and a tree to query and browse those systems. But, the real meat and bones there are around our migration projects and our translation scratch editor. At the end of the day, it’s there so you can move your data from say Sybase ASE to Oracle Database. On a side note, the migration technology was previously available in a separate application, the Migration Workbench. The technology and the awesome people behind it were folded into SQL Developer. So when asked what SQL Developer is, I say it’s the Database IDE and the official 3rd party database migration to Oracle platform. So anyways, when you ask for better support for another 3rd party provider, we deliver that support based on that business driver. If another 3rd party database jdbc driver is introduced, it’s because we have a lot of customers migrating from that platform. We’re not adding it to make it easier for you to work with SQL Server on your Mac. But, if you find that useful – that is cool. It’s just not why we’ve got the support for SQL Server connections in SQL Developer.

    Read the article

  • Exam 71-515: TS: Web Applications Development with Microsoft .NET Framework 4

    - by Ricardo Peres
    I took the 71-515 exam today. 85 questions, 240 minutes. Here are some notes: a great number of jQuery questions, mostly having to do with AJAX; lots of MVC 2 questions as well; a number of classic ASP.NET Web Forms questions, of which only a few were related to the new .NET 4 features; some Entity Framework; and some plain old JavaScript, like changing an image dynamically. I think I did OK. As with my previous exam, I still don't know whether I passed or not; I will have to wait for the end of the beta period.

    Read the article

  • T-SQL Equivalents for Microsoft Access VBA Functions

    If you need to migrate your Access application to SQL Server, don't count on the SQL Server Upsize Wizard in Microsoft Access to convert your VBA functions automatically. If you want to push the complex query processing done by your Access queries to the back end, you'll have to rewrite them in T-SQL.

    Read the article

  • Did the Community Lose It’s Focus, or Did I?

    - by Jonathan Kehayias
    Late Thursday night, ok it was actually very early Friday morning, I wrote a blog post that stirred a bit of a controversy in the community.  While the outcome of the discussion that was sparked by that post in the community has been good, it is definitely a case where the end isn’t justified by the means.   Hindsight is always 20/20, and while I stand by the point I was trying to make with that post, there are a number of ways I could have gone about making that point without risking...(read more)

    Read the article

  • Should each page of a Blog listing have its own Title

    - by RandomBen
    Should example.com/Blog?Page=1, example.com/Blog?Page=2, etc. have the same title? I have done some research on this: SEOMoz's tools say I have duplicate titles, and so do Google's Webmaster Tools. If you look at top-end examples like http://www.seobook.com/blog and http://www.seomoz.org/blog, they both use the same title across all of their ?Page=X URLs. So which is the better choice, or does it even matter?

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
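    A query of that shape might look like the following minimal sketch. The Order class and the in-memory list are hypothetical stand-ins; with a real O/R mapper the source would be an IQueryable<T> that gets translated to SQL on the server.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Order
        {
            public int CustomerId { get; set; }
            public decimal TotalAmount { get; set; }
        }

        static class GroupingQuerySketch
        {
            static void Main()
            {
                // Stand-in for the millions of rows the mapper would read from several tables.
                var orders = new List<Order>
                {
                    new Order { CustomerId = 1, TotalAmount = 120m },
                    new Order { CustomerId = 2, TotalAmount = 80m },
                    new Order { CustomerId = 1, TotalAmount = 45m }
                };

                // Aggregate, group and sort the whole set down to a tiny (max 10 row) result.
                var topCustomers = (from o in orders
                                    group o by o.CustomerId into g
                                    orderby g.Sum(x => x.TotalAmount) descending
                                    select new { CustomerId = g.Key, Orders = g.Count(), Revenue = g.Sum(x => x.TotalAmount) })
                                   .Take(10)
                                   .ToList();

                foreach (var row in topCustomers)
                    Console.WriteLine("{0}: {1} orders, {2}", row.CustomerId, row.Orders, row.Revenue);
            }
        }
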
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards you take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate in the total time taken for task T, and by how much, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor offers one. For example, for SQL Server the profiler is shipped with SQL Server, for Oracle it's built into the RDBMS, and there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually participate, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important.
The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from beginning to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they merely moved the problem to a different part of the application. Don't fall into the trap of doing a partial analysis: do the full analysis again, both .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
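    To make the SELECT N+1 item above concrete, here is a minimal sketch of the lazy versus eager pattern. It uses Entity Framework's Include() purely as an illustration of eager loading (LLBLGen Pro's prefetch paths play the same role); the entities, context and connection are hypothetical.

        using System;
        using System.Data.Entity;   // assumes Entity Framework 6 is referenced
        using System.Linq;

        public class Customer { public int Id { get; set; } public string CompanyName { get; set; } }

        public class Order
        {
            public int Id { get; set; }
            public int CustomerId { get; set; }
            public virtual Customer Customer { get; set; }   // virtual => lazy-loadable proxy
        }

        public class ShopContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
            public DbSet<Customer> Customers { get; set; }
        }

        public static class SelectNPlusOneDemo
        {
            public static void Run()
            {
                using (var ctx = new ShopContext())
                {
                    // SELECT N+1: one query for the orders, plus one extra query per order
                    // the moment the code touches order.Customer.CompanyName.
                    foreach (var order in ctx.Orders.ToList())
                        Console.WriteLine(order.Customer.CompanyName);

                    // Eager loading: a single joined query fetches orders and customers together.
                    foreach (var order in ctx.Orders.Include(o => o.Customer).ToList())
                        Console.WriteLine(order.Customer.CompanyName);
                }
            }
        }
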

    Read the article

  • Automate delivery of Crystal Reports With a Windows Service

    In this article, Vince demonstrates the creation of a Windows Service to automatically run and send a Crystal Report as an email attachment. After a basic introduction, he examines the creation of the database and the Windows Service with the help of relevant source code and explanations. Towards the end of the article, Vince discusses the steps to be followed in order to install the Windows Service.
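    As a rough sketch of the core of such a service (the article itself should be followed for the full implementation), the method a service timer might call could look like this. The report path, credentials and addresses are placeholders, and the CrystalDecisions assemblies are assumed to be available from the Crystal Reports runtime.

        using System.Net.Mail;
        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        public static class ReportMailer
        {
            // Run the report, export it to PDF and e-mail it as an attachment.
            public static void RunAndSendReport()
            {
                const string exportPath = @"C:\Reports\DailySales.pdf";

                using (var report = new ReportDocument())
                {
                    report.Load(@"C:\Reports\DailySales.rpt");
                    report.SetDatabaseLogon("reportUser", "reportPassword");
                    report.ExportToDisk(ExportFormatType.PortableDocFormat, exportPath);
                }

                using (var message = new MailMessage("reports@example.com", "manager@example.com"))
                using (var smtp = new SmtpClient("smtp.example.com"))
                {
                    message.Subject = "Daily Sales Report";
                    message.Body = "The scheduled Crystal Report is attached.";
                    message.Attachments.Add(new Attachment(exportPath));
                    smtp.Send(message);
                }
            }
        }
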

    Read the article

  • SQLPeople Interviews - Crys Manson, Jeremiah Peschka, and Tim Mitchell

    - by andyleonard
    Introduction Late last year I announced an exciting new endeavor called SQLPeople. At the end of 2010 I announced the 2010 SQLPeople Person of the Year. Check out these interviews from your favorite SQLPeople! Interviews To Date: Tim Mitchell, Jeremiah Peschka, Crys Manson, Ben McEwan, Thomas LaRock, Lori Edwards, Brent Ozar, Michael Coles, Rob Farley, Jamie Thomson. Conclusion I plan to post two or three interviews each week for the foreseeable future. SQLPeople is just one of the cool new things I get to...(read more)

    Read the article

  • failed grub installation

    - by user207552
    Today, I installed Ubuntu alongside my preinstalled Windows 8 on a Lenovo Y580. At the end of the installation a message popped up saying that the GRUB installation failed. After this I installed EasyBCD (http://neosmart.net/EasyBCD/; I also use it on my PC, where I run Windows 7 with Ubuntu). The problem is that, because I chose to install alongside Windows 8, I don't know where Ubuntu is installed, so I can't boot it even when using EasyBCD. What should I do now? Lucia

    Read the article

  • SharePoint – The Most Important Feature

    - by Bil Simser
    Watch Twitter and do a search for SharePoint and you'll see a lot of tweets (almost one every few minutes) about the top 10 new features in SharePoint. What answer do you get when you ask the question, "What's the most important feature in SharePoint?" Chances are the answer will vary. Some will say it's the collaboration aspect, others might say it's the new ribbon interface, multi-item editing, external content types, faceted search, large list support, document versioning, Silverlight, etc. The list goes on. However, I think most people might be missing the most important feature, the one that's been sitting right under their noses all this time. The most important feature of SharePoint? It's called User Empowerment. Huh? What? Is that something I find in the Site Actions menu? Nope. It's something that's always been there in SharePoint; you just need to get the word out and support it. How many times have you had a team ask you for a team site (assuming you had SharePoint up and running)? Or to create them a contact list? Or how long have you employed that guy in the corner who's been copying and pasting content from Corporate Communications into the web from a Word document? Let's stop the insanity. It doesn't have to be this way. SharePoint's strongest feature isn't anything you can find in the Site Settings screen or Central Admin. It's all about empowering your users and letting them take control of their content. After all, SharePoint really is a bunch of tools to allow users to collaborate on content, isn't it? So why are you stepping in as IT and helping the user every moment along the way? It's like having to ask users to fill out a help desk ticket or call up the Windows team to create a folder on their desktop or rearrange their Start menu. This isn't something IT should be spending their time doing, nor is it something the users should be burdened with, having to wait until their friendly neighborhood tech-guy (or gal) shows up to help them sort the icons on their desktop. SharePoint IS all about empowerment. Site owners can create whatever lists and libraries they need for their team, and if the template isn't there they can always turn to my friend and yours, the Custom List. From that can spew forth approval tracking systems, new hire checklists, and server inventory. You're only limited by your imagination and needs. Users should be able to create new sites as they need. Want a blog to let everyone know what your team is up to? Go create one, here's how. What's a blog you ask? Here's what it is and why you would use one. SharePoint is a shift in the balance of power, and you need, as an IT group, to let go of certain responsibilities and let your users run with the tools. A power user who knows how to create sites and what features are available to them can help a team go from the forming stage to the storming stage overnight. Again, this all hinges on you as an IT organization and what features you can and do empower your users with. Running with tools is great if you know how to use them; running with scissors is not recommended unless you enjoy trips to the hospital. With Great Power comes Great Responsibility, so don't go out on Monday and send out a memo to the organization saying "This Bil guy says you peeps can do anything so here it is, knock yourself out" (for one, they'll have *no* idea who this Bil guy is). This advice comes with the task of getting your users ready for empowerment.
    Whether it's through internal training sessions; in-house documentation, videos and blog posts on how to accomplish things in SharePoint; or full-blown one-on-one sit-downs with teams or individuals to help them through their problems, the work is up to you. Helping them along should also be part of your governance (you do have one, don't you?). Just because you have the InfoPath client deployed with your Office suite doesn't mean users should just start publishing forms all over your SharePoint farm. There should be some governance behind that, covering what you'll support and what is possible. The other caveat to all this is that SharePoint is not everything for everyone. It can't cook you breakfast and impregnate your cat or solve world hunger. It also isn't suited for every IT solution out there. It's a horrible source control system (even though some people try to use it as such) and really can't do financials worth a darn. Again, governance is key here, and part of that governance, and of your responsibility in setting up and unleashing SharePoint into your organization, is to provide users with guidance on what should be in SharePoint and (more importantly) what should not be in SharePoint. There are boundaries you have to set where you don't want your end users going, as they might be treading into trouble. Again, it is up to you to set these constraints and help users understand why these pylons are there. If someone understands why they can't do something, they might have a better understanding of and respect for those that put them there in the first place. Of course you'll always have the power users who want to go skiing down dead man's curve, so this doesn't work for everyone, but you can catch the majority of the newbs who don't wander aimlessly off the beaten path. At the end of the day, when all things are going swimmingly, your end users should be empowered to solve the needs they have on a day-to-day basis and not have to keep bugging the IT department to help them create a view to show only approved documents. I wouldn't go as far as having business users build out full-blown solutions, and handing the keys to SharePoint Designer or (worse) Visual Studio to power users might not be a path you want to go down, but you also don't have to lock up the SharePoint system in a tight box where users can't use what's there. So stop focusing on the shiny things in SharePoint and maybe consider making a shift to what's really important: making your day job easier and letting users get the most out of your technology investment.

    Read the article

  • BI Publisher at Collaborate 2010

    - by mike.donohue
    Noelle and I are heading to Collaborate 2010 next week. There are over two dozen sessions on BI Publisher including a Hands On Lab (see below). Very excited to see what our customers and partners will be presenting and how they are using BI Publisher to get better reports and reduce costs. My only regret is that many sessions are scheduled at the same time so I won't get to see all of them. Noelle and I will be presenting the following: Monday, April 19 2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher Session: 227 Location: Reef F By: Mike 2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher Session: 73170 Location: South Seas Ballroom J By: Noelle 3:45 pm - 4:45 pm Oracle Business Intelligence Publisher Hands On Lab (1) Session: 217 Location: Palm D By: Noelle and Mike Tuesday, April 20 8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices Session: 218 Location: Palm D By: Noelle and Mike We will also be at the BI Technology demo pod in the exhibt hall so please stop by and say hello. All BI Publisher related Sessions Sunday, April 18 2:00 pm - 2:50 pm Customizing your Invoices in a Flash! 3:00 pm - 3:50 pm BI Publisher SIG Meeting - Part 1 4:00 pm - 4:50 pm BI Publisher SIG Meeting - Part 2 Monday, April 19 8:00 am - 9:00 am XML Publisher and FSG for Beginners 2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher 2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher 2:30 pm - 3:30 pm Bay Ballroom A What it Takes to Make Your Business Intelligence Implementation a Success 2:30 pm - 3:30 pm XML Publisher-More Than Just Form Letters 3:45 pm - 4:45 pm JD Edwards EnterpriseOne Reporting and Batch Discussions presented by Technology SIG 3:45 pm - 4:45 pm Hands On Lab: Oracle Business Intelligence Publisher (1) Tuesday, April 20 8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices 8:00 am - 9:00 am Creating XML Publisher Documents with PeopleCode 10:30 am - 11:30 am Moving to BI Publisher, Now What? Automated Fax and Email from Oracle EBS 2:00 pm - 3:00 pm Smart Reporting in Oracle Financials Release 12.1 2:00 pm - 3:00 pm Custom Check Printing Framework using XML Publisher 2:00 pm - 3:00 pm BI Publisher and Oracle BI for JD Edwards Wednesday, April 21 8:00 am - 9:00 am XML Publisher Tips for PeopleTools 10:30 am - 11:30 am JD Edwards World - Technical Upgrade Considerations 10:30 am - 11:30 am Data Visualization Best Practices: Know how to design and improve your BI & EPM reports, dashboards, and queries 10:30 am - 11:30 am Oracle BIEE End-to-End 1:00 pm - 2:00 pm Empower JD Edwards Users with Oracle BI Publisher for Ad Hoc Reporting 1:00 pm - 2:00 pm BIP and JD Edwards World - Good Stuff! 2:15 pm - 3:15 pm Proven Strategies for Increasing ROI with PeopleSoft HCM 4:00 pm - 5:00 pm Using Oracle BI Delivers to Send Reports to JD Edwards Users Thursday, April 22 9:45 am - 10:45 am PeopleSoft Recruiting Enhancements You Can Use 9:45 am - 10:45 am Reducing Cost with Oracle's BI Publisher Note (1) the Hands On Lab was not showing in the joint scheduler as of this posting but, it is definitely ON.

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worst for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to the this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of The AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adaptor was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being schedule for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the.NET Accelerator is required to migrate from the Aqualogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as a child to the WSRP producer web application as root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment are the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to as a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the.NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
In this manner, the WSRP Producer acts more as a Request Filter than a proxy in the WSRP transactions between Producer and Consumer. Highly Recommended Enabling Logging Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure(); IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations. Things You Must Do Merge Web.Config Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention J Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge required settings from the existing Web.Config to the one in the WSRP Producer. Use the WSRP Producer Master Page The Master Page installed for the WSRP Producer provides common, hiddenform fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master" . You then replace: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> With <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server"> And </HEAD> <body> <form id="theForm" method="post" runat="server"> With </asp:Content> <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server"> And finally </form> </body> </HTML> With </asp:Content> In the event you already use Master Pages, adapt your existing Master Pages to be sub masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this. It Happened to Me, It Might Happen to You…Or Not Watch for Use of Session or Request in OnInit In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request there may be direct or indirect references in the OnInit method to request or session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet then check for potential references in the OnInit method (including references by methods called within OnInit) to session or request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) is fully available. I find doing this at the start of the Page_Load method to be the simplest solution. Case Sensitivity .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these mark up instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. 
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup and match the case in use with the duplicate rule. For example: <RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> May need to be duplicated as: <RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule> While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenous to test and maintain, so the recommendation is to use duplicate rules. Is it Still Relative? Some .NET applications base relative paths with a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets and many ../ paths will need another ../ in front. I Can See You But I Can’t Find You This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to simply update the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted: Error      155         The name 'form1' does not exist in the current context                C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens \Reporting.aspx.cs                51           52           legacywebsite.module Much hair-pulling research later it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn’t work quite as expected once a Master Page has been introduced as the controls become embedded in controls, require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this: ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles)); Becomes this: ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles)); Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a MasterPage has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely: public static Control FindControlRecursive(Control Root, string Id) { if (Root.ID == Id) return Root; foreach (Control Ctl in Root.Controls) { Control FoundCtl = FindControlRecursive(Ctl, Id); if (FoundCtl != null) return FoundCtl; } return null; } Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary. 
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev

Tests:
1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port Used by the WSRP Producer

The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?

Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • Book Review: Pro SQL Server 2008 Relational Database Design and Implementation

    - by Alexander Kuznetsov
    Investing in proper database design is a very efficient way to cut maintenance costs. If we expect a system to last, we need to make sure it has a good solid foundation - high quality database design. Surely we can and sometimes do cut corners and save on database design to get things done faster. Unfortunately, such corner-cutting frequently comes back to bite us: we may end up spending a lot of time solving issues caused by poor design. So, a solid understanding of relational database design is...(read more)

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer significant performance improvements for typical OLTP workloads. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory without losing even a single record. The most significant part is that they still support the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember

    Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.

    Disk-based tables refer to the normal tables we have always created in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.

    Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which compiles them into machine code that can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.

    Interpreted Transact-SQL stored procedures are what SQL Server has always used.

    Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.

    Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; hence they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

CREATE DATABASE InMemoryDB
ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB),
FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
    (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', size=500MB)
COLLATE Latin1_General_100_BIN2;

The example code above creates files on three different drives (D:, S:, and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations as per your convenience. (The second file in the memory-optimized filegroup has been given a distinct logical name, since logical file names must be unique within a database.) Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables.

It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the command below to achieve the same.

ALTER DATABASE AdventureWorks2012
ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
TO FILEGROUP hekaton_mod;
GO

Creating Tables

There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example.

DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY=SCHEMA_AND_DATA ensures that data is persisted along with the schema.

Indexing Memory-Optimized Tables

A memory-optimized table must always have an index when created with DURABILITY=SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time the table is created. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.

CREATE TABLE Mem_Table
(
    [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    [City] VARCHAR(32) NULL,
    [State_Province] VARCHAR(32) NULL,
    [LastModified] DATETIME NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is treated as a memory-optimized table and not just a normal table, and we have also used the DURABILITY clause SCHEMA_AND_DATA, which means it will persist the data along with the metadata. You can also notice that this table has a PRIMARY KEY declared up front, which is likewise mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage in memory-optimized tables, so stay tuned for that as well.

Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains of memory-optimized tables. 
I will be using the database created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

USE InMemoryDB
GO
-- Creating a disk-based table
CREATE TABLE dbo.Disktable
(
    Id INT IDENTITY,
    Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Creating a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records into dbo.Disktable and observe the time taken
DECLARE @i_t bigint
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Do the same inserts for the memory-optimized table dbo.Memorytable_durable and observe the time taken
DECLARE @i_t bigint
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Now finally do the same inserts for the memory-optimized table dbo.Memorytable_nondurable and observe the time taken
DECLARE @i_t bigint
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO

The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with durability SCHEMA_ONLY is even faster, as it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
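The article refers to natively compiled stored procedures several times without showing one. As a rough sketch only (this code is not from the original post, and the procedure name is invented for illustration), the standard SQL Server 2014 syntax for a natively compiled procedure against the dbo.Memorytable_durable table created in the demo above would look something like this:

CREATE PROCEDURE dbo.usp_InsertMemorytable_durable
    @Id INT,
    @Name CHAR(40)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Single-row insert into the memory-optimized table; the whole procedure is compiled to machine code at CREATE time
    INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
END
GO
-- Example call
EXEC dbo.usp_InsertMemorytable_durable @Id = 100001, @Name = 'sachin100001';

As noted in the article, natively compiled procedures may only reference memory-optimized tables, which is why this sketch targets dbo.Memorytable_durable rather than dbo.Disktable.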

    Read the article

  • MacBook Pro Late 2009 SATA Resets, Slowness

    - by A Student at a University
    My MacBook Pro runs slower the longer it's on. I am getting kernel warnings. The resets correlate with AC power connects and disconnects. I don't know if the warnings do. (How do I tell?) Are these bus CRC errors? Or something else? Can this damage the drive or corrupt data? What is it seeing that motivates these? 02:37:16 :[ 0.791992] ahci 0000:00:0b.0: PCI INT A -> Link[LSI0] -> GSI 20 (level, low) -> IRQ 20 02:37:16 :[ 0.792053] ahci 0000:00:0b.0: controller can't do PMP, turning off CAP_PMP 02:37:16 :[ 0.792104] ahci 0000:00:0b.0: AHCI 0001.0200 32 slots 6 ports 1.5 Gbps 0x3 impl IDE mode 02:37:16 :[ 0.792107] ahci 0000:00:0b.0: flags: 64bit ncq sntf pm led pio slum part boh 02:37:16 :[ 0.813473] scsi0 : ahci 02:37:16 :[ 0.823340] scsi1 : ahci 02:37:16 :[ 0.848164] ata1: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484100 irq 43 02:37:16 :[ 0.848166] ata2: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484180 irq 43 02:37:16 :[ 1.190132] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.190153] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.213568] ata1.00: ATA-8: OCZ-VERTEX2, 1.23, max UDMA/133 02:37:16 :[ 1.213572] ata1.00: 195371568 sectors, multi 1: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.227293] ata2.00: ATA-8: ST9500420ASG, 0002SDM1, max UDMA/133 02:37:16 :[ 1.227297] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.229570] ata2.00: configured for UDMA/133 02:37:16 :[ 1.240133] ata2: hard resetting link 02:37:16 :[ 1.260738] ata1.00: configured for UDMA/133 02:37:16 :[ 1.280122] ata1: hard resetting link 02:37:16 :[ 1.470125] usb 2-5: new high speed USB device using ehci_hcd and address 3 02:37:16 :[ 1.550165] firewire_core: created device fw0: GUID 58b035fffea99f5c, S800 02:37:16 :[ 1.631306] Initializing USB Mass Storage driver... 02:37:16 :[ 1.631392] scsi6 : usb-storage 2-5:1.0 02:37:16 :[ 1.631454] usbcore: registered new interface driver usb-storage 02:37:16 :[ 1.631455] USB Mass Storage support registered. 
02:37:16 :[ 1.960128] usb 4-1: new full speed USB device using ohci_hcd and address 2 02:37:16 :[ 1.990101] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.994215] ata2.00: configured for UDMA/133 02:37:16 :[ 1.994220] ata2: EH complete 02:37:16 :[ 2.030097] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 2.090773] ata1.00: configured for UDMA/133 02:37:16 :[ 2.090778] ata1: EH complete 02:37:16 :[ 2.090931] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX2 1.23 PQ: 0 ANSI: 5 02:37:16 :[ 2.091045] sd 0:0:0:0: Attached scsi generic sg0 type 0 02:37:16 :[ 2.091121] sd 0:0:0:0: [sda] 195371568 512-byte logical blocks: (100 GB/93.1 GiB) 02:37:16 :[ 2.091159] scsi 1:0:0:0: Direct-Access ATA ST9500420ASG 0002 PQ: 0 ANSI: 5 02:37:16 :[ 2.091163] sd 0:0:0:0: [sda] Write Protect is off 02:37:16 :[ 2.091183] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091252] sd 1:0:0:0: Attached scsi generic sg1 type 0 02:37:16 :[ 2.091337] sda: 02:37:16 :[ 2.091446] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB) 02:37:16 :[ 2.091580] sd 1:0:0:0: [sdb] Write Protect is off 02:37:16 :[ 2.091637] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091756] sdb: sda1 sda2 02:37:16 :[ 2.093140] sd 0:0:0:0: [sda] Attached SCSI disk 02:37:16 :[ 2.093505] sdb1 sdb2 sdb3 02:37:16 :[ 2.093773] sd 1:0:0:0: [sdb] Attached SCSI disk 02:37:16 :[ 2.693899] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) 02:37:16 :[ 5.483492] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro 02:37:16 :[ 7.905040] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null) 02:37:25 :[ 19.553095] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:25 :[ 19.555266] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:25 :[ 19.641533] ata1: hard resetting link 02:37:25 :[ 19.642084] ata2: hard resetting link 02:37:26 :[ 20.392606] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.392610] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.396697] ata2.00: configured for UDMA/133 02:37:26 :[ 20.396703] ata2: EH complete 02:37:26 :[ 20.451491] ata1.00: configured for UDMA/133 02:37:26 :[ 20.451498] ata1: EH complete 02:37:30 :[ 24.563725] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:30 :[ 24.565939] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:30 :[ 24.627246] ata1: hard resetting link 02:37:30 :[ 24.632250] ata2: hard resetting link 02:37:31 :[ 25.372582] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.382615] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.386782] ata2.00: configured for UDMA/133 02:37:31 :[ 25.386788] ata2: EH complete 02:37:31 :[ 25.431668] ata1.00: configured for UDMA/133 02:37:31 :[ 25.431674] ata1: EH complete 02:45:54 :[ 529.141844] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:55 :[ 529.544529] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:45:55 :[ 529.622561] ata1: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622583] ata1: hard resetting link 02:45:55 :[ 529.622609] ata2: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622624] ata2: hard resetting link 02:45:56 :[ 530.380135] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.380157] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.384305] ata2.00: configured for UDMA/133 02:45:56 :[ 530.384314] ata2: EH complete 02:45:56 :[ 530.399225] ata1.00: configured for UDMA/133 02:45:56 :[ 530.399233] ata1: EH complete 02:45:58 :[ 532.395990] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:45:58 :[ 532.518270] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:45:58 :[ 532.590983] ata1: hard resetting link 02:45:58 :[ 532.591045] ata2: hard resetting link 02:45:59 :[ 533.340147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.340168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.344416] ata2.00: configured for UDMA/133 02:45:59 :[ 533.344424] ata2: EH complete 02:45:59 :[ 533.360839] ata1.00: configured for UDMA/133 02:45:59 :[ 533.360847] ata1: EH complete 02:45:59 :[ 533.584449] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:59 :[ 533.586999] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:45:59 :[ 533.660132] ata2: hard resetting link 02:45:59 :[ 533.660151] ata1: hard resetting link 02:46:00 :[ 534.412536] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.412562] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.416768] ata2.00: configured for UDMA/133 02:46:00 :[ 534.416777] ata2: EH complete 02:46:00 :[ 534.431396] ata1.00: configured for UDMA/133 02:46:00 :[ 534.431401] ata1: EH complete 02:46:03 :[ 537.384649] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:03 :[ 537.504214] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:03 :[ 537.586002] ata1: hard resetting link 02:46:03 :[ 537.586036] ata2: hard resetting link 02:46:04 :[ 538.330147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.330168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.334389] ata2.00: configured for UDMA/133 02:46:04 :[ 538.334398] ata2: EH complete 02:46:04 :[ 538.343511] ata1.00: configured for UDMA/133 02:46:04 :[ 538.343519] ata1: EH complete 02:46:04 :[ 538.456413] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:04 :[ 538.459404] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:46:04 :[ 538.540138] ata1.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540159] ata1: hard resetting link 02:46:04 :[ 538.540202] ata2.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540220] ata2: hard resetting link 02:46:05 :[ 539.290054] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.290041] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.294100] ata2.00: configured for UDMA/100 02:46:05 :[ 539.294106] ata2: EH complete 02:46:05 :[ 539.314125] ata1.00: configured for UDMA/100 02:46:05 :[ 539.314132] ------------[ cut here ]------------ 02:46:05 :[ 539.314140] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:05 :[ 539.314144] Hardware name: MacBookPro5,3 02:46:05 :[ 539.314146] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:05 :[ 539.314221] Pid: 202, comm: scsi_eh_0 Tainted: P 2.6.35-25-generic #44-Ubuntu 02:46:05 :[ 539.314224] Call Trace: 02:46:05 :[ 539.314233] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:05 :[ 539.314237] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:05 :[ 539.314242] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:05 :[ 539.314246] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:05 :[ 539.314256] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:05 :[ 539.314261] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:05 :[ 539.314266] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:05 :[ 539.314270] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:05 :[ 539.314275] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:05 :[ 539.314280] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:05 :[ 539.314284] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:05 :[ 539.314288] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 02:46:05 :[ 539.314291] ---[ end trace 76dbffc2d5d49d9b ]--- 02:46:05 :[ 539.314296] ata1: EH complete 02:46:12 :[ 547.040117] ata1: hard resetting link 02:46:13 :[ 547.390144] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:13 :[ 547.408430] ata1.00: configured for UDMA/100 02:46:13 :[ 547.408438] ------------[ cut here ]------------ 02:46:13 :[ 547.408447] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:13 :[ 547.408451] Hardware name: MacBookPro5,3 02:46:13 :[ 547.408453] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:13 :[ 547.408528] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 02:46:13 :[ 547.408531] Call Trace: 02:46:13 :[ 547.408540] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:13 :[ 547.408544] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:13 :[ 547.408549] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:13 :[ 547.408553] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:13 :[ 547.408563] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:13 :[ 547.408567] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:13 :[ 547.408572] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:13 :[ 547.408577] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:13 :[ 547.408582] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:13 :[ 547.408587] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:13 :[ 547.408591] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:13 :[ 547.408595] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:13 :[ 547.408598] ---[ end trace 76dbffc2d5d49d9c ]--- 02:46:13 :[ 547.408620] ata1: EH complete 02:46:13 :[ 547.562470] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:13 :[ 547.671380] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:13 :[ 547.738198] ata1.00: limiting speed to UDMA/33:PIO4 02:46:13 :[ 547.738218] ata1: hard resetting link 02:46:13 :[ 547.738274] ata2: hard resetting link 02:46:14 :[ 548.482561] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.484083] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.486809] ata2.00: configured for UDMA/100 02:46:14 :[ 548.486818] ata2: EH complete 02:46:14 :[ 548.498998] ata1.00: configured for UDMA/33 02:46:14 :[ 548.499004] ata1: EH complete 02:46:18 :[ 552.410499] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:18 :[ 552.522521] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:18 :[ 552.529684] ata1: hard resetting link 02:46:18 :[ 552.529723] ata2: hard resetting link 02:46:19 :[ 553.280059] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.280068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.284141] ata2.00: configured for UDMA/100 02:46:19 :[ 553.284150] ata2: EH complete 02:46:19 :[ 553.301629] ata1.00: configured for UDMA/33 02:46:19 :[ 553.301637] ata1: EH complete 02:46:21 :[ 556.078830] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:21 :[ 556.180361] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:22 :[ 556.262612] ata1: hard resetting link 02:46:22 :[ 556.262617] ata2: hard resetting link 02:46:22 :[ 557.010050] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.010070] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.014069] ata2.00: configured for UDMA/100 02:46:22 :[ 557.014075] ata2: EH complete 02:46:22 :[ 557.023646] ata1.00: configured for UDMA/33 02:46:22 :[ 557.023654] ata1: EH complete 02:46:30 :[ 565.047438] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:30 :[ 565.051554] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:30 :[ 565.108332] ata1: hard resetting link 02:46:30 :[ 565.108389] ata2.00: limiting speed to UDMA/33:PIO4 02:46:30 :[ 565.108406] ata2: hard resetting link 02:46:31 :[ 565.850048] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.850068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.854304] ata2.00: configured for UDMA/33 02:46:31 :[ 565.854313] ata2: EH complete 02:46:31 :[ 565.868477] ata1.00: configured for UDMA/33 02:46:31 :[ 565.868485] ata1: EH complete 02:46:35 :[ 569.265469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:35 :[ 569.268139] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:35 :[ 569.340079] ata1: hard resetting link 02:46:35 :[ 569.340113] ata2: hard resetting link 02:46:35 :[ 570.092568] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.092589] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.096828] ata2.00: configured for UDMA/33 02:46:35 :[ 570.096837] ata2: EH complete 02:46:35 :[ 570.110727] ata1.00: configured for UDMA/33 02:46:35 :[ 570.110735] ata1: EH complete 02:47:04 :[ 598.528232] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:47:04 :[ 598.653973] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:47:04 :[ 598.730854] ata1: hard resetting link 02:47:04 :[ 598.730910] ata2: hard resetting link 02:47:05 :[ 599.480136] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.480159] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.484206] ata2.00: configured for UDMA/33 02:47:05 :[ 599.484213] ata2: EH complete 02:47:05 :[ 599.496699] ata1.00: configured for UDMA/33 02:47:05 :[ 599.496707] ata1: EH complete 04:45:59 :[ 7733.756548] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 04:45:59 :[ 7733.882748] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 04:45:59 :[ 7733.960142] ata1: hard resetting link 04:45:59 :[ 7733.960189] ata2: hard resetting link 04:46:00 :[ 7734.701926] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.719939] ata1.00: configured for UDMA/33 04:46:00 :[ 7734.719946] ata1: EH complete 04:46:00 :[ 7734.722547] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.726652] ata2.00: configured for UDMA/33 04:46:00 :[ 7734.726659] ata2: EH complete 04:46:02 :[ 7736.656465] ACPI: EC: GPE storm detected, transactions will use polling mode 13:38:49 :[39704.188621] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 13:38:49 :[39704.280588] EXT4-fs (dm-2): re-mounted. Opts: commit=600 13:38:49 :[39704.360819] ata1: hard resetting link 13:38:49 :[39704.360882] ata2: hard resetting link 13:38:50 :[39705.112956] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.114435] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.118673] ata2.00: configured for UDMA/33 13:38:50 :[39705.118682] ata2: EH complete 13:38:50 :[39705.127076] ata1.00: configured for UDMA/33 13:38:50 :[39705.127084] ata1: EH complete 13:39:49 :[39764.142463] applesmc: F1Mn: write arg fail 13:48:11 :[40267.025145] applesmc: FS! : read arg fail 13:52:53 :[40548.596735] applesmc: FS! : read arg fail 13:53:58 :[40613.972856] applesmc: FS! : read arg fail 13:54:08 :[40624.057339] applesmc: FS! : read arg fail 13:58:20 :[40875.397749] applesmc: TC0D: read data fail 14:16:56 :[41991.722054] applesmc: Th2H: read data fail 14:22:32 :[42327.991522] applesmc: light sensor data length set to 10 14:26:19 :[42554.788886] applesmc: F1Mn: write arg fail 14:32:36 :[42931.860443] applesmc: TC0F: read data fail 14:34:32 :[43048.041469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 14:34:33 :[43048.185850] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 14:34:33 :[43048.270184] ata1: hard resetting link 14:34:33 :[43048.270224] ata2: hard resetting link 14:34:33 :[43049.030049] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.030065] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.034106] ata2.00: configured for UDMA/33 14:34:33 :[43049.034112] ata2: EH complete 14:34:33 :[43049.056952] ata1.00: configured for UDMA/33 14:34:33 :[43049.056959] ------------[ cut here ]------------ 14:34:33 :[43049.056968] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:33 :[43049.056971] Hardware name: MacBookPro5,3 14:34:33 :[43049.056973] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:33 :[43049.057048] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:33 :[43049.057052] Call Trace: 14:34:33 :[43049.057060] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:33 :[43049.057064] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:33 :[43049.057069] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:33 :[43049.057074] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:33 :[43049.057083] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:33 :[43049.057088] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:33 :[43049.057093] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:33 :[43049.057097] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:33 :[43049.057102] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:33 :[43049.057107] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:33 :[43049.057111] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:33 :[43049.057115] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 14:34:33 :[43049.057118] ---[ end trace 76dbffc2d5d49d9d ]--- 14:34:33 :[43049.057123] ata1: EH complete 14:34:41 :[43057.012698] ata1: hard resetting link 14:34:42 :[43057.362780] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:42 :[43057.381432] ata1.00: configured for UDMA/33 14:34:42 :[43057.381441] ------------[ cut here ]------------ 14:34:42 :[43057.381450] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:42 :[43057.381453] Hardware name: MacBookPro5,3 14:34:42 :[43057.381455] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:42 :[43057.381530] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:42 :[43057.381533] Call Trace: 14:34:42 :[43057.381542] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:42 :[43057.381546] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:42 :[43057.381551] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:42 :[43057.381556] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:42 :[43057.381565] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:42 :[43057.381569] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:42 :[43057.381575] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:42 :[43057.381579] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:42 :[43057.381584] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:42 :[43057.381589] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:42 :[43057.381594] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:42 :[43057.381598] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 14:34:42 :[43057.381601] ---[ end trace 76dbffc2d5d49d9e ]--- 14:34:42 :[43057.381624] ata1: EH complete 14:34:42 :[43057.557887] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:42 :[43057.560517] EXT4-fs (dm-2): re-mounted. Opts: commit=600 14:34:42 :[43057.621194] ata1: hard resetting link 14:34:42 :[43057.621252] ata2: hard resetting link 14:34:43 :[43058.370141] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.370162] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.374407] ata2.00: configured for UDMA/33 14:34:43 :[43058.374415] ata2: EH complete 14:34:43 :[43058.381989] ata1.00: configured for UDMA/33 14:34:43 :[43058.381996] ata1: EH complete 14:34:43 :[43058.616228] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:43 :[43058.618931] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 14:34:43 :[43058.626687] ata1: hard resetting link 14:34:43 :[43058.626731] ata2: hard resetting link 14:34:44 :[43059.372908] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.372932] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.376997] ata2.00: configured for UDMA/33 14:34:44 :[43059.377003] ata2: EH complete 14:34:44 :[43059.392576] ata1.00: configured for UDMA/33 14:34:44 :[43059.392585] ata1: EH complete 15:48:19 :[47474.710860] ata1: hard resetting link 15:48:19 :[47474.710882] ata2: hard resetting link 15:48:20 :[47475.460144] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.460169] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.473709] ata1.00: configured for UDMA/33 15:48:20 :[47475.473717] ata1: EH complete 15:48:20 :[47475.727960] ata2.00: configured for UDMA/33 15:48:20 :[47475.727969] ata2: EH complete 16:29:39 :[49954.295017] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 16:29:39 :[49954.622307] EXT4-fs (dm-2): re-mounted. Opts: commit=0 16:29:39 :[49954.710139] ata1: hard resetting link 16:29:39 :[49954.710174] ata2: hard resetting link 16:29:40 :[49955.460046] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.460062] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.464138] ata2.00: configured for UDMA/33 16:29:40 :[49955.464144] ata2: EH complete 16:29:40 :[49955.473251] ata1.00: configured for UDMA/33 16:29:40 :[49955.473258] ata1: EH complete

    Read the article

  • Oracle UPK Customer Roundtable - Featuring Medtronic's Journey To Support Global Systems Implementat

    - by [email protected]
    Hear about Medtronic's journey of adopting Oracle UPK globally across their SAP, Siebel, and PeopleSoft applications. Register now for this free webinar! Thursday, April 29, 2010 -- 9:00 am PT. Medtronic's success story highlights how Oracle UPK improved workforce effectiveness, addressed compliance, and ensured end-user adoption. From starting out with a small group of developers using Oracle UPK to having 35 developers creating 18,000 topics, Oracle UPK has become part of Medtronic's learning infrastructure, with multi-language support, help menu integration, and much more.

    Read the article

  • Craftsmanship Tour: Day 3 & 4 8th Light

    - by Liam McLennan
    Thursday morning the Illinois public transport system came through for me again. I took the Metra train north from Union Station (which was seething with inbound commuters) to Prairie Crossing (Libertyville). At Prairie Crossing I met Paul and Justin from 8th Light and then Justin drove us to the office. The 8th Light office is in a small business park, in a semi-rural area, surrounded by ponds. Upstairs there are two spacious, open areas for developers. At one end of the floor is Doug Bradbury's walk-and-code station: a treadmill with a desk and computer so that a developer can get exercise at work. At the other end of the floor is a hammock. This irregular office furniture is indicative of the 8th Light philosophy, to pursue excellence without being limited by conventional wisdom. 8th Light have a wall covered in posters, each illustrating one person's software craftsmanship journey. The posters are a fascinating visualisation of the similarities and differences between each of our progressions. The first thing I did Thursday morning was to create my own poster and add it to the wall. Over two days at 8th Light I did some pairing with the 8th Lighters and we shared thoughts on software development. I am not accustomed to such a progressive and enlightened environment and I found the experience inspirational. At 8th Light TDD, clean code, pairing and kaizen are deeply ingrained in the culture. Friday, during lunch, 8th Light hosted a 'lunch and learn' event. Paul Pagel led us through a coding exercise using micro-pomodori. We worked in pairs, focusing on the pedagogy of pair programming and TDD. After lunch I recorded this interview with Paul Pagel and Justin Martin. We discussed 8th Light, craftsmanship, apprenticeships and the limelight framework. Interview with Paul Pagel and Justin Martin My time at Didit, Obtiva and 8th Light has convinced me that I need to give up some of my independence and go back to working in a team. Craftsmen advance their skills by learning from each other, and I can't do that working at home by myself. The challenge is finding the right team, and becoming a part of it.

    Read the article

  • Upcoming UPK Events

    - by kathryn.lustenberger(at)oracle.com
    February 15th: UPK: Follow Panduit's Lead and Leverage Oracle's User Productivity Kit To Achieve Your Goals - Join us for a live webcast to learn how Oracle's User Productivity Kit can help you meet and exceed your goals. The webcast will feature Jim Boss, from the Panduit Corporation, who will share how Oracle's User Productivity Kit was used with both Oracle and non-Oracle applications to help Panduit meet their goals. Date: February 15th, 2011 at 12:00 PST / 3:00 EST. Evite: http://www.oracle.com/us/dm/65630-naod10046029mpp005c010-se-300908.html

    March 2nd: Synaptis teams with Oracle to deliver a UPK customer success story - Webinar Offering The Value of UPK (Customer Success Story): How to leverage the value of UPK to streamline processes and maximize end user adoption for a global implementation. Join us to learn how the power of UPK can be leveraged to train end users globally in a successful and cost effective manner. A valued Oracle UPK customer will share experiences, successes, challenges, and strategies. The webinar will also include a question and answer session to give the attendees an opportunity to interact directly with the Oracle UPK customer, Synaptis, and the Oracle UPK Team. Date: March 2, 2011. Time: 11:00am - 12:00pm EST. Register for this webinar

    March 27 - 30th: The Alliance 2011 conference is an annual event for all higher education, government, and public sector users of Oracle applications. The Alliance conference is organized and managed by the Higher Education User Group (www.heug.org). This is the 14th annual event for the HEUG. This is your opportunity to join with over 3200 other Higher Education, Federal, State and Local Government users to network, learn and share in our amazing combined experiences. The Alliance conference team is hard at work, putting together the best conference ever for 2011 - so don't delay, make your plans now to be part of Alliance 2011! When: Sunday, March 27th, 2011 - Wednesday, March 30, 2011. Where: The Colorado Convention Center (Denver, Colorado). Registration for Alliance 2011 is Now Open! UPK will be represented at this event, offering: Pre-Conference Training (Learn the Basics of Oracle User Productivity Kit (UPK); Taking Your UPKs to a Whole New Level, Advanced Use of UPK), Demo Pod Staff, and Sessions (Oracle User Productivity Kit: Creating Value throughout the Project Lifecycle; Beyond Basic UPK -- User Tracking and SmartHelp; Leveraging Oracle and User Productivity Kit (UPK) to Develop a Comprehensive Training Program; Oracle User Productivity Kit Strategy and Roadmap -- Key to User Adoption).

    April 10 - 14th: Registration for COLLABORATE 11 has begun - Don't miss the most comprehensive, user-driven conference devoted to Oracle applications and technology. Collaborate with a global network of more than 5,000 peers and experts to share real-world experiences, solve your challenges and gain insights to validate your technology plans. Read below to discover which group to register with for the best value. UPK will be represented at this event, offering: Demo Pod Staff and Sessions (Oracle User Productivity Kit: Creating Value throughout the Project Lifecycle; Centralize all Project Team assets, AND, Deploy Fully Measurable Training with UPK Pro; Oracle User Productivity Kit Strategy and Roadmap - Key to User Adoption). Registration is Now Open!

    Read the article

  • Revision 2012: recap of the biggest demoparty, report by LittleWhite

    Hello. As it does every year during the Easter weekend, hundreds of enthusiasts gathered in a gymnasium, PC under one arm, to attend the biggest demoscene event: Revision. Here is a short look back at the event and a recap of the demos produced for the occasion and shown on the giant screen: http://alexandre-laurent.developpez....revision-2012/ Feel free to discuss with the author of the 64k demo: F Felix's Workshop...

    Read the article

  • Find a Hash Collision, Win $100

    - by Mike C
    Margarity Kerns recently published a very nice article at SQL Server Central on using hash functions to detect changes in rows during the data warehouse load ETL process. On the discussion page for the article I noticed a lot of the same old arguments against using hash functions to detect change. After having this same discussion several times over the past several months in public and private forums, I've decided to see if we can't put this argument to rest for a while. To that end I'm going to...(read more)
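    For readers new to the technique being debated, here is a minimal, hypothetical sketch (not taken from the article; the table, column, and schema names are invented) of hash-based change detection during a warehouse load, using SQL Server's HASHBYTES function:

    -- Update only the dimension rows whose recomputed hash differs from the stored hash
    UPDATE tgt
    SET    tgt.CustomerName = src.CustomerName,
           tgt.City         = src.City,
           tgt.RowHash      = HASHBYTES('SHA1', ISNULL(src.CustomerName, '') + '|' + ISNULL(src.City, ''))
    FROM   dbo.DimCustomer AS tgt
    INNER JOIN staging.Customer AS src
           ON src.CustomerKey = tgt.CustomerKey
    WHERE  tgt.RowHash <> HASHBYTES('SHA1', ISNULL(src.CustomerName, '') + '|' + ISNULL(src.City, ''));

    The objection discussed in the post is that two different rows could, in principle, hash to the same value and a real change would then be missed, which is presumably the collision the title's $100 challenge refers to.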

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
    An hour of small computing time: $0.12 per hour
    A gig of storage in the cloud: $0.15 per hour
    1 gig of relational database using Azure SQL: $9.99 per month
    A Visual Studio Professional with MSDN Premium account: $2,500 per year
    Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!!

    But was it really free? Hmmm… Let's see. Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week...

    1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small computing time per month, which should be enough for one instance of one application deployed in the cloud... So we're cool, the application you run in the cloud doesn't cost you a penny.... BUT the one that's in staging is still consuming time!!! So if you don't want to end up having to pay $42 at the end of the month on your credit card, as happened to a friend of mine, DELETE them staging applications once you've put them in production! This also applies to the instance count you can modify in the configuration file… So stop and think before you decide you want to spawn 50 of those hello world apps.

    2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use the Azure credits to explore the cloud! But be aware, this promotion ends in 8 months (maybe more like 7 now) and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you'll get billed for the extra hours, believe me… There is a switch you can toggle that will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account, and remember, you entered your credit card information in the registration process, so, yes, you WILL be billed… Go disable that ASAP. Log into your account, go to "Windows Azure Platform", then click the "Subscriptions" tab; on the right side you'll see a drop down with different "Actions" in it… Choose "Opt out of auto renew" and NOW you're safe…

    Still, this is a great offer by Microsoft and I think everyone that has a chance should play a bit with Azure to get to know this technology a bit more... Happy Cloud Computing All

    Read the article

  • Discoverer 11g (11.1.1.2) Certified with E-Business Suite

    - by Steven Chan
    Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end-users to work directly with Oracle E-Business Suite OLTP data. Discoverer 11g (11.1.1.2) is now certified with Oracle E-Business Suite Release 11i and 12. The latest release of Oracle Business Intelligence Discoverer 11g offers new functionality, including integration with Oracle Business Intelligence Enterprise Edition (OBIEE), published Discoverer Webservice APIs, integration with Oracle WebCenter, integration with Oracle WebLogic Server, integration with Enterprise Manager (Fusion Middleware Control) and improved performance and scalability.

    Read the article

  • How can I respond to mouse events in AS3?

    - by Gabriel Meono
    Background: Trying to make a simple "drop the ball" game. The code is located inside the first frame of the timeline. Nothing more is on the stage.

    Issue: Using QuickBox2D I made a simple if statement that drops an object according to the mouse-x position:

    if (MouseEvent.CLICK) { sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});

    I imported the MouseEvent library: import flash.events.MouseEvent;

    Nothing happens if I click, and there are no output errors either. See it in action:
    http://gabrielmeono.com/download/Lucky_Hit_Alpha.swf
    http://gabrielmeono.com/download/Lucky_Hit_Alpha.fla

    Full Code:

    [SWF(width = 350, height = 600, frameRate = 60)]
    import com.actionsnippet.qbox.*;
    import flash.events.MouseEvent;

    var sim:QuickBox2D = new QuickBox2D(this);
    sim.createStageWalls();
    //var ball:sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});
    //
    // make a heavy circle
    sim.addCircle({x:3, y:1, radius:0.25, density:5});
    sim.addCircle({x:2, y:1, radius:0.25, density:5});
    sim.addCircle({x:4, y:1, radius:0.25, density:5});
    sim.addCircle({x:5, y:1, radius:0.25, density:5});
    sim.addCircle({x:6, y:1, radius:0.25, density:5});
    // create a few platforms
    sim.addBox({x:3, y:2, width:4, height:0.2, density:0, angle:0.1});
    // make 26 dominoes
    for (var i:int = 0; i<7; i++){
        //End
        sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
        sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
        //Mid end
        sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:10, radius:0.1, density:0});
        //Middle Start
        sim.addCircle({x:0 + i * 1.5, y:09, radius:0.1, density:0});
        sim.addCircle({x:1 + i * 1.5, y:08, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 1.5, y:07, radius:0.1, density:0});
        sim.addCircle({x:1 + i * 1.5, y:06, radius:0.1, density:0});
    }

    if (MouseEvent.CLICK) {
        sim.addCircle({x:mouseX, y:1, radius:0.25, density:5});
        sim.start();
        /*sim.mouseDrag();*/
    }

    Read the article

  • New Netra SPARC T3 Servers

    - by Ferhat Hatay
    Today at the Mobile World Congress 2011, Oracle announced two new carrier-grade, NEBS Level 3-certified servers: Oracle's Netra SPARC T3-1 rackmount server and Oracle's Netra SPARC T3-1BA ATCA blade server, bringing the performance, scalability and power efficiency of the newest SPARC T3 processor to the communications market.

    The Netra SPARC T3-1 server enclosure has a compact 20-inch-deep, carrier-grade, rack-optimized design.

    The new Netra SPARC T3 servers further expand Oracle's complete portfolio for the communications industry, which includes carrier-grade servers, storage and application software to run operations support systems and service delivery platforms with easy migration capabilities and unmatched investment protection via the binary compatibility guarantee of the Oracle Solaris operating system. With advanced reliability, networking and security features built into Oracle Solaris – the most widely deployed carrier-grade OS – the systems announced today are uniquely suited for mission-critical core network infrastructure and service delivery. The world's first carrier-grade system using the 16-core, 128-thread SPARC T3 processor, the Netra SPARC T3-1 server supports 2x the I/O bandwidth, 2x the memory and is 35 percent faster than the previous generation. With integrated on-chip 10 Gigabit Ethernet, on-chip cryptographic acceleration, and built-in, no-cost Oracle VM Server for SPARC and Oracle Solaris Containers for virtualization, the Netra SPARC T3-1 server is an ideal platform for consolidation, offering 128 virtual systems in a single server. As the next-generation Netra SPARC ATCA blade, the Netra SPARC T3-1BA ATCA blade server brings PICMG 3.0 compatibility, NEBS Level 3 certification, ETSI compliance and the Netra business practices to the customer solution. The Netra SPARC T3-1BA ATCA blade server can be mixed in the Sun Netra CT900 blade chassis with other ATCA UltraSPARC and x86 blades.

    The Netra SPARC T3-1BA ATCA blade server

    The Netra SPARC T3-1BA ATCA blade server delivers industry-leading scalability, density and cost efficiency with up to 36 SPARC T3 processors (3456 processing threads) in a single rack – a 50 percent increase over the previous generation. The Netra SPARC T3-1BA blade server also offers high-bandwidth and high-capacity I/O, with greater memory capacity to tackle the increasing business demands of the communications industry. For service providers faced with the rapid growth of broadband networks and the dramatic surge in global smartphone adoption, the new Netra SPARC T3 systems deliver continuous availability with massive scalability, tested and certified to run in the harshest conditions.

    More information
    Oracle's Sun Netra Servers
    Scaling Throughput and Managing TCO with Oracle's Netra SPARC T3-1 Servers
    Enabling End-to-End 10 Gigabit Ethernet in Oracle's Sun Netra ATCA Product Family
    Data Sheet: Netra SPARC T3-1BA ATCA Blade Server
    Data Sheet: Netra SPARC T3-1 Server
    Oracle Solaris: The Carrier Grade Operating System

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
     Originally posted by John Brunswick.

    Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership metadata, success criteria and time stamp that were applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of irrelevant information and streamlining users' ability to browse and search for content.

    1. Act on Metrics Established during the Request Phase

    Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion.

    How - Thankfully, by adhering to The Minimalist Approach to Content Governance our content should have some level of metadata associated with it that will allow us to quickly sort and understand how to deal with it. Content Management Systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content metadata like folders, author, expiration date, security settings and custom metadata to pull back listings of content for examination. Additionally, analytics are available for all content items that allow us to determine if the usage is meeting the success criteria that may have been previously outlined during the Request Phase. The lists produced from these approaches can be quickly reviewed for each project with the content owners and, based on the nature of the content and the success criteria, the content can undergo archiving or deletion.

    Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository, users will be able to quickly find the information that is critical to their work processes. You might not get a holiday named in your honor for managing the content system, but users will appreciate their quick access to quality information.

    Read the article
