Search Results


  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together.

    Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev). This is a great example of poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

        CREATE TABLE [dbo].[Page](
            [PageId] [int] IDENTITY(1,1) NOT NULL,
            [Title] [varchar](75) NOT NULL,
            [Sequence] [int] NOT NULL,
            [ThemeId] [int] NOT NULL,
            [CustomCss] [text] NOT NULL,
            [CustomScript] [text] NOT NULL,
            [PageGroupId] [int] NOT NULL
        )

        CREATE PROCEDURE PageSelectBySequence
        (
            @sequenceMin smallint,
            @sequenceMax smallint
        )
        AS
        BEGIN
            SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
            FROM [CMS].[dbo].[Page]
            WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
        END

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy/Plamen Ratchev). There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins, but more likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

        SELECT Page.Title
            , Page.Sequence
            , Page.ThemeId
            , Page.CustomCss
            , Page.CustomScript
            , PageEffectParam.Name
            , PageEffectParam.Value
            , ( SELECT EffectName
                FROM dbo.Effect
                WHERE Effect.EffectId = PageEffect.EffectId ) AS EffectName
        FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
        INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

    This can and should be written as:

        SELECT Page.Title
            , Page.Sequence
            , Page.ThemeId
            , Page.CustomCss
            , Page.CustomScript
            , PageEffectParam.Name
            , PageEffectParam.Value
            , Effect.EffectName
        FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
        INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
        INNER JOIN dbo.Effect ON Effect.EffectId = PageEffect.EffectId

    The correlated query may just as easily show up in the WHERE clause; it's not a good idea in either the SELECT clause or the WHERE clause.

    Few or no comments. This one is a bit more complicated and controversial. Not all comments are created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, while comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model.

    Use of functions in a WHERE clause (Anil Das). Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance of one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
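    To make that last point concrete, here is a minimal sketch of the pattern against the Page table above (the TitleUpper column and index name are hypothetical, added purely for illustration):

        -- The function call hides the column from any index on Title,
        -- so the predicate must be evaluated for every row:
        SELECT PageId, Title FROM dbo.Page WHERE UPPER(Title) = 'HOME'

        -- One possible fix: persist the computed value and index it,
        -- so the work is done once at write time instead of on every read:
        ALTER TABLE dbo.Page ADD TitleUpper AS UPPER(Title) PERSISTED
        CREATE INDEX IX_Page_TitleUpper ON dbo.Page (TitleUpper)

        SELECT PageId, Title FROM dbo.Page WHERE TitleUpper = 'HOME'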

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I've been struggling with an interesting problem with WCF REST since last night, and I haven't been able to track it down. I have a WCF REST Service set up, and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel:

        Server Error in '/AspNetClient' Application.
        Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace:
        [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]
        System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0
        System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95
        System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54
        System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65
        System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69
        System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38
        System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13
        System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19
        System.Web.HttpApplication.GetFactory(String type) +81
        System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    What's really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />

    Arrrgh!!!!! After some experimenting and some help from Glenn Block and his teammates, I was able to track down the problem in ApplicationHost.config. Specifically, the problem was that there were multiple *.svc mappings in the ApplicationHost.config file, and the older 2.0-runtime-specific versions weren't marked for the proper runtime. Because these handlers show up at the top of the list, they execute first, resulting in assembly load errors for the wrong version of the assembly. To fix this problem I ended up making a couple of changes in applicationhost.config.
    In the machine-level root's handler mappings I had an entry that looked like this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" />

    and it needs to be changed to this:

        <add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />

    Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The runtime version needs to be set explicitly so that the various *.svc handlers don't simply fire in the order defined, which, in the case of a .NET 4.0 app with the original setting, would result in an incompatible version of System.ServiceModel being loaded. What made this really hard to track down is that even when watching the debugger while launching the Web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point, and that's when things break. So how did this break? According to the Microsoft folks, it's some older tools that got installed that change the default service handlers. There's a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in AJAX  ASP.NET  WCF
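    As an aside, if you need to inspect those mappings without hand-reading applicationhost.config, IIS's appcmd tool can dump the handlers section - a sketch, run from an elevated command prompt:

        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/handlers

    Searching that output for "svc" shows each *.svc mapping along with its preCondition attribute, which makes the entries missing the runtimeVersionv2.0 condition easy to spot.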

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world, and I always like to explore beyond what I know and share the learning. A weekend is the best time to sit around and download random software onto the machine I like to call my lab machine (it is a pretty old laptop, hardly lab quality) and experiment with it. There are so many free betas available for download that it's hard to keep track, and even harder to find the time to play with very many of them. This blog is about one you shouldn't miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7. I had already downloaded their Beta 6 and yesterday did the same for 7. My impression is that they are onto something very, very interesting. In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world's first "emergent" database, which they tout as a brand new transactional database that is intended to dramatically change what's possible with OLTP. It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting note for sure, making me explore more. Based on what I've seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements and the traditional relational database management system's architecture of central control. Here's my experience with the NuoDB Beta 6 so far: First, they pretty much threw away all the features you'd associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep. It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user's perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One interesting feature is that I can run a transactional node and a storage node on my Windows laptop as well as on other platforms - indeed interesting for sure. It's quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when/where you need it. This is unlike what we have traditionally done to scale a database for an application - replace the hardware with something more powerful (faster CPU and disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we've never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner - while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is all hard to believe until you've experienced it in action. I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think. Before I say anything more, I am going to do more experiments and more tests on this product and compare it with other existing similar products. For me it was a weekend well spent learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen. One of the challenges with today's servers is getting the server up and running and understanding all of the steps once you plug it in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If this is not done correctly, such as separately downloading disk firmware or controller firmware that doesn't match the existing OS drivers, you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server, only to find out that newer versions of software and firmware are available on the web. Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built in to every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use. Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse, or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM on the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available for this specific server. This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drives) come as a single bundle, downloaded as a single unit, that has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to. Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SUSE Linux, and Windows. And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. If we can innovate around problems and find solutions to make our servers easier to manage, we reduce IT costs and make managing servers simpler. I think with Oracle System Assistant we have done just that. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the Dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the Dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.

    Workaround: ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or the Cube. User access can continue while the ProcessIndexes is underway.

    Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group (see the scoped batch sketch at the end of this post). This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.

    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.

    Things that did not work: ProcessClearIndexes - why this doesn't work and ProcessIndexes does is unclear at this point. ProcessFull on the partition in question - in my latest case, this would clear up the problem for that partition, but the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both SQL 2008 and SQL 2008 R2 Analysis Services environments, on separate builds from the same relational database. This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.
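    For the narrower route mentioned above, the same batch can be scoped to a single measure group by adding a MeasureGroupID to the object reference - a minimal sketch with placeholder IDs:

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process>
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
                <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>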

    Read the article

  • SPARC T5-8 Servers EMEA Acceleration Promotion for Partners

    - by mseika
    Dear all, We are pleased to announce the EMEA T5-8 Acceleration Promotion, a price promotion that, for a limited time, makes the T5-8 server available to our EMEA partners at a very attractive discount.

    Why the SPARC T5-8 server: Oracle's SPARC servers running Oracle Solaris are ideal for mission-critical applications requiring high performance, best-in-class availability, and unmatched scalability on all application tiers. SPARC servers include built-in virtualization, systems management, and security at no additional cost, and are designed for applications that demand the highest performance and 24x7 availability. Oracle's SPARC T5-8 server is the fastest and the most advanced, scalable midrange server in the Oracle portfolio. The Oracle SPARC T5-8 server is in the sweet spot of the UNIX midrange, directly competing with IBM P770(+) and P780(+) systems, with a 7x price advantage (see the official Oracle press release) over a similarly configured P780 system!

    What we are offering: Effective immediately, the fully-configured T5-8 server is available to VADs with a 38% discount off price list: this is 8 additional points on top of the standard 30% contractual discount. The promo will be communicated to VADs and VARs, and VADs are expected to pass the additional discount through to the VARs. Resellers will be encouraged to use this attractive price to position the T5-8 versus the competition, accelerate T5-8 sales, and use the increased margin to offer additional services to their end users - thus expanding their footprint within their customers and making the T5-8 business proposition even more compelling. This is a unique opportunity for partners to expand their base and beat the competition with a 7x price advantage over a similarly configured IBM P780. This price promotion is only available to OPN Partners, and is valid until November 30, 2013.

    What's in it for Partners: a more competitive price; more customer budget available for more projects (attach migration services, training, ...); the opportunity to attach Storage and additional Software; a higher win rate.

    Additional details: The promotion is valid for the existing configurations of the T5-8 with 8 CPUs and different memory configurations, including all X-options that are part of the system and ordered at the same time. 8% additional discount to the VAD on the full T5-8, including X-options: Cat V (30% + 8% additional): System, CPU, Memory, Disks, Ethernet; Cat U (22% + 8% additional): InfiniBand HCA; Cat W (30% + 8% additional): FC/SAS HBA / FCoE CNA.

    Partner eligibility criteria: Standard requirements apply. Partners must: be an OPN member in good standing, at Gold level or above; meet the Resale criteria in the SPARC T-Series servers Knowledge Zone; have a right to distribute hardware via the Full Use Distribution Agreement, with Hardware Addendum if applicable.

    Order process: The promotion is available until November 30, 2013. VADs place the order via Oracle Partner Store. A request for extra discount has to be raised in advance using the standard process for available configs: input the configuration, apply the suggested discounts, and submit the request. In the request documentation, please refer to EMEA T5-8 FY14H1 Channel Promotion as approved in GDMT GT-EB2-Q413-107C. This promotion is only valid for the T5-8 configurations stated in this announcement. Any change, or additional products/items not listed explicitly, can be ordered at the same time and will follow the standard approval process.

    Key contacts: Your local A&C organization. For questions on EMEA Partner Programs for Servers: Giuseppe Facchetti. For questions on the T5-8 product: Martin de Jong. Best regards, Olivier Tordo, Senior Director, Sales & Strategy, Hardware Solutions, EMEA Alliances & Channels; Paul Flannery, Senior Director, EMEA Servers Product Management

    Read the article


  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted, and it is not just a change in architecture, but in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single-core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level. State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache, and it might be some time before the value gets propagated. To get the latest value - and the notion of "latest" takes a lot of defining in this world of rapidly mutating state - the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level. I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years. Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have become more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is the tooling support that will enable the widespread adoption of functional languages - without this support, people will put off any transition into the functional world for as long as they possibly can, and these languages will remain hard to learn. One tool that doesn't currently support F# is Reflector. The idea of decompiling IL to a functional language is daunting, but F# is potentially so important that I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than to be clearer on how much was possible, and how long it might take. You can read what happened here, and of the insights it gave us on ways to improve the tool.
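    As a tiny illustration of that verbs-over-nouns mindset (my own generic example, not from the article): summing the squares of the even numbers in a list is written in F# as a pipeline of transformations with no mutable state - and code that mutates nothing needs no locks, which is exactly the property that matters on many cores:

        let sumOfEvenSquares numbers =
            numbers
            |> List.filter (fun n -> n % 2 = 0)  // keep the even values
            |> List.map (fun n -> n * n)         // square each one
            |> List.sum                          // reduce to a single total

        printfn "%d" (sumOfEvenSquares [1 .. 10])  // prints 220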

    Read the article

  • IIS not starting: The process cannot access the file because it is being used by another process

    - by Rick Strahl
    Ok, apparently a few people knew about this issue, but it was new to me and cost me nearly an hour to track down today. What happened is that I've been working all day doing some final pre-deployment testing of several tools on my local dev machine. In the process I've been starting and stopping several IIS 7 Web sites. At some point I was done and just wanted to start my Default Web Site again, and found this little gem of an error message popping up: The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020) A lot of headless running around ensued after this, trying to figure out why IIS wouldn't start. Oddly, some sites started right up, others didn't. I killed INetInfo, all worker processes, tried IISReset a million times and even rebooted - all to no avail. What gives? Skype, you evil Bastard! As it turns out the culprit is - drum roll please - Skype! What, you may ask, does Skype have to do with IIS and Web requests? It looks like recent versions of Skype have an option to run over ports 80 and 443 to allow running over corporate firewalls. Which is actually a nice feature that lets Skype work just about anywhere. What's not so cool is that IIS fails to start up when another application is already using the same port that a Web site is mapped to. In the case of my dev site that'd be port 80, and Skype was hogging it. To fix this issue you can stop Skype from using ports 80 and 443, which quickly fixes the problem. Or stop Skype. Duh! To permanently fix the problem in Skype, find the option on the Options | Connection tab and uncheck the Use port 80/443 option. Oddly, I hadn't run into this problem even though my setup hasn't changed in quite some time. It appears that it's bad startup timing that causes this problem to occur. Whatever the circumstance was, Skype somehow ended up starting before IIS. If Skype is started after IIS has started, it will automatically opt for other ports and not use port 80, so there's no problem. It's easy to demonstrate this behavior if you're looking for it: Stop IIS. Stop Skype. Start Skype and make a test call. Start IIS. And voila, your error is ready for you! This really shouldn't be a problem, except that it would be really nice if IIS could give a more helpful error message when it can't fire up a site because a port is blocked. "The process cannot access a file" is really not a very helpful error message in this scenario... I/O port / file, ah what the heck, it's all the same to Windows. Right! I've run into this situation quite a bit with other, albeit more obvious, applications, like running Apache on the local machine for testing and then trying to run an IIS application. Same situation, although it's been a while - pre IIS 7 - and I think previous versions of IIS actually gave more useful error messages for port blockages, which would be helpful here. On the way to figuring this out I ran into some pretty humorous forum posts, though, with people ragging on why the hell you would be running IIS. Or Skype. The misinformed paranoia police out in full force, so to say :-). It'll be nice to start running IIS Express once Visual Studio 2010 SP1 gets released. Anyway, no surprise that Skype didn't jump out at me as the culprit right away, and I was left fumbling for a while until the Internet came to the rescue. I'm not the first to have found this for sure - I posted a message on Twitter and dozens of people replied that they'd run into this before as well. Seems worth mentioning again though - since I'm sure to forget that this happened when I hit that same error a year from now. Maybe I'll even find this blog post to remind me... © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7  Windows
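    For future reference, the squatter on a port can be identified up front with two standard Windows commands - a sketch, where 1234 is a placeholder for whatever PID the first command reports:

        netstat -ano | findstr :80
        tasklist /fi "PID eq 1234"

    The first lists listeners and connections on port 80 together with the owning process ID; the second maps that PID to a process name. In this case the listener on 0.0.0.0:80 would have come back as Skype.exe rather than an IIS worker process.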

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context: (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code: (slide 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm - the slides speak for themselves. (slide 12-13) index<N>, extent<N>, and grid<N>. (slide 14-16) array<T,N>, array_view<T,N> and comparison between them. (slide 17) parallel_for_each. (slide 18, 21) restrict. (slide 19-20) actual restrictions of restrict(direct3d) - the slides speak for themselves. (slide 22) bring it all together with a matrix multiplication example. (slide 23-24) accelerator, and accelerator_view. (slide 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE: (slide 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it. (slide 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary: (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content - see slide - including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
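    For readers who want a concrete snippet to go with the "Code" section above, here is a minimal sketch of the simplistic array-addition example, written against the C++ AMP API as it later shipped (my own reconstruction, not the slide code - the Developer Preview build used in the session still spelled the restriction restrict(direct3d) and had the grid<N> type):

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void add_arrays(const std::vector<int>& a, const std::vector<int>& b, std::vector<int>& sum)
        {
            // array_view wraps host data and copies it to the accelerator on demand.
            array_view<const int, 1> av((int)a.size(), a);
            array_view<const int, 1> bv((int)b.size(), b);
            array_view<int, 1> sv((int)sum.size(), sum);
            sv.discard_data(); // results will be overwritten, so skip the copy-in

            // One logical thread per element; restrict(amp) marks the lambda as accelerator code.
            parallel_for_each(sv.extent, [=](index<1> idx) restrict(amp)
            {
                sv[idx] = av[idx] + bv[idx];
            });

            sv.synchronize(); // copy the results back into the host vector
        }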

    Read the article

  • ORA-4031 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error; Note 1087773.1: ORA-4031 Diagnostics Tools [Video]

    Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message will indicate the memory pool getting errors and high-level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and associated trace are for a "victim" of the problem. The failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

    Looking for the best way to diagnose? When an ORA-4031 error occurs, a trace file is raised and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without reports in the alert log or traces generated. The V$SHARED_POOL_RESERVED view will show reports of misses for memory over the life of the database. Diagnostics scripts are available in Note 430473.1 to help in analysis of the problem. There is also a training video on using and interpreting the script data in Note 1087773.1.

    11g Diagnosability: Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. See Note 443529.1: 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM): Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. See the Oracle Configuration Manager Quick Start Guide; Note 548815.1: My Oracle Support Configuration Management FAQ; Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager.

    Common Causes/Solutions: The ORA-4031 can occur for many different reasons. Some possible causes are: SGA components too small for the workload; auto-tuning issues; fragmentation due to application design; bugs/leaks in memory allocations. For more on the 4031 and how this affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error. Because of the multiple potential causes, it is important to gather enough diagnostics so that an appropriate solution can be identified. Most commonly, however, the cause is associated with configuration tuning. Ensuring that MEMORY_TARGET or SGA_TARGET are large enough to accommodate the workload can get around many scenarios. The default trace associated with the error provides very high-level information about the memory problem and the "victim" that ran into the issue. The data in the default trace is not going to point to the root cause of the problem. When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area; see Note 270935.1 Shared Pool Sizing in 10g. NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible. If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again. See also Note 801787.1 Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation. For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031.
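    As a quick first look before pulling the full diagnostics scripts from Note 430473.1, the reserved-pool history and the current free memory can be checked with two short queries - a sketch, nothing more:

        -- Misses and failures against the shared pool reserved area since instance startup
        SELECT requests, request_misses, request_failures, last_failure_size
        FROM   v$shared_pool_reserved;

        -- How free memory is currently distributed across the SGA pools
        SELECT pool, name, bytes
        FROM   v$sgastat
        WHERE  name = 'free memory';

    Persistently non-zero request_failures alongside shrinking free memory usually points back at the sizing and fragmentation causes listed above.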

    Read the article

  • Oracle Database Upcoming Event dates to know

    - by mandy.ho
    February may be a short month, but it's not short of exciting Oracle events. From information-packed "Real Performance Days" to participation in one of the biggest IT security events - look out for Oracle Database and let us know if you are there with us!

    Feb 13-18, 2011 - Las Vegas, NV: TDWI World Conference Series. Join Oracle in highlighting Exadata X2-2 and X2-8, along with Oracle Business Intelligence, Enterprise Performance Management and Data Warehousing solutions. Oracle will be presenting a workshop - Oracle Data Integration: Best-of-Breed Solutions for the Enterprise - Wednesday, February 16, 2011, 7 p.m. - 9 p.m., with Glen Goodrich, Director of Product Management, and Christophe Dupupet, Director of Product Management, Data Integration. http://events.tdwi.org/events/las-vegas-world-conference-2011/sessions/session-list.aspx

    Feb 14-17, 2011 - Barcelona, Spain: Mobile World Congress. MWC is an event where Oracle showcases the near-complete breadth and depth of value that our Communications Industry strategy and Hardware and Software Solutions can deliver. Oracle supports Communications Service Providers today and delivers platforms and flexibility primed for the future. Oracle will have a two-story Pavilion, along with an Oracle Java and Embedded Solutions Center - App Planet. The exhibition times are Monday, 14th February 09.00 - 19.00; Tuesday, 15th February 09.00 - 19.00; Wednesday, 16th February 09.00 - 19.00; Thursday, 17th February 09.00 - 16.00. Have questions? Meet with Oracle Sales representatives at the Oracle Café, open every day from 9:00 to 17:00. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=109912&src=6973382&src=6973382&Act=4

    Feb 14-18, 2011 - San Francisco, CA: RSA Conference. As the world's most complete, open, integrated business software and hardware systems provider, Oracle can uniquely safeguard your information throughout its entire lifecycle. Learn more by attending these sessions: Cloud Computing: A Brave New World for Security and Privacy (CLD-201), Wednesday, February 16 at 8:30 a.m.; Databases Under Attack - Securing Heterogeneous Database Infrastructures (DAS-301), Thursday, February 17 at 8:30 a.m.; Seven Steps to Protecting Databases (DAS-402), Friday, February 18 at 10:10 a.m. RSA Conference attendees will also have the opportunity to meet with Oracle Security Solution experts, see live product demos and more by visiting booth #1559. Hours: Monday, February 14, 6:00 p.m. - 8:00 p.m.; Tuesday, February 15, 11:00 a.m. - 6:00 p.m. and 4:30 p.m. - 6:00 p.m.; Wednesday, February 16, 11:00 a.m. - 6:00 p.m.; Thursday, February 17, 11:00 a.m. - 3:00 p.m. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=127657&src=6967733&src=6967733&Act=12

    Feb 21-25, 2011 - Various Locations: IOUG Presents - A Day of Real World Performance with Tom Kyte, Andrew Holdsworth and Graham Wood. These Oracle experts will debate, discuss and delineate the best practices for designing hardware architectures, deploying Oracle databases, and developing applications that deliver the fastest possible performance for your business. Topics are covered in a conversational format, with all three chiming in where appropriate. Each presenter has their own screen projector to demonstrate their individual points to the participants. Customers will have the opportunity to get their specific performance/tuning questions answered and learn how to balance all the different environmental requirements for their applications to improve performance. Register today for the following dates and locations: February 21 in San Diego, CA; February 22 in Los Angeles, CA; February 23 in Seattle, WA; February 25 in Phoenix, AZ. http://www.ioug.org/tabid/194/Default.aspx

    Feb 8-24 - Various Locations: Oracle Enterprise Cloud Summit. This series of full-day events with cloud experts, sharing real-world best practices, reference architectures and more, continues during the month of February. Attend the Oracle Enterprise Cloud Summit to learn how to: build a state-of-the-art cloud architecture; leverage your existing IT investments; optimize your IT management processes. Whether you are considering a move to cloud computing or have already adopted a cloud model, this event offers you the insights you need to take full advantage of cloud computing. Check below to see if the event is coming to a city near you. http://www.oracle.com/us/corporate/events/cloud-events-214342.html

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
Twice a year, we move our production systems to our disaster recovery site.  Last Saturday night was one of those days.  There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring.  It takes only a few seconds to fail over, but some databases have a bit more involved work, such as setting up replication.  Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.  After everything was successfully failed over to the DR site, we noticed that mirroring was in a suspended state on one of the databases.  We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix.  Microsoft did fix it in both SQL Server 2005 Service Pack 3 Cumulative Update 13 and Service Pack 4 Cumulative Update 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix.  As the suspended state was causing us issues with replication, we dropped mirroring.  We then noticed we had 10MB of free disk space on the mount point where the principal's data files are stored.  I knew something was amiss, as this system should have at least 150GB free on that mount point.  I immediately checked the main database's data file and was shocked to see an autogrowth setting of 65536%.  The data file autogrew right before mirroring went into the suspended state. 65536%! I didn't have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today.  A quick Google search yielded no results, but the emphasis is on "quick".  I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB.  So this autogrowth bug was encountered sometime in the last two weeks.  On February 26th, we had attempted to install SQL 2005 SP4 on production, however it had failed (PSS case open with Microsoft).  I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case. I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights.  It seems several people have either heard of this bug or encountered it.  Aaron Bertrand (blog|twitter) referred me to this Connect item. Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007.  Back on SQL Server 2000, we were using the default file growth setting, which was a percentage.  Sometime after the 2005 upgrade is when we changed it to 512MB.  Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet. I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004).  My affected system is on SP3 CU8, so I was initially confused why we had encountered the bug.  Because I don't read things fully, I had missed that there are additional steps you have to follow after applying the bug fix.  Amit set me straight.  Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won't have read this far down my blog post): This hotfix will prevent only future occurrences of this problem. 
For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve it manually:
1. Apply this hotfix.
2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
3. Take the database offline, and then bring it back online.
4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table.
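If you would rather script steps 2 through 4 than click through Management Studio, here is a minimal T-SQL sketch; the database name (AffectedDB) and logical file name (AffectedDB_Data) are hypothetical placeholders for your own, and the 512MB target matches the setting mentioned above:

USE master;

-- Step 2: flip the growth setting to a percentage, then back to megabytes
ALTER DATABASE AffectedDB MODIFY FILE (NAME = AffectedDB_Data, FILEGROWTH = 10%);
ALTER DATABASE AffectedDB MODIFY FILE (NAME = AffectedDB_Data, FILEGROWTH = 512MB);

-- Step 3: take the database offline, then bring it back online
ALTER DATABASE AffectedDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AffectedDB SET ONLINE;

-- Step 4: verify the flag in both catalog views (is_percent_growth = 0 means megabyte growth)
SELECT name, is_percent_growth, growth FROM AffectedDB.sys.database_files;
SELECT name, is_percent_growth, growth FROM sys.master_files WHERE database_id = DB_ID('AffectedDB');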

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
“The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations and monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks are caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatch and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems, and the logical reads dropped to fewer than 300. The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. 
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
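The culprit in this story is easy to reproduce. Below is a contrived T-SQL sketch of the same smell (all table and column names invented for illustration; int stands in for the numeric column): the mismatched join predicate forces a CONVERT_IMPLICIT on every row, which is exactly the warning that showed up in those execution plans.

CREATE TABLE dbo.Orders (OrderId INT PRIMARY KEY, CustomerRef VARCHAR(50));
CREATE TABLE dbo.Customers (CustomerId INT PRIMARY KEY, Name VARCHAR(100));

-- The join compares varchar(50) to int; int has higher type precedence,
-- so every CustomerRef is implicitly converted, defeating index seeks
-- on dbo.Orders and inflating logical reads:
SELECT o.OrderId, c.Name
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON o.CustomerRef = c.CustomerId;

-- The schema-level fix is simply to align the types:
ALTER TABLE dbo.Orders ALTER COLUMN CustomerRef INT;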

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks who want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

Context
(slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
(slide 4) Use an NVIDIA slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
(slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
(slide 6) Use the APU example from AMD, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
(slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

Code
(slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm - the slides speak for themselves.
(slides 12-13) index<N>, extent<N>, and grid<N>.
(slides 14-16) array<T,N>, array_view<T,N> and a comparison between them.
(slide 17) parallel_for_each.
(slides 18, 21) restrict.
(slides 19-20) actual restrictions of restrict(direct3d) - the slides speak for themselves.
(slide 22) Bring it all together with a matrix multiplication example.
(slides 23-24) accelerator, and accelerator_view.
(slides 26-29) Introduce tiling, including tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

IDE
(slides 34, 37) Briefly touch on the Concurrency Visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it.
(slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

Summary
(slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
(slide 40) Links to content - see slide - including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

"But I don't have time for a full-blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
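For anyone who wants the code rather than the slides, here is a minimal sketch of the array-addition example from slides 9-11, written against the released C++ AMP API; note that the shipped restriction specifier is restrict(amp), whereas the Developer Preview builds referenced above spelled it restrict(direct3d).

#include <amp.h>
#include <vector>
using namespace concurrency;

void add_arrays(const std::vector<int>& vA, const std::vector<int>& vB,
                std::vector<int>& vSum)
{
    array_view<const int, 1> a((int)vA.size(), vA);  // wrap host data for the accelerator
    array_view<const int, 1> b((int)vB.size(), vB);
    array_view<int, 1> sum((int)vSum.size(), vSum);
    sum.discard_data();                              // results only flow back, never in

    parallel_for_each(sum.extent, [=](index<1> idx) restrict(amp)
    {
        sum[idx] = a[idx] + b[idx];                  // runs on the accelerator, e.g. a GPU
    });

    sum.synchronize();                               // copy the results back into vSum
}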

    Read the article

  • Help Me Help You Fix That

    - by BuckWoody
If you've been redirected here because you posted on a forum, or asked a question in an e-mail, the person wanted you to know how to get help quickly from a group of folks who are willing to do so - but whose time is valuable. You need to put a little effort into the question first to get others to assist. This is how to do that. It will only take you a moment to read...
1. State the problem succinctly in the title
When an e-mail thread starts, or a forum post is the "head" of the conversation, you'll attract more helpers by using a descriptive headline than a vague one.
This: "Driver for Epson Line Printer Not Installing on Operating System XYZ"
Not this: "Can't print - PLEASE HELP"
2. Explain the error completely
Make sure you include all pertinent information in the request. More information is better; there's almost no way to add too much data to the discussion. What you were doing, what happened, what you saw, the error message, visuals, screen shots, whatever you can include.
This: "I'm getting error '5203 - Driver not compatible with Operating System since about 25 years ago' in a message box on the screen when I tried to run the SETUP.COM file from my older computer. It was a 1995 Compaq ProLiant and worked correctly there."
Not this: "I get an error message in a box. It won't install."
3. Explain what you have done to research the problem
If the first thing you do is ask a question without doing any research, you're lazy, and no one wants to help you. Using one of the many fine search engines, you can almost always find the answer to your problem. Sometimes you can't. Do yourself a favor: open a notepad app, and paste in the URLs as you look them up. If you get your answer, don't save the note. If you don't get an answer, send the list along with the problem. It will show that you've tried, and also keep people from sending you links that you've already checked.
This: "I read the fine manual, and it doesn't mention Operating System XYZ for some reason. Also, I checked the following links, but the instructions there didn't fix the problem: "
Not this: <NULL>
4. Say "Please" and "Thank You"
Remember, you're asking for help. No one owes you their valuable time. Ask politely, don't pester, endure the people who are rude to you, and when your question is answered, respond back to the thread or e-mail with a thank-you to close it out. It helps others that have your same problem know that this is the correct answer.
This: "I could really use some help here - if you have any pointers or things to try, I'd appreciate it."
Not this: "I really need this done right now - why are there no responses?"
This: "Thanks for those responses - that last one did the trick. Turns out I needed a new printer anyway, didn't realize they were so inexpensive now."
Not this: <NULL>
There are a lot of motivated people that will help you. Help them do that.

    Read the article

  • Intermittent internet connectivity

    - by Rob Oplawar
UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log:
tail -f /var/log/kern.log
I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago, and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).
ORIGINAL POST
In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:
address 192.168.1.101
network 192.168.1.0
gateway 192.168.1.99 (my router's IP address)
netmask 255.255.255.0 (confirmed as the right netmask for the router)
broadcast 192.168.1.255 (also confirmed with the router)
ifconfig confirms that these settings are working:
eth0      Link encap:Ethernet  HWaddr 50:e5:49:40:da:a6
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
          TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9551488 (9.5 MB)  TX bytes:1930952 (1.9 MB)
          Interrupt:41 Base address:0xa000
I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid - I never have any trouble pinging 192.168.1.* - it's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.
UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and Google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.
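Since the accepted answer turned out to be the driver, a reasonable first step for anyone chasing the same symptom is to confirm which kernel module the NIC is using; the sketch below is a hedged suggestion (interface and device names are examples), not part of the original post. A link that repeatedly renegotiates in Linux while staying solid in Windows on the same hardware points at the Linux driver rather than cabling or configuration.

# Identify the Ethernet controller and the kernel module bound to it
lspci -nnk | grep -iA3 ethernet

# Driver name, version and firmware as reported by the interface itself
sudo ethtool -i eth0

# Watch the link-state flapping in real time alongside the ping test
tail -f /var/log/kern.log | grep --line-buffered eth0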

    Read the article

  • Is Your ASP.NET Development Server Not Working?

    - by Paulo Morgado
Since Visual Studio 2005, Visual Studio comes with a development web server: the ASP.NET Development Server. I've been using this web server for simple test projects since then, with Visual Studio 2005 and Visual Studio 2008 on Windows XP Professional on my work laptop, and on Windows XP Professional, Windows Vista 64bit Ultimate and Windows 7 64bit Ultimate on my home desktop, without any problems (apart from the known custom identity problem, that is). When I received my new work laptop, I installed Windows Vista 64bit Enterprise and Visual Studio 2008 and, to my surprise, the ASP.NET Development Server wasn't working. I started looking for differences between the laptop environment and the desktop environment, and the most notable differences were:
System              Laptop                          Desktop
SKU                 Windows Vista 64bit Enterprise  Windows Vista 64bit Ultimate
Joined to a Domain  Yes                             No
Anti-Virus          McAfee                          ESET
After asserting that no domain policies were being applied to my laptop and domain user and that nothing was being logged by the anti-virus, my suspicions turned to the fact that the laptop was running an Enterprise SKU while the desktop was running an Ultimate SKU. After having problems with other applications, I was sure the problem was the Enterprise SKU, but I never found a solution to it. Because I wasn't doing any web development at the time, I left it alone. After upgrading to Windows 7 the problem persisted but, because I wasn't doing any web development at the time, once again, I left it alone. Now that I have installed Visual Studio 2010, I had to solve this. After searching around forums and blogs that either didn't offer an answer or offered very complicated workarounds that sometimes involved messing with the registry, I came to the conclusion that the solution is, in fact, very simple. When Windows Vista is installed, the hosts file, according to this, contains this definition:
127.0.0.1 localhost
::1 localhost
This was not what I had in my laptop's hosts file. What I had was this:
#127.0.0.1 localhost
#::1 localhost
I might have changed it myself but, from the number of people I found complaining about this problem on Windows Vista, this was probably the way it was. The installation of Windows 7 leaves the hosts file like this:
#127.0.0.1 localhost
#::1 localhost
And although the ASP.NET Development Server works fine on Windows 7 64bit Ultimate, on Windows 7 64bit Enterprise it needs to be changed to this:
127.0.0.1 localhost
::1 localhost
And I suspect it's the same with Windows Vista 64bit Enterprise.
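As a quick sanity check after editing %SystemRoot%\System32\drivers\etc\hosts (a hedged suggestion, not part of the original fix), both loopback names should resolve from a command prompt before you try the ASP.NET Development Server again:

:: Confirm the IPv4 and IPv6 loopback entries resolve after the edit
ping -4 -n 1 localhost
ping -6 -n 1 localhost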

    Read the article

  • Wine is no longer able to initialize OpenGL

    - by nebukadnezzar
For a while now, Wine has no longer been able to initialize OpenGL on my 64bit Linux. This is by no means a problem unique to me - lots of people with NVIDIA cards running 64bit Linux seem to have this problem with Wine on Oneiric:
http://forum.winehq.org/viewtopic.php?p=66856&sid=9d6e5ad628ee6fb6e5ef04577275daed
http://forum.pinguyos.com/Thread-Wine-OpenGl-Problem
https://bbs.archlinux.org/viewtopic.php?id=137696
And while some Launchpad bug reports say one should use this workaround:
LD_PRELOAD=/usr/lib32/nvidia-current/libGL.so.1 wine <app>
it unfortunately does not solve the problem at all for me. That is, if I run CS:S, the game will run just fine for a while, but will abort after some time, with a range of GLSL-related errors. Here are the startup errors from simply running Steam:
+ wine steam.exe
fixme:process:GetLogicalProcessorInformation ((nil),0x33e488): stub
[.. snip ...]
fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 3, 0x33d384, 4) stub
fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 4, 0x33d374, 4) stub
err:wgl:is_extension_supported No OpenGL extensions found, check if your OpenGL setup is correct!
[... this error is reported a few dozen times, so snip again ...]
fixme:iphlpapi:NotifyAddrChange (Handle 0x47cdba8, overlapped 0x45dba80): stub
fixme:winsock:WSALookupServiceBeginW (0x47cdbc8 0x00000ff0 0x47cdbc4) Stub!
[... snip ...]
The errors reported while running, and after running, are pasted elsewhere because the log is huge-ish:
http://paste.ubuntu.com/901925/
Now, 32bit OpenGL works just fine; the 32bit executables of Nexuiz, for example, work just fine. That being said, I suspect this is a problem in Wine itself. I've already manually built the git version of Wine, to no avail. So what's going on? Is something broken? How do I check (correctly) whether something is broken? How do I solve this?
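A hedged first check for anyone in the same situation (paths follow the Ubuntu Oneiric layout used above): before digging into Wine itself, confirm that the native 64-bit stack is healthy and that the 32-bit NVIDIA libGL the workaround points at actually exists and is a 32-bit binary.

# The 64-bit stack should name the NVIDIA renderer, not a software fallback
glxinfo | grep -i "opengl renderer"

# The 32-bit library the LD_PRELOAD workaround relies on must exist...
ls -l /usr/lib32/nvidia-current/libGL.so.1

# ...and actually be a 32-bit ELF shared object (-L follows the symlink)
file -L /usr/lib32/nvidia-current/libGL.so.1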

    Read the article

  • Shutdown Hangs for 5 Minutes on Kubuntu 14.04

    - by Augustinus
I've had persistent problems with a 5-minute hang at shutdown for the last three versions of Kubuntu (13.04, 13.10, and now 14.04). I suspect this is not a KDE-specific problem. Recently, I performed a fresh installation of Kubuntu 14.04 from a live USB, and shutdown worked normally for about a week. The hang-up is now happening again, and I can't figure out why. A brief description of the problem:
The hang-up occurs with all methods of initiating a normal shutdown: clicking the shutdown or restart button in KDE, sudo shutdown -h now, sudo reboot.
The shutdown splash screen appears. Using the down-arrow to access verbose messages, I see "Asking all remaining processes to terminate." This message remains for 5 minutes with no disk activity. Finally, a rapid series of messages flurries to the screen:
* All processes ended within 300 seconds... [ OK ]
nm-dispatcher.action: Caught signal 15, shutting down...
ModemManager[852]: <warn> Could not acquire the 'org.freedesktop.ModemManager1' service name
ModemManager[852]: <info> ModemManager is shut down
* Deactivating swap... [ OK ]
* Unmounting local filesystems... [ OK ]
* Will now restart
Possible sources of the problem: Before the problem re-appeared, I had mainly been doing routine computing. I have kept the system up to date using apt-get upgrade and apt-get dist-upgrade. The only other notable incident was a power failure. I do not have the computer connected to a UPS, so the power failure resulted in an immediate shutdown. Could this have corrupted an important file which must be accessed at shutdown? Is there any way that could cause a 5-minute hang-up? Here is a list of packages that were updated before the problem appeared:
bash iotop dpkg dpkg-dev python3-software-properties libdpkg-perl software-properties-kde software-properties-common akonadi-backend-mysql libakonadiprotocolinternals1 akonadi-server firefox-locale-en firefox flashplugin-installer libqapt2 libqapt2-runtime thunderbird openjdk-7-jre-headless thunderbird-locale-en kubuntu-driver-manager qapt-deb-installer openjdk-7-jre qapt-batch icedtea-7-jre-jamvm libelf1 dpkg dpkg-dev libdpkg-perl libjbig0 gettext-base libgettextpo-dev libssl1.0.0 libgettextpo0 libasprintf-dev linux-headers-3.13.0-24 gettext libasprintf0c2 linux-headers-3.13.0-24-generic openssl linux-libc-dev gstreamer0.10-qapt kubuntu-desktop linux-image-extra-3.13.0-24-generic linux-image-3.13.0-24-generic
I would appreciate any help with this.
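One hedged way to narrow down a hang like this is to capture a process list just before the kill scripts run, so you can see what is still alive during those 300 seconds. The sketch below drops a tiny logging script into the halt and reboot runlevels; the S19 link name assumes the stock ordering, where sendsigs (the script that prints "Asking all remaining processes to terminate") runs as S20sendsigs, so verify with ls /etc/rc0.d /etc/rc6.d first and adjust the prefix to sort just ahead of it on your system.

# Log every process still alive right before sendsigs asks them to terminate
sudo tee /etc/rc0.d/S19logprocs >/dev/null <<'EOF'
#!/bin/sh
ps -ef > /var/log/shutdown-procs.log 2>&1
EOF
sudo chmod +x /etc/rc0.d/S19logprocs

# The same hook for reboots (runlevel 6)
sudo cp /etc/rc0.d/S19logprocs /etc/rc6.d/S19logprocs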

    Read the article

  • Inside Red Gate - Ricky Leeks

    - by Simon Cooper
So, one of our profilers has a problem. Red Gate produces two .NET profilers - ANTS Performance Profiler (APP) and ANTS Memory Profiler (AMP). Both products help .NET developers solve problems they are virtually guaranteed to encounter at some point in their careers - slow code, and high memory usage, respectively. Everyone understands slow code - the symptoms are very obvious (an operation takes 2 hours when it should take 10 seconds), you know when you've solved it (the same operation now takes 15 seconds), and everyone understands how you can use a profiler like APP to help solve your particular problem. High memory usage is a much more subtle and misunderstood concept. How can .NET have memory leaks? The garbage collector, and how the CLR uses and frees memory, is one of the most misunderstood concepts in .NET. There are hundreds of blog posts out there covering various aspects of the GC and .NET memory - some of them helpful, some of them confusing, and some of them just plain wrong. There are a lot of misconceptions out there. And, if you have got an application that uses far too much memory, it can be hard to wade through all the contradictory information available to even get an idea as to what's going on, let alone trying to solve it. That's where a memory profiler, like AMP, comes into play. Unfortunately, that's not the end of the issue. .NET memory management is a large, complicated, and misunderstood problem. Even armed with a profiler, you need to understand what .NET is doing with your objects, how it processes them, and how it frees them, to be able to use the profiler effectively to solve your particular problem. And that's what's wrong with AMP - even with all the thought, designs, UX sessions, and research we've put into AMP itself, some users simply don't have the knowledge required to be able to understand what AMP is telling them about how their application uses memory, and so they have problems understanding and solving their memory problem.
Ricky Leeks
This is where Ricky Leeks comes in. Created by one of the many...colourful...people in Red Gate, he headlines and promotes several tutorials, pages, and articles, all with information on how .NET memory management actually works, with the goal of helping to educate developers on .NET memory management. And educating us all on how far you can push various vegetable-based puns. This, in turn, not only helps them understand and solve any memory issues they may be having, but helps them proactively code against such memory issues in their existing code. Ricky's latest outing is an interview on .NET Rocks, providing information on the Top 5 .NET Memory Management Gotchas, along with information on a free ebook on .NET Memory Management. Don't worry, there are loads more vegetable-based jokes where those came from...
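To give a flavour of the sort of gotcha those tutorials cover, here is a small, hedged C# illustration of the classic event-handler leak (the types are invented for the example, not taken from AMP or the tutorials): a long-lived publisher's delegate list keeps every subscriber reachable, so the GC can never collect them until something unsubscribes.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
}

class Subscriber
{
    private readonly Publisher _publisher;

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        // The publisher now holds a delegate that references this object.
        _publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }

    // Without this unsubscribe, a long-lived Publisher roots every
    // Subscriber for the life of the application - a "leak" no amount
    // of garbage collection can fix.
    public void Detach()
    {
        _publisher.SomethingHappened -= OnSomethingHappened;
    }
}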

    Read the article

  • How to share two keyboards on the same laptop: a French ISO layout and a USA ANSI layout keyboard over USB?

    - by reyman64
I recently bought a "Noppoo Choc Mini" with a specific ANSI US-INTERNATIONAL PC84 layout. This keyboard has only 84 keys - a 60% (compact, tenkeyless) reduced layout. My problem is simple: there is no keyboard layout in Ubuntu 12.04 that corresponds to this normal USA ANSI layout, and the same goes for the reduced version with only 84 keys. I am looking for a template of the normal ANSI US-INTERNATIONAL layout for xmodmap/xkb, so that afterwards I can manually map the other keys. I searched on Google and didn't find any other user with the same problem, so it seems I don't have the right keywords to search for this information.
Edit 1: Here you can see there is probably a bug in Ubuntu, because the layout for USA with dead keys is not correct! I have this: http://minus.com/lEdKMrsNAwkVA while other users have this for the same layout: http://i.stack.imgur.com/p52XG.png
Edit 2: It seems, after a "sudo dpkg-reconfigure keyboard-configuration" (French standard keyboard PC105 + Precision M65 keyboard from the Dell laptop), that I can now see the correct US layout in the parameters, but I cannot keep the ISO layout for French usage...
Edit 3: OK, after a reboot I understand the problem, so let me explain. I have one laptop with an integrated French keyboard, and I want to use my USB keyboard, which has a USA ANSI layout. It seems it's impossible, in Ubuntu and "dpkg-reconfigure keyboard-configuration", to share two different physical layouts (ANSI and EU ISO) on the same computer...
Edit 4: It seems I can switch the physical layout (ISO <-> ANSI) with these commands in a terminal:
setxkbmap -layout us
setxkbmap -layout us -variant alt-intl
setxkbmap -layout fr
It's very complicated, and it seems Ubuntu 12.04 has big problems with its keyboard manager, because everything works great with these commands, without ANY change in the system keyboard parameters!!! A second bug? The image of the layout for fr is buggy; the layout shown is not ISO, but I can press the "<" key at the left of the right Shift without any problem! You can see the image here (a French alternative with an ANSI layout? it's crazy?): http://minus.com/lXsDJwoeyWAfF
Can you help me on this point? I'm lost with xkb, and manual mapping is very complicated... Thanks a lot, SR
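For what it's worth, X can apply a different layout to each physical keyboard, which is exactly what this dual-keyboard setup needs. A hedged sketch follows; the device ids are examples, so read the real ones from xinput list first:

# Find the numeric ids of the external USB keyboard and the internal one
xinput list

# Apply the US ANSI layout only to the USB keyboard (id 11 here)...
setxkbmap -device 11 -layout us

# ...and keep the laptop's built-in keyboard on the French layout (id 10)
setxkbmap -device 10 -layout fr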

    Read the article

  • Unable to update/install any files [closed]

    - by Surya
Possible Duplicate: “Problem with MergeList” error when trying to do an update
I just installed Ubuntu 12.04 on my Lenovo G570 laptop. The first time, I got an error during installation (I don't know what it was), so I restarted the system, and the second time it went well. After installing, the problems started. There was an error with language recognition, and I tried to fix it, but that didn't work. I tried to install powertop to check the status of power management. At the terminal:
sudo apt-get install powertop
This is the error I got:
surya@surya-Lenovo-G570:~$ sudo apt-get powertop install
[sudo] password for surya:
E: Invalid operation powertop
surya@surya-Lenovo-G570:~$ sudo apt-get install powertop
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
E: The package lists or status file could not be parsed or opened.
surya@surya-Lenovo-G570:~$ ^C
I downloaded the Google Chrome .deb package and tried to install it, but it's not working: the Software Center opens but doesn't load. There was a notification on the status bar which said:
An error occurred, please run the package manager from the right-click menu ...
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
"Copy & Paste" from the terminal is not really working; when I press Ctrl + C, it shows ^C in the terminal, but nothing is copied. The most important error: I am unable to see the "chip" icon in the status bar that would let me install the proprietary drivers for my ATI card... The interesting part is that powertop worked well on the live CD and even detected my ATI card.
Update: When I opened "Software Up to Date", it showed this error:
Could not initialize the package information
An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message:
'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.'
My laptop details: Lenovo G570; Intel 2nd Gen i5 processor; 4GB DDR3 RAM; Intel integrated graphics + AMD Radeon HD 6370M 1GB graphics.
I need help ASAP.
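For reference, the usual recovery for a "Problem with MergeList" error (a hedged suggestion based on the error text above, matching the duplicate question linked at the top) is to delete the corrupted cached package lists and let apt rebuild them:

# Remove the cached package lists - the MergeList error names the corrupted
# one - and have apt download fresh copies. 'partial' is a directory, so rm
# prints a harmless warning for it.
sudo rm -vf /var/lib/apt/lists/*
sudo apt-get update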

    Read the article

< Previous Page | 425 426 427 428 429 430 431 432 433 434 435 436  | Next Page >