Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL database, so I created an SSIS package that worked as follows: a Foreach Loop container looped through a directory and loaded each file path into a variable, and the “Import XML” data flow then loaded each XML file into a SQL table. Running this, it took approximately 1 second to load each file, which seemed a massive amount of time just to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested the use of T-SQL BULK INSERT and OPENROWSET, so we adjusted the package as follows: the same Foreach Loop container was used, but instead the following SQL command was executed (this is an expression):

        "INSERT INTO MyTable(FileDate) SELECT CAST(bulkcolumn AS XML) FROM OPENROWSET( BULK '" + @[User::CurrentFile] + "', SINGLE_BLOB ) AS x"

    Using this method we managed to load approximately 20 records per second – much faster for data loading! For what we wanted to achieve this was perfect, but I’ll leave you with the following points to consider when making your own decision on which solution to choose.

    OPENROWSET method
    - Much faster to get the data into SQL
    - You’ll need to parse or create a view over the XML data to make it more usable (another post on this!)
    - Not able to apply validation/transformation against the data when loading it
    - The SQL Server service account will need permission to the file
    - No schema validation when loading files

    SSIS
    - Slower (in our case)
    - Schema validation
    - Allows you to apply transformations/joins to the data
    - Permissions should be less of a problem
    - Data can be loaded into its final form through the package
    - When using schema validation, errors can fail the package (I’ll do another post on this)
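    To make the expression concrete, here is roughly what the generated statement looks like for a single file once SSIS evaluates it. This is a minimal sketch: the staging table definition and the file path are hypothetical placeholders, not taken from the original package.

        -- Hypothetical staging table holding one XML document per row (illustrative only)
        CREATE TABLE MyTable (FileDate XML);

        -- The SSIS expression above expands to a per-file statement like this:
        INSERT INTO MyTable (FileDate)
        SELECT CAST(BulkColumn AS XML)
        FROM OPENROWSET(
            BULK 'C:\XmlDrop\Order_0001.xml',  -- the evaluated value of @[User::CurrentFile]
            SINGLE_BLOB
        ) AS x;

    SINGLE_BLOB reads the whole file as a single varbinary(max) value (exposed as the BulkColumn column), which avoids the code-page conversion issues you can run into with SINGLE_CLOB when casting the result to XML.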

    Read the article

  • Chuck Norris Be Thy Name

    - by Robz / Fervent Coder
    Chuck Norris doesn’t program with a keyboard. He stares the computer down until it does what he wants. All things need a name. We’ve tossed around a bunch of names for the framework of tools we’ve been working on, but one we kept coming back to was Chuck Norris. Why did we choose Chuck Norris? Well, Chuck Norris sort of chose us. Everything we talked about, the name kept drawing us closer to it. We couldn’t escape Chuck Norris, no matter how hard we tried. So we gave in. Chuck Norris can divide by zero. What is the Chuck Norris Framework? @drusellers and I have been working on a variety of tools:

    - WarmuP - http://github.com/chucknorris/warmup (Template your entire project/solution and create projects ready to code - from zero to a solution with everything in seconds. Your templates, your choices.)
    - UppercuT - http://projectuppercut.org (Build with Conventions - Professional Builds in Moments, Not Days!) | Code also at http://github.com/chucknorris/uppercut
    - DropkicK - http://github.com/chucknorris/dropkick (Deploy Fluently)
    - RoundhousE - http://projectroundhouse.org (Professional Database Management with Versioning) | Code also at http://github.com/chucknorris/roundhouse
    - SidePOP - http://sidepop.googlecode.com (Does your application need to check email?)
    - HeadlocK - http://github.com/chucknorris/headlock (Hash a directory so you can later know if anything has changed)
    - Others – still in concept or vaporware

    People ask why we chose such violent names for the tools in our framework. At first it was about whipping your code into shape, but after a while the naming became, “How can we relate this to Chuck Norris?” People also ask why we uppercase the last letter of each name. Well, that’s more about making you ask questions…but there are a few reasons for it. Project managers never ask Chuck Norris for estimations…ever. The class object inherits from Chuck Norris. Chuck Norris doesn’t need garbage collection because he doesn’t call .Dispose(), he calls .DropKick(). So what are you waiting for? Join the Google group today, download and play with the tools. And lastly, welcome to Chuck Norris. Or should I say, Chuck Norris welcomes you…

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo start next week, on April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. The full schedule can be found here. Come by the DemoGrounds to learn more about mission-critical integration and optimization of the complete Oracle stack. Our Oracle Optimized Solutions experts will be on hand to discuss one-on-one several of Oracle's systems solutions and technologies. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Oracle Optimized Solutions, Servers, Storage, and Oracle Solaris – Sessions, Keynotes, and General Session Talks

    Wednesday, April 4
    - 9:00 - 11:15 | Keynote: ENGINEERED FOR INNOVATION - Engineered Systems (Mark Hurd, President, Oracle; Takao Endo, President & CEO, Oracle Corporation Japan; John Fowler, EVP of Systems, Oracle; Ed Screven, Chief Corporate Architect, Oracle) [English Session, K1-01]
    - 11:50 - 12:35 | Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems (Robert Shimp, Group VP, Product Marketing, Oracle) [English Session, S1-01]
    - 15:20 - 16:05 | Introducing Tiered Storage Solution for low cost Big Data Archiving [S1-33]
    - 16:30 - 17:15 | Simplifying IT - IT System Consolidation that also Accelerates Business Agility [S1-42]

    Thursday, April 5
    - 9:30 - 11:15 | Keynote: Extreme Innovation (Larry Ellison, Chief Executive Officer, Oracle) [English Session, K2-01]
    - 11:50 - 13:20 | General Session: Server and Storage Systems Strategy (John Fowler, EVP of Systems, Oracle) [English Session, G2-01]
    - 16:30 - 17:15 | Top 5 Reasons why ZFS Storage Appliance is "The cloud storage", by SAKURA Internet Inc. [L2-04]
    - 16:30 - 17:15 | The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster [S2-42]
    - 17:40 - 18:25 | Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite to minimize business cost while maximizing agility, performance, and availability [S2-53]

    Friday, April 6
    - 9:30 - 11:15 | Keynote: Oracle Fusion Applications & Cloud (Robert Shimp, Group VP, Product Marketing; Anthony Lye, Senior VP) [English Session, K3-01]
    - 11:50 - 12:35 | IT at Oracle: The Art of IT Transformation to Enable Business Growth [English Session, S3-02]
    - 13:00 - 13:45 | ZFS Storage Appliance: Architecture of high efficiency and high performance [S3-13]
    - 14:10 - 14:55 | Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure, by DWANGO Co, Ltd. [S3-21]
    - 15:20 - 16:05 | Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop, by Osaka University [S3-33]

    Oracle Developer Sessions with Oracle Systems and Oracle Solaris

    Friday, April 6
    - 13:00 - 13:45 | Oracle Solaris 11 Developers [D3-03]
    - 13:00 - 14:30 | Oracle Solaris Tuning Contest (Hands-On Lab) [D3-04]
    - 14:00 - 14:35 | How to build a high performance and high security Oracle Database environment with Oracle SPARC/Solaris [English Session, D3-13]
    - 15:00 - 15:45 | IT Assets preservation and constructive migration with Oracle Solaris virtualization [D3-24]
    - 16:00 - 17:30 | The best packaging system for cloud environments - Creating an IPS package [D3-34]

    Follow Oracle Infrared on Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Read the article

  • Book review: Microsoft System Center Enterprise Suite Unleashed

    - by BuckWoody
    I know, I know – what’s a database guy doing reading a book on System Center? Well, I need it from time to time. System Center is actually a collection of about 7 different products that you can use to manage and monitor your software and hardware, from drive space through Microsoft Office, UNIX systems, and yes, SQL Server. It’s that last part I care about the most, and so I’ve dealt with Data Protection Manager and System Center Operations Manager (I call it SCOM) in SQL Server. But I wasn’t familiar with the rest of the suite, nor was I as familiar as I needed to be with the “Essentials” release – a separate product that groups together the main features of System Center into a single offering for smaller organizations. These companies usually run with a smaller IT shop, so they sometimes opt for this product to help them monitor everything, including SQL Server. So I picked up “Microsoft System Center Enterprise Suite Unleashed” by Chris Amaris and a cast of others. I don’t normally like to get a technical book by multiple authors – I just find that most of the time it’s quite jarring to switch from author to author – but I think this group did pretty well here. The first chapter on introducing System Center has helped me talk with others about what the product does, and which pieces fit well together with SQL Server. The writing is well done, and I didn’t find a jump from author to author as I went along. The information is sequential, meaning that they lead you from install to configuration and then use. It’s very much a concepts-and-how-to book, and a big one at that – over 950 pages of learning! It was a pretty quick read, though, since I skipped the installation parts and there are lots of screenshots. While I’m not sure you’d be an expert on the product when you finish reading this book, I would say you’re more than halfway there. It best suits someone who learns through examples, since it has a lot of step-by-step examples. I do recommend that you take a look if you have to interact with this product, or even if you are a smaller shop and you’re the primary IT resource. The last few chapters deal with System Center Essentials, and honestly that was the best part of the book for me.

    Read the article

  • SQL SERVER – Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last value from the ordered set. It is difficult to explain this in words, so I’d like to attempt to explain their behavior through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments. Now let’s run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result: The most interesting thing here is that as we go from row 1 to row 10, the value of FIRST_VALUE() remains the same but the value of LAST_VALUE() keeps increasing. The reason is that each row's window covers that row and all the rows before it, so the last value is always that of the row we are currently looking at. To fully understand this statement, see the following figure: This may be useful in some cases, but not always. However, when we use the same thing with PARTITION BY, the same query starts showing results that can easily be used in analytical algorithms. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result: Let us understand how PARTITION BY windows the resultset. I have used PARTITION BY SalesOrderID in my query. This creates small windows within the original resultset, and the logic of FIRST_VALUE and LAST_VALUE is applied within each window. Well, this is just an introduction to these functions. In future blog posts we will go deeper into the usage of these two functions. By the way, these functions can be applied to VARCHAR fields as well and are not limited to numeric fields only. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
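    The row-by-row growth of LAST_VALUE that pinaldave describes comes from the default window frame: with an ORDER BY but no explicit frame, SQL Server 2012 uses RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so the "last" row of the frame is always the current row. A small sketch (my own variation on the query above, not from the original post) shows how an explicit frame makes LAST_VALUE return the true last value of each partition:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            LAST_VALUE(SalesOrderDetailID) OVER (
                PARTITION BY SalesOrderID
                ORDER BY SalesOrderDetailID
                -- widen the frame from the default to the whole partition
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
            ) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID
        GO

    With this frame, LstValue is constant within each SalesOrderID, which is usually what people expect LAST_VALUE to return.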

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description: There are about 10 teams competing against each other over a 6-week period. Each team's total score (out of 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent, regular time intervals during the competition. The points for other scoring elements are awarded at either irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements. Each of the 20 types of scoring elements has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated. Other types of raw inputs are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials. Primary requirements: The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. There will be a quasi-real-time dashboard that will show current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores. Decisions: Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each of the teams' total scores from scratch every time a new input arrives would probably slow the system to a crawl. I've considered some kind of "diff" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because a financial "bottom line" may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
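    To make the "diff" idea concrete, here is a minimal T-SQL sketch of incremental aggregate maintenance: apply only the delta of a new or adjusted raw input to a pre-aggregated team total instead of recomputing 250,000 rows from scratch. All table, column, and procedure names here are hypothetical, invented purely for illustration.

        -- Hypothetical schema (illustrative only)
        CREATE TABLE RawInput (
            RawInputID       INT IDENTITY PRIMARY KEY,
            TeamID           INT NOT NULL,
            ScoringElementID INT NOT NULL,
            Points           DECIMAL(12,6) NOT NULL,
            EnteredAt        DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
        );
        CREATE TABLE TeamScore (
            TeamID      INT PRIMARY KEY,
            TotalPoints DECIMAL(14,6) NOT NULL DEFAULT 0  -- one pre-seeded row per team
        );
        GO

        -- Record an input (or a retroactive adjustment) and apply its delta atomically
        CREATE PROCEDURE ApplyScoreDelta
            @TeamID INT, @ScoringElementID INT, @DeltaPoints DECIMAL(12,6)
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRAN;
                INSERT INTO RawInput (TeamID, ScoringElementID, Points)
                VALUES (@TeamID, @ScoringElementID, @DeltaPoints);

                UPDATE TeamScore
                SET TotalPoints = TotalPoints + @DeltaPoints  -- O(1) vs. re-aggregating everything
                WHERE TeamID = @TeamID;
            COMMIT;
        END
        GO

    Retroactive adjustments then become compensating rows (new value minus old), which preserves the audit trail while the dashboard reads the small pre-aggregated TeamScore table. An indexed view over RawInput grouped by TeamID is another way to have SQL Server maintain such a sum automatically.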

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I asked three questions on my blog and got a very good response. I am planning to write a summary post for each of the three questions next week. Before I write the summary posts and give credit to all the valid answers, I wanted to bring the questions to everyone's attention this week. Why SELECT * throws an error but SELECT COUNT(*) does not – This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run the code and read the explanation, it makes total sense why SQL Server behaves the way it does. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares some very interesting detail. Statistics are not Updated but are Created Once – This puzzle has multiple right answers, and I am glad to see many of the correct answers as comments on the blog post. Statistics are very important and really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they do not need to be updated because they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question we have a scenario where statistics are not updated even though auto create and auto update statistics are ON. There are multiple solutions, but what would your solution be in this case? When to use Function and When to use Stored Procedure – This is a rather open-ended question; there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use a stored procedure and when to use a function. I want to acknowledge that they can often be used interchangeably, but there are a few reasons why one should not do that, and a few cases where one is better than the other. Let us discuss this here. Your opinion matters. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
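    For readers who missed the first puzzle, here is one well-known way to reproduce the behavior, sketched with duplicate column aliases in a derived table (the exact script in the original post may differ):

        -- Fails: SELECT * must expand the column list, and the derived
        -- table exposes two columns both named 'col'
        SELECT * FROM (SELECT 1 AS col, 2 AS col) AS t;

        -- Succeeds: COUNT(*) only counts rows and never expands the
        -- column list, so the duplicate names are never resolved
        SELECT COUNT(*) FROM (SELECT 1 AS col, 2 AS col) AS t;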

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase in the data that mankind is producing each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it nearly impossible to process in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which became one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that a lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it ultimately benefits the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.   BuzzNet Tags: Hadoop,Big Data,Enterprise IT

    Read the article

  • Excel tables creation upon MySQL data import (new feature in MySQL for Excel 1.2.x)

    - by Javier Treviño
    In this blog post we are going to talk about one of the features included since MySQL for Excel 1.2.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone. Remember how easy it is to dump data from a MySQL table, view or stored procedure to an Excel worksheet? (If you don't, you can check out this other post: How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel). In version 1.2.0 we introduced some advanced options for the Import MySQL Data operation regarding Excel tables. The Advanced Options dialog shown above is accessible from any Import Data dialog. When the Create an Excel table for the imported MySQL table data option is checked (which it is by default), MySQL for Excel will create an Excel table (also known in Excel jargon as a ListObject) from the Excel range containing the imported MySQL data. This "little feature" enables immediate use of the Excel table in data analysis, like including it for summarization in a PivotTable, adding a summarization row at the end of the table's data, or sorting and filtering the table's data by clicking the drop-down button next to each column's header, among other actions. The Excel tables created automatically from imported MySQL data will have a name like [UserPrefix].<SchemaName>.<DbObjectName> for tables and views, and <Prefix>.<SchemaName>.<ProcedureName>.<ResultSetName> for stored procedures. Notice the first piece of the name is an optional [UserPrefix]; the prefix is only used if the Prefix Excel tables with the following text option is checked. The suggested prefix is "MySQL" but it can be changed to whatever text suits you (for example, with the default prefix, importing a table named film from a schema named sakila would produce an Excel table named MySQL.sakila.film). Excel tables must have a table style so they are easily identified. There are a lot of predefined Excel table styles; by default the MySqlDefault style is applied, which is the style you have seen applied to imported data for Edit Sessions, and which adds simple and elegant formatting to the table. If you wish to change it to any of the predefined Excel table styles you can do so through the drop-down list on the Use style [[styles drop-down]] for the new Excel table option. Excel tables are the basic building blocks for data analysis or self-service Business Intelligence using other more advanced Excel tools like Power Pivot, Power View or Power Map. This feature empowers you to use imported MySQL data in more advanced ways. We hope you give this and the other new features in the 1.2.x version family a try! Remember that your feedback is very important to us, so drop us a message and follow us: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Cheers!

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest.  I had already included money for pre-cons in this year’s training budget, but none of them really stood out to me, so even if the Red Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year.  However, the topics being presented at the SQL in the City event were of great interest to me.  There promised to be good information on Continuous Integration and automated deployment of database changes, which lately has been a real hot topic at my work.  And indeed, Red Gate announced the release of a new tool (still in Early Access Program…a.k.a. Beta) which is called the Deployment Manager.  Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS.  But, as I understand it, the primary focus of Deployment Manager is not the build process (Red Gate uses JetBrains’ TeamCity for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments.  It looks promising and I’ve already downloaded the installer package to play with it later. Overall, I was quite impressed with the SQL in the City event.  Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did.  And the amount of money that Red Gate must have spent is quite impressive, given that this was a no-charge event to attend, with a very nice hot lunch and an after-event drinks celebration.  Well done, folks! Of course it was great to hear from a variety of speakers.  Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody).  By the way, if you have never seen either Brent or Buck speak, you really should.  Different styles, but both are very entertaining and educational at the same time.  I love Buck’s sense of humor (here’s a tip…don’t be late to Buck’s session or you’ll become part of the presentation) and I praise Brent’s slides.  Brent’s style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book), and I am impressed that he can make a technical presentation so engaging. It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on the EdX platform, for which I need to install the NFS client on Ubuntu 12.04. I used the following command: stuti@stuti:/$ sudo apt-get install nfs-kernel-server However this gives me an error as follows: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: nfs-common The following NEW packages will be installed: nfs-common nfs-kernel-server 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/355 kB of archives. After this operation, 1,222 kB of additional disk space will be used. Do you want to continue [Y/n]? y Selecting previously unselected package nfs-common. (Reading database ... 200367 files and directories currently installed.) Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Selecting previously unselected package nfs-kernel-server. Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Processing triggers for ureadahead ... Processing triggers for man-db ... Setting up nfs-common (1:1.2.5-3ubuntu3.1) ... statd start/running, process 4574 gssd stop/pre-start, process 4603 idmapd start/running, process 4643 Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ... update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist dpkg: error processing nfs-kernel-server (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: nfs-kernel-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried sudo apt-get autoremove nfs-kernel-server sudo apt-get autoremove nfs-common After these, I tried to install again but I keep getting the same error. apt-get update or upgrade also do not help and give the same error. I am clueless as to where I can find the missing file stated in the output. I tried to google this problem, but none of the solutions I came across have helped, or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.

    Read the article

  • Cannot install Acrobat Reader on Ubuntu 12.04.1

    - by user91137
    I have been trying to install Acrobat Reader on my Ubuntu 12.04.1. In the beginning, I tried to install it from the Software Center, but it crashes with the report: (Reading database ... 189311 files and directories currently installed.) Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack): trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2 No apport report written because MaxReports is reached already Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb" As a solution, I tried to install it via the terminal with sudo apt-get install acroread and received the following (output translated from Portuguese): arcanjo@arcanjo:~$ sudo apt-get install acroread Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: libldap2 libgnome-speech7 The following NEW packages will be installed: acroread 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 60.1 MB of archives. After this operation, 142 MB of additional disk space will be used. Get:1 http://archive.canonical.com/ubuntu/ precise/partner acroread i386 9.5.1-1precise1 [60.1 MB] Fetched 60.1 MB in 4min 17s (234 kB/s) (Reading database ... 189311 files and directories currently installed.) Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack): trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2 No apport report written because MaxReports is reached already Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) arcanjo@arcanjo:~$ I've already tried updating and upgrading with apt-get, removing and re-installing the Software Center, and deleting the "problematic" files and updating apt again... Nothing seems to work... Any solutions?

    Read the article

  • Package system broken - E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by delha
    After installing some packages and libraries I have an error on Package Manager, I can't run any update because it says: "The package system is broken If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f " I've tried to do what it says and it returns me: jara@jara-Aspire-5738:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libcaca-dev libopencv2.3-bin nite-dev python-bluez ps-engine libslang2-dev python-sphinx ros-electric-geometry-tutorials ros-electric-geometry-visualization python-matplotlib libzzip-dev ros-electric-orocos-kinematics-dynamics ros-electric-physics-ode libbluetooth-dev libaudiofile-dev libassimp2 libnetpbm10-dev ros-electric-laser-pipeline python-epydoc ros-electric-geometry-experimental libasound2-dev evtest python-matplotlib-data libyaml-dev ros-electric-bullet ros-electric-executive-smach ros-electric-documentation libgl2ps0 libncurses5-dev ros-electric-robot-model texlive-fonts-recommended python-lxml libwxgtk2.8-dev daemontools libxxf86vm-dev libqhull-dev libavahi-client-dev ros-electric-geometry libgl2ps-dev libcurl4-openssl-dev assimp-dev libusb-1.0-0-dev libopencv2.3 ros-electric-diagnostics-monitors libsdl1.2-dev libjs-underscore libsdl-image1.2 tipa libusb-dev libtinfo-dev python-tz python-sip libfltk1.1 libesd0 libfreeimage-dev ros-electric-visualization x11proto-xf86vidmode-dev python-docutils libvtk5.6 ros-electric-assimp x11proto-scrnsaver-dev libnetcdf-dev libidn11-dev libeigen3-dev joystick libhdf5-serial-1.8.4 ros-electric-joystick-drivers texlive-fonts-recommended-doc esound-common libesd0-dev tcl8.5-dev ros-electric-multimaster-experimental ros-electric-rx libaudio-dev ros-electric-ros-tutorials libwxbase2.8-dev ros-electric-visualization-common python-sip-dev ros-electric-visualization-tutorials libfltk1.1-dev libpulse-dev libnetpbm10 python-markupsafe openni-dev tk8.5-dev wx2.8-headers freeglut3-dev libavahi-common-dev python-roman python-jinja2 ros-electric-robot-model-visualization libxss-dev libqhull5 libaa1-dev ros-electric-eigen freeglut3 ros-electric-executive-smach-visualization ros-electric-common-tutorials ros-electric-robot-model-tutorials libnetcdf6 libjs-sphinxdoc python-pyparsing libaudiofile0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libcv-dev The following NEW packages will be installed libcv-dev 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. 2 not fully installed or removed. Need to get 0 B/3,114 kB of archives. After this operation, 11.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 261801 files and directories currently installed.) Unpacking libcv-dev (from .../libcv-dev_2.1.0-7build1_amd64.deb) ... 
dpkg: error processing /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb (--unpack): trying to overwrite '/usr/bin/opencv_haartraining', which is also in package libopencv2.3-bin 2.3.1+svn6514+branch23-12~oneiric dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I've tried everything people recommend on the internet, like: sudo apt-get clean sudo apt-get autoremove sudo apt-get update sudo apt-get upgrade sudo apt-get -f install I've also tried to install the Synaptic manager, but it doesn't let me install anything. As you can see, nothing works, so I'm desperate! I'm using Ubuntu 11.10, 64 bits. Thanks!!

    Read the article

  • Warm Reception By Partners at EMEA Manageability Forum

    - by Get_Specialized!
    For the EMEA partners who were able to attend the event in Istanbul, Turkey: thank you for your attendance and feedback. As you can see, the weather kept most of us inside during the event, and at times there was even some snow. And while it may have been chilly outside, there was a warm reception from partners who traveled from all over EMEA to hear from other Oracle Specialized Partners and subject matter experts about the opportunities and benefits of Oracle Enterprise Manager and Exadata Specialization. Here you can see David Robo, Oracle Technology Director for Manageability, kicking off the event, followed later by Patrick Rood, Oracle Indirect Manageability Business. A special thank you to all the partner speakers, including Ron Tolido, VP and CTO of Application Services Continental Europe, Capgemini, who delivered a very innovative keynote in which many in attendance learned that Black Swans do exist. And while at break, interactivity among partners continued, and it was great to see such innovative partners who had listed their achieved specializations on their business cards. Here we can see Oracle Enterprise Manager customer, Turkish Oracle User Group board member and blogger Gokhan Atil sharing his product experiences with others attending. Additionally, Christian Trieb of Paragon Data shared with other partners what the German Oracle User Group (DOAG) is doing around manageability, along with an invitation to submit papers for their next event. Here we can see, at one of the breaks, one of the event organizers, Javier Puerta (left), Oracle Director of Partner Programs, joined by Sebastiaan Vingerhoed (middle), Oracle EE & CIS Manager Manageability and speaker on Managing the Application Lifecycle, and Julian Dontcheff (right), Global Head of Database Management at Accenture. Below is Julian Dontcheff delivering his partner presentation on Exadata and Lifecycle Management. Just after his plane landed and a one-hour Turkish taxi ride to the event location, Julian still took the time to sit down with me and provide some extra insights on his experiences of managing enterprise infrastructure with Oracle Enterprise Manager. Below is one of the Oracle Enterprise Manager Product Management team, Mark McGill, Oracle Principal Product Manager, presenting to partners on how you can perform chargeback and metering with Oracle Enterprise Manager 12c Cloud Control. Overall, it was a great event, and an extra thank you to those OPN Specialized Partners who presented, to the partners who attended, and to the Oracle team members who organized the event and presented.

    Read the article

  • Install of AppFabric RC stops AppFabric Monitoring (some traps for young players)

    - by Rob Addis
    I uninstalled AppFabric Beta 2 and installed AppFabric RC. The AppFabricEventCollection Service is started (it runs under Local Service, which is a db_owner on the Monitoring Database, to prove this wasn’t the issue). The SQLServerAgent Service is started. Nothing is being written to the Monitoring DB staging table, and thus nothing is being written to the event tables or seen in the AppFabric Dashboard. Nothing has been written to the following event logs:     - Microsoft-Windows-Application Server-System Services\Admin     - Microsoft-Windows-Application Server-System Services\Operational The Microsoft-Windows-Application Server-System Services\Debug event log is not shown in the event viewer. The WCF configuration appears fine; the connection string to the Monitoring DB is correct. Monitoring is set to “Trouble Shooting” and no errors are shown on the “Configure WCF and WF for Application” dialog. So the problem seemed to lie with either AppFabric (which writes to the event log) or the AppFabricEventCollection Service. I thought I was flummoxed... However, one of my colleagues asked: have you checked the etwProviderId? I was using a config which was created under AppFabric Beta 2 and which had a different etwProviderId. So I deleted the following section and all other references to AppFabric monitoring from the web.config, then recreated them using the IIS “Configure WCF and WF for Application” dialog and set the level to TroubleShooting.         <diagnostics etwProviderId="6b44a7ff-9db4-4723-b8cf-1b584bf1591b">             <endToEndTracing propagateActivity="true" messageFlowTracing="true" />         </diagnostics>   I then called a service to create some log entries. Still nothing was written to the Monitoring DB staging table... I checked the Microsoft-Windows-Application Server-System Services\Admin event log. It had the following entry: Configuration error. Please see the details to correct the problem. Detailed information: Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\xxx.SOA.Framework\xxx.SOA.Framework.MockServices\SimpleServiceParent\web.config Error: Cannot read configuration file due to insufficient permissions    System.UnauthorizedAccessException: Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\xxx.SOA.Framework\IAG.SOA.Framework.MockServices\SimpleServiceParent\web.config Error: Cannot read configuration file due to insufficient permissions   And guess who the user was... Local Service. Yes, yes, I should have used a better user in the AppFabric RC setup to run the AppFabricEventCollection Service under! So I changed the user to a more appropriate one, removed Local Service as a db_owner, and hey presto!

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging; I respect him and learn a lot from him. With everybody writing something new on this subject, I decided to start a SQL Server 2012 analytic functions series. SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is difficult to explain this in words, so I will attempt a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty,
            CUME_DIST() OVER (ORDER BY SalesOrderID) AS CDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY CDist DESC
        GO

    The above query will give us the following result: Now let us understand the formula behind CUME_DIST and why the values for SalesOrderID = 43670 are 1. Let us take another example and be clear about why the values for SalesOrderID = 43667 are 0.5. Now let us enhance the same example, use PARTITION BY in the OVER clause, and see the results. Run the following query in SQL Server 2012:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID DESC, CDist DESC
        GO

    Now let us see the result of this query. We have changed the ORDER BY clause and are partitioning by SalesOrderID. You can see that the CUME_DIST() function now gives us different results, and we see the value 1 multiple times: because we are partitioning, we get a CUME_DIST() value within each group of SalesOrderID. CUME_DIST() was a long-awaited analytic function, and I am glad to see it in SQL Server 2012. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
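    The formula behind CUME_DIST is simple: for each row it returns (number of rows with an ORDER BY value less than or equal to the current row's value) divided by (total rows in the partition), so ties share the same value and the highest value in each partition is always 1. A small sketch (my own illustration, not from the original post) computing it by hand next to the built-in function:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty,
            CUME_DIST() OVER (ORDER BY SalesOrderID) AS CDist,
            -- Manual equivalent: rows whose value <= this row's value, over total rows.
            -- RANGE ... CURRENT ROW includes peer rows (ties), matching CUME_DIST.
            CAST(COUNT(*) OVER (ORDER BY SalesOrderID
                                RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS FLOAT)
                / COUNT(*) OVER () AS ManualCDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        GO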

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It’s the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but also its memory, I/O and storage subsystems are all designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a “glueless” design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability. The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB Sun Flash Accelerator F80 PCIe Cards. The Sun Server X4-8 is also the most dense x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition.  There has been a lot of innovation in Oracle’s x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8.   Elastic computing starts with the Intel processor. While Intel provides a wide range of processors each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server.Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle’s operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor with time without the need for a system level reboot.  Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing, daily, to achieve maximum performance of each workload.To learn more about Sun Server X4-8, you can find more details including the data sheet and white papers here.Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Can't install graphic drivers in 12.04

    - by yinon
    The driver is ATI/AMD proprietary FGLRX graphics driver. After clicking Activate, it asks for my password and starts downloading. Then it shows an error message: 2012-10-03 16:16:04,227 DEBUG: updating <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> 2012-10-03 16:16:06,172 DEBUG: reading modalias file /lib/modules/3.2.0-29-generic-pae/modules.alias 2012-10-03 16:16:06,383 DEBUG: reading modalias file /usr/share/jockey/modaliases/b43 2012-10-03 16:16:06,386 DEBUG: reading modalias file /usr/share/jockey/modaliases/disable-upstream-nvidia 2012-10-03 16:16:06,456 DEBUG: loading custom handler /usr/share/jockey/handlers/pvr-omap4.py 2012-10-03 16:16:06,506 WARNING: modinfo for module omapdrm_pvr failed: ERROR: modinfo: could not find module omapdrm_pvr 2012-10-03 16:16:06,509 DEBUG: Instantiated Handler subclass __builtin__.PVROmap4Driver from name PVROmap4Driver 2012-10-03 16:16:06,682 DEBUG: PowerVR SGX proprietary graphics driver for OMAP 4 not available 2012-10-03 16:16:06,682 DEBUG: loading custom handler /usr/share/jockey/handlers/cdv.py 2012-10-03 16:16:06,727 WARNING: modinfo for module cedarview_gfx failed: ERROR: modinfo: could not find module cedarview_gfx 2012-10-03 16:16:06,728 DEBUG: Instantiated Handler subclass __builtin__.CdvDriver from name CdvDriver 2012-10-03 16:16:06,728 DEBUG: cdv.available: falling back to default 2012-10-03 16:16:06,772 DEBUG: Intel Cedarview graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,772 DEBUG: loading custom handler /usr/share/jockey/handlers/vmware-client.py 2012-10-03 16:16:06,781 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet 2012-10-03 16:16:06,781 DEBUG: Instantiated Handler subclass __builtin__.VmwareClientHandler from name VmwareClientHandler 2012-10-03 16:16:06,795 DEBUG: VMWare Client Tools availability undetermined, adding to pool 2012-10-03 16:16:06,796 DEBUG: loading custom handler /usr/share/jockey/handlers/fglrx.py 2012-10-03 16:16:06,801 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:06,805 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriverUpdate from name FglrxDriverUpdate 2012-10-03 16:16:06,805 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,833 DEBUG: ATI/AMD proprietary FGLRX graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:06,836 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:06,840 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriver from name FglrxDriver 2012-10-03 16:16:06,840 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,873 DEBUG: ATI/AMD proprietary FGLRX graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,873 DEBUG: loading custom handler /usr/share/jockey/handlers/dvb_usb_firmware.py 2012-10-03 16:16:06,925 DEBUG: Instantiated Handler subclass __builtin__.DvbUsbFirmwareHandler from name DvbUsbFirmwareHandler 2012-10-03 16:16:06,926 DEBUG: Firmware for DVB cards not available 2012-10-03 16:16:06,926 DEBUG: loading custom handler /usr/share/jockey/handlers/nvidia.py 2012-10-03 16:16:06,961 WARNING: modinfo for module nvidia_96 failed: ERROR: modinfo: could not find module nvidia_96 2012-10-03 16:16:06,967 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96 from name NvidiaDriver96 2012-10-03 16:16:06,968 DEBUG: nvidia.available: falling back to default 
2012-10-03 16:16:06,980 DEBUG: XorgDriverHandler(nvidia_96, nvidia-96, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11
2012-10-03 16:16:06,980 DEBUG: NVIDIA accelerated graphics driver not available
2012-10-03 16:16:06,983 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current
2012-10-03 16:16:06,987 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrent from name NvidiaDriverCurrent
2012-10-03 16:16:06,987 DEBUG: nvidia.available: falling back to default
2012-10-03 16:16:07,015 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool
2012-10-03 16:16:07,018 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates
2012-10-03 16:16:07,021 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrentUpdates from name NvidiaDriverCurrentUpdates
2012-10-03 16:16:07,022 DEBUG: nvidia.available: falling back to default
2012-10-03 16:16:07,066 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool
2012-10-03 16:16:07,069 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates
2012-10-03 16:16:07,072 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173Updates from name NvidiaDriver173Updates
2012-10-03 16:16:07,073 DEBUG: nvidia.available: falling back to default
2012-10-03 16:16:07,105 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool
2012-10-03 16:16:07,112 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
2012-10-03 16:16:07,118 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173 from name NvidiaDriver173
2012-10-03 16:16:07,119 DEBUG: nvidia.available: falling back to default
2012-10-03 16:16:07,159 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool
2012-10-03 16:16:07,166 WARNING: modinfo for module nvidia_96_updates failed: ERROR: modinfo: could not find module nvidia_96_updates
2012-10-03 16:16:07,171 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96Updates from name NvidiaDriver96Updates
2012-10-03 16:16:07,171 DEBUG: nvidia.available: falling back to default
2012-10-03 16:16:07,188 DEBUG: XorgDriverHandler(nvidia_96_updates, nvidia-96-updates, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11
2012-10-03 16:16:07,188 DEBUG: NVIDIA accelerated graphics driver (post-release updates) not available
2012-10-03 16:16:07,188 DEBUG: loading custom handler /usr/share/jockey/handlers/madwifi.py
2012-10-03 16:16:07,195 WARNING: modinfo for module ath_pci failed: ERROR: modinfo: could not find module ath_pci
2012-10-03 16:16:07,195 DEBUG: Instantiated Handler subclass __builtin__.MadwifiHandler from name MadwifiHandler
2012-10-03 16:16:07,196 DEBUG: Alternate Atheros "madwifi" driver availability undetermined, adding to pool
2012-10-03 16:16:07,196 DEBUG: loading custom handler /usr/share/jockey/handlers/sl_modem.py
2012-10-03 16:16:07,213 DEBUG: Instantiated Handler subclass __builtin__.SlModem from name SlModem
2012-10-03 16:16:07,234 DEBUG: Software modem not available
2012-10-03 16:16:07,234 DEBUG: loading custom handler /usr/share/jockey/handlers/broadcom_wl.py
2012-10-03 16:16:07,239 WARNING: modinfo for module wl failed: ERROR: modinfo: could not find module wl
2012-10-03 16:16:07,277 DEBUG: Instantiated Handler subclass __builtin__.BroadcomWLHandler from name BroadcomWLHandler
2012-10-03 16:16:07,277 DEBUG: Broadcom STA wireless driver availability undetermined, adding to pool
2012-10-03 16:16:07,278 DEBUG: all custom handlers loaded
2012-10-03 16:16:07,278 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027D8sv00001043sd000082EAbc04sc03i00')
2012-10-03 16:16:07,568 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'}
2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,699 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'}
2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,699 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0000v0000p0000e0000-e0,5,kramlsfw6,')
2012-10-03 16:16:07,704 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'}
2012-10-03 16:16:07,704 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,704 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027DAsv00001043sd00008179bc0Csc05i00')
2012-10-03 16:16:07,707 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801'}
2012-10-03 16:16:07,707 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C01:')
2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0B00:')
2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001969d00001026sv00001043sd00008304bc02sc00i00')
2012-10-03 16:16:07,710 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'atl1e'}
2012-10-03 16:16:07,710 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'atl1e', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,710 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0003v04F2p0816e0111-e0,1,4,11,14,k71,72,73,74,75,77,79,7A,7B,7C,7D,7E,7F,80,81,82,83,84,85,86,87,88,89,8A,8C,8E,96,98,9E,9F,A1,A3,A4,A5,A6,AD,B0,B1,B2,B3,B4,B7,B8,B9,BA,BB,BC,BD,BE,BF,C0,C1,C2,F0,ram4,l0,1,2,sfw')
2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'}
2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'}
2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,711 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:pcspkr')
2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr'}
2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,712 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp'}
2012-10-03 16:16:07,712 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,712 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v1D6Bp0001d0302dc09dsc00dp00ic09isc00ip00')
2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0019v0000p0001e0000-e0,1,k74,ramlsfw')
2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'}
2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'}
2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C04:')
2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:eisa')
2012-10-03 16:16:07,725 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027CCsv00001043sd00008179bc0Csc03i20')
2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:Fixed MDIO bus')
2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000029C0sv00001043sd000082B0bc06sc00i00')
2012-10-03 16:16:07,731 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v045Ep0766d0101dcEFdsc02dp01ic01isc01ip00')
2012-10-03 16:16:07,777 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio'}
2012-10-03 16:16:07,777 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio', 'jockey_handler': 'KernelModuleHandler'}
2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0F03:PNP0F13:')
2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0000:')
2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001002d000095C5sv0000174Bsd0000E400bc03sc00i00')
2012-10-03 16:16:08,072 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx_updates', 'package': 'fglrx-updates'}
2012-10-03 16:16:08,133 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None
2012-10-03 16:16:08,134 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:16:08,072 DEBUG: found match in handler pool xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates))
2012-10-03 16:16:08,136 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates
2012-10-03 16:16:08,147 DEBUG: fglrx.available: falling back to default
2012-10-03 16:16:08,173 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None
2012-10-03 16:16:08,173 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:16:08,162 DEBUG: got handler xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates))
2012-10-03 16:16:08,173 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx', 'package': 'fglrx'}
2012-10-03 16:16:08,184 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None
2012-10-03 16:16:08,184 DEBUG: fglrx is not the alternative in use
2012-10-03 16:16:08,173 DEBUG: found match in handler pool xorg:fglrx([FglrxDriver, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver)
2012-10-03 16:16:08,187 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx
2012-10-03 16:16:08,190 DEBUG: fglrx.available: falling back to default
2012-10-03 16:16:08,216 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None
. . .
2012-10-03 16:18:10,552 DEBUG: install progress initramfs-tools 62.500000
2012-10-03 16:18:22,249 DEBUG: install progress libc-bin 62.500000
2012-10-03 16:18:23,251 DEBUG: Selecting previously unselected package dkms.
(Reading database ... 142496 files and directories currently installed.)
Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ...
Selecting previously unselected package fakeroot.
Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ...
Selecting previously unselected package fglrx-updates.
Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Selecting previously unselected package fglrx-amdcccle-updates.
Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured
dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured
dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured
dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured
Setting up dkms (2.2.0.3-1ubuntu3) ...
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
Setting up fakeroot (1.18.2-1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode.
Setting up fglrx-updates (2:8.960-0ubuntu1.1) ...
update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode.
update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode.
update-initramfs: deferring update (trigger activated)
Loading new fglrx-updates-8.960 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-29-generic-pae
Building for architecture i686
Building initial module for 3.2.0-29-generic-pae
Done.
fglrx_updates:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/
depmod......
DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
 libxss1
 chromium-codecs-ffmpeg
 chromium-browser
 chromium-browser-l10n
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
2012-10-03 16:18:23,256 ERROR: Package failed to install:
Selecting previously unselected package dkms.
(Reading database ... 142496 files and directories currently installed.)
Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ...
Selecting previously unselected package fakeroot.
Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ...
Selecting previously unselected package fglrx-updates.
Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Selecting previously unselected package fglrx-amdcccle-updates.
Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
dpkg: error processing libxss1 (--configure): package libxss1 is already installed and configured
dpkg: error processing chromium-codecs-ffmpeg (--configure): package chromium-codecs-ffmpeg is already installed and configured
dpkg: error processing chromium-browser (--configure): package chromium-browser is already installed and configured
dpkg: error processing chromium-browser-l10n (--configure): package chromium-browser-l10n is already installed and configured
Setting up dkms (2.2.0.3-1ubuntu3) ...
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
Setting up fakeroot (1.18.2-1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode.
Setting up fglrx-updates (2:8.960-0ubuntu1.1) ...
update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode.
update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode.
update-initramfs: deferring update (trigger activated)
Loading new fglrx-updates-8.960 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-29-generic-pae
Building for architecture i686
Building initial module for 3.2.0-29-generic-pae
Done.
fglrx_updates:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/
depmod......
DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
 libxss1
 chromium-codecs-ffmpeg
 chromium-browser
 chromium-browser-l10n
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
2012-10-03 16:18:23,590 WARNING: /sys/module/fglrx_updates/drivers does not exist, cannot rebind fglrx_updates driver
2012-10-03 16:18:43,601 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:43,601 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:43,617 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:43,617 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,143 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,144 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,154 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,154 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,182 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,182 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking
2012-10-03 16:18:54,215 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,215 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,229 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,229 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,268 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,268 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,279 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,279 DEBUG: fglrx_updates is not the alternative in use
2012-10-03 16:18:54,298 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,298 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking
2012-10-03 16:18:57,828 DEBUG: Shutting down

I don't know how to troubleshoot this from the log file alone; could somebody assist me, please? You can download the full log at: https://www.dropbox.com/s/a59d2hyabo02q5z/jockey.log

    Read the article

  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4:

    - Two simple classes and one association between them (one to many);
    - One condition to filter out soft-deleted entities (WHERE Deleted = 0);
    - 100 records in the database;
    - A simple query:

    var l = ctx.Person.Include("Address").Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497));

    This will produce the following SQL:

    SELECT
        [Extent1].[Id] AS [Id],
        [Extent1].[FullName] AS [FullName],
        [Extent1].[AddressId] AS [AddressId],
        [Extent202].[Id] AS [Id1],
        [Extent202].[Name] AS [Name],
        [Extent202].[Number] AS [Number]
    FROM [dbo].[Person] AS [Extent1]
    LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id])
    LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id])
    LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id])
    LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id])
    LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id])
    ...
    WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number]))
    ...

    And it will result in 680 MB of memory being taken! Now, Entity Framework has been historically known for producing less-than-optimal SQL, but 680 MB for 100 entities?! According to Microsoft, the problem will be addressed in the next version; there is a Connect issue open for it. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming in version 5, but by reading it, I got even more concerned: “Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache.” Say what?! The next version of Entity Framework will spawn timer threads?! When Code First came along, I thought it was a step in the right direction. Sure, it didn’t include some things that NHibernate had offered for quite some time – for example, different strategies for Id generation that do not rely on IDENTITY columns (which makes INSERT batching impossible), or support for enumerated types – but I thought these would come in time. Now, enumerated types have arrived, but so have… timer threads! I’m afraid Entity Framework is becoming a monster.
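    For contrast, here is a sketch of the SQL one might write by hand for the same query, using only the Person/Address schema shown above (the aliases are mine, and this is an illustration rather than anything EF emits): a single join of Address, with the soft-delete condition applied once.

    -- Hand-written equivalent of the LINQ query above (a sketch, assuming the
    -- Person/Address schema from the post): one join, one filter, no repeats.
    SELECT p.[Id], p.[FullName], p.[AddressId],
           a.[Id] AS [Id1], a.[Name], a.[Number]
    FROM [dbo].[Person] AS p
    LEFT OUTER JOIN [dbo].[Address] AS a
        ON a.[Id] = p.[AddressId] AND a.[Deleted] = 0   -- soft-delete filter, applied once
    WHERE (a.[Name] = N'317 Oak Blvd.' AND a.[Number] = 926)
       OR (a.[Name] = N'891 White Milton Drive' AND a.[Number] = 497);

    Whatever the ORM generates, this is roughly the shape the optimizer should be handed: one scan of Person, one join, one predicate.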

    Read the article

  • Refreshing imported MySQL data with MySQL for Excel

    - by Javier Rivera
    Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since MySQL for Excel 1.3.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone.

    As some users suggested in our forums, we should be maintaining the link between tables and Excel not only when editing data through the Edit MySQL Data option, but also when importing data via Import MySQL Data. Before 1.3.0 this process only provided you with an offline copy of the table's data in Excel, and you had no way to refresh that information from the DB later on. Now, with this new feature, we'll show you how easy it is to work with the latest available information at all times. This feature is transparent to you (it doesn't require additional steps to work, as long as you had the Create an Excel Table for the imported MySQL table data option enabled; to ensure you have this option checked, click Advanced Options... after the Import Data dialog is displayed). This blog post assumes you already know how to import data into Excel; you can always take a look at our previous post How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel if you need further reference on that topic.

    After importing data from a MySQL table into Excel, you can refresh the data in three ways:

    1. Right-click over the range of the imported data to show the pop-up menu, then click the Refresh button to obtain the latest copy of the data in the table.
    2. Click the Refresh button on the Data ribbon.
    3. Click the Refresh All button on the Data ribbon (beware: this will refresh all Excel tables in the workbook).

    Please take note of a couple of details here. The first one is about the size of the table: if, by the time you refresh the table, new columns have been added to it, and you originally imported all columns, the table will grow to the right. The same applies to rows: if the table has new rows and you did not limit the results, the table will grow to the bottom of the sheet in Excel. The second detail you should take into account is that this operation will overwrite any changes done to the cells after the table was originally imported or previously refreshed.

    Now with this new feature, imported data remains linked to the data source and is available to be updated at all times. It empowers you to always work with the latest version of the imported MySQL data. We hope you like this new feature and give it a try! Remember that your feedback is very important to us, so drop us a message with your comments and suggestions for this or other features, and follow us on our social media channels:

    MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/
    MySQL for Excel forum: http://forums.mysql.com/list.php?172
    Facebook: http://www.facebook.com/mysql
    YouTube channel: https://www.youtube.com/user/MySQLChannel

    Thanks!

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.
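    To be concrete about that last warning, the supported way to get a database back is the plain BACKUP and RESTORE commands, not source control. A minimal sketch follows; the database name and file path here are hypothetical examples, not anything from our environment:

    -- Back up the production database (name and path are examples only)
    BACKUP DATABASE [MyProdDb] TO DISK = N'E:\Backups\MyProdDb.bak' WITH INIT;

    -- Recover it, on the same or another server
    RESTORE DATABASE [MyProdDb] FROM DISK = N'E:\Backups\MyProdDb.bak' WITH REPLACE;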

    Read the article

  • At the Java DEMOgrounds - Oracle’s Java Embedded Suite 7.0

    - by Janice J. Heiss
    The Java Embedded Suite 7.0, a new packaged offering that facilitates the creation of applications across a wide range of embedded systems, including network appliances, healthcare devices, home gateways, and routers, was demonstrated by Oleg Kostukovsky of Oracle's Java Embedded Global Business Unit. He presented a device-to-cloud application that relied upon a scan station connected to Java demos throughout JavaOne. This application allows an NFC tag, distributed on a handout given to attendees, to be scanned to gather various kinds of data. "A raffle allows attendees to check in at six unique demos and qualify for a prize," explained Kostukovsky. "At the same time, we are collecting data both from NFC tags and sensors. We have a sensor attached to the back of the skin page that collects temperature, humidity, light intensity, and motion data at each pod. So, all of this data is collected using an application running on a small device behind the scan station."

    "Analytics are performed on the network using Java Embedded Suite and technology from Oracle partners SeeControl, Hitachi, and Globalscale," Kostukovsky said. Next, he showed me a data visualization web site displaying the sensory, environmental, and scan data that is collected on the device and pushed into the cloud. The Oracle product that enabled all of this, Java Embedded Suite 7.0, was announced in late September. "You can see all kinds of data coming from the stations in real time -- temperature, power consumption, light intensity and humidity," explained Kostukovsky. "We can identify trends and look at sensory data and see all the trends of all the components. It uses a Java application written by a partner, SeeControl. So we are using a Java app server and web server and a database."

    The Market for Java Embedded Suite 7.0

    "It's mainly geared to machine-to-machine applications, because the overall architecture applies across multiple industries – telematics, transportation, industrial automation, smart metering, etc. This architecture is one in which the network connects to sensory devices and then pre-analyzes the data from these devices, after which it pushes the data to the cloud for processing and visualization. So we are targeting all those industries with those combined solutions. There is a strong interest from Telcos, from carriers, who are now moving more and more into the space of providing full services for their interim applications. They are looking to deploy solutions that will provide a full service to those who are building M-to-M applications."

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-20

    - by Bob Rhubart
    SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews | www.oracle.com
    Featured this week on the OTN Architect Homepage, along with the latest articles, white papers, blogs, events, and other resources for software architects.

    OTN Virtual Developer Day - Java - APAC
    Tuesday March 27th, 2012. 9:30 am to 2:00pm IST / 12:00pm to 4.30pm SGT / 3.00pm - 7.30pm AEDT.

    Oracle Virtualization Newsletter - March Edition | www.oracle.com
    News, white papers, webcasts, events, blogs, and more -- all focused on Oracle Virtualization products.

    7 Signs an Enterprise is getting the post-PC thing | Ron Tolido | www.capgemini.com
    Capgemini's Ron Tolido shares "indicators for enterprises that actually understand the power of mobility and the post-PC era."

    Gartner: Personal Cloud Will Replace the Personal Computer as the Center of Users' Digital Lives | www.gartner.com
    The change, says Gartner, "will require enterprises to fundamentally rethink how they deliver applications and services to users."

    Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH | www.neooug.org
    More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.

    Oracle Hardware Systems: The Extreme Performance Tour - Dates and Locations Worldwide | www.oracle.com
    Get the inside track on Oracle's hardware strategy and product roadmap from the people who know Oracle hardware best. And be sure to meet our global experts in the Extreme Performance exhibition area. Click the link for dates and locations worldwide.

    Oracle's ZFS Storage Appliance Simulator | Steen Schmidt | blogs.oracle.com
    Take a test drive.

    Oracle Access Manager 11g - useful links | Dmitry Nefedkin | blogs.oracle.com
    Dmitry Nefedkin shares a list of links to useful resources for those interested in Oracle Access Manager 11g.

    Oracle Linux Online Forum - March 27 | event.on24.com
    Date: Tuesday, March 27, 2012. Time: 9:30 AM PT / 12:30 PM ET.
    Leading Innovations in Enterprise Linux, hosted by Oracle executives Edward Screven and Wim Coekaerts.
    Customer Presentation: How Oracle Helps Reduce Cost and Improve Performance of Database Applications at Progressive Insurance. Speaker: John Dome.
    What's New in Oracle Linux. Speakers: Waseem Daher, Chris Mason, Elena Zannoni, Lenz Grimmer.
    Get More Value from your Linux Vendor. Speakers: Sergio Leunissen, Chris Mason, Monica Kumar.

    Thought for the Day
    "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." —Poul Anderson

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs.

    Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either.

    Here are some of the ways this characteristic presents itself:

    - Can be used interactively: there is some environment where programmers can enter commands one by one.
    - Requires no more than one file: neither project files nor makefiles are required for running in batch mode.
    - Can easily split code across multiple files: files can reference each other, or there is some support for modules.
    - Has good support for data structures: supports structures like arrays, lists, and especially classes.
    - Supports a wide variety of features: features like networking, serialization, XML, and database connectivity are supported by standard libraries.

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature          C#        Python    shell scripting
    ---------------  --------- --------- ---------------
    Interactive      poor      strong    strong
    One file         poor      strong    strong
    Multiple files   strong    strong    moderate
    Data structures  strong    strong    poor
    Features         strong    strong    strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:

    - Scalability: already used to describe language performance, so it's not a good idea to overload it in the context of language syntax.
    - Granularity: expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures.
    - Smoothness: expresses the idea of low friction, but doesn't express anything about strength of data structures or features.

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books On-Line: "OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider."

    OLEDB Explanation: This wait type primarily happens when a linked server or remote query has been executed. The most common case in which this wait type is visible is during the execution of a linked server query. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough, or that the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs. (A quick query for checking this wait type on your own system appears at the end of this post.)

    Reducing OLEDB wait:

    - Check the linked server configuration.
    - Check disk-related Perfmon counters:
      - Average Disk sec/Read (a value consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (a value consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (a value consistently higher than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the Community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
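    As promised above, here is a minimal sketch for checking how much OLEDB wait your server has accumulated, using the standard wait-stats DMV. Keep in mind the numbers are cumulative since the last SQL Server restart, so compare two snapshots for a live picture:

    -- Cumulative OLEDB waits since the last SQL Server restart
    SELECT wait_type,
           waiting_tasks_count,   -- how many waits of this type have occurred
           wait_time_ms,          -- total time spent waiting, in milliseconds
           signal_wait_time_ms    -- portion spent waiting on the scheduler
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'OLEDB';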

    Read the article
