Search Results

Search found 47861 results on 1915 pages for 'sql update'.

Page 396/1915 | < Previous Page | 392 393 394 395 396 397 398 399 400 401 402 403  | Next Page >

  • Nepotism In The SQL Family

    - by Rob Farley
    There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that.

    But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well.

    I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but actually, admitting that almost all the LobsterPot employees are SQLFamily members… …which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef).

    You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity.

    Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily. …like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years. @rob_farley

    Read the article

  • apt-get update cannot find ubuntu servers

    - by Phrogz
    Running sudo apt-get update fails on my server (that has a 'net connection). Are the servers temporarily broken, or is my apt misconfigured and using old servers? In short, how do I fix this? Here's the output: ~$ uname -a Linux nematode 2.6.28-19-server #66-Ubuntu SMP Sat Oct 16 18:41:24 UTC 2010 i686 GNU/Linux ~$ sudo apt-get update Err http://us.archive.ubuntu.com jaunty Release.gpg Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/main Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/restricted Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/universe Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty/multiverse Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates Release.gpg Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/main Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/restricted Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/universe Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://us.archive.ubuntu.com jaunty-updates/multiverse Translation-en_US Could not resolve 'us.archive.ubuntu.com' Err http://security.ubuntu.com jaunty-security Release.gpg Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/main Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/restricted Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/universe Translation-en_US Could not resolve 'security.ubuntu.com' Err http://security.ubuntu.com jaunty-security/multiverse Translation-en_US Could not resolve 'security.ubuntu.com' Reading package lists... 
Done W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/Release.gpg Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/main/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/restricted/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/universe/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/Release.gpg Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/main/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/restricted/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/universe/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/jaunty-updates/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'us.archive.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/Release.gpg Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/main/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/restricted/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/universe/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/multiverse/i18n/Translation-en_US.bz2 Could not resolve 'security.ubuntu.com' W: Some index files failed to download, they have been ignored, or old ones used instead. W: You may want to run apt-get update to correct these problems
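    The errors above are all DNS failures ("Could not resolve"), not archive outages, so the usual first step is to check name resolution on the server itself. What follows is a minimal diagnostic sketch, not from the original post: the 8.8.8.8 resolver is just an example, and the old-releases line only applies because jaunty (9.04) is past end-of-life.

        # Confirm whether DNS works at all on this box
        nslookup us.archive.ubuntu.com      # or: host us.archive.ubuntu.com
        cat /etc/resolv.conf                # which nameservers is the resolver using?

        # If resolution fails, try a known-good public resolver (example only)
        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
        sudo apt-get update

        # If DNS turns out to be fine, note that jaunty has been archived; its
        # packages live on old-releases.ubuntu.com, so /etc/apt/sources.list
        # would need lines like:
        # deb http://old-releases.ubuntu.com/ubuntu jaunty main restricted universe multiverse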

    Read the article

  • Write, Read and Update Oracle CLOBs with PL/SQL

    - by robertphyatt
    Fun with CLOBs! If you are using Oracle and have to deal with text that is over 4000 bytes, you will probably find yourself dealing with CLOBs, which can go up to 4GB. They are pretty tricky, and it took me a long time to figure out these lessons learned. I hope they will help some down-trodden developer out there somehow. Here is my original code, which worked great on my Oracle Express Edition (for all examples, the first procedure writes a new CLOB, the next one updates an existing CLOB and the final one reads a CLOB back):

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc
            FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id;
          p_clob := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1));
        END;
        /

    As you can see, I had originally been casting everything back and forth between RAW formats using the UTL_RAW.CAST_TO_VARCHAR2() and UTL_RAW.CAST_TO_RAW() functions all over the place, but that had the nasty side effect of working great on my Oracle Express Edition on my developer box while having all the CLOBs above a certain size display garbage when read back on the Oracle test database server. So I kept working at it and came up with the following, which ALSO worked on my Oracle Express Edition on my developer box:

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDOC (CLOBHOLDERDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDOC, CLOBHOLDERDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDOC INTO lob_loc FROM TBL_CLOBHOLDERDOC
            WHERE CLOBHOLDERDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDOC INTO lob_loc
            FROM TBL_CLOBHOLDERDOC
            WHERE CLOBHOLDERDOCID = p_id;
          p_clob := DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1);
        END;
        /

    Unfortunately, by changing my code to what you see above, even though it kept working on my Oracle Express Edition, everything over a certain size started truncating after about 7950 characters on the test server!

    Here is what I came up with in the end, which is actually the simplest solution and this time worked on both my Express Edition and on the database server (note that only the read procedure was changed to fix the truncation issue, and that I let Oracle worry about converting the CLOB into a VARCHAR2 internally):

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /
        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
        BEGIN
          SELECT CLOBHOLDERDDOC INTO p_clob
            FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id;
        END;
        /

    I hope that is useful to someone!
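    A quick smoke test for the final version might look like the following anonymous block. This is an editorial sketch rather than the author's code; it assumes the TBL_CLOBHOLDERDDOC table from the post exists and that SERVEROUTPUT is on:

        DECLARE
          v_id   NUMBER;
          v_text VARCHAR2(4000);
        BEGIN
          PRC_WR_CLOB('Hello, CLOB', v_id);        -- write a new row, get its id back
          PRC_UD_CLOB('Hello again, CLOB', v_id);  -- overwrite it in place
          PRC_RD_CLOB(v_id, v_text);               -- read it back as VARCHAR2
          DBMS_OUTPUT.PUT_LINE(v_text);
        END;
        /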

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.

    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project.

    The simplest way to install all of those pre-requisites is to install SSDT itself, however a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: "This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE." http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane from the SSDT product team: "The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine." I, like many others, would rather not have to install full blown Visual Studio and so I asked:

    Is there any chance you'll ever support any of these scenarios:
    1. Installation of all build/deploy pre-requisites without installing the VS shell?
    2. TFS shipping with all of the pre-requisites for doing SSDT project build/deploys?
    3. 3rd party build servers (e.g. TeamCity) shipping with all of the requisites for doing SSDT project build/deploys?

    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again:

    "The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that."

    Again, here's the link to the thread and it's worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet
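    For context, the build step everyone wants to run without Visual Studio is just an MSBuild invocation plus a dacpac publish. A hedged sketch follows; the project name, paths and server name are placeholders, the SqlPackage.exe path is just a typical DacFx install location, and it assumes the SSDT build targets are already present on the machine, which is precisely the hard part this post is about:

        REM Build the SSDT project to produce a dacpac
        msbuild MyDatabase.sqlproj /t:Build /p:Configuration=Release

        REM Publish the dacpac using SqlPackage.exe (ships with DacFx)
        "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" ^
          /Action:Publish ^
          /SourceFile:bin\Release\MyDatabase.dacpac ^
          /TargetServerName:MYSERVER ^
          /TargetDatabaseName:MyDatabase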

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let’s have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.

        SELECT * FROM sys.dm_db_index_usage_stats
        WHERE database_id = db_id('AdventureWorks2012')

    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except instead of the various operations being generated from user requests, they are generated from system background requests.

    This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the indexes you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but for a Production system, if you see an index that is never getting any seeks, scans, or lookups, but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to it constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible.

    One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are being used for weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.
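    As a sketch of the join back to the catalog views mentioned above (standard system views, written from general knowledge rather than taken from the post; run it inside the database in question, since sys.indexes is per-database):

        SELECT DB_NAME(us.database_id) AS database_name,
               OBJECT_NAME(us.object_id, us.database_id) AS table_name,
               i.name AS index_name,
               us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
        FROM sys.dm_db_index_usage_stats AS us
        JOIN sys.indexes AS i
          ON i.object_id = us.object_id
         AND i.index_id  = us.index_id
        WHERE us.database_id = DB_ID('AdventureWorks2012');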
    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics. Then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188755.aspx Follow me on Twitter @PrimeTimeDBA
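    A minimal sketch of such a snapshot process follows; the table name and columns are editorial assumptions rather than anything from the post, and the INSERT would run from an Agent job on whatever schedule outpaces your restarts:

        -- One-time setup: a history table for periodic snapshots
        CREATE TABLE dbo.IndexUsageHistory (
            capture_time DATETIME NOT NULL DEFAULT GETDATE(),
            database_id  SMALLINT NOT NULL,
            object_id    INT NOT NULL,
            index_id     INT NOT NULL,
            user_seeks   BIGINT, user_scans   BIGINT,
            user_lookups BIGINT, user_updates BIGINT
        );

        -- Scheduled step: capture the DMV before the counters can be reset
        INSERT INTO dbo.IndexUsageHistory
               (database_id, object_id, index_id,
                user_seeks, user_scans, user_lookups, user_updates)
        SELECT database_id, object_id, index_id,
               user_seeks, user_scans, user_lookups, user_updates
        FROM sys.dm_db_index_usage_stats;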

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 19 (sys.dm_exec_query_stats)

    - by Tamarick Hill
    The sys.dm_exec_query_stats DMV is one of the most useful DMVs out there when it comes to performance tuning. If you have been keeping up with this blog series this month, you know that I started out on Day 1 reviewing many of the DMVs within the ‘exec’ namespace. I’m not sure how I missed this one considering how valuable it is, but hey, they say it’s better late than never, right?? On Day 7 and Day 8 we reviewed sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats respectively. This sys.dm_exec_query_stats DMV is very similar to those two. As a matter of fact, this DMV will return all of the information you saw in the other two DMVs, but in addition to that, you can see stats for all queries that have cached execution plans on your server. You can even see stats for statements that are run ad hoc, as long as they are still cached in the buffer pool. To better illustrate this DMV, let's have a quick look at it:

        SELECT * FROM sys.dm_exec_query_stats

    As you can see, there is a lot of information returned from this DMV. I won't go into detail about each and every one of these columns, but I will touch on a few of them briefly. The first column is the ‘sql_handle’, which, if you remember from Day 4 of our blog series, you can use to extract the actual SQL text that was executed. The next columns, statement_start_offset and statement_end_offset, provide you a way of extracting the exact SQL statement that was executed as part of a batch. The plan_handle column is used to extract the execution plan that was used, which we talked about during Day 5 of this blog series. Later in the result set, you have columns to identify how many times a particular statement was executed, how much CPU time it used, how many reads/writes it performed, the duration, how many rows were returned, etc. These columns provide you with a solid avenue to begin your performance optimization.

    The last column I will touch on is the query_plan_hash column. A lot of times when you have dynamic SQL running on your server, you have similar statements with different parameter values being passed in. Many times these types of statements will get similar execution plans, and a binary hash value can be generated based on these similar plans. This query plan hash can be used to find the cost of all queries that have similar execution plans, and then you can tune based on that plan to improve the performance of all of the individual queries. This is a very powerful way of identifying and tuning ad-hoc statements that run on your server.

    As I stated earlier, this sys.dm_exec_query_stats DMV is a very powerful and recommended DMV for performance tuning. You are able to quickly identify statements that are running on your server and analyze their impact on system resources. Using this DMV to track down the biggest performance killers on your server will allow you to make the biggest gains once you focus your tuning efforts on those top offenders. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms189741.aspx Follow me on Twitter @PrimeTimeDBA
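    To make the sql_handle and offset columns described above concrete, a common pattern (standard T-SQL, an editorial sketch rather than the author's code) is to CROSS APPLY sys.dm_exec_sql_text and carve the individual statement out of the batch, since the offsets are byte positions into Unicode text:

        SELECT TOP (10)
               qs.execution_count,
               qs.total_worker_time,   -- CPU time, in microseconds
               SUBSTRING(st.text,
                         qs.statement_start_offset / 2 + 1,
                         (CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                          END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;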

    Read the article

  • Problem when trying to update "Duplicate sources.list"

    - by Coca Akat
    I got this problem when trying to update using sudo apt-get update:

        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems

    This is my sources.list:

        # deb cdrom:[Ubuntu 13.10 _Saucy Salamander_ - Release amd64 (20131016.1)]/ saucy main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu saucy main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu saucy-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy universe
        deb http://archive.ubuntu.com/ubuntu saucy-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu saucy multiverse
        deb http://archive.ubuntu.com/ubuntu saucy-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu saucy-security main restricted
        deb http://archive.ubuntu.com/ubuntu saucy-security universe
        deb http://archive.ubuntu.com/ubuntu saucy-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu saucy main
        # deb-src http://extras.ubuntu.com/ubuntu saucy main
        # deb http://archive.canonical.com/ saucy partner
        # deb-src http://archive.canonical.com/ saucy partner
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
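    The warnings point at the lone "deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse" line near the end of the file, which repeats a component already covered by the earlier "saucy-backports main restricted universe multiverse" entry. A hedged fix (an editorial sketch, not from the question; the sed pattern assumes the line's spacing matches exactly) is to comment that line out and refresh:

        # Show every backports entry so the duplicate is visible
        grep -n "saucy-backports" /etc/apt/sources.list

        # Comment out the redundant multiverse-only line, then update
        sudo sed -i 's|^deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse$|# &|' /etc/apt/sources.list
        sudo apt-get update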

    Read the article

  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three “buckets”:

    Concepts
    In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do?) How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don’t always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get it, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation – I’ll explain what I mean in a moment.

    Reference
    The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout on a panel, things like that. Think Dictionary. The only reference I trust for this information is Books Online – presentations are fine, but we’re talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference materials in their slides. Even if you give me the presentation material later, it’s not really a searchable, readable medium.

    How To
    A how-to for me is an example, or even better, a tutorial about an example. Whatever it is shows me a practical use for the concepts and of course involves the syntax. The important thing here is that you need to be able to separate out the example the person is showing you from the stuff you need to know. I can’t tell you how many times folks have told me, “well, sure, if yours is red then that works. But mine is blue.” And I have to explain, “then use “blue” for the search word here.” You get the idea. No one will do your work for you – the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a How To works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

    The order here isn’t actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only primary thing I try to enforce is to read something from each of them. It’s dangerous to base your work on any single example, reference or concept.

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog. Once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86
    Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You likely will need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan
    The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface: in this case, set the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System
    Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the grub bootloader loads, the default boot image is the Solaris Text Installer. On the grub menu, select Automated Installer and Ops Center takes over from there.

    Lessons
    The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

    Read the article

  • SQL Server version error: version 655 needed but your computer has version 612 or earlier

    - by xpugur
    Hi, I have an error like: "dbFileName cannot be opened because it is version 655. This server supports version 612 and earlier." What should I do? A friend of mine built the project, but I guess he did it with SQL Server 2008, and I have SQL Server 2005. Is that the reason I get this error, and can I fix it? If I install a newer version of SQL Server, will that solve the problem? SQL Server 2008 R2 Express is available at www.microsoft.com/express/Database/default.aspx#Installation_Options; can that be the solution? Thank you... By the way, I found a link to an update: http://www.microsoft.com/downloads/details.aspx?FamilyID=E1109AEF-1AA2-408D-AA0F-9DF094F993BF&displaylang=en. Is this a solution to my problem?
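    For background (general knowledge, not from the question): 612 is the internal database version used by SQL Server 2005 and 655 is SQL Server 2008, and attaching only works upward, so an .mdf created by 2008 can never be opened by a 2005 engine. Installing SQL Server 2008 (or later) Express would resolve it. You can confirm what you are currently running with:

        -- Standard T-SQL: report the engine version and edition
        SELECT SERVERPROPERTY('ProductVersion') AS product_version,
               SERVERPROPERTY('Edition') AS edition;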

    Read the article

  • Can a sub-procedure lock and modify the same rows FOR UPDATE that its calling procedure al

    - by RenderIn
    Will the following code lead to a deadlock or should it work without any problem? I've got something similar and it's working, but I didn't think it would. I thought the parent procedure's lock would have resulted in a deadlock for the child procedure, but it doesn't seem to. If it works, why? My guess is that the nested FOR UPDATE is not running into a deadlock because it's smart enough to realize that it is being called by the same procedure that has the current lock. Would this be a deadlock if FOO_PROC was not a nested procedure?

        DECLARE
          FOO_PROC(c_someName VARCHAR2) as
            cursor c1 is
              select * from awesome_people
              where person_name = c_someName FOR UPDATE;
          BEGIN
            open c1;
            update awesome_people set person_name = UPPER(person_name);
            close c1;
          END FOO_PROC;

          cursor my_cur is
            select * from awesome_people
            where person_name = 'John Doe' FOR UPDATE;
        BEGIN
          for onerow in c1 loop
            FOO_PROC(onerow.person_name);
          end loop;
        END;

    Read the article

  • Visual Web Developer 2008 Express wanting SQL Server 2005 instead of 2008?

    - by J. Pablo Fernández
    When I double-click on an .mdf file in Visual Web Developer 2008 (NerdDinner.mdf), it says: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL: http://go.microsoft.com/fwlink/?LinkId=49251" The URL of course points to SQL Server Express 2008. I have that one installed and running. Any ideas why I am getting that message?

    Read the article

  • Convert ADO.Net EF Connection String To Be SQL Azure Cloud Connection String Compatible!?

    - by Goober
    The Scenario
    I have written a Silverlight 3 application that uses a SQL Server database. I'm moving the application onto the cloud (Azure platform). In order to do this I have had to set up my database on SQL Azure. I am using the ADO.NET Entity Framework to model my database. I have got the application running on the cloud, but I cannot get it to connect to the database. Below is the original localhost connection string, followed by the SQL Azure connection string that isn't working. The application itself runs fine, but fails when trying to retrieve data.

    The Original Localhost Connection String

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;Data Source=localhost;
             Initial Catalog=InmarsatZenith;
             Integrated Security=True;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Converted SQL Azure Connection String

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;
             provider=System.Data.SqlClient;
             provider connection string=&quot;Server=tcp:MYSERVER.ctp.database.windows.net;
             Database=InmarsatZenith;
             UserID=MYUSERID;Password=MYPASSWORD;
             Trusted_Connection=False;
             MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    The Question
    Anyone know if this connection string for SQL Azure is correct? Help greatly appreciated.
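    Two things stand out against the documented SQL Azure connection-string format of that era, offered here as a hedged editorial suggestion rather than a verified fix: the SqlClient keyword is "User ID" (with a space), not "UserID", and SQL Azure logins must be suffixed with @servername. The inner connection string would then look roughly like:

        provider connection string=&quot;Server=tcp:MYSERVER.ctp.database.windows.net;
        Database=InmarsatZenith;
        User ID=MYUSERID@MYSERVER;Password=MYPASSWORD;
        Trusted_Connection=False;Encrypt=True&quot;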

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large bu

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large memory heap that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object; I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008 and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    How to Stream data from/to SQL Server BLOB fields?
    Is there a SqlFileStream like class that works with Sql Server 2005?
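    One hedged sketch of the kind of thing being asked for (an editorial illustration, not a known-good library class): a write-only Stream that appends each chunk to the BLOB with the UPDATE ... .WRITE(...) syntax, which SQL Server 2005 and later support on varbinary(max) columns, so the full serialized graph never has to exist in process memory. The table and column names here are placeholders:

        using System;
        using System.Data.SqlClient;
        using System.IO;

        // Write-only stream that appends each buffer to a varbinary(max) column
        // via UPDATE ... SET col.WRITE(@data, NULL, 0), so only one chunk at a
        // time is held in memory. SQL Server 2005+ only; names are assumptions.
        class SqlBlobAppendStream : Stream
        {
            private readonly SqlConnection _conn;
            private readonly int _stateId;

            public SqlBlobAppendStream(SqlConnection conn, int stateId)
            { _conn = conn; _stateId = stateId; }

            public override void Write(byte[] buffer, int offset, int count)
            {
                byte[] chunk = new byte[count];
                Array.Copy(buffer, offset, chunk, 0, count);
                using (var cmd = new SqlCommand(
                    "UPDATE SavedStates SET StateBlob.WRITE(@data, NULL, 0) " +
                    "WHERE Id = @id", _conn))
                {
                    cmd.Parameters.AddWithValue("@data", chunk);
                    cmd.Parameters.AddWithValue("@id", _stateId);
                    cmd.ExecuteNonQuery();  // NULL offset appends to the end
                }
            }

            // Minimal Stream plumbing for a forward-only writer
            public override bool CanRead  { get { return false; } }
            public override bool CanSeek  { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override void Flush() { }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] b, int o, int c) { throw new NotSupportedException(); }
            public override long Seek(long o, SeekOrigin s) { throw new NotSupportedException(); }
            public override void SetLength(long v) { throw new NotSupportedException(); }
        }

    The row would need to be created first with the column initialised to 0x, since .WRITE cannot append to a NULL value, and wrapping the stream in a BufferedStream before handing it to bin.Serialize() keeps the per-chunk UPDATE count reasonable. It does not cover SQL Server 2000, which lacks varbinary(max).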

    Read the article

  • Linq to Sql, Repositories, and Asp.Net MVC ViewData: How to remove redundancy?

    - by Dr. Zim
    Linq to SQL creates objects which are IQueryable and full of relations. Html helpers require specific interface objects like IEnumerable<SelectListItem>. What I think could happen:

    1. Reuse the objects from Linq to SQL without all the baggage, i.e., return POCOs from the Linq to SQL objects without additional Domain Model classes?
    2. Extract objects that easily convert to (or are) Html helper objects like the SelectListItem enumeration?

    Is there any way to do this without breaking separation of concerns? Some neat OOP trick to bridge the needs? For example, if this were within a repository, the SelectListItem wouldn't be there. The select new is a nice way to cut out an object from the Linq to SQL without the baggage, but it's still referencing a class that shouldn't be referenced:

        IEnumerable<SelectListItem> result =
            (from record in db.table
             select new SelectListItem
             {
                 Selected = record.selected,
                 Text = record.Text,
                 Value = record.Value
             }).AsEnumerable();

    Read the article

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom developed SQL Reporting 2005 project in Visual Studio that runs fine on my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by DBAs and stored in a separated locked down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

    Read the article

  • SSIS 2008 - How to read from SQL Server Compact Edition file?

    - by Gustavo Cavalcanti
    I can see "SQL Server Compact Destination" under Data Flow Destinations, but I am looking for its source counterpart. If I choose ADO.Net source and create a new connection, there's no provider for SQL CE. What am I missing? Thanks! Update: I am able to create a "Data Source" (under "Data Sources" folder in my SSIS project") that connects to an existing Sql CE file. But how can I use this Data Source in my data flow?

    Read the article

  • Dump Linq-To-Sql now that Entity Framework 4.0 has been released?

    - by DanM
    The relative simplicity of Linq-To-Sql as well as all the criticism leveled at version 1 of Entity Framework (especially, the vote of no confidence) convinced me to go with Linq-To-Sql "for the time being". Now that EF 4.0 is out, I wonder if it's time to start migrating over to it. Questions: What are the pros and cons of EF 4.0 relative to Linq-To-Sql? Is EF 4.0 finally ready for prime time? Is now the time to switch over?

    Read the article

  • What is the best way to configure a SQL Server for 50 developers?

    - by Lakhlani Prashant
    Hi, if I am running an organization that has 50 .NET developers, all using SQL Server, what is the best way to make a single SQL Server available to them? Here are some of the concerns that I want to be careful about: Should I configure database users per project or per user? Or both? Should I provide a single SQL Server instance? There are some more concerns, but I think getting answers to these two will be a good starting point.

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the INSERT statements so generated exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE
        (
          C01 VARCHAR2 (5 BYTE) NOT NULL,
          C02 NUMBER (10) NOT NULL,
          C03 NUMBER (5) NOT NULL,
          C04 NUMBER (5) NOT NULL,
          C05 VARCHAR2 (20 BYTE) NOT NULL,
          c06 VARCHAR2 (200 BYTE) NOT NULL,
          c07 VARCHAR2 (200 BYTE) NOT NULL,
          c08 NUMBER (5) NOT NULL,
          c09 NUMBER (10) NOT NULL,
          c10 VARCHAR2 (80 BYTE),
          c11 VARCHAR2 (200 BYTE),
          c12 VARCHAR2 (200 BYTE),
          c13 VARCHAR2 (4000 BYTE),
          c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL,
          c15 CHAR (1 BYTE),
          c16 CHAR (1 BYTE)
        );

    ASSUMPTIONS:
    a) I am OBLIGED to export table data as INSERT statements; I am allowed to use UPDATE statements, in order to avoid the SQL*Plus error "sp2-0027 input is too long (2499 characters)";
    b) I am OBLIGED to use SQL*Plus to execute the script so generated;
    c) Please assume that every record can contain special characters: CHR(10), CHR(13), and so on;
    d) I CAN'T use SQL Loader;
    e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.
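    A hedged sketch of the INSERT-plus-UPDATE split that assumption (a) permits, using the table above (the key values and chunk text are placeholders; the idea is that the 4000-byte c13 is written in pieces so no single statement approaches the SQL*Plus line limit):

        -- Insert the row with the short columns only
        INSERT INTO SAMPLE_TABLE (C01, C02, C03, C04, C05, c06, c07, c08, c09, c14)
        VALUES ('K1', 1, 1, 1, 'name', 'text', 'text', 1, 1, 'N');

        -- Then build up the long c13 value in chunks, each statement staying
        -- well under the ~2500-character limit
        UPDATE SAMPLE_TABLE SET c13 = 'first chunk...' WHERE C01 = 'K1' AND C02 = 1;
        UPDATE SAMPLE_TABLE SET c13 = c13 || 'second chunk...' WHERE C01 = 'K1' AND C02 = 1;
        UPDATE SAMPLE_TABLE SET c13 = c13 || CHR(10) || 'chunk after a newline...' WHERE C01 = 'K1' AND C02 = 1;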

    Read the article
