Search Results

Search found 3920 results on 157 pages for 'dave mess'.

Page 106/157 | < Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >

  • SQL SERVER – Fix: Error: 8117: Operand data type bit is invalid for sum operator

    - by pinaldave
    Here is a very interesting error I received from a reader. He attempted to use a BIT field in the SUM aggregate function and got the following error. He tried various other data types (e.g. INT, TINYINT) and was able to do the SUM, but with BIT he faced the problem.

    Error Received:

        Msg 8117, Level 16, State 1, Line 1
        Operand data type bit is invalid for sum operator.

    Reproduction of the error – set up the environment:

        USE tempdb
        GO
        -- Preparing Sample Data
        CREATE TABLE TestTable (ID INT, Flag BIT)
        GO
        INSERT INTO TestTable (ID, Flag)
        SELECT 1, 0
        UNION ALL
        SELECT 2, 1
        UNION ALL
        SELECT 3, 0
        UNION ALL
        SELECT 4, 1
        GO
        SELECT * FROM TestTable
        GO

    The following script will work fine:

        -- This will work fine
        SELECT SUM(ID) FROM TestTable
        GO

    However, the following generates the error:

        -- This will generate error
        SELECT SUM(Flag) FROM TestTable
        GO

    The workaround is to convert or cast the BIT to INT:

        -- Workaround of error
        SELECT SUM(CONVERT(INT, Flag)) FROM TestTable
        GO

    Clean up the setup:

        -- Clean up
        DROP TABLE TestTable
        GO

    Workaround: As mentioned in the script above, the workaround is to convert the BIT data type to a friendlier data type such as INT or TINYINT before aggregating. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
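    As an aside beyond the excerpt above (my own suggestion, not from the original post): since a BIT column only ever holds 0, 1 or NULL, the same total can also be produced without CONVERT. A minimal sketch against the TestTable created above:

        -- Alternative sketch: expand the BIT with CASE before summing
        SELECT SUM(CASE WHEN Flag = 1 THEN 1 ELSE 0 END) AS FlagTotal
        FROM TestTable;

        -- Or simply count the rows where the flag is set
        SELECT COUNT(*) AS FlagTotal
        FROM TestTable
        WHERE Flag = 1;

    Both return the same value as SUM(CONVERT(INT, Flag)) for this sample table.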

    Read the article

  • SQL SERVER – Technical Reference Guides for Designing Mission-Critical Solutions – A Must Read

    - by pinaldave
    Yesterday I was reading architecture reference material while helping a friend who was looking for material on this subject. While working together we were searching Twitter, Facebook and search engines for relevant material. While searching online we ended up on a very interactive reference point. Once I sent it to him, he replied that he may not need anything more after referencing this material. The best part of this article is that it gives access to various aspects of the technology through an image map. Here is the abstract of the original article from the site: The Technical Reference Guides for Designing Mission-Critical Solutions provide planning and architecture guidance for various mission-critical workloads deployed by users. These guides reflect the knowledge gained by Microsoft while working with customers on mission-critical deployments. Each guide provides not only the key technical concepts and information helpful for design, but also “lessons learned,” best practices, and references to customer case studies. Once you click on any desired topic, you will see a further detailed image map of the selected topic. Personally, once I ended up on this site, I was there for more than 2 hours clicking through various links. Read more here: Technical Reference Guides for Designing Mission-Critical Solutions Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    Database sizes are growing every day. Many organizations nowadays have more than a terabyte of data in their systems, and performance is always part of the issue. Microsoft is paying close attention to this and focusing on improving performance for data warehousing. Microsoft has recently released a whitepaper on the subject of performance tuning for data warehousing. Here is the abstract about the whitepaper from the official site: In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few Outer-Row features played. I encourage everyone interested in data warehousing to read it and see what tricks they can learn. Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS

    - by pinaldave
    “I do not know where my windows went!” “I just closed my Object Explorer and now I cannot find it.” “How do I get my original windows layout back in SQL Server Management Studio?” “How do I get the window that was on the left side back again?” For the last 2-3 years I have received more than 5 emails every single day about SSMS and its layout. It is very common for beginners to get confused when they attempt to change SQL Server Management Studio’s window layout. They often change the layout and are not able to get the original layout back. Many people never change the layout at all, which leads to an uncomfortable feeling when they go to someone else’s computer where the windows are placed differently. Today’s blog post is dedicated to all the beginners in SQL Server. It is extremely simple to reset the SSMS layout to the default layout. The default layout involves 2 major things: 1) Object Explorer on the left side and 2) the query window on the right side (about 80% of the screen estate). Personally, I am so used to this that if anything about it changes, I do not enjoy working in that environment. Well, the solution to reset the SSMS layout is very simple. One can do it in a split second. To restore the default configuration, on the Window menu, click Reset Window Layout. Have you ever used this feature? Do you feel uncomfortable when the SSMS layout is not in its default state? How do you address this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WPF Applications – Handling the Unhandled

    - by David Totzke
    Instead of just letting your application crash, you can attach a handler to Application.DispatcherUnhandledException and another to AppDomain.CurrentDomain.UnhandledException.  You wire these up in the code-behind of your application, which by default is App.xaml.cs.  You can log these errors or throw up a message Don Box and tell the user what happened.  Then you shut down the app gracefully.  You shut it down because something bad happened that you weren’t expecting, and at this point there is no guarantee as to the state of the stack or memory or anything really.  All bets are off. If, on the other hand, the method for UnhandledException is empty, and the method for DispatcherUnhandledException ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much.  I spent nearly a day trying to track down a bug that would have been obvious had something been logged or if it just crashed.  It’s my own fault I suppose.  I knew these were hooked up.  I just never suspected that there wouldn’t be any implementation at all.  Live and learn. Customs Man at Heathrow: Anything to declare, Sir? Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him. Customs Man at Heathrow: Very Good, Sir. I tend to agree. Dave Just because I can…

    Read the article

  • Getting Xbox Live via a wired network with my laptop that has internet access wirelessly

    - by Alex Franco
    I'm running the latest version (as of yesterday, anyway) of Ubuntu Desktop 64-bit, installed on my laptop if it makes a difference. I had Windows 7 preinstalled when I bought it and it worked fine with the wireless from my house, bridging the connection over a LAN to my Xbox for Live. Now with Ubuntu I tried the same setup, but I'm unfamiliar with Ubuntu so I didn't get far. The best I've got so far is wireless internet on my laptop and a wired connection to the Xbox that continually connects and disconnects. Here are my network settings. If there are fields not included, it's because they're empty on mine, or they're my MAC address or network password.

    Wireless Network 1 settings:
        Connect Automatically: Checked
        Available to all Users: Checked
        Wireless: SSID: Franco's  Mode: Infrastructure  MTU: Automatic
        IPv4 Settings: Method: Automatic (DHCP)
        IPv6 Settings: Method: Automatic

    Wired Network 1:
        Connect Automatically: Checked
        Available to all Users: Checked
        Wired: MTU: Automatic
        IPv4 Settings: Method: Automatic (DHCP)
        IPv6 Settings: Method: Automatic

    Any help would be greatly appreciated. EDIT: 6:26pm It seems to be staying connected now. Doing the network test on my Xbox, it picks up the network but cannot detect any PC. Restarting the Xbox, however, leaves my computer unable to connect, bringing up the "Wired Network disconnected" blip every minute or so again. Before I had restarted the Xbox it said "Connected 100 MB/s". Now it only says "Connecting". I did have my computer and Xbox on in this disconnected-blip cycle for a long period of time, so it may have finally connected, just without the ability to detect my laptop. I left for 2 hours or so in the middle of typing up the original question and finished posting this when I got back, then tried to mess with it a bit again, in case you're wondering why I didn't include this before... I've said too much. Forgive my long-winded fingers :p

    Read the article

  • Program error trying to generate Outlook 2013 email from Visual Basic 2010 [on hold]

    - by Dewayne Pinion
    I am using VB to send emails through Outlook. Currently we have a mix of Outlook versions at our office: 2010 and 2013, with a mix of 32-bit and 64-bit (a mess, I know). The code I have works well for Outlook 2010:

        Private Sub btnEmail_Click(sender As System.Object, e As System.EventArgs) Handles btnEmail.Click
            CreateMailItem()
        End Sub

        Private Sub CreateMailItem()
            Dim application As New Application
            Dim mailItem As Microsoft.Office.Interop.Outlook.MailItem = CType(application.CreateItem( _
                Microsoft.Office.Interop.Outlook.OlItemType.olMailItem), Microsoft.Office.Interop.Outlook.MailItem)
            'Me.a(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem)
            mailItem.Subject = "This is the subject"
            mailItem.To = "[email protected]"
            mailItem.Body = "This is the message."
            mailItem.Importance = Microsoft.Office.Interop.Outlook.OlImportance.olImportanceLow
            mailItem.Display(True)
        End Sub

    However, I cannot get this to work for 2013. I have referenced the version 15 DLL for 2013 and it seems to be backward compatible, but when I try to use the above code against 2013 (it is 64-bit) it says it cannot start Microsoft Outlook: a program error has occurred. This is happening on the Dim application statement line. I have tried googling around, but there doesn't seem to be much out there referencing 2013, and I feel that the problem here probably has more to do with 64-bit than with the software version. Thank you for any suggestions!

    Read the article

  • IASA ITARC – Denver May 6th

    - by Jeff Certain
    The Denver chapter of the International Association of Software Architects (IASA) is holding an IT Architect Regional Conference (ITARC) in Denver on May 6th. The speaker list for this conference is amazing. Paul Rayner, Dave McComb, Randy Kahle, Peter Provost, Randy Stafford, George Fairbanks – all great speakers, and from Colorado. Brandon Satrom (who also happens to be the president of the IASA Austin chapter) will also be speaking, as will some other heavy hitters (for example, Ted Farrell, Chief Architect and Senior VP of Oracle). This is an amazing line-up, and the conference is quite reasonably priced ($150 for IASA members until April 10th, including a catered lunch). I also have the privilege of being a presenter at this conference. If you’ve ever heard any of the previously named speakers, you know that they set the bar quite high. Sounds like I’m going to have to step up my game. What I get to talk about is really cool stuff. The company I work for – Colorado CustomWare – brought me on board nearly two years ago. To say there was some technical debt is somewhat… understated. Equally understated would be that management is committed to doing the right thing. Over the past two years, we’ve done significant architectural refactoring – including an effort that took the entire team offline for most of a month. We’ve reduced the application size by 50% without losing functionality. As you can imagine, this has reduced the complexity of the application, making development faster and less prone to bugs. We’ve made many other changes – moving to an agile process, training developers, moving towards a more OO architecture. The changes we’ve made reveal, in some ways, just how far afield we were.. and there are still more changes to be made. Amazingly enough, our leadership team is eager for me to share these experiences with other architects. I’m really looking forward to being able to do so.

    Read the article

  • SQL SERVER – Validating Unique Columnname Across Whole Database

    - by pinaldave
    I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples.
    Asker: “Our business requirement is that when we add a new column we want it to be unique across the current database.”
    Pinal: “Why do you have such a requirement?”
    Asker: “Do you know the solution?”
    Pinal: “Sure, I can come up with the answer, but it will help me come up with an optimal answer if I know the business need.”
    Asker: “Thanks – what will be the answer in that case?”
    Pinal: “Honestly, I am just curious about the reason why you need your column names to be unique across the database.”
    (Silence)
    Pinal: “Alright – here is the answer – I guess you do not want to tell me the reason.”

    Option 1: Check if Column Exists in Current Database

        IF EXISTS (SELECT * FROM sys.columns
                   WHERE Name = N'NameofColumn')
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exist'
            -- add other logic
        END

    Option 2: Check if Column Exists in Current Database in a Specific Table

        IF EXISTS (SELECT * FROM sys.columns
                   WHERE Name = N'NameofColumn'
                   AND OBJECT_ID = OBJECT_ID(N'tableName'))
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exist'
            -- add other logic
        END

    I guess the user did not want to share the reason why he had the unusual requirement of keeping column names unique across the database. Here is my question back to you – have you ever faced a similar situation where you needed unique column names across a database? If not, can you guess what could be the reason for this kind of requirement? Additional Reference: SQL SERVER – Query to Find Column From All Tables of Database Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
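    Not part of the original answer, but if the rule is meant to be enforced against an existing schema, an audit query along these lines (only a sketch, using the sys.tables and sys.columns catalog views) lists column names that already appear in more than one table of the current database:

        -- Sketch: column names used by more than one table in the current database
        SELECT sc.name AS ColumnName,
               COUNT(*) AS TablesUsingIt
        FROM sys.columns sc
        INNER JOIN sys.tables st ON sc.object_id = st.object_id
        GROUP BY sc.name
        HAVING COUNT(*) > 1
        ORDER BY COUNT(*) DESC;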

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely – there is a fabulous free tool on CodePlex for generating load on SQL Server. This tool was released in 2008, but it is still extremely relevant for generating load on SQL Server and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One thing which I felt needed improvement was the default configuration, as every single time I added a query the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default that is available. Here one can give a default username, password and database name, as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and the status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use to generate load on SQL Server? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • SQLAuthority News – Download Whitepaper – Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    - by pinaldave
    Data modeling is the most important task for any BI professional. As a matter of fact, the biggest challenge is organizing disparate data into an analytic model that effectively and efficiently supports reporting and analysis. SQL Server 2012 introduces the BI Semantic Model (BISM), a single model that can support a broad range of reporting and analysis while blending two Analysis Services modeling experiences behind the scenes. Multidimensional modeling – enables BI professionals to create sophisticated multidimensional cubes using traditional online analytical processing (OLAP). Tabular modeling – provides self-service data modeling capabilities to business and data analysts. As data modeling evolves and business needs grow, new technologies and tools are emerging to help end users make the necessary adjustments to their reporting and analysis. This white paper provides practical guidance to help you decide which SQL Server 2012 Analysis Services modeling experience – tabular or multidimensional – to use. Do let me know your opinion as a comment. In simple words – I would like to know when you would use tabular modeling and when multidimensional modeling. Download Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • SQLAuthority News – NuoDB MeetUp on Nov 8, 2012 in Seattle

    - by pinaldave
    I am pleased to let you know that I will be attending this year’s SQLPASS conference in Seattle again and look forward to meeting all of you while at the conference. In the next two weeks, I will provide you with a full agenda of where I will be during PASS. During the week, I will also be stopping by the NuoDB MeetUp, which will be held close by at the Edge Grill at 1522 6th Ave in Seattle on Thursday, November 8th. This will be an excellent opportunity for you to learn more about their brand new distributed, peer-to-peer database solution, which I believe will revolutionize SQL cloud database technology in the 21st century. I have been personally following NuoDB for months now and am very excited about the architecture and capabilities of this innovative product. Wiqar Chaudry, NuoDB technology evangelist, will give a presentation and demonstration of their elastically scalable SQL cloud database in this MeetUp event. Prior to joining NuoDB, Wiqar was a Senior Architect at Epsilon, the data intelligence company with big brand name customers in insurance, consumer goods, etc. He’s also going to discuss how NuoDB compares with Azure, the hometown favorite, and why cloud-based SQL deployment will pave the way for the future. I will be at the NuoDB MeetUp to briefly talk about my own experiences with NuoDB and will be giving away some signed copies of my latest book, as well as some interesting goodies. So please join me and the NuoDB team at their MeetUp event. RSVP here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • SQLAuthority News – BI Quiz Question – How to Optimize Cube? – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here. The details of the quiz are as follows: Working with huge data is very common when it comes to data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly. Suddenly, one day it starts returning the data very slowly. What are the three things you will do to diagnose this? After diagnosis, what will you do to resolve the performance issue? Participate in my question over here. Here are a couple of hints about what I am looking for in an answer:
        How do you reach the root of the slow performance?
        Is hardware causing the problem, or something else?
        Is the slowness due to how the cube is built and its granularity?
        Do the underlying tables require maintenance?
        Is there a chance to refactor the process?
        Are there any tools which can help diagnose the slowness of the cube?
    It is not necessary to answer all of the questions – but they are something to start with. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
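    As an aside that is not in the original quiz post: when the cube's source tables live in a SQL Server relational database, one quick way to check the "underlying tables require maintenance" hint is to look at index fragmentation with the sys.dm_db_index_physical_stats DMV. This is only a rough sketch for the current database:

        -- Sketch: check index fragmentation on tables in the current database
        SELECT OBJECT_NAME(ips.object_id) AS TableName,
               i.name AS IndexName,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
        INNER JOIN sys.indexes i
            ON ips.object_id = i.object_id
            AND ips.index_id = i.index_id
        ORDER BY ips.avg_fragmentation_in_percent DESC;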

    Read the article

  • 2D map/plane with nodes overlayed that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that will scale/zoom and pan in relation to the surface. I started out with a 2D ImageView approach and got as far as pinch zoom, pan and laying nodes as relative ImageViews, but all the methods I tried to get X,Y,W,H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to try and work out their X,Y scaled offset. So I decided to get back to 3D rendering. Conceptually pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve. A sample on a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled in how to phrase my question, so let me give an example in hopes of making more clear what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: Is there an established design pattern that prevents this type of scenario. It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?

    Read the article

  • I don't know C. And why should I learn it?

    - by Stephen
    My first programming language was PHP (gasp). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low- or mid-level languages like C. The general consensus in the programming-community-at-large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: 1) because high level languages are easily accessible, more "non-programmers" dive in and make a mess, and 2) in order to really get anything done in a high level language, one needs to understand the same concepts that most proponents of "learn-low-level-first" evangelize about. Some people need to know C. Those people have jobs that require them to write low- to mid-level code. I'm sure C is awesome. I'm sure there are a few bad programmers who know C. My question is, why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental distinguishing trait between good programmers and bad ones. Does anyone have any real-world examples of how something written in a high level language--say Java, Pascal, PHP, or JavaScript--truly benefited from a prior knowledge of C? Examples would be most appreciated. (revised to better coincide with the six guidelines.)

    Read the article

  • WPF and Composite Application Library – Missing The Point

    - by David Totzke
    I have a headache and it’s not even 9AM yet.  Well, ok, it’s nearly ten here now in GMT –5 but it’s before nine somewhere still. Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves. Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks.  The Windows Forms event driven model simply will not suffice.  You need to come to grips with the idea of a very loosely coupled application.  Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model and Model-View-View-Model patterns. It is as simple as that.  Not to embrace these concepts is to invite pain.  It is to invite noodles; and not the holy kind. Someone actually said to me that “just because it’s not WPF, doesn’t mean it’s wrong.”  And he’s right.  Unless, of course, you are writing a WPF application and especially if you are using the Composite Application Library. In simple terms then; YOU’RE DOING IT WRONG!   Dave Just because I can…

    Read the article

  • Oracle Solaris 11 Summit Day at the LISA Conference 2011 – Register Today!

    - by Terri Wischmann
    We have successfully launched and shipped Oracle Solaris 11!  Come to the LISA 2011 Conference in Boston, MA to learn about all the latest and greatest Oracle Solaris 11 technologies. On Tuesday, 12/6/11 we are hosting our 2nd annual Oracle Solaris 11 Summit Day! It's a free, full day of sessions covering the latest OS technologies, and a chance for you to meet key members of the Oracle Solaris engineering team as they conduct a deep-dive exploration of core Solaris features. See the agenda below – Register Today!

        Time | Topic | Presenter
        9:00 - 9:30 am | Oracle Solaris 11 Strategy | Markus Flierl
        9:30 - 11:00 am | Next Generation OS Lifecycle Management with Oracle Solaris 11 | Dave Miner / Bart Smaalders
        11:00 am - 12:00 pm | Data Management with ZFS | Mark Maybee
        12:00 - 1:00 pm | LUNCH | All
        1:00 - 2:30 pm | Oracle Solaris Virtualization and Oracle Solaris Networking | Mike Gerdts / Sebastian Roy
        2:30 - 3:15 pm | Security in your Oracle Solaris Cloud Environment | Glenn Faden
        3:15 - 3:30 pm | BREAK | All
        3:30 - 4:15 pm | Oracle Solaris - The Best Platform to run your Oracle Applications | David Brean
        4:15 - 5:00 pm | Oracle Solaris Cluster - HA in the Cloud | Gia-Khahn Nguyen
        5:00 - 6:30 pm | Reception | All

    Read the article

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is: why do I blog? The answer is even simpler – I blog because I get extremely constructive comments and conversations from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out whether a table uses a ROWGUID column or not. I encourage all of you to read the conversation here: SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER column without knowing the name of the column. David Hall wrote a few excellent comments as a follow up, and every SQL enthusiast should read them first, second and third. David always brings positive energy; he first of all shows the limitations of my solution here and here, which he follows up with his own solution here. As he said, his solution is also not perfect, but it indeed leaves learning bites for all of us – worth reading if you are interested in unorthodox solutions. Kumar Harsh suggested that one can also find the identity column used in the table in a very similar way using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE (
            GuidCol UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are also alternate ways to find an identity column in the database. The following query will give a list of all identity column names with their corresponding table names:

        SELECT SCHEMA_NAME(so.schema_id) SchemaName,
               so.name TableName,
               sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc
            ON so.OBJECT_ID = sc.OBJECT_ID
            AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity; I would like to know what you do and how you do it when you have to deal with an identity column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
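    Not in the original post, but following the same pattern as the identity query above: the sys.columns catalog view also exposes an is_rowguidcol flag, so a similar list of ROWGUIDCOL columns can be sketched like this:

        -- Sketch: list all ROWGUIDCOL columns with their table names (current database)
        SELECT SCHEMA_NAME(so.schema_id) SchemaName,
               so.name TableName,
               sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc
            ON so.object_id = sc.object_id
            AND sc.is_rowguidcol = 1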

    Read the article

  • Gimpshop 2.8. Available for Win & Mac. No Linux version?

    - by Jorge M. Treviño
    Finally got around to upgrading 12.04 to 12.10. One of the nice things about the new version is that Gimp 2.8 is in the repositories. I installed it, and it's a far cry from the 2.2, 2.4 and 2.6 versions, which were – at least from my untrained point of view – next to unusable. Now 2.8 is much more intuitive, for Photoshop users at least, and I'm trying to really learn it. Browsing around, I found that there's a new version of Gimpshop, something that was a sorely amateurish attempt at a PS interface over an old Gimp version and sure to mess up your system. Seeing "2.8" prominently displayed on the page, I decided to try the Windows version. Oddly, there's a Mac version too but no Linux one. The link directs one to a non-existent file in one of the cloud storage sites. After the Win version was installed, I fired it up and, surprise!, it's exactly the same – as far as I can tell without diving into menus and dialogs – as the plain vanilla Ubuntu version I have installed. Can anybody shed light on what goes on here? Is this a scheme to get inadvertent users to install some "optional extras" that come with the installer? Very curious about it (thank God I'm not a cat).

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them as parameters to functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually means I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

    Read the article

  • In a Tower defense game, how to do buffs/debuffs

    - by Gabe
    The question is at the very bottom. If you understand buffs/debuffs in tower defense games then you should skip the bulk of this question and go to the bottom (separated with the long line). I plan on making an iPhone TD game. The fact that it's an iPhone game isn't relevant, but I am coding in Objective-C with Cocos2D. I am relatively inexperienced in the field of game design, so I'm looking for some advice from someone experienced in this field. In tower defense, there are two things that are relevant to my question: towers and enemies (both have their own classes/children). They each have stats like hp, damage, speed, etc. I want to add buffs/debuffs, for instance: Towers A, B and C each have 15 base damage. Tower D would be a buff tower with no damage, a tower with an AOE (area of effect) aura that gives 10% damage to all towers in range. Tower E might slow enemies in its AOE, a debuff. Stuff like that. The same could go for enemies. Enemy A is a boss that has a slow aura that affects towers and slows their base attack speed, or something along those lines. So the question is, what would be the most effective way to implement this? If it was just towers then I would just mess around with the tower classes, but since tower classes and enemy classes are both affected, should I make a buff class? TD games can consume quite a bit of memory with large amounts of creeps and towers, and buffs, I feel, would also consume quite a bit... so I'm trying to be as efficient as possible.

    Read the article

  • SQLAuthority News – 6th Anniversary and 50 Million Views and Over 2300 Blog Posts – Thank You Thank You

    - by pinaldave
    Six years ago, I started this SQLAuthority.com blog. There are so many things I want to say today – it is very, very emotional. Instead of writing at length, I am including a few images and cartoons. Last month we also reached 50 Million Total Views on this blog. Here is the screen captured at that time. In 6 years there are a total of 2192 days (including 2 leap-year days) and my total blog post count is 2300. That means I have been blogging more than 1 blog post every day. Here is a quick glance at all the numbers. Here you can find the list of all the 2300 blog posts. I am very glad to see that many of my friends are in the USA, India, the United Kingdom, Canada and Australia, in that order. You can see the geographic distribution of the support I receive on the blog from worldwide readers. On this day I would like to call out 2 individuals who contribute equally or more to my success. When I started this blog 6 years ago, I was walking alone. After 2 years my wife Nupur joined my journey, and 3 years later my daughter Shaivi joined the journey. Here is an example of the common conversation among us almost every day - Shaivi: Daddy, play catch-catch. Nupur: Shaivi, daddy will play with you once he finishes tomorrow’s blog. Shaivi: Daddy, Finish Blog. Okey. I play catch-catch (alone). SQLAuthority Family Well, thank you very much! We all love you! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing and if they have any hints or helpful constructs that would make this easier. Ok, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?

    Read the article

  • How do you stay productive when dealing with extremely badly written code?

    - by gaearon
    I don't have much experience in working in software industry, being self-taught and having participated in open source before deciding to take a job. Now that I work for money, I also have to deal with some unpleasant stuff, which is normal of course. Recently I was assigned to add logging to a large SharePoint project which is written by some programmer who obviously was learning to code on the job. After 2 years of collaboration, the client switched to our company, but the damage was done, and now somehow I need to maintain this code. Not that the code was too hard to read. Despite problems - each project has one class with several copy-pasted methods, enormous if nestings, Systems Hungarian, undisposed connections — it's still readable. However, I found myself absolutely unproductive despite working on something as simple as adding logging. Basically, I just need to go through the code step by step and add some trace calls. However, the idiocy of the code is so annoying that I get tired within 10 minutes of starting. In the beginning, I used to add using constructs, reduce nesting by reversing if's, rename the variables to readable names—but the project is large, and eventually I gave up. I know this is not the task I should be doing, but at least reducing the mess gave me some kind of psychological reward so I could keep going. Now the trick stopped working, and I still have 60% of my work to do. I started having headaches after work, and I no longer get the feeling of satisfaction I used to get - which would usually allow me to code for 10 hours straight and still feel fresh. This is not just one big rant, for I really do have an actual question: Is there a way to stay productive and not to fight the windmills? Is there some kind of psychological trick to stay focused on the task, instead of thinking “How stupid is that?” each time I see another clever trick by the previous programmer? The problem with adding logging is that I actually have to understand what the code does, and doing so hurts my brain in an unpleasant fashion.

    Read the article
