Search Results

Search found 33316 results on 1333 pages for 'sql team'.


  • Cannot connect on TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw this problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available over the internet with the help of apache2 on a different UNIX-based physical server. That part is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so the traffic comes in externally over a secure SSL connection, through a specified domain or sub-domain, but is redirected locally to a plain HTTP session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates, so it should work, as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave

    Read the article

  • I have written an SQL query but I want to optimize it [closed]

    - by ankit gupta
    Is there any way to do this using a minimum number of joins and selects? Two tables are involved in this operation: transaction_pci_details and transaction.

        SELECT t6.transaction_pci_details_id, t6.terminal_id, t6.transaction_no, t6.transaction_id,
               t6.transaction_type, t6.reversal_flag, t6.transmission_date_time, t6.retrivel_ref_no,
               t6.card_no, t6.card_type, t6.expires_on, t6.transaction_amount, t6.currency_code,
               t6.response_code, t6.action_code, t6.message_reason_code, t6.merchant_id, t6.auth_code,
               t6.actual_trans_amnt, t6.bal_card_amnt, t5.sales_person_id
        FROM TRANSACTION AS t5
        INNER JOIN (
            SELECT t4.transaction_pci_details_id, t4.terminal_id, t4.transaction_no, t4.transaction_id,
                   t4.transaction_type, t4.reversal_flag, t4.transmission_date_time, t4.retrivel_ref_no,
                   t4.card_no, t4.card_type, t4.expires_on, t4.transaction_amount, t4.currency_code,
                   t4.response_code, t4.action_code, t3.message_reason_code, t4.merchant_id, t4.auth_code,
                   t4.actual_trans_amnt, t4.bal_card_amnt
            FROM (
                SELECT *
                FROM transaction_pci_details
                WHERE message_reason_code LIKE '%OUT%' || message_reason_code LIKE '%FAILED%' /* we can add date here */
                UNION ALL
                SELECT t2.transaction_pci_details_id, t2.terminal_id, t2.transaction_no, t2.transaction_id,
                       t2.transaction_type, t2.reversal_flag, t2.transmission_date_time, t2.retrivel_ref_no,
                       t2.card_no, t2.card_type, t2.expires_on, t2.transaction_amount, t2.currency_code,
                       t2.response_code, t2.action_code, t2.message_reason_code, t2.merchant_id, t2.auth_code,
                       t2.actual_trans_amnt, t2.bal_card_amnt
                FROM (
                    SELECT transaction_id
                    FROM TRANSACTION
                    WHERE transaction_type_id = 8
                ) AS t1
                INNER JOIN (
                    SELECT *
                    FROM transaction_pci_details
                    WHERE message_reason_code LIKE '%appro%' /* we can add date here */
                ) AS t2 ON t1.transaction_id = t2.transaction_id
            ) AS t3
            INNER JOIN (
                SELECT *
                FROM transaction_pci_details
                WHERE action_code LIKE '%REQ%' /* we can add date here */
            ) AS t4 ON t3.transaction_pci_details_id - t4.transaction_pci_details_id = 1
        ) AS t6 ON t5.transaction_id = t6.transaction_id

    Read the article

  • Redundant Microsoft server solution for small company

    - by MadBoy
    I'm planning to change a single Microsoft SBS 2003 server running SharePoint, Exchange and a SQL database into something that will provide some redundancy and won't be a single point of failure. I was thinking of buying two identical physical servers and putting two virtualized servers (Hyper-V or VMware) on each. Then I would put SharePoint, Exchange and SQL on the first physical server (shared across its two VMs). I would like the second physical server to be an exact duplicate of the first one, so that when the first server goes down (for a reboot or a hardware failure), the second takes care of everything and users don't even notice a change (all their email and SharePoint content remains available). My questions are: Will I have to pay for licenses for both servers even though only one instance of SharePoint, Exchange and SQL will be used at a time? What are the proposed solutions to do that? Is there any additional hardware I would need, or any complicated software configuration to expect, to set up such redundancy so that when one physical server goes down the second one takes over? What problems should I expect? This solution is for 60 people. Later on it may or may not expand.

    Read the article

  • form checkboxes different names each into multiple rows database [closed]

    - by Darlene
    Hi, I've been at this for hours and need help. Thanks in advance. I have the following tables:

        tblprequestion: quesid | ques
        tblanswers:     answerid | quesid | ans | date

    This is my form (prequestion form):

        <?php
        $con = mysql_connect("localhost", "root", "") or die("Could not connect to DB Server");
        $db_selected = mysql_select_db("nbtsdb", $con) or die("Could not locate the DB");
        $query3 = mysql_query("SELECT * FROM tblprequestions", $con) or die("Cannot Access prequestions description from Server");
        echo "<legend> Pre question :</legend>";
        echo "<p></p>";
        while ($row = mysql_fetch_array($query3)) {
            echo "<p>";
            echo "<input type='checkbox' name='question" . $row['quesid'] . "[]' value='yes' />";
            echo "<label>" . $row['ques'] . "</label>&nbsp;&nbsp;&nbsp;";
            echo "</p>";
        }
        echo "<p></p>";
        ?>

    I would like to know how to get the values from the form for each question (17 in total) and submit them into the database. For example:

        tblprequestion
        quesid | ques
        1      | Had a cold or fever in the last week?
        2      | Had minor outpatient surgery?

        tblanswers
        answerid | username | quesid | ans | date
        1        | lisa     | 1      | yes | 10/10/12
        2        | lisa     | 2      | no  | 10/10/12
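
    For what it's worth, the shape of the rows the form handler ultimately needs to write is just one tblanswers row per checked question. A minimal sketch of the target INSERT, using the example data above (and assuming the username column shown in the example table actually exists in tblanswers, and that date is a DATE column):

        -- One row per question the user ticked 'yes' (or answered 'no'):
        INSERT INTO tblanswers (username, quesid, ans, date)
        VALUES ('lisa', 1, 'yes', '2012-10-10'),
               ('lisa', 2, 'no',  '2012-10-10');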

    Read the article

  • MySQL Online Database

    - by Marian
    Can anyone suggest a good free online MySQL database? I've tried four so far: db4free, FreeMySQL, onPhP, 000webhost. Each of them either gave me a timeout error in my connect file or actively restricted the connection, meaning the host won't allow a remote connection to the database. If there isn't any good online database, can I create my own server using my computer, since it rarely gets turned off? When my server is offline I could return an error message saying that the server is currently offline. My final objective is to have a simple comment box for a webpage, which I believe won't need massive data storage: 3 columns (id, name, comment). NOTE: Can't post more than two links yet.
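
    For the comment box itself, the backing table can be tiny. A minimal MySQL sketch using the three columns named above (the column sizes and sample row are assumptions):

        CREATE TABLE comments (
            id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name    VARCHAR(100) NOT NULL,
            comment TEXT NOT NULL
        );

        -- One row per submitted comment:
        INSERT INTO comments (name, comment) VALUES ('Marian', 'First comment');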

    Read the article

  • storing map template in database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps are of all different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts: I won't need to do queries to find POIs inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing a map as a geoJSON object (I am using a JS mapping library that accepts geoJSON). The only issue is: what if I want a map of the US Northeast? Then I would have geoJSON for the US and a separate one for the US Northeast, which would be redundant. Would it make sense to have a shape database where I had each state, so that when I needed a map of the US I could query for each state, and when I needed a map of the US Northeast I could again query for just what I need? Note: I am not concerned with storing the data for each region, just the region itself. I will query for the data on the fly for the specific region.
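
    A minimal sketch of the "shape database" idea described above (table and column names are assumptions, not from the question): one row per region holding its geoJSON, plus a grouping table, so a composite map such as "US Northeast" is just a query:

        CREATE TABLE region (
            region_code VARCHAR(10) PRIMARY KEY,   -- e.g. 'US-VT'
            name        VARCHAR(100) NOT NULL,
            geojson     TEXT NOT NULL              -- the region's geometry as geoJSON
        );

        CREATE TABLE map_region (
            map_name    VARCHAR(50) NOT NULL,      -- e.g. 'us', 'us_northeast'
            region_code VARCHAR(10) NOT NULL REFERENCES region (region_code),
            PRIMARY KEY (map_name, region_code)
        );

        -- All shapes needed to draw the US Northeast map:
        SELECT r.region_code, r.geojson
        FROM   region r
        JOIN   map_region m ON m.region_code = r.region_code
        WHERE  m.map_name = 'us_northeast';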

    Read the article

  • Daily tech links for .net and related technologies - May 10-12, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - May 10-12, 2010 Web Development jQuery Templates and Data Linking (and Microsoft contributing to jQuery) - ScottGu ASP.NET MVC and jQuery Part 4 – Advanced Model Binding - Mister James Creating an ASP.NET report using Visual Studio 2010 - Part 1 & Part 2 & Part 3 - rajbk Caching Images in ASP.NET MVC -Evan How to Localize an ASP.NET MVC Application - mikeceranski Localization in ASP.NET MVC 2 using ModelMetadata - Raj Kiamal Web Design...(read more)

    Read the article

  • Daily tech links for .net and related technologies - May 13-16, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - May 13-16, 2010 Web Development Integrating Twitter Into An ASP.NET Website Using OAuth - Scott Mitchell T4MVC Extensions for MVC Partials - Evan Building a Data Grid in ASP.NET MVC - Ali Bastani Introducing the MVC Music Store - MVC 2 Sample Application and Tutorial - Jon Galloway Announcing the RTM of MvcExtensions - kazimanzurrashid Optimizing Your Website For Speed Web Design Validation with the jQuery UI Tabs Widget - Chris Love A Brief History...(read more)

    Read the article

  • March 24 VTSQL Meeting: BI with SQL Server guru Rushabh Mehta

    When: March 24th, 6PM
    Where: Competitive Computing, Colchester, Vermont (www.competitive.com)
    From Zero to BI in 10 Minutes or Less, by Rushabh Mehta
    Finally a technology that the Information Worker can use to take raw data and turn it into valuable information in a matter of minutes from the comfort of their own desktop! In this very exciting and interactive session full of exciting demos, we will walk you through taking raw information from a variety of sources and building a powerful analytical...

    Read the article

  • Daily tech links for .net and related technologies - Apr 1-3, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 1-3, 2010 Web Development Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs - ScottGu Using jQuery and OData to Insert a Database Record - Stephen Walter Apple vs. Microsoft – A Website Usability Study Mastering ASP.NET MVC 2.0: Preview - TekPub Web Design UX Lessons Learned From Offline Experiences - Jon Phillips 5 Steps Toward jQuery Mastery - Dave Ward 20 jQuery Cheatsheets, Docs and References for Every Occasion - Paul Andrew 11...(read more)

    Read the article

  • SQL SERVER Difference Between GRANT and WITH GRANT

    This was a very interesting question recently asked to me during my session at TechMela Nepal. The question is: what is the difference between GRANT and GRANT ... WITH GRANT OPTION when giving permissions to a user? Let us first see the syntax for both.

        GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username;
        GO

        WITH GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username WITH GRANT OPTION;
        GO

    The difference between both of these options [...]
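
    The excerpt cuts off before the explanation, but the practical difference is that WITH GRANT OPTION lets the grantee pass the permission on to others. A minimal sketch of that behaviour (user1 and user2 are assumed example logins, not from the post):

        -- user1 was granted VIEW ANY DATABASE WITH GRANT OPTION, so this succeeds:
        EXECUTE AS LOGIN = 'user1';
        GRANT VIEW ANY DATABASE TO user2;
        REVERT;

        -- Had user1 been granted the permission without GRANT OPTION,
        -- the GRANT above would fail with a permissions error.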

    Read the article

  • TSQL formatting - a sure fire way to start a conversation.

    - by fatherjack
    There are probably as many opinions on ways to format code as there are people writing code, and I am not here to say that any one is better than any other. Well, that isn't true. I am here to say that one way is better than another, but this isn't a matter of preference or personal taste; this is an example of where sloppy formatting can cause TSQL to do weird and whacky things, while following some simple methods can make your code more reliable and more robust. Take these two pieces of code, ready...(read more)
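
    The post's own examples sit behind the "read more" link, but a classic illustration of formatting-induced weirdness (not taken from the post; the table name is a made-up example) is a missing comma in a SELECT list, which T-SQL silently treats as a column alias:

        -- Intended: return OrderID and CustomerID.
        -- The missing comma makes CustomerID an alias for OrderID,
        -- so the query returns a single column named CustomerID.
        SELECT OrderID
               CustomerID
        FROM   dbo.Orders;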

    Read the article

  • Shared Data Source name error underscore characters added

    - by mick
    The name of our shared data source in RS (Report Server) is "AF1 Live Database" (no underscore characters, just spaces between words) and it is the same in Report Builder in VS. However, the following error pops up when the RDL of this report is uploaded onto our company site and run: "The report server cannot process the report or shared dataset. The shared data source 'AF1_Live_Database' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference)" We have no idea why the error reports the shared data source as 'AF1_Live_Database', with underscore characters. As this appears to be the problem that keeps the report from running, we are seeking your help. Thanks.

    Read the article

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) exceeds 64kb in size for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine. The error most often seen is "The parameter is incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here, please note that it is intended). Exceeding this limit is not unheard of – it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult. Fortunately, there is a query that you can run on a content database that can help identify the issue:

        SELECT [SiteId],
               MIN([ScopeUrl]) AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY SiteId
        ORDER BY AclSizeKB DESC

    This query results in a list of ACL sizes and entry counts on a site-by-site basis. You can also remove the site-level grouping to see a more granular breakdown:

        SELECT [ScopeUrl] AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY ScopeUrl
        ORDER BY AclSizeKB DESC

    Read the article

  • SQL SERVER Quick Note of Database Mirroring

    Just a day ago, I was invited to a Round Table meeting at a prestigious organization. They were planning to implement a High Availability solution using Database Mirroring. During the meeting, I made a few notes of what was being discussed there. I just thought it would be interesting for all of you to know about it. Database Mirroring works [...]

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    Been struggling with this on an architectural level. I have an object which can be commented on, let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use ID as a foreign key; each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned, i.e. both a Post and a PostComment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComments etc... What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like to know which collection to attach the ID to? This strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm calling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
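
    A common way out of the PostCommentComment spiral is the adjacency-list approach: a single comment table that references the Post once and its own parent comment for nesting, with a recursive CTE fetching a whole thread in one round trip instead of one call per layer. A minimal T-SQL sketch (table and column names are assumptions; a Post table with a PostID key is assumed to exist):

        CREATE TABLE Comment (
            CommentID       INT IDENTITY PRIMARY KEY,
            PostID          INT NOT NULL REFERENCES Post (PostID),
            ParentCommentID INT NULL REFERENCES Comment (CommentID),  -- NULL = top-level comment
            Body            NVARCHAR(MAX) NOT NULL
        );

        -- All comments for post 5, with their depth in the tree:
        WITH Thread AS (
            SELECT CommentID, ParentCommentID, Body, 0 AS Depth
            FROM   Comment
            WHERE  PostID = 5 AND ParentCommentID IS NULL
            UNION ALL
            SELECT c.CommentID, c.ParentCommentID, c.Body, t.Depth + 1
            FROM   Comment c
            JOIN   Thread t ON c.ParentCommentID = t.CommentID
        )
        SELECT * FROM Thread;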

    Read the article

  • Daily tech links for .net and related technologies - June 8-11, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - June 8-11, 2010 Web Development ASPNET MVC: Handling Multiple Buttons on a Form with jQuery - Donn Building a MVC2 Template, Part 14, Logging Services - Eric Simple Accordion Menu With jQuery & ASP.NET - Steve Boschi Conditional Validation in MVC -Simonince Creating a RESTful Web Service Using ASP.Net MVC Part 23 – Bug Fixes and Area Support - Shoulders of Giants Web Design The Principles Of Cross-Browser CSS Coding - Louis Lazaris Transparency...(read more)

    Read the article

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. UNION ALL queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION ALL query share common processing, i.e., they refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and joins. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization factorizes common references to base tables only, i.e., not views. Consider a simple example, query Q1.

        Q1: select t1.c1, t2.c2
            from t1, t2, t3
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
            union all
            select t1.c1, t2.c2
            from t1, t2, t4
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;

    Table t1 appears in both branches, as do the filter predicate on t1 (t1.c1 > 1) and the join predicate involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization, which can transform Q1 into Q2 as follows:

        Q2: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2, t3
                      where t2.c2 = t3.c2 and t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2, t4
                      where t2.c3 = t4.c3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;

    In Q2, t1 is "factorized", and thus the table scan and the filtering on t1 are done only once (they are shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement. Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3.

        Q3: select *
            from t5, (select t1.c1, t2.c2
                      from t1, t2, t3
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
                      union all
                      select t1.c1, t2.c2
                      from t1, t2, t4
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V
            where t5.c1 = V.c1;

    In Q3, view V is the same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. t2 must be joined with t3 before it can be joined with t1, which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under the cost-based transformation framework; this means that we cost the plans with and without join factorization and choose the cheapest plan. Note that if the branches in a UNION ALL query have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.

        Q4: select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1
            union all
            select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1;

        Q5: select distinct t1.*
            from t1, (select t2.c1 item_1
                      from t2
                      union all
                      select t2.c1 item_1
                      from t2) VW_JF_1
            where t1.c1 = VW_JF_1.item_1;

    Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate-free because of the DISTINCT keyword at the top level, while Q4's results might contain duplicates.

    The examples given so far involve inner joins only. Join factorization is also supported for outer joins, anti joins and semi joins, but only the right table of an outer join, anti join or semi join can be factorized. It is not semantically correct to factorize the left table of an outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7.

        Q6: select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 2
            union all
            select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 3;

        Q7: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1(+);

    However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join.

        Q8: select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 1
            union all
            select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 2;

        Q9: select VW_JF_1.item_2, t2.c2
            from t2, (select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 1
                      union all
                      select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 2) VW_JF_1
            where VW_JF_1.item_1 = t2.c1(+);

    All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables, and from more than two UNION ALL branches.

    Summary: Join factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query, which can lead to huge performance improvements.

    Read the article

  • SQL SERVER What is Spatial Database? Developing with SQL Server Spatial and Deep Dive into Spatial

    What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today [...]
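
    As a small taste of what those additional spatial types look like in SQL Server (a minimal sketch, not taken from the article):

        -- Two points stored with the built-in geometry type, and the distance between them:
        DECLARE @a geometry = geometry::STGeomFromText('POINT(0 0)', 0);
        DECLARE @b geometry = geometry::STGeomFromText('POINT(3 4)', 0);
        SELECT @a.STDistance(@b) AS Distance;   -- returns 5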

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason for a serial plan - besides auto DOP not being on, as we looked at in the earlier post - and often a more prevalent one, is that the statement simply does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

        explain plan for select count(1) from sales;

        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

        PLAN_TABLE_OUTPUT
        --------------------------------------------------------------------------------------
        Plan hash value: 672559287
        --------------------------------------------------------------------------------------
        | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
        --------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
        |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
        |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
        |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

        14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s), the plan is not being considered for a parallel degree computation and therefore stays with the serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

        alter session set parallel_degree_limit = 1;

    The result I get is:

        ERROR:
        ORA-02097: parameter cannot be modified because specified value is invalid
        ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among
                   CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...

    Read the article

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group test of questions, from basic questions such as PK and FK constraints and simple queries (FizzBuzz style) to more advanced things such as indexes, collation, isolation levels and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that all DBAs should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower running queries (someone the devs can go to for help) and trace some of the deadlock issues that we are having. Any help would be most appreciated!
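
    Since deadlock tracing comes up in the question, here is a minimal sketch of the kind of answer one might look for, assuming SQL Server (the question does not say which platform):

        -- Write full deadlock graphs to the SQL Server error log:
        DBCC TRACEON (1222, -1);

        -- Or inspect the ring_buffer target of the built-in system_health session,
        -- which captures xml_deadlock_report events:
        SELECT CAST(t.target_data AS xml) AS session_data
        FROM   sys.dm_xe_session_targets AS t
        JOIN   sys.dm_xe_sessions        AS s ON s.address = t.event_session_address
        WHERE  s.name = 'system_health' AND t.target_name = 'ring_buffer';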

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum ceremonies! Their complaint was around the need in Scrum to have the entire "Team" (7±2) involved in the sizing of the work during the "Sprint Planning Meeting". The standard flippant answer of all Scrum professionals, "Well that's not Scrum", does not get you any brownie points in these situations. The response could be "Well we are not doing Scrum then", which in turn leads to "We are doing Scrum… but we have split the Scrum team into units of 2/3 so that they can concentrate on a specific area of work". While this may work, it is not Scrum and should not be called so… it is just a form of Agile. Don't get me wrong at this stage, there is nothing wrong with Agile, just don't call it Scrum. The reason that the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first Sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even high performing teams. Remember, it is the product owner's money! We should NOT break up Scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum ceremonies. The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint. - Scrum Guide, Scrum.org The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain. A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team. - Scrum Guide, Scrum.org This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum in the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we NOT have done on our first Scrum project: Should not have had 3 interns as the only on-site resource – This led to bad practices, as the experienced guys were not there helping and correcting as they usually would. Should not have had the only experienced guys off-site – With both of the experienced technical guys in completely different time zones it was difficult to get time for questions. Helping the guys on site was just plain impossible. Should not have used a part-time ScrumMaster – Although the ScrumMaster attended all of the ceremonies, because they are only in 2 full days of the week it makes it difficult for the team to raise impediments as they go.
Should not have used a proxy product owner – This was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The "single wringable neck" needs to hold both the Money and the Vision, as well as attending the required meetings. I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt…   Technorati Tags: Scrum,Sprint Planning,Sprint Retrospective,Scrum.org,Scrum Guide,Scrum Ceremonies,Scrummaster,Product Owner

    Read the article
