Search Results

Search found 97855 results on 3915 pages for 'code performance'.


  • How do I take responsibility for my code when colleague makes unnecessary improvements without notice?

    - by Jesslyn
    One of my teammates is a jack of all trades in our IT shop, and I respect his insight. However, sometimes he reviews my code without a heads-up (he's second in command to our team leader, so reviews themselves are expected). Sometimes he reviews my changes before they complete the end goal and makes changes right away... and has even broken my work once. Other times, he has made unnecessary improvements to some of my code that is 3+ months old. This annoys me for a few reasons:
    - I am not always given a chance to fix my mistakes.
    - He has not taken the time to ask me what I was trying to accomplish when he is confused, which could affect his testing or changes.
    - I don't always think his code is readable.
    Deadlines are not an issue, and his current workload doesn't require any work in my projects beyond reviewing my code changes. I have told him in the past to please keep me posted if he sees something in my work that he wants to change, so that I can take ownership of my code (maybe I should have said "shortcomings"), but he has not been responsive. I fear that I may come off as aggressive when I ask him to explain his changes to me. He's just a quiet person who keeps to himself, but his actions continue. I don't want to banish him from making code changes (not that I could), because we are a team, but I want to do my part to help our team. Added clarifications:
    - We share one development branch. I do not wait until all my changes complete a single task, because I risk losing significant work, so I make sure my changes build and do not break anything.
    - My concern is that my teammate doesn't explain the reason or purpose behind his changes. I don't think he should need my blessing, but if we disagree on an approach, I thought it would be best to discuss the pros and cons and make a decision once we both understand what is going on.
    - I have not discussed this with our team lead yet, as I would prefer to resolve personal disagreements without getting management involved unless it is necessary. Since my concern seemed more a personal issue than a threat to our work, I chose not to bother the team lead.
    - I am working on ideas for a code review process, to help promote the benefits of more organized code reviews without making it all about my pet peeves.

    Read the article

  • Red Samurai Performance Audit Tool – OOW 2013 release (v 1.1)

    - by JuergenKress
    We have been running our Red Samurai Performance Audit tool and monitoring ADF performance in various projects for about a year and a half. It helps us a lot to understand ADF performance bottlenecks and tune slow ADF BC View Objects or optimise large ADF BC fetches from the DB. There is a special update implemented for OOW'13: advanced ADF BC statistics are collected directly from your application's ADF BC runtime and later displayed as graphical information in the dashboard. I will be attending OOW'13 in San Francisco; feel free to stop me and ask about this tool. I will be happy to give it away and explain how to use it in your project. The original audit screen with ADF BC performance issues is part of our Audit console application. Audit console v1.1 is improved with one more tab, Statistics. This tab displays all SQL select statements produced by ADF BC over time, logged users, AM access load distribution, and the number of AM activations along with user sessions. Available graphs:
    - Daily Queries: total number of SQL selects per day
    - Hourly Queries: last 48 hours
    - Logged Users: total number of user sessions per day
    - SQL Selects per Application Module: workload per Application Module
    - Number of Activations and User Sessions: last 48 hours, displays stress load
    Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Red Samurai, ADF performance, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • GDD-BR 2010 [1G] Android: Building High-Performance Applications

    GDD-BR 2010 [1G] Android: Building High-Performance Applications. Speaker: Tim Bray. Track: Android. Time: G (16:30 - 17:15). Room: 1. Level: 151. Build Android applications that are smooth, fast, responsive, and a pleasure to use. Also, learn about the tools and techniques we use to track down and fix performance problems. From: GoogleDevelopers. Duration: 33:34.

    Read the article

  • Calculating RAM Performance? Example: DDR3-2133 CL9-11-10-28 1.65V vs DDR3-1600 CL10-10-10-30 1.5V

    - by user1131467
    How do you calculate the relative performance of PC RAM? For example, what is the relative performance of the following:
    - G.Skill Ripjaws Z 8 x 4GB kit, DDR3-2133, CL9-11-10-28 @ 1.65V
    - G.Skill Ripjaws Z 4 x 8GB kit, DDR3-1600, CL10-10-10-30 @ 1.5V
    If it's relevant, assume they are used in a top-of-the-line ASUS Rampage IV Extreme motherboard with an Intel i7-3960X. By performance, I mean relative:
    - read latency
    - write latency
    - read bandwidth
    - write bandwidth
    Please include working (I mean: how did you arrive at the figures based on the timings and the DDR3 speed).
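    A back-of-the-envelope working (my sketch, not from the original thread): DDR transfers twice per clock, so one cycle takes 2000/rate nanoseconds, first-word read latency is roughly CL x 2000 / rate, and peak bandwidth is rate x 8 bytes per 64-bit channel. A Python sketch of the arithmetic, assuming the Rampage IV Extreme (X79) runs all four channels; real-world figures also depend on the secondary timings and controller efficiency, which this ignores:

        # First-order DDR3 comparison: true CAS latency (ns) and peak bandwidth.
        KITS = {
            "DDR3-2133 CL9":  {"rate_mts": 2133, "cas_cycles": 9},
            "DDR3-1600 CL10": {"rate_mts": 1600, "cas_cycles": 10},
        }
        CHANNELS = 4      # X79 runs quad channel (assumption: all four populated)
        BUS_BYTES = 8     # one 64-bit channel moves 8 bytes per transfer

        for name, kit in KITS.items():
            cycle_ns = 2000.0 / kit["rate_mts"]       # DDR: clock = rate / 2
            cas_ns = kit["cas_cycles"] * cycle_ns     # first-word read latency
            bw_gbs = kit["rate_mts"] * BUS_BYTES * CHANNELS / 1000.0
            print(f"{name}: CAS {cas_ns:.2f} ns, peak {bw_gbs:.1f} GB/s")

        # DDR3-2133 CL9:  CAS 8.44 ns,  peak 68.3 GB/s
        # DDR3-1600 CL10: CAS 12.50 ns, peak 51.2 GB/s

    By this first-order estimate the 2133 kit wins on both counts: roughly a third lower first-word latency and a third more peak bandwidth.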

    Read the article

  • PPT Leveraging Azure for Performance Testing

    - by Tarun Arora
    I recently presented a session on how you can leverage Azure for performance testing your application. It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also lets you test how scalable its architecture is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barrier to load testing and performance testing adoption is the high infrastructure and administration cost that comes with this phase of testing. In the session I tried to demonstrate how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. You can view the session presentation here: http://www.slideshare.net/aroratarun/leveraging-azure-for-performance-testing  I'll be adding a video on this subject shortly... If you have any feedback or further suggestions to add to the goodness of this solution, please get in touch.

    Read the article

  • Nvidia PowerMizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has three performance levels, and the lowest one runs at 50 MHz. The problem is that compiz lags whenever the card drops to the lowest level (level 0): minimizing, maximizing, and dragging windows are all sluggish. Once PowerMizer leaves level 0, everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels (1 and 2)? I don't want to completely disable PowerMizer, but I can't stand the lag once it drops into performance level 0. Setting the option "prefer maximum performance" fixes the problem because it disables PowerMizer, but at stock speeds (850 MHz) the GPU is overkill for most desktop use. Setup:
    - Intel i5 2500K
    - ASUS Gene-Z Z68
    - EVGA 560 Ti FPB (driver 295.40)
    - Ubuntu 12.04 LTS x64
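    Not part of the original question, but one workaround commonly reported for drivers of that era is pinning PowerMizer with RegistryDwords in xorg.conf. These options are undocumented, driver-version dependent, and the level numbering varies, so treat this purely as a hedged starting point to verify against your driver's README:

        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
            # Community-reported PowerMizer knobs (unofficial, ca. 295.xx drivers):
            # PerfLevelSrc=0x2222 pins a fixed level instead of adaptive switching;
            # PowerMizerDefault/PowerMizerDefaultAC choose which level is used.
            Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1"
        EndSection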

    Read the article

  • Harping on Metadata Performance: New Benchmarks

    Linux Magazine: "Metadata performance is perhaps the most neglected facet of storage performance. In previous articles we've looked into how best to improve metadata performance without too much luck. Could that be a function of the benchmark? Hmmm..."

    Read the article

  • Granularity and Parallel Performance

    One key to attaining good parallel performance is choosing the right granularity for the application. The goal is to determine the right granularity (usually larger is better) for parallel tasks, while avoiding load imbalance and communication overhead to achieve the best performance.
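    As an illustration of the trade-off (my sketch, not from the article): in Python's multiprocessing, the chunksize argument controls task granularity directly. Chunks of one item drown in scheduling and inter-process communication overhead, while very large chunks amortize that cost but can leave workers idle near the end of the run:

        from multiprocessing import Pool
        import time

        def work(n):
            # a small, CPU-bound unit of work
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            items = [10_000] * 5_000
            with Pool(4) as pool:
                for chunksize in (1, 10, 100, 1000):
                    t0 = time.perf_counter()
                    pool.map(work, items, chunksize=chunksize)
                    print(f"chunksize {chunksize:>4}: {time.perf_counter() - t0:.2f}s")

    Typically the timings improve sharply from chunksize 1 to 100, then flatten or regress as load imbalance creeps in.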

    Read the article

  • Writing a code example

    - by Stefano Borini
    I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I think of an example as the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result. A perfect example must have the following characteristics:
    - Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues that could arise, the more likely it is that something goes wrong, and the more difficult it is to debug and solve the situation.
    - Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving minimal additional behavior from external libraries.
    - Helpful: the code should bring you forward, step by step, using comments or self-documenting code.
    - Extensible: the example code should be a small "framework" or blueprint for additional tinkering. A learner can start by adding features to this blueprint.
    - Recyclable: it should be possible to extract parts of the example to use in your own code.
    - Easy: example code is not the place to show off your code-fu skillz. Keep it easy.
    A helpful acronym: SPHERE. Prototypical violations of those rules are the following:
    - Violation of self-containedness: an example spanning multiple files without any real need for it. If your example is a Python program, keep everything in a single module file; don't sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one public class per file, if I remember correctly).
    - Violation of pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focus your example code by introducing code for event handling, control initialization, etc. that is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature.
    - Violation of helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code.
    - Violation of extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. For example, if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if a user of the library needs to read data from an HTTP server instead, he just has to modify getData() and use the example almost as-is. Another violation of extensibility arises if the example code is not under a fully liberal (e.g. MIT or BSD) license.
    - Violation of recyclability: when the code layout is so intermingled that it is difficult to copy and paste parts of it and recycle them into another program. Again, licensing is also a factor.
    - Violation of easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything on a single line of map, filter, and so on, but that may not be helpful to someone else who is already under pressure to understand your library and now has to understand your code as well.
    And in general, the final rule: if it takes more than 10 minutes to compile the code, run it, read the source, and understand it fully, the example is not a good one. Please let me know your opinion, either positive or negative, or your experience in this regard.
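    To make the getData() point concrete, here is a minimal SPHERE-style sketch in Python (the names are mine, invented for illustration): one feature demonstrated, self-contained, and with the data source factored out so a learner can swap in a file or HTTP reader without touching the display logic.

        """Demonstrates exactly one feature: rendering a series as a text bar chart."""

        def get_data():
            # Swappable entry point: replace this body with a file reader or
            # an HTTP fetch and the rest of the example works unchanged.
            return [(0, 0.0), (1, 1.0), (2, 4.0), (3, 9.0)]

        def display(points):
            # Deliberately plain: no controls, no event handling, no noise.
            for x, y in points:
                print(f"{x:>3} | {'*' * int(y)}")

        if __name__ == "__main__":
            display(get_data())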

    Read the article

  • Code review: is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We use TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:
    - Enforce FxCop rules (we are a Microsoft shop)
    - Check for performance (any tools?), security (thinking about using OWASP Code Crawler), and thread safety
    - Adhere to naming conventions
    - The code should cover edge cases and boundary conditions
    - Handle exceptions correctly (do not swallow exceptions)
    - Check whether the functionality is duplicated elsewhere
    - Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects; avoid temporal coupling)
    - Do not pass/return nulls in methods
    - Avoid dead code
    - Document public and protected methods/properties/variables
    What other things do you generally look for? I am trying to see if we can quantify the review process (so that it would produce identical output when performed by different reviewers). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would it differ from one reviewer to another)? The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who consistently write good or bad code. The goal is to have the reviewer spend about 30 minutes at most on a review. (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code.) What other objective/quantifiable systems do you follow to identify good or bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
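    As a sketch of the marking system proposed above (the rule names and weights are illustrative placeholders, not a recommendation from the post):

        # Hypothetical review-scoring rubric: negative points for violations,
        # positive points for improvements, summed per changeset.
        WEIGHTS = {
            "fxcop_violation": -1,
            "naming_violation": -2,
            "swallowed_exception": -3,
            "method_over_30_lines": -1,
            "refactoring": +2,
        }

        def review_score(findings):
            """findings maps a rule name to its occurrence count in the changeset."""
            return sum(WEIGHTS[rule] * count for rule, count in findings.items())

        print(review_score({"fxcop_violation": 3, "refactoring": 2}))  # -> 1

    Whether such a score is fair across very different changesets is, of course, exactly the subjectivity question being asked.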

    Read the article

  • Delphi: Fast(er) widestring concatenation

    - by Ian Boyd
    I have a function whose job is to convert an ADO Recordset into HTML:

        class function RecordsetToHtml(const rs: _Recordset): WideString;

    The guts of the function involve a lot of wide string concatenation:

        while not rs.EOF do
        begin
           Result := Result + CRLF + '<TR>';
           for i := 0 to rs.Fields.Count-1 do
              Result := Result + '<TD>' + VarAsString(rs.Fields[i].Value) + '</TD>';
           Result := Result + '</TR>';
           rs.MoveNext;
        end;

    With a few thousand results, the function takes what any user would feel is too long to run. The Delphi Sampling Profiler shows that 99.3% of the time is spent in widestring concatenation (@WStrCatN and @WStrCat). Can anyone think of a way to improve widestring concatenation? I don't think Delphi 5 has any kind of string builder, and Format doesn't support Unicode. And to make sure nobody tries to weasel out, pretend you are implementing the interface:

        IRecordsetToHtml = interface(IUnknown)
           function RecordsetToHtml(const rs: _Recordset): WideString;
        end;

    Update One: I thought of using an IXMLDOMDocument to build up the HTML as XML. But then I realized that the final HTML would be XHTML and not HTML - a subtle, but important, difference. Update Two: Microsoft Knowledge Base article: How To Improve String Concatenation Performance
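    The language-independent fix is to stop growing the result in place: each Result := Result + ... copies the entire string, so the loop is O(n^2). The KB article's advice amounts to collecting the pieces and joining them once; in Delphi 5 that would mean a pre-sized buffer filled via SetLength and a write cursor. Here is the shape of the idea sketched in Python, where list-append plus a single join plays the role of the buffer:

        import html

        def recordset_to_html(rows):
            parts = []                                  # append is O(1), no copying
            for row in rows:
                parts.append("<TR>")
                parts.extend(f"<TD>{html.escape(str(v))}</TD>" for v in row)
                parts.append("</TR>")
            return "\r\n".join(parts)                   # one O(n) pass at the end

        print(recordset_to_html([("a", 1), ("b", 2)]))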

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. Well, if you have:
    - a large number of consumers
    - large profiles with a huge set of variables (as profiles describing human behaviour are likely to be)
    ...you are in trouble. If you really have a physical relational database at which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?
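    One hedged sketch of the in-memory option (my example, not from the question: it assumes a running Redis server, the redis-py client, and a profile:<id> key layout invented for illustration):

        import redis

        r = redis.Redis(host="localhost", port=6379, decode_responses=True)

        # One hash per consumer: demographics and analytical scores as fields.
        r.hset("profile:42", mapping={
            "age_band": "25-34",
            "likely_to_churn": "0.17",
            "likely_to_buy_100usd": "0.62",
        })

        profile = r.hgetall("profile:42")   # single in-memory round trip
        churn_risk = float(profile["likely_to_churn"])

    Whether this counts as "proven at scale" depends on sharding and persistence requirements, which is presumably what the shootout would compare.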

    Read the article

  • Update table instantly or “Bulk” Update in database later? And is it advisable?

    - by Mestika
    Hi, I have a question regarding semi-constant updates in a database. In short, it concerns a checkout function on a web page; each time the checkout function is invoked, it performs five steps. I want to try to optimize this function, and I have my eye on a step that updates a table on every checkout: I take the information retrieved from the shopping cart and then update the table in question. I do have some indexes on the table; the gain from them is greater than the cost of keeping them, so this is a cost I'm willing to take. Now, my question is: could it, performance-wise, be better not to update the table instantly, but instead collect every checkout's items, save them somewhere (maybe in a file), and then at a specific time (or several times) a day take that data and update the table in one pass? Then I started thinking about whether there is a possibility to use some sort of bulk update that takes a file, hashmap, array (or?) and applies it. I'm using IBM DB2 version 9.7. Mestika
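    A sketch of the collect-then-flush idea (illustrative only: sqlite3 stands in for the real connection here, but DB2 9.7's ibm_db_dbi driver exposes the same DB-API executemany call):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE stock (item_id INTEGER PRIMARY KEY, sold INTEGER)")
        conn.execute("INSERT INTO stock VALUES (1, 0), (2, 0)")

        buffer = []                          # checkout rows collected in memory

        def record_checkout(item_id, qty):
            buffer.append((qty, item_id))

        def flush():
            # One statement with many parameter sets: fewer round trips and
            # one index-maintenance pass instead of one per checkout.
            conn.executemany("UPDATE stock SET sold = sold + ? WHERE item_id = ?", buffer)
            conn.commit()
            buffer.clear()

        record_checkout(1, 3)
        record_checkout(2, 1)
        flush()

    The trade-off, as the question anticipates, is durability: anything still in the buffer (or file) at crash time is lost, so the flush interval bounds your exposure.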

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
    What is a spatial database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about this subject at Microsoft TechEd India. If you want to learn about the spatial side of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server, and I really like how it is implemented. In general, performance tuning and query optimization are something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing one, and many times by removing one, I have improved the performance of a system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover it in a super short but very interesting 10 quick slides. I will make sure that in the very first 20 minutes you understand the following topics:
    - Introduction to spatial databases: a one-line definition
    - Understanding spatial indexing
    - Index internals
    - Query/performance tuning
    - Query hinting/cost analysis
    - Spatial index catalog views
    - Performance troubleshooting
    - Finding the optimal index using the spatial index SP
    - Common errors
    - Index maintenance
    The slide deck will be followed by around 30 minutes of demos covering the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports this wonderful tooling out of the box, you will really like how the story is told. I am sure all the people who attend the event know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will open the session with the beautiful script of Botticelli's Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios of spatial databases and their usage. Do not miss this session; at the end there will be a book awarded to the best participant. My session details:
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010. Time: 5:00pm-6:00pm.
    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in SQL Server to build and optimize spatial queries. The session outlines the new geography data type for storing geodetic spatial data and performing operations on it, the new geometry data type for storing planar spatial data and performing operations on it, how to take advantage of the new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.
    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, the potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document a compelling reason to upgrade to SQL Server 2014.
    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)
    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
    Ad Hoc Queries: Even though the fact table is relatively small (only 22 million rows & 33GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
    The Details: Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures          Columnstore      Improvement
        SSRS via SSAS   10 - 12 seconds                  1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 sec)    1 - 2 seconds    >100x

    (The original post follows the table with two charts of report duration, one on a linear scale and one logarithmic; on both, the 1 - 2 second columnstore durations are too small to be visible next to the conventional ones.)
    The Wins. Performance: Even prior to the columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.
    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."
    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing time-to-results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • Are there any books dedicated to writing test code? [on hold]

    - by joshin4colours
    There are many programming books dedicated to useful programming and engineering topics, like working with legacy code or particular languages. The best of these books become "standard" or "canonical" references for professional programmers. Are there any books like this (or that could be like this) for writing test code? I don't mean books about Test-Driven Development, nor do I mean books about writing good (clean) code in general. I'm looking for books that discuss test code specifically (unit-level, integration-level, UI-level, design patterns, code structures and organization, etc.)

    Read the article

  • What is missing and should be added to Code Complete 3rd Edition? [closed]

    - by Peter Turner
    It's been quite a few years since Code Complete was published. I really love the book; I keep it in the bathroom at the office and read a little out of it once or twice a day. What developments in computer software... development need to be added to Code Complete 3e, and, for the sake of reductionism, what should be removed to make room for them? Is it necessary, or even possible, to call Code Complete "Code Complete" if it doesn't cover language features that even Delphi has, like anonymous methods and generics? Also, what languages would be more appropriate than C++ to use for the majority of the code examples?

    Read the article

  • After how many lines of code should a function be broken down?

    - by Sumeet
    While working on existing code bases, I usually come across procedures that make abusive use of IF and switch statements. These procedures consist of overwhelming code which I think badly needs refactoring. The situation gets worse when I identify that some of them are recursive as well. But this is always a matter of debate, as the code is working fine and no one wants to wake the dragon; yet everyone accepts that it is very expensive code to maintain. I am wondering whether there are any recommendations for determining that a particular method is a culprit and needs a revisit/rewrite, so that it can be broken down or polymorphized in an effective manner. Are there any metrics (like the number of lines in a procedure) that can be used to identify such segments of code? A checklist, or advice on how to convince everyone, would be great!
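    Line counts are at least easy to collect mechanically. As an illustrative sketch (assuming Python 3.8+ for end_lineno; dedicated tools such as radon or lizard add cyclomatic complexity, which is usually a better dragon-detector than raw length):

        import ast
        import sys

        THRESHOLD = 30  # illustrative cut-off, not a standard

        def long_functions(path):
            # Parse the file and report every function longer than THRESHOLD lines.
            tree = ast.parse(open(path).read())
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    length = node.end_lineno - node.lineno + 1
                    if length > THRESHOLD:
                        yield node.name, length

        for name, length in long_functions(sys.argv[1]):
            print(f"{name}: {length} lines")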

    Read the article

  • Do you use third-party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following:
    - A basic code review, to make sure the code follows the guidelines imposed.
    - Security code analysis, to make sure there are no loopholes.
    - Confirmation that there are no performance bottlenecks, by doing a load test etc.
    We have a lot of code coming in from third parties, and it is becoming laborious to manage code reviews, so I am looking to see whether others employ such practices. I understand that it may be a concern for some and would raise the question "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

    Read the article

  • What is the right level of granularity for code commenting?

    - by Nick
    I believe commenting in code is very important, but recently I've been reviewing code that has left me wondering, particularly this one:

        //due to lack of confidence with web programming leaving this note in for now

    What is the right level of granularity for code commenting? EDIT: Obviously the above comment is shocking, hence why I'm asking the question. I've recently found the inline comments in the code at my workplace annoying. Instead of getting angry, I want to discover what level of granularity for code commenting the community finds acceptable.

    Read the article

  • What are the Crappy Code Games - Tips on how to win?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win. Each game has a different aspect that defines how you win, and in this post we will give you some tips on how to try to win. As a starter, why not watch some of the sessions from previous SQLBits: storage sessions and sessions on IO. ...(read more)

    Read the article

  • What are the Crappy Code Games - Who can enter?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win. Anyone can enter the competition; what's more, anyone can just come along for the evening. All the evenings are aimed at being social evenings with food and drink, so just think of them as user group meetings. The evening will have lots of SQL bods in attendance, and I'll be trying to help out anyone that wants some...(read more)

    Read the article

  • Is imposing the same code format for all developers a good idea?

    - by Stijn Geukens
    We are considering imposing a single standard code format in our project (auto-format with save actions in Eclipse). The reason is that currently there is a big difference between the code formats used by our ten developers, which makes it harder for one developer to work on the code of another. The same Java file sometimes uses three different formats. So I believe the advantage is clear (readability = productivity), but would it be a good idea to impose this? And if not, why? UPDATE: We all use Eclipse and everyone is aware of the plan. There already is a code format used by most, but it is not enforced, since some prefer to stick to their own format. For the above reasons, some would prefer to enforce it.

    Read the article

  • What are the Crappy Code Games - What are the challenges?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win. There are 4 games that you can enter, each one testing a different aspect of SQL Server:
    - The High Jump: generate the highest I/O per second
    - The 100m Dash: cumulative highest number of I/Os in 60 seconds
    - The SSIS-athon: load a one billion row fact table in the shortest time
    - The Marathon: generate the highest...(read more)

    Read the article
