Search Results

Search found 16772 results on 671 pages for 'charles long'.


  • FormsAuthentication.SignOut() does not log the user out.

    - by Jason
    Smashed my head against this a bit too long. How do I prevent a user from browsing a site's pages after they have been logged out using FormsAuthentication.SignOut? I would expect this to do it: FormsAuthentication.SignOut(); Session.Abandon(); FormsAuthentication.RedirectToLoginPage(); But it doesn't. If I type in a URL directly, I can still browse to the page. I haven't used roll-your-own security in a while so I forget why this doesn't work.
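    A minimal sketch of a sign-out handler, assuming standard ASP.NET Forms Authentication (the page class, handler name and cookie handling are illustrative rather than the poster's actual code):

      using System;
      using System.Web;
      using System.Web.Security;

      public partial class AccountPage : System.Web.UI.Page   // hypothetical code-behind
      {
          protected void SignOutButton_Click(object sender, EventArgs e)
          {
              FormsAuthentication.SignOut();   // removes the forms-authentication ticket
              Session.Abandon();               // drops session state

              // Overwrite the auth cookie with an already-expired one, in case the browser keeps it.
              var authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, string.Empty);
              authCookie.Expires = DateTime.Now.AddYears(-1);
              Response.Cookies.Add(authCookie);

              FormsAuthentication.RedirectToLoginPage();
          }
      }

    Equally important: the pages being browsed must actually deny anonymous users (for example a <deny users="?" /> authorization rule in web.config for the protected paths) and should not be served from the browser cache (Response.Cache.SetCacheability(HttpCacheability.NoCache)); otherwise typing a URL directly can simply show a stale cached copy even though the ticket is gone.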

    Read the article

  • How to (pre)start xamlx workflow service

    - by rwwilden
    Related to this question. I have a xamlx workflow service that loads part of its definition from a database when it runs (using ActivityXamlServices.Load). Reason for this is that I need versioning, see the related question. I'll use WCF routing to direct calls to the right service. The part that I load dynamically contains a Receive activity. However, this activity is 'invisible' as long as the workflow doesn't start because the part of the workflow I load from the database is only loaded when the workflow starts. So from the outside it appears as if there is no Receive activity in the workflow. Apart from not being able to generate a contract for the workflow service, I can't call the service either. My first attempt was to do a soap call with the right contract on the workflow service. However, the runtime doesn't automagically activate my workflow in that case. So the question is, how do I start a workflow that is hosted inside IIS? Regards, Ronald
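    A rough sketch of one workaround, assuming the dynamic part can be composed before the host opens: load the database-stored XAML yourself and host the resulting activity explicitly (self-hosted here for brevity; endpoints would still come from config or AddServiceEndpoint), so the Receive it contains is visible without waiting for IIS message-based activation. The file name and address below are made up:

      using System;
      using System.Activities;
      using System.Activities.XamlIntegration;
      using System.ServiceModel.Activities;

      class VersionedWorkflowHost
      {
          static void Main()
          {
              // Hypothetical: this XAML is the versioned fragment normally pulled from the database.
              Activity definition = ActivityXamlServices.Load("DynamicPart.xaml");

              // Hosting the composed definition directly means its Receive activity is part of the
              // exposed contract from the moment the host opens, rather than appearing only after
              // the workflow instance starts.
              var host = new WorkflowServiceHost(definition, new Uri("http://localhost:8000/LeadService"));
              host.Open();

              Console.WriteLine("Listening. Press Enter to stop.");
              Console.ReadLine();
              host.Close();
          }
      }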

    Read the article

  • jQuery: Dynamic image handling (waiting for load)

    - by dclowd9901
    I'm trying to write a plugin with a built-in function that waits until all images on the page have loaded before the plugin executes. $(window).load() only works for the initial load of the page; if someone pulls down HTML through AJAX that contains images, it doesn't work. Is there any good way of doing this AND implementing it so that it can be self-contained in the plug-in? I've tried looping over $('img').complete, and I know that doesn't work (the images never load, and the browser crashes with a "script takes too long to complete" warning). For an example of what I'm trying to do, visit the page where I'm looking to house the plugin: http://www.simplesli.de If you go to the "more uses" section (click it on the nav bar), you'll see that the image slideshow won't load properly. I know the current code doesn't work, but it's just holding place until I figure something out.

    Read the article

  • TypeLoadException at startup of WCF

    - by Kelly
    I get the following error long before I hit the breakpoint at Main(). System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark) at System.Reflection.Assembly.GetTypes() at Microsoft.Tools.SvcHost.ServiceHostHelper.LoadServiceAssembly(String svcAssemblyPath) There is a suggestion that it might be a configuration error, but I don't see it when comparing to a similar, working example. How do I "Retrieve the LoaderExceptions property" when it happens this early? Thanks!
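    One way to get at the LoaderExceptions this early is to reproduce the same assembly scan outside WcfSvcHost and dump the inner exceptions; a small hedged sketch (pass it the path of the service assembly):

      using System;
      using System.Reflection;

      class DumpLoaderExceptions
      {
          static void Main(string[] args)
          {
              Assembly assembly = Assembly.LoadFrom(args[0]);
              try
              {
                  assembly.GetTypes(); // the same call ServiceHostHelper.LoadServiceAssembly ends up making
              }
              catch (ReflectionTypeLoadException ex)
              {
                  foreach (Exception loaderException in ex.LoaderExceptions)
                      Console.WriteLine(loaderException); // usually names the missing or mismatched assembly
              }
          }
      }

    The same catch block can also live temporarily in the service itself; the point is simply that ReflectionTypeLoadException.LoaderExceptions carries the detail the summary message hides.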

    Read the article

  • to_date in MS SQL Server 2005

    - by Chin
    Does anyone know how I would have to change the following to work with MS SQL? WHERE registrationDate between to_date ('2003/01/01', 'yyyy/mm/dd') AND to_date ('2003/12/31', 'yyyy/mm/dd'); What I have read implies I would have to construct it using DATEPART(), which could become very long-winded, especially when the goal is to compare on dates which I receive in the following format: "2003-12-30 10:07:42". It would be nice to pass them off to the database as is. Any pointers appreciated.
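    On the T-SQL side, CONVERT(datetime, '2003/01/01', 111) plays the role of to_date for that format, but since the values arrive from code as "2003-12-30 10:07:42", passing them as DateTime parameters avoids string formats entirely; a hedged sketch from the .NET side (table and column names assumed):

      using System;
      using System.Data;
      using System.Data.SqlClient;

      static class RegistrationQueries
      {
          public static int CountRegistrations(SqlConnection connection, DateTime from, DateTime to)
          {
              const string sql =
                  "SELECT COUNT(*) FROM Registrations " +        // hypothetical table
                  "WHERE registrationDate BETWEEN @from AND @to";

              using (var command = new SqlCommand(sql, connection))
              {
                  command.Parameters.Add("@from", SqlDbType.DateTime).Value = from;
                  command.Parameters.Add("@to", SqlDbType.DateTime).Value = to;
                  return (int)command.ExecuteScalar();
              }
          }
      }

    A string such as "2003-12-30 10:07:42" parses straight into the DateTime arguments with DateTime.Parse, so no explicit format mask is needed on either side.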

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Which Devices Support Javascript Geolocation via navigator.geolocation ?

    - by Maciek
    The iPhone supports geolocation in mobile Safari via the following call: navigator.geolocation.getCurrentPosition( function(pos){ var lat = pos.coords.latitude; var long = pos.coords.longitude; }, function(){ /* Handler if location could not be found */ } ); I'd like to build a good list of devices that have one of the following: support this feature out of the box, or support this feature with an upgrade, or support geolocation with equivalent fidelity of data with some other snippet of Javascript. I'm only familiar with my own device, so this is my list so far: Out of the box: iPhone 3GS Supported, but only with an update iPhone 3G iPhone 2G (?) PC or Mac computer with Firefox 3.5 Supported with some other snippet ? What is the level of support in Blackberry, Android phones, etc?

    Read the article

  • WPF ListView column auto sizing

    - by Diego Mijelshon
    Let's say I have the following ListView: <ListView ScrollViewer.VerticalScrollBarVisibility="Auto"> <ListView.View> <GridView> <GridViewColumn Header="Something" DisplayMemberBinding="{Binding Path=ShortText}" /> <GridViewColumn Header="Description" DisplayMemberBinding="{Binding Path=VeryLongTextWithCRs}" /> <GridViewColumn Header="Something Else" DisplayMemberBinding="{Binding Path=AnotherShortText}" /> </GridView> </ListView.View> </ListView> I'd like the short text columns to always fit in the screen, and the long text column to use the remaining space, word-wrapping if necessary. Is that possible?
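    Pure XAML gets close (Width="Auto" on the short columns), but the "remaining space" part usually ends up in code; a rough sketch that recomputes the long column's width whenever the ListView resizes, assuming the second column is the one that should stretch:

      using System.Windows;
      using System.Windows.Controls;

      public partial class MainWindow : Window
      {
          // Wired up in XAML as SizeChanged="ListView_SizeChanged" on the ListView (hypothetical name).
          private void ListView_SizeChanged(object sender, SizeChangedEventArgs e)
          {
              var listView = (ListView)sender;
              var gridView = listView.View as GridView;
              if (gridView == null || gridView.Columns.Count < 3)
                  return;

              // Leave room for the vertical scrollbar and a little chrome.
              double available = listView.ActualWidth - SystemParameters.VerticalScrollBarWidth - 10;
              double fixedWidth = gridView.Columns[0].ActualWidth + gridView.Columns[2].ActualWidth;

              if (available - fixedWidth > 0)
                  gridView.Columns[1].Width = available - fixedWidth; // the long-text column takes the rest
          }
      }

    For the word-wrapping itself, the long column needs a CellTemplate containing a TextBlock with TextWrapping="Wrap" instead of a plain DisplayMemberBinding, since the default cell presenter does not wrap.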

    Read the article

  • Documenting applications - automation / semi-automation for screenshots?

    - by bguiz
    For me one of the biggest bores of being a developer is writing user documentation. (I am referring to the stuff that gets exported into PDF files that ship with the product, not comments in code here). The task of adding or updating new bits of text to the existing documentation is OK. However, having to take screenshots of select screens can be quite a tedious process. Is there a way to automate or even semi-automate the process of taking screenshots? The main requirement is the ability to crop images such that they contain only the window, including window manager areas (such as the title bar). The secondary requirement is that any format is OK, so long as it can be exported to PDF. EDIT: Any more biters?
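    On Windows, a small helper can semi-automate the capture-and-crop step: grab the bounds of a window (title bar included) via Win32 and save just that region. A sketch using GDI+, with the output path made up; in practice you would trigger it from a hotkey or pass in a specific window handle rather than capturing the console that launched it:

      using System;
      using System.Drawing;
      using System.Drawing.Imaging;
      using System.Runtime.InteropServices;

      static class WindowShot
      {
          [StructLayout(LayoutKind.Sequential)]
          struct RECT { public int Left, Top, Right, Bottom; }

          [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow();
          [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

          static void Main()
          {
              RECT r;
              GetWindowRect(GetForegroundWindow(), out r);   // bounds include the window frame and title bar

              int width = r.Right - r.Left;
              int height = r.Bottom - r.Top;

              using (var bitmap = new Bitmap(width, height))
              using (var graphics = Graphics.FromImage(bitmap))
              {
                  graphics.CopyFromScreen(r.Left, r.Top, 0, 0, new Size(width, height));
                  bitmap.Save(@"C:\docs\shots\window.png", ImageFormat.Png);   // hypothetical output path
              }
          }
      }

    PNG output drops into most PDF-producing toolchains without trouble, which covers the secondary requirement.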

    Read the article

  • iPhone CoreData join

    - by Ken
    Hi guys, long time reader, first time poster. (A link to this schema is located here) http://www.weeshsoft.com/pix/DatabasePic.jpg I'm trying to get all LanguageEntries from a database for a given category.categoryName and languageset.languageSetName e.g. NSFetchRequest* fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"LanguageEntry" inManagedObjectContext:del.managedObjectContext]; [fetchRequest setEntity:entity]; NSString* predicateString = [NSString stringWithFormat:@"Category.categoryName = %@ AND LanguageSet.languageSetName = %@", @"Food", @"English####Spanish"]; fetchRequest.predicate = [NSPredicate predicateWithFormat:predicateString]; NSError *error = nil; NSArray* objects = [del.managedObjectContext executeFetchRequest:fetchRequest error:&error]; This always returns 0 objects. If I set the predicate string to match on one relationship (e.g. Category.categoryName = Food or languageSet.languageSetName = English####Spanish) it will return data. This is baffling, can anyone shed some light? -Ken

    Read the article

  • Antlr question: cannot get Antlr tool to compile simple file from ANTLRWorks

    - by Don Henton
    Here is the grammar file: grammar fred; test : 'fred'; Here is the batch file to launch the tool: SET JAVA_HOME=C:\Program Files\Java\jdk1.6.0_24 SET PATH=%PATH%;%JAVA_HOME%\bin SET ANTLR_HOME=c:/users/don/workspace/antlrAssign/lib/ java -cp %ANTLR_HOME%/antlr-3.3-complete.jar antlr.Tool fred.g Here's the result: ANTLR Parser Generator Version 2.7.7 (20060906) 1989-2005 fred.g:1:1: unexpected token: grammar error: Token stream error reading grammar(s): fred.g:3:19: expecting ''', found 'r' fred.g:1:1: rule grammar trapped: fred.g:1:1: unexpected token: grammar TokenStreamException: expecting ''', found 'r' Prior postings refer to "org.antlr.Tool" but the 3.3 jar has it located as above. The idea was to create a debug version of a tree parser, and according to the documentation, you have to use the command line tool. Has anyone seen this before? Am I nuts? It's two lines long and it's dying on the first word in the file. Of course this compiles in antlrworks. Any help appreciated, I can't afford any more adjustments to my medications.
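    The version banner is the clue: "ANTLR Parser Generator Version 2.7.7" means antlr.Tool, the ANTLR 2 entry point that is bundled inside the combined jar, is the class being run, and ANTLR 2 cannot read a v3 grammar (hence it chokes on the grammar keyword). The v3 tool is the org.antlr.Tool class the prior postings mention, so the last line of the batch file would presumably become:

      java -cp %ANTLR_HOME%/antlr-3.3-complete.jar org.antlr.Tool fred.g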

    Read the article

  • Multiple update panels and multiple postbacks cause entire page to refresh...

    - by Matt
    I'm having the same problem I had yesterday... The solution Aristos provided helped solve the problem, but I have other places sending updatepanel postbacks causing the same problem. When an update panel is updated and another request for an update is called before it has a chance to render the first updates, the entire page refreshes instead of just the update panel. I used fiddler to see what was going on and here's what happens... If I wait for the request to return before doing another request I get this: 21443|updatePanel|dnn_ctr1107_CRM_SalesTool_LeadsUpdatePanel| But if I don't wait, I get this: 66|pageRedirect||http://mysite.com/salesdashboard.aspx| The code from the previous question is still the same except I added UpdateMode="Conditional" to the update panel. Any ideas? Or is the only solution to this making sure that 2+ updates for any number of update panels (as long as they're on the same page) never happen? Thanks, Matt

    Read the article

  • Empty "Groups" in People Picker Navigation Controller

    - by Meltemi
    I'm using the standard People Picker Navigation Controller to let users associate one of their contacts with one of my objects. It presents itself as a long list of ALL contacts. The back button (top left) says "Groups" but when clicked on it shows an empty screen...even when there are many Groups in the user's address book. I've read through the docs and can't seem to find how to populate the Groups area. Here's how I present the people picker. Fairly standard: - (IBAction)showPicker:(id)sender { ABPeoplePickerNavigationController *picker = [[ABPeoplePickerNavigationController alloc] init]; picker.peoplePickerDelegate = self; [self presentModalViewController:picker animated:YES]; [picker release]; } What am I missing?

    Read the article

  • Translate query to NHibernate

    - by Rob Walker
    I am trying to learn NHibernate, and am having difficulty translating a SQL query into one using the criteria API. The data model has tables: Part (Id, Name, ...), Order (Id, PartId, Qty), Shipment (Id, PartId, Qty) For all the parts I want to find the total quantity ordered and the total quantity shipped. In SQL I have: select shipment.part_id, sum(shipment.quantity), sum(order.quantity) from shipment cross join order on order.part_id = shipment.part_id group by shipment.part_id Alternatively: select id, (select sum(quantity) from shipment where part_id = part.id), (select sum(quantity) from order where part_id = part.id) from part But the latter query takes over twice as long to execute. Any suggestions on how to create these queries in (fluent) NHibernate? I have all the tables mapped and loading/saving/etc the entities works fine.
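    One criteria-API shape that mirrors the second SQL form (and sidesteps the quantity-doubling you get when a part has several orders and several shipments joined in one query) is two grouped projection queries combined in memory; a hedged sketch, assuming Shipment and Order are mapped with a Part reference and a Quantity property:

      using System;
      using System.Collections.Generic;
      using NHibernate;
      using NHibernate.Criterion;

      static class PartTotals
      {
          // Groups the given entity (Shipment or Order) by its Part and sums the quantities.
          public static IDictionary<int, decimal> TotalsByPart<T>(ISession session)
          {
              var rows = session.CreateCriteria(typeof(T))
                  .CreateAlias("Part", "p")
                  .SetProjection(Projections.ProjectionList()
                      .Add(Projections.GroupProperty("p.Id"))
                      .Add(Projections.Sum("Quantity")))
                  .List<object[]>();

              var totals = new Dictionary<int, decimal>();
              foreach (object[] row in rows)
                  totals[Convert.ToInt32(row[0])] = Convert.ToDecimal(row[1]);
              return totals;
          }
      }

      // Usage sketch: run it once per entity and stitch the results together by part id.
      // var shipped = PartTotals.TotalsByPart<Shipment>(session);
      // var ordered = PartTotals.TotalsByPart<Order>(session);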

    Read the article

  • Problem with the nonresponding threads

    - by Oxygen
    Hello there, I have a web application which runs multiple threads on a button click, each thread making an IO call to a different ipAddress (logging in to a Windows account and then performing file operations). There is a threshold value of 30 seconds. I assume that if the threshold is exceeded during the login attempt, the device at that ipAddress does not match my conditions and I no longer care about it. Thread.Abort() does not fit my situation because it waits for the IO call to finish, which might take a long time. I tried doing the db operations according to the states of the threads right after the threshold timeout. It worked fine, but when I checked the log file I noticed that the thread.IsAlive property of the nonresponding threads was still true. After several debugging sessions on my local pc, I ran into what I suspect was a deadlock situation that crashed my pc badly. In short, do you have any idea how to forcefully kill nonresponding threads (waiting on the IO operation) right after the execution of the button_click? (PS: I am not using the threadpool) Oguzhan
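    A thread blocked inside an unmanaged IO call generally cannot be interrupted cleanly (Thread.Abort only takes effect once the thread returns to managed code), so a common compromise is to wait on the worker for the 30-second threshold and then simply abandon it; a simplified sketch:

      using System;
      using System.Threading;

      static class DeviceProbe
      {
          // Returns true only if the device at ipAddress responded within the threshold.
          public static bool TryProbe(string ipAddress, TimeSpan threshold)
          {
              bool succeeded = false;

              var worker = new Thread(() =>
              {
                  // Placeholder for the real work: log in to the Windows account, do the file operations.
                  succeeded = LoginAndDoFileOperations(ipAddress);
              });
              worker.IsBackground = true;   // an abandoned worker will not keep the process alive
              worker.Start();

              if (worker.Join(threshold))
                  return succeeded;

              // Threshold exceeded: treat the device as non-matching and move on. The worker is left
              // to finish or fail on its own rather than being forcefully killed mid-IO.
              return false;
          }

          static bool LoginAndDoFileOperations(string ipAddress)
          {
              return true; // stand-in for the actual IO call
          }
      }

    The IsAlive entries in the log then just reflect those abandoned workers still waiting on the remote device; recording the outcome at the Join timeout keeps the database work independent of them.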

    Read the article

  • Best Practices for persisting iPod Playlist (MPMediaItemCollection) across sessions

    - by coneybeare
    When using in-app audio in the iPhone SDK, it is possible to allow users to select a list from their iPod library and create an in-app local playlist. If I want to persist this choice, it is easy to serialize the data, write it to file, and then recover it. Doing it vanilla like this, however, leads me to think something is going to go wrong. For example, what if the user syncs and removes sounds? I can loop across them all and query the iPod DB at setup time, but with lists that could be 50,000 items long, this could take some time. How are other people doing this, and what are some gotchas that I haven't thought about?

    Read the article

  • Does Sandcastle support Entity Framework Partial Classes?

    - by ChrisHDog
    I am attempting to use Sandcastle (and Sandcastle Help File Builder) to do some "auto-documentation" of some classes I am using. The classes that are giving me trouble are some partial classes on Entity Framework items that add methods and properties to those Framework items. The triple slash comments don't appear to come through on the methods and properties created in the partial classes. I have figured out how to get XML documentation of the base properties using the short summary and long description fields in the .edmx editor, but that doesn't provide a solution for the items in the partial classes. Is this possible? Is it perhaps just settings that I'm not setting correctly to pick up the partial classes? Does Sandcastle do partial classes in non-Entity Framework settings? Is what I'm doing even possible (has anyone else successfully used the XML created from triple slash comments to create documentation on Entity Framework partial classes, and if so how did you do that)? Any assistance is appreciated.
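    For what it's worth, Sandcastle reads the compiler-generated XML documentation file rather than the source, so the usual suspects are that the project isn't emitting that file, or that it's stale relative to the assembly handed to Sandcastle Help File Builder. The partial-class half itself just needs ordinary /// comments; a trivial sketch with made-up names:

      // The EF designer generates the other half of this partial class; this half adds documented members.
      public partial class Customer
      {
          /// <summary>
          /// The customer's name formatted for display in reports.
          /// </summary>
          public string DisplayName
          {
              get { return FirstName + " " + LastName; }
          }
      }

    If members like DisplayName still don't show up, comparing the generated .xml file against the assembly (is the member element present at all?) narrows down whether the problem is the build settings or the help file build.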

    Read the article

  • Obtaining IP addresses in Bittorrent

    - by Legend
    I am trying to get a list of IP addresses serving or downloading a file. What I did was to contact a tracker like openbittorrent.com to get the following (as part of the scrape file): B%00%00%0C%5F%B1%B1l%CAGa%84S%CB%B0%9BG%84%3BE:0:1 Now, the long string in the beginning is the info hash. As a next step, I did this: http://tracker.sometracker.com/announce?info_hash=B%00%00%0C%5F%B1%B1l%CAGa%84S%CB%B0%9BG%84%3BE It gave me back the following. So far so good. The message contained this: d8:completei0e10:downloadedi0e10:incompletei2e8:intervali1931e12:min intervali965e5:peers12:U????????^@^@e Can someone tell me what should I be doing after this to get the IP addresses currently serving the file or downloading it?
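    That 12-byte peers value is the compact peer format: each peer is 6 bytes, a 4-byte IPv4 address followed by a 2-byte port in network (big-endian) byte order, so the response above carries the two incomplete peers the tracker reported. A small decoding sketch:

      using System;
      using System.Net;

      static class CompactPeers
      {
          // peers: the raw bytes of the bencoded "peers" string (length is a multiple of 6)
          public static void Decode(byte[] peers)
          {
              for (int i = 0; i + 6 <= peers.Length; i += 6)
              {
                  var address = new IPAddress(new[] { peers[i], peers[i + 1], peers[i + 2], peers[i + 3] });
                  int port = (peers[i + 4] << 8) | peers[i + 5];   // big-endian port
                  Console.WriteLine("{0}:{1}", address, port);
              }
          }
      }

    Whether a given peer is seeding or downloading isn't in the announce response itself (only the aggregate complete/incomplete counts are); that part comes from connecting to the peer and looking at its bitfield.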

    Read the article

  • HttpWebRequest to different IP than the domain resolves to

    - by fyjham
    Hey, long story short: the different environments (dev/staging/uat/live) of an API I'm calling are set up by putting a host record on the server, so that the live domain resolves to the other server for the HTTP request. The problem is that they've done this with so many different environments that we don't have enough servers to use the server-wide hosts files for it anymore (we've got some environments running off the same servers - luckily not dev and live though :P). I'm wondering if there's a way to make WebRequest request a domain but explicitly specify the IP of the server it should connect to? Or is there any way of doing this short of going all the way down to socket connections (I'd really prefer not to waste time/create bugs by trying to re-implement the HTTP protocol)? PS: I've tried, and we can't just get a new sub-domain for each environment.
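    On .NET 4.0 and later, one option is to aim the request at the IP and set HttpWebRequest.Host so that name-based virtual hosting and any Host-based routing still see the real domain; a sketch with made-up addresses:

      using System.IO;
      using System.Net;

      static class DirectIpClient
      {
          public static string Fetch()
          {
              // Connect to the chosen environment's server by IP, but present the live host name.
              var request = (HttpWebRequest)WebRequest.Create("http://10.1.2.3/api/resource"); // hypothetical IP
              request.Host = "api.example.com";   // hypothetical live domain; settable from .NET 4.0 onward

              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  return reader.ReadToEnd();
              }
          }
      }

    On earlier framework versions Host is a restricted header; for plain HTTP a similar effect can be had by leaving the original URL in place and pointing request.Proxy at the target server (new WebProxy("http://10.1.2.3:80")), though that trick does not carry over cleanly to HTTPS.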

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” (Wikipedia)

    Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code Review before Check in

    The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. (Image 1 – code review before check in)

    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done off-line.
    - The changes have not been through a CI build; it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code Review after Check in

    The developer checks in, and random code reviews are performed on the checked in code. (Image 2 – Code review after check in)

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid Approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. (Image 3 – Hybrid Approach)

    1. Code review high impact check ins. It is not possible to review everything; by setting up code review check in policies you can end up slowing your team. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review for individuals who keep making it onto this list of shame.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say? So I asked a few experts what they thought of a "code review quality gate before checking in code".

    Terje Sandstrom | Microsoft ALM MVP: You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkins. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's, so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger: I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
    - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP: I do not think there is one right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger: Depends on a per scenario basis. We recommend post check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches, or can be rejected.
    Pre checkin reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.
    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger: This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out.
    1.) If you do reviews per check in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead to reducing the checkin frequency of the devs, which I would not accept.
    2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs.

    Jakob Leander | Sr. Director, Avanade: In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project)
    2. When a project actually does it, they often do not do it right away = errors pile up
    3. Requires a lot of time discussing/defining the standard and for the team to learn it
    However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences. In the last years I have advocated the following approach for code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc.
    - Dev lead on project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits:
      - You can easily tweak to reach the level you desire together with the customer
      - It is easy to measure for both developers/management
      - It is 100% consistent across the code base
      - It gets validated all the time, so you never end up getting hammered by a customer review in the end
      - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication
    You need to track this at least during nightly builds and make sure the team sees the total # of issues. Do not allow #issues to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means:
    - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues which you REALLY do not want in your app
    Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade: I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase… Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP: For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade: Peer review is the only way to scale and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade: If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches. It offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger: I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html I'd like to add a few more points:
    - Before/after checkin is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-checkin policy. The peer programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
    - Under stress, enforce pre-checkin reviews. It might sound strange, if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click suppress all to ignore the warnings.
    - Under stress, tighten process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews do them as early as possible, but never later than needed.
    In other words, pre-checkin/post-checkin doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!

    Read the article

  • How much overhead does a msg_send call incur?

    - by pxl
    I'm attempting to piece together and run a list of tasks put together by a user. These task lists can be hundreds or thousands of items long. From what I know, the easiest and most obvious way would be to build an array and then iterate through it: NSArray *arrayOfTasks = .... init and fill with thousands of tasks for (id eachTask in arrayOfTasks) { if ( eachTask && [eachTask respondsToSelector:@selector(execute)] ) [eachTask execute]; } For a desktop, this may be no problem, but for an iPhone or iPad, it may be a problem. Is this a good way to go about it, or is there a faster way to accomplish the same thing? The reason why I'm asking how much overhead a msg_send incurs is that I could also do a straight C implementation as well. For example, I could put together a linked list and use a block to handle the next task. Will I gain anything from that or is it really more trouble than it's worth?

    Read the article

  • What exactly is the "Multi-threaded Debug DLL" Runtime Library option doing in VS 2008?

    - by GregH
    I have a solution in VS 2008 that creates a DLL. I then use that DLL in another application. If I go into the DLL project's property pages and change the following configuration for a DEBUG build, then the built DLL no longer provides the desired functionality. If I change it back and rebuild the DLL, then the DLL does provide the correct functionality: Property Pages -> Configuration Properties -> C/C++ -> Code Generation -> Runtime Library. If set to "Multi-threaded Debug DLL (/MDd)" then everything works as it should; I get the correct functionality from the DLL. If set to "Multi-threaded DLL (/MD)" then the DLL does not function properly...no runtime errors or anything, it just doesn't work (the DLL is supposed to plot some lines on a map but does not in this mode). So the question is, why does using the /MDd flag result in correct functionality of the underlying code, while /MD results in incorrect functionality? A little background...somebody else developed the DLL in C++ and I am using this DLL in a VB.net application.

    Read the article

  • MySQL query performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi All I've just started a new job and noticed that the analysts computers are connected to the network at 100Mbps. The queries we run against the MySQL server can easily be 500MB+ and it seems at times when the servers are under high load the DBAs kill low priority jobs as they are taking too long to run. My question is this... How much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? Thanks Rob
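    For a rough sense of scale: 500 MB is about 4,000 megabits, so the wire time alone is on the order of 4,000 / 100 ≈ 40 seconds on a 100 Mbps link versus roughly 4 seconds at 1 Gbps, assuming the link is otherwise idle and the server can produce rows that fast. Whether the upgrade helps therefore comes down to how much of the elapsed time is transfer rather than query execution; comparing the same query run locally on the database server against a run from an analyst's desktop (or watching how long the connection sits in the "Sending data" state) gives a quick read on that split.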

    Read the article

  • Was Joel right about XML being slow?

    - by Will
    A long time ago Joel explained how various every-day coding things were slow, and this led to XML as a data store being slow: http://www.joelonsoftware.com/articles/fog0000000319.html Are those every-day coding things - strcat and malloc - still slow in a std::string and dlmalloc world? What else has changed in modern processors and mainstream frameworks? And is XML still slow? You can't find an RDBMS that doesn't claim some kind of native XML support these days; haven't they got it faster - a single pass to index it for example - yet?

    Read the article

  • Eager-loading association count with Arel (Rails 3)

    - by Matchu
    Simple task: given that an article has many comments, be able to display in a long list of articles how many comments each article has. I'm trying to work out how to preload this data with Arel. The "Complex Aggregations" section of the README file seems to discuss that type of situation, but it doesn't exactly offer sample code, nor does it offer a way to do it in two queries instead of one joined query, which is worse for performance. Given the following: class Article has_many :comments end class Comment belongs_to :article end How can I preload for an article set how many comments each has?

    Read the article
