Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 218/663

  • How do you go about understanding the source code of an Open source project?

    - by Anirudh Vemula
    I am planning to contribute code through patches to some open source organisations to become more familiar with open source development. I have chosen some organisations, but when I download their source code I don't seem to understand even a bit of it. How do I go about understanding their source code? I tried working through a bug, but even finding the place in the source code where the bug lives is difficult when you have no idea how the code is structured and implemented. I need help with this so I can start working on an open source codebase.

    Read the article

  • Creating a layer of abstraction over the ORM layer

    - by Daok
    I believe that if your repositories use an ORM, that is already enough abstraction from the database. However, where I am working now, someone believes that we should have a layer that abstracts the ORM in case we want to change the ORM later. Is it really necessary, or is it simply a lot of overhead to create a layer that has to work with many ORMs? Edit Just to give more detail: We have POCO classes and entity classes that are mapped with AutoMapper. Entity classes are used by the repository layer. The repository layer then uses the additional layer of abstraction to communicate with Entity Framework. The business layer has no direct access to Entity Framework whatsoever. Even without the additional layer of abstraction over the ORM, the business layer has to go through the service layer, which uses the repository layer. In both cases, the business layer is totally separated from the ORM. The main argument is being able to change the ORM in the future. Since the ORM is already localized inside the repository layer, to me it is already well separated, and I do not see why an additional layer of abstraction is required to have "quality" code.
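
    For concreteness, here is a minimal sketch (hypothetical Customer/ShopContext names, Entity Framework 6 style - not code from the question) of the arrangement described above: the business and service layers see only the repository interface, and the single class below is the only place that touches the ORM.

        using System.Data.Entity;   // Entity Framework 6 (assumed package reference)

        public class Customer                        // hypothetical entity, for illustration only
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ShopContext : DbContext         // hypothetical EF context
        {
            public DbSet<Customer> Customers { get; set; }
        }

        // The business/service layers depend only on this interface; the ORM never leaks out of it.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Add(Customer customer);
        }

        // The only class that knows Entity Framework exists.
        public class EfCustomerRepository : ICustomerRepository
        {
            private readonly ShopContext _context;

            public EfCustomerRepository(ShopContext context) { _context = context; }

            public Customer GetById(int id)
            {
                return _context.Customers.Find(id);  // EF-specific call, hidden behind the interface
            }

            public void Add(Customer customer)
            {
                _context.Customers.Add(customer);
                _context.SaveChanges();
            }
        }

    Swapping the ORM would mean writing another ICustomerRepository implementation; that isolation is exactly what the question argues is already sufficient without a further layer.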

    Read the article

  • Optimization ended up in casting an object at each method call

    - by Aybe
    I've been doing some optimization for the following piece of code: public void DrawLine(int x1, int y1, int x2, int y2, int color) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color); } After profiling, about 70% of the time was spent getting a context for drawing and disposing it. I ended up sketching the following overload: public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext bitmapContext) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, bitmapContext); } Up to here no problems: all the user has to do is pass a context, and performance is really great, as a context is created/disposed only once (previously it was a thousand times per second). The next step was to make it generic, in the sense that it doesn't depend on a particular framework for rendering (besides .NET obviously). So I wrote this method: public void DrawLine(int x1, int y1, int x2, int y2, int color, IDisposable bitmapContext) { _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, (BitmapContext)bitmapContext); } Now every time a line is drawn the generic context is cast, which was unexpected for me. Are there any approaches for fixing this design issue? Note: _bitmap is a WriteableBitmap from WPF; BitmapContext is from the WriteableBitmapEx library; DrawLineBresenham is an extension method from WriteableBitmapEx.
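
    One possible direction, sketched only (it reuses the WriteableBitmapEx types and the context-taking overload the question already describes; the interface and wrapper names are made up): push the context type into a generic parameter so the concrete type is known at compile time and no per-call cast is needed.

        using System;
        using System.Windows.Media.Imaging;   // WPF WriteableBitmap + WriteableBitmapEx (namespace assumed)

        // Callers that want renderer-independence program against IDrawingSurface<TContext>;
        // the concrete context type is fixed once per implementation instead of cast per call.
        public interface IDrawingSurface<TContext> where TContext : IDisposable
        {
            void DrawLine(int x1, int y1, int x2, int y2, int color, TContext context);
        }

        public class WriteableBitmapSurface : IDrawingSurface<BitmapContext>
        {
            private readonly WriteableBitmap _bitmap;

            public WriteableBitmapSurface(WriteableBitmap bitmap) { _bitmap = bitmap; }

            public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext context)
            {
                // The context already has its concrete type here, so the cast disappears.
                _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, context);
            }
        }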

    Read the article

  • Why did an interviewer ask me a question about people eating curry?

    - by Barry
    I had an interview question once which went... Interviewer: "Could you tell me how many people will eat curry for their dinner this evening?" Me: "Er, sorry?" Interviewer: "Not the actual number, just an estimate." I actually started to stumble my way through it, then stopped and questioned what it had to do with the job. The interviewer mumbled something and moved on. I guess my question is: what is the point of these ridiculous questions? I just don't understand why interviewers started coming up with these things.

    Read the article

  • Composite-like pattern and SRP violation

    - by jimmy_keen
    Recently I've noticed myself implementing a pattern similar to the one described below. It starts with an interface: public interface IUserProvider { User GetUser(UserData data); } The GetUser method's sole job is to somehow return a user (that would be an operation, speaking in composite terms). There might be many implementations of IUserProvider, which all do the same thing - return a user based on input data. It doesn't really matter, as they are only leaves in composite terms, and that's fairly simple. Now, my leaves are used by a single own-them-all composite class, which at the moment follows this implementation: public interface IUserProviderComposite : IUserProvider { void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider); } public class UserProviderComposite : IUserProviderComposite { public User GetUser(UserData data) ... public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider) ... } The idea behind UserProviderComposite is simple. You register providers, and this class acts as a reusable entry point. When calling GetUser, it will use whatever registered provider matches the predicate for the requested user data (if that helps, it stores a key-value map of predicates and providers internally). Now, what confuses me is whether the RegisterProvider method (which brings to mind the composite's add operation) should be part of that class. It kind of expands its responsibilities from providing a user to also managing the provider collection. As far as my understanding goes, this violates the Single Responsibility Principle... or am I wrong here? I thought about extracting the registration part into a separate entity and injecting it into the composite. While it looks decent on paper (in terms of SRP), it feels a bit awkward because: I would essentially be injecting a Dictionary (or other key-value map) ...or a silly wrapper around it, doing nothing more than adding entries. This won't follow composite anymore (as add won't be part of the composite). What exactly is the presented pattern called? Composite felt natural to compare it with, but I realize it's not exactly that; however, nothing else rings any bells. Which approach would you take - stick with SRP or stick with the "composite" pattern? Or is the design here flawed and, given the problem, can this be done in a better way?
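
    To make the structure concrete, here is a rough sketch of the composite's internals as described in the question (the list-of-pairs storage, iteration order and exception are assumptions, not part of the original):

        using System;
        using System.Collections.Generic;

        public class UserProviderComposite : IUserProviderComposite
        {
            // Registration order is preserved; the first matching predicate wins.
            private readonly List<KeyValuePair<Predicate<UserData>, IUserProvider>> _providers =
                new List<KeyValuePair<Predicate<UserData>, IUserProvider>>();

            public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider)
            {
                _providers.Add(new KeyValuePair<Predicate<UserData>, IUserProvider>(predicate, provider));
            }

            public User GetUser(UserData data)
            {
                foreach (var entry in _providers)
                {
                    if (entry.Key(data))
                        return entry.Value.GetUser(data);   // delegate to the matching leaf
                }
                throw new InvalidOperationException("No provider registered for the given user data.");
            }
        }

    Extracting the _providers member into its own registry type is the SRP variant the question mentions; the sketch shows why that type would amount to little more than a wrapped dictionary.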

    Read the article

  • Why is using C++ libraries so complicated?

    - by Pius
    First of all, I want to note I love C++ and I'm one of those people who think it is easier to code in C++ than in Java. Except for one tiny thing: libraries. In Java you can simply add a jar to the build path and you're done. In C++ you usually have to set multiple paths for the header files and the library itself. In some cases, you even have to use special build flags. I have mainly used Visual Studio, Code::Blocks, and no IDE at all. None of the three options differs much when it comes to using external libraries. I wonder why no simpler alternative has ever been made for this - like a special .zip file that has everything you need in one place, so the IDE can do all the work of setting up the build flags for you. Is there any technical barrier to this?

    Read the article

  • How to design highly scalable web services in Java?

    - by Kshitiz Sharma
    I am creating some web services that would have 2000 concurrent users. The services are offered for free and are hence expected to get a large user base. In the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, like Building highly scalable web services. However, my requirements differ from the question above. For example - My application does not have a user interface, so images, CSS and JavaScript are not an issue. It is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately. This is my project setup - REST-based web services using Apache CXF, Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tuning), Tomcat 6.0, MySql 5.5. My questions are - Are there alternatives to MySql that offer better performance for what I'm trying to do? What are some general things to abide by in order to scale a Java-based web application? I am thinking of putting my application on two Tomcat instances, with httpd redirecting each request to the appropriate Tomcat on the basis of load. Is this the right approach? Separate Tomcat instances can help, but doesn't the database then become the bottleneck, since both applications access the same database? I am a programmer, not a DB admin; how difficult would it be to cluster a MySql database (or to cluster whatever database is offered as an alternative in 1)? How effective are caching solutions like EHCache? Any other general best practices? Some clarifications - Could you partition the data? Yes, we could, but we're trying to avoid it. We need to run a lot of data mining algorithms, and the design will evolve over time, so we can't be sure where the lines of partition should be.

    Read the article

  • How to keep AST for feature access?

    - by greenoldman
    Consider code like this (let's say it is C++): Foo::Bar.get().X How should one keep the AST for this -- as a tree with the root at the left, Foo(Bar(get(X))), or with the root at the right, (((Foo)Bar)get)X? Or maybe as a flat structure (a list)? The first seems more convenient when resolving names, the second when working with it as an expression. I set the tag 'parsing', but I am really asking from a semantic-analysis point of view (there is no such tag).
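
    For illustration only (C# used purely as notation, with made-up node classes), this is what the "root at the right" shape looks like when built explicitly: the outermost node is the final .X access and the leftmost name sits at the bottom of the tree.

        public abstract class Expr { }

        public class NameExpr : Expr            // e.g. Foo::Bar
        {
            public string Name;
        }

        public class MemberAccessExpr : Expr    // target.Member
        {
            public Expr Target;
            public string Member;
        }

        public class CallExpr : Expr            // target(...)
        {
            public Expr Target;
        }

        public static class AstExample
        {
            public static Expr Build()
            {
                // ((Foo::Bar).get()).X -- the root is the access to X.
                return new MemberAccessExpr
                {
                    Member = "X",
                    Target = new CallExpr
                    {
                        Target = new MemberAccessExpr
                        {
                            Member = "get",
                            Target = new NameExpr { Name = "Foo::Bar" }
                        }
                    }
                };
            }
        }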

    Read the article

  • Is there an easy way to type in common math symbols?

    - by srcspider
    Disclaimer: I'm sure someone is going to moan about ease of use; for the purpose of this question, consider readability to be the only factor that matters. So I found this site that converts to easting/northing, it's not really important what that even means, but here's how the piece of JavaScript looks. /** * Convert Ordnance Survey grid reference easting/northing coordinate to (OSGB36) latitude/longitude * * @param {OsGridRef} gridref - easting/northing to be converted to latitude/longitude * @returns {LatLonE} latitude/longitude (in OSGB36) of supplied grid reference */ OsGridRef.osGridToLatLong = function(gridref) { var E = gridref.easting; var N = gridref.northing; var a = 6377563.396, b = 6356256.909; // Airy 1830 major & minor semi-axes var F0 = 0.9996012717; // NatGrid scale factor on central meridian var f0 = 49*Math.PI/180, λ0 = -2*Math.PI/180; // NatGrid true origin var N0 = -100000, E0 = 400000; // northing & easting of true origin, metres var e2 = 1 - (b*b)/(a*a); // eccentricity squared var n = (a-b)/(a+b), n2 = n*n, n3 = n*n*n; // n, n², n³ var f=f0, M=0; do { f = (N-N0-M)/(a*F0) + f; var Ma = (1 + n + (5/4)*n2 + (5/4)*n3) * (f-f0); var Mb = (3*n + 3*n*n + (21/8)*n3) * Math.sin(f-f0) * Math.cos(f+f0); var Mc = ((15/8)*n2 + (15/8)*n3) * Math.sin(2*(f-f0)) * Math.cos(2*(f+f0)); var Md = (35/24)*n3 * Math.sin(3*(f-f0)) * Math.cos(3*(f+f0)); M = b * F0 * (Ma - Mb + Mc - Md); // meridional arc } while (N-N0-M >= 0.00001); // ie until < 0.01mm var cosf = Math.cos(f), sinf = Math.sin(f); var ν = a*F0/Math.sqrt(1-e2*sinf*sinf); // nu = transverse radius of curvature var ρ = a*F0*(1-e2)/Math.pow(1-e2*sinf*sinf, 1.5); // rho = meridional radius of curvature var η2 = ν/ρ-1; // eta squared var tanf = Math.tan(f); var tan2f = tanf*tanf, tan4f = tan2f*tan2f, tan6f = tan4f*tan2f; var secf = 1/cosf; var ν3 = ν*ν*ν, ν5 = ν3*ν*ν, ν7 = ν5*ν*ν; var VII = tanf/(2*ρ*ν); var VIII = tanf/(24*ρ*ν3)*(5+3*tan2f+η2-9*tan2f*η2); var IX = tanf/(720*ρ*ν5)*(61+90*tan2f+45*tan4f); var X = secf/ν; var XI = secf/(6*ν3)*(ν/ρ+2*tan2f); var XII = secf/(120*ν5)*(5+28*tan2f+24*tan4f); var XIIA = secf/(5040*ν7)*(61+662*tan2f+1320*tan4f+720*tan6f); var dE = (E-E0), dE2 = dE*dE, dE3 = dE2*dE, dE4 = dE2*dE2, dE5 = dE3*dE2, dE6 = dE4*dE2, dE7 = dE5*dE2; f = f - VII*dE2 + VIII*dE4 - IX*dE6; var λ = λ0 + X*dE - XI*dE3 + XII*dE5 - XIIA*dE7; return new LatLonE(f.toDegrees(), λ.toDegrees(), GeoParams.datum.OSGB36); } I found that to be a really nice way of writing an algorithm, at least as far as readability is concerned. Is there any way to easily write the special symbols? And by easily write I mean NOT copy/paste them.

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • How abstract should you get with BDD

    - by Newton
    I was writing some tests in Gherkin (using Cucumber/SpecFlow). I was wondering how abstract I should get with my tests. In order not to make this open-ended, which of the following statements is better for BDD: Given I am logged in with email [email protected] and password 12345 When I do something Then something happens as opposed to Given I am logged in as the Administrator When I do something Then something happens The reason I am confused is that 1 is based more on the raw behaviour (filling in email and password), while 2 is easier to process and makes the tests easier to write.
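
    One way to see the trade-off (a sketch only - a SpecFlow-style C# binding with made-up step and helper names, not code from the question): the abstract wording of option 2 pushes the concrete credentials into a single step definition instead of repeating them in every scenario.

        using TechTalk.SpecFlow;   // SpecFlow binding attributes

        [Binding]
        public class LoginSteps
        {
            [Given(@"I am logged in as the Administrator")]
            public void GivenIAmLoggedInAsTheAdministrator()
            {
                // The concrete email/password live here, in one place,
                // rather than in the text of every feature file.
                LogIn("admin@example.com", "12345");
            }

            private void LogIn(string email, string password)
            {
                // ... drive the application under test (details omitted) ...
            }
        }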

    Read the article

  • Is taking a semester or year off from college a good idea?

    - by astrieanna
    I am currently a Junior majoring in Computer Science at a top university (in the USA). As I'm really getting tired of taking classes, I was wondering if taking a semester or year off to do an internship(s) is a reasonable idea? It seems like it would give me more experience programming (making classes a bit easier), and give me a chance to recover from the burnout that comes from taking 18 credits a semester. A friend suggested that I just take a lighter course load, but I only have 2 more semesters of financial aid, so I need to take 18 credits in each of them in order to finish. Taking time off from school is not a normal thing to do, at least at this school. Since more internships are advertised for the summer (that I've seen), I was wondering if there are internships available in times other than the summer? If I took off for a whole year, would it be more valuable to try to stay at the same company for the whole time or to try to get a series of internships at different ones? Valuable in both the sense of resume value and personal value. Would it be easier or harder to get multiple shorter internships?

    Read the article

  • Do real developers use UML and other CASE tools?

    - by Avi
    I'm a CS student, currently a junior, and in one of my classes this semester they have us studying all sorts of UML diagramming methods. Among others, we've touched on Petri nets, DFD diagrams, sequence diagrams, use case diagrams, collaboration diagrams, Jackson System Development diagrams, entity-relation diagrams, and more. I've worked on more than a few professional projects over the years and never encountered anyone who used these systems to any great degree (other than a general class diagram or a diagram of the tables in a database). I was just wondering if I could query the hive mind to see if this is true in your work experience too. Have you used these models at all and found them to be as important as they tell us students they are? Or is all this stuff just academic ivory-tower crap that people in the real world hardly ever touch? Which of these systems have you found to be effective and useful? Are there specific kinds of scenarios that they are more intended to be used in than what the typical software developer encounters?

    Read the article

  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge - whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in-depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through Professional Development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them. I realise that PD policy is going to vary across companies of different size and culture, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers; especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying. Not quite a summary, but what I'm trying to ask: Is it common or recommended for companies to allocate PD time? Whose responsibility is it to ensure a developer's knowledge and skills are up to date? Should a part-time work schedule inspire a lower ratio of PD time : work? How can a developer show non-developer coworkers that reading blogs and books is net productive? Is reading blogs and books actually net productive? (references welcomed) Is writing blogs effective as a way of PD? (a recent theme on Hacker News) This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.

    Read the article

  • Need Help in optimizing a loop in C [migrated]

    - by WedaPashi
    I am trying to draw a checkerboard pattern on an LCD using a GUI library called emWin. I have actually managed to draw it using the following code, but having this many loops in the program body for a single task, and in the internal flash of the microcontroller at that, is not a good idea. For those who have not worked with emWin, I will try to explain a few things before we get to the actual logic. GUI_RECT is a structure defined in the source files of emWin, and I am blind to it. Rect, Rect2, Rect3, and so on up to Rect10 are objects. The elements of the Rect array are {x0, y0, x1, y1}, where x0, y0 is the starting location of the rectangle in the X-Y plane and x1, y1 is the end location of the rectangle in the X-Y plane. So Rect = {0, 0, 79, 79} is a rectangle that starts at the top left of the LCD and extends to (79, 79); it's basically a square. The function GUI_SetBkColor(int color); sets the background color. The function GUI_SetColor(int color); sets the foreground color. GUI_WHITE and DM_CHECKERBOARD_COLOR are two #defined color values. GUI_FillRectEx(&Rect); draws the rectangle. The code below works fine, but I want to make it smarter. GUI_RECT Rect = {0, 0, 79, 79}; GUI_RECT Rect2 = {80, 0, 159, 79}; GUI_RECT Rect3 = {160, 0, 239, 79}; GUI_RECT Rect4 = {240, 0, 319, 79}; GUI_RECT Rect5 = {320, 0, 399, 79}; GUI_RECT Rect6 = {400, 0, 479, 79}; GUI_RECT Rect7 = {480, 0, 559, 79}; GUI_RECT Rect8 = {560, 0, 639, 79}; GUI_RECT Rect9 = {640, 0, 719, 79}; GUI_RECT Rect10 = {720, 0, 799, 79}; WM_SelectWindow(Win_DM_Main); GUI_SetBkColor(GUI_BLACK); GUI_Clear(); for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect); Rect.y0 += 80; Rect.y1 += 80; } /* for(j=0,j<11;j++) { for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect); Rect.y0 += 80; Rect.y1 += 80; } Rect.x0 += 80; Rect.x1 += 80; } */ for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(DM_CHECKERBOARD_COLOR); else GUI_SetColor(GUI_WHITE); GUI_FillRectEx(&Rect2); Rect2.y0 += 80; Rect2.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect3); Rect3.y0 += 80; Rect3.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(DM_CHECKERBOARD_COLOR); else GUI_SetColor(GUI_WHITE); GUI_FillRectEx(&Rect4); Rect4.y0 += 80; Rect4.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect5); Rect5.y0 += 80; Rect5.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(DM_CHECKERBOARD_COLOR); else GUI_SetColor(GUI_WHITE); GUI_FillRectEx(&Rect6); Rect6.y0 += 80; Rect6.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect7); Rect7.y0 += 80; Rect7.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(DM_CHECKERBOARD_COLOR); else GUI_SetColor(GUI_WHITE); GUI_FillRectEx(&Rect8); Rect8.y0 += 80; Rect8.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(GUI_WHITE); else GUI_SetColor(DM_CHECKERBOARD_COLOR); GUI_FillRectEx(&Rect9); Rect9.y0 += 80; Rect9.y1 += 80; } for(i = 0; i < 6; i++) { if(i%2 == 0) GUI_SetColor(DM_CHECKERBOARD_COLOR); else GUI_SetColor(GUI_WHITE); GUI_FillRectEx(&Rect10); Rect10.y0 += 80; Rect10.y1 += 80; }

    Read the article

  • Examples of limitations in IT due to different bit length by design

    - by Alaudo
    I am teaching the course "Introduction to Programming" for first-year students and would like to find interesting examples where a datatype size in bits, chosen by design, led to certain well-known restrictions or important values. Here are some examples: Because the Bell teleprinter used a 7-bit code (later adopted as ASCII), to this day we often have to encode attachments in electronic messages so that they contain only 7-bit data. The classical limitation of a 32-bit address space leads to the 4 GB maximum RAM size available on 32-bit systems, and a 32-bit size field gives the 4 GB maximum file size in FAT32. Do you know other interesting examples of how the choice of a data type (and especially its binary length) has influenced the modern IT world? Added after some discussion in the comments: I am not going to teach how to overcome limitations. I just want the students to know that 1 byte can hold values from -128..0..+127 or 0..255, 2 bytes cover the range 0..65535, etc., by giving examples they know from other sources, like the above-mentioned base64 encoding. We are just learning the basic datatypes and I am trying to find a good reference for "how large" these types are.
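
    As a classroom-style illustration (a small C# sketch, not from the question), the ranges fixed by a type's bit length can simply be printed, and the 2^32 case connects directly to the 4 GB limits mentioned above:

        using System;

        class TypeRanges
        {
            static void Main()
            {
                Console.WriteLine("sbyte  (8 bits, signed)   : {0} .. {1}", sbyte.MinValue, sbyte.MaxValue);   // -128 .. 127
                Console.WriteLine("byte   (8 bits, unsigned) : {0} .. {1}", byte.MinValue, byte.MaxValue);     // 0 .. 255
                Console.WriteLine("ushort (16 bits, unsigned): {0} .. {1}", ushort.MinValue, ushort.MaxValue); // 0 .. 65535
                Console.WriteLine("uint   (32 bits, unsigned): {0} .. {1}", uint.MinValue, uint.MaxValue);     // 0 .. 4294967295

                // 2^32 addressable bytes is the 4 GB ceiling behind the 32-bit RAM and FAT32 file-size limits.
                Console.WriteLine("2^32 bytes = {0} GB", Math.Pow(2, 32) / (1024 * 1024 * 1024));
            }
        }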

    Read the article

  • How to price code reviews to encourage good behavior?

    - by Chris Clark
    I work for a company that has a hosted .NET internet application with many clients. Those clients often want to write customizations for our application. We have APIs to hook into the app, but the customizations themselves are written in .NET. This is a shared, secure hosting environment, and we have to code review these customizations before we can deploy them in our datacenter, to ensure that they don't degrade performance, crash our servers, or open any security vulnerabilities. We charge for these code reviews. The current pricing model is simply a function of the number of lines of code. I think this is a bad idea for a variety of reasons, but primarily because, if we are interested in verifying that the code works as expected, we should be incentivizing good, readable code, not compaction. I would like to propose a pricing model that incorporates some or all of the following as inputs: lines of code, cyclomatic complexity, average function length, and number of functions. Are there any other metrics I should incorporate, or other ideas for how we can reasonably price code reviews in a way that encourages safe and understandable code?
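
    Purely as an illustration of how such inputs could be combined (every weight and threshold below is made up, not something from the question), a pricing function might charge per function and surcharge only the complexity and length that make review harder:

        using System;

        public class ReviewPricing
        {
            public decimal BaseRatePerFunction { get; set; } = 5m;          // flat review cost per function
            public decimal SurchargePerExcessComplexity { get; set; } = 2m; // per point above the allowance
            public int ComplexityAllowancePerFunction { get; set; } = 10;

            public decimal Price(int functionCount, int totalCyclomaticComplexity, double avgFunctionLength)
            {
                decimal price = BaseRatePerFunction * functionCount;

                // Only complexity beyond a per-function allowance costs extra, so plain
                // readable code is not penalized simply for being long.
                int excess = Math.Max(0, totalCyclomaticComplexity - ComplexityAllowancePerFunction * functionCount);
                price += SurchargePerExcessComplexity * excess;

                // Long functions are harder to review than many short ones.
                if (avgFunctionLength > 30)
                    price *= 1.25m;

                return price;
            }
        }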

    Read the article

  • Does one's native spoken language affect quality of code?

    - by Xepoch
    There is a school of thought in linguistics that problem solving is very much tied to the syntax, semantics, grammar, and flexibility of one's native spoken language. Working with various international development teams, I can clearly see a mental culture (if you will) in the codebase. Programming language aside, the German code is quite different from that of my colleagues in India. Likewise, code from Middle America is distinctly different from code from Coastal America (actually, IBM noticed this years ago). Do you notice with your international colleagues (from ANY country) that coding style and problem solving are in line with native tongues?

    Read the article

  • Thoughts and comments on Search Neutrality?

    - by SprocketGizmo
    Following the cases brought forward by Foundem, Ciao!, and eJustice.fr what are your thoughts on Search Neutrality? Should search engines be regulated by the FCC or FTC similarly to the way the FCC is pushing to regulate Net Neutrality? Relevant Articles: Op-Ed to the New York Times from the founder of Foundem Excerpt from Book on Search/Net Neutrality Blog discussing preceding link. Site founded by Foundem to promote Search Neutrality awareness.

    Read the article

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or VS12 RC) on a MacBook Air? I have the current model with 4GB RAM, a 13" screen, and a 256GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done it and: Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMware? I feel that to maximize performance Boot Camp would be necessary to make the best use of the memory, but I am not sure if this is completely necessary. I'd prefer to use a VM, but wasn't sure if this was practical and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt it applied more directly to the programming community.

    Read the article

  • When using membership provider, do you use the user ID or the username?

    - by Chris
    I've come across this in a couple of different applications that I've worked on. They all used the ASP.NET Membership Provider for user accounts and for controlling access to certain areas, but when we've gotten down into the code I've noticed that in one we're passing around the string-based user name, like "Ralph Waters", while in another we're passing around the Guid-based user ID from the membership table. Now, both seem to work. You can write methods which get by username or get by user ID, but both have felt somewhat "funny". When you pass a string like "Ralph Waters" you're essentially passing two separate words that happen to make up a unique identifier. And with a Guid, you're passing around a string/number combination which can be cast and is guaranteed unique. So my question is this: when using the Membership Provider, which do you use to get back to the user, the username or the user ID? Thanks all!
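
    For reference, the standard Membership API exposes both identifiers from the same object, so one common sketch (the helper name here is made up) is to look the user up by the name people type and then pass the provider key around internally:

        using System;
        using System.Web.Security;   // ASP.NET Membership

        public static class UserLookup
        {
            // Hypothetical helper: resolve the typed-in user name to the membership table's key.
            public static Guid GetUserId(string userName)
            {
                MembershipUser user = Membership.GetUser(userName);
                if (user == null)
                    throw new InvalidOperationException("Unknown user: " + userName);

                // ProviderUserKey is typed as object; the SqlMembershipProvider stores a Guid there.
                return (Guid)user.ProviderUserKey;
            }
        }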

    Read the article

  • Is there any reason why a delete method/field/function refactoring doesn't exist?

    - by raisercostin
    An operation in an interface is obsolete, so I decided to delete it. It seems that there is no automatic support for such a "refactoring". To me it is a refactoring operation, since the behavior of the code will be preserved: nobody (tests, client APIs) will notice that the operation was removed. In Eclipse, on a method in an interface in Java code, I have the following options: rename, move, change method signature, inline, extract interface, extract superclass, use supertype where possible, pull up, push down, introduce parameter object, introduce indirection, generate declared type. Is there any reason why a delete method/field/function refactoring doesn't exist?

    Read the article

  • Is Perforce as good as merging as DVCSs?

    - by dukeofgaming
    I've heard that Perforce is very good at merging. I'm guessing this has to do with the fact that it tracks changes in the form of changelists, where you can group differences across several files in one go. I think this implies Perforce gathers more metadata and therefore has more information for smarter merging (at least smarter than Subversion, Perforce being centralized as well). Since this is similar to how Mercurial and Git handle changes (I know DVCSs track content rather than files), I was wondering if somebody knew what the subtle differences are that make Perforce better or worse at merging than a DVCS like Mercurial or Git.

    Read the article

  • Perl numerical sorting: how to ignore leading alpha character [migrated]

    - by Luke Sheppard
    I have a 1,660 row array like this: ... H00504 H00085 H00181 H00500 H00103 H00007 H00890 H08793 H94316 H00217 ... And the leading character never changes. It is always "H" then five digits. But when I do what I believe is a numerical sort in Perl, I'm getting strange results. Some segments are sorted in order, but then a different segment starts up. Here is a segment after sorting: ... H01578 H01579 H01580 H01581 H01582 H01583 H01584 H00536 H00537 H00538 H01585 H01586 H01587 H01588 H01589 H01590 ... What I'm trying is this: my @sorted_array = sort {$a <=> $b} @raw_array; But obviously it is not working. Anyone know why?

    Read the article

  • How to prepare for a programming competition? Graphs, Stacks, Trees, oh my! [closed]

    - by Simucal
    Last semester I attended ACM's (Association for Computing Machinery) bi-annual programming competition at a local university. My university sent 2 teams of 3 people and we competed amongst other schools in the mid-west. We got our butts kicked. You are given a packet with about 11 problems (1 problem per page) and you have 4 hours to solve as many as you can. They run the program you submit against a set of data and your output must match theirs exactly. In fact, the judging is automated for the most part. In any case, I went there fairly confident in my programming skills and I left feeling drained and weak. It was a terribly humbling experience. In 4 hours my team of 3 people completed only one of the problems. The top team completed 4 of them and took 1st place. The problems they asked were like no problems I have ever had to answer before. I later learned that in order to solve some of them effectively you have to use graphs/graph algorithms, trees, and stacks. Some of them were simply "greedy" algorithms. My question is: how can I better prepare for this semester's programming competition so I don't leave there feeling like a complete moron? What tips do you have for answering these problems that involve graphs, trees, and various "well known" algorithms? How can I easily identify the algorithm we should implement for a given problem? I have yet to take Algorithm Design in school, so I just feel a little out of my element. Here are some examples of the questions asked at the competitions: ACM Problem Sets Update: Just wanted to update this since the latest competition is over. My team placed 1st for our small region (about 6-7 universities with between 1-5 teams each) and ~15th for the midwest! So it is a marked improvement over last year's performance for sure. We also had no graduate students on our team, and after reviewing the rules we found out that many teams had several! So that would be a pretty big advantage in my opinion. Problems this semester ranged from about 1-2 "easy" problems (i.e. bit manipulation, string manipulation) to hard ones (graph problems involving fairly complex math and network flow). We were able to solve 4 problems in our 5 hours. Just wanted to thank everyone for the resources they provided here; we used them for our weekly team practices and they definitely helped! Some quick tips of mine that aren't suggested below: When you are seated at your computer before the competition starts, quickly type out various data structures that you might need and won't have access to in your language's libraries. I typed out a graph data structure complete with Floyd-Warshall and Dijkstra's algorithm before the competition began. We ended up using it in the 2nd problem we solved, and this is the main reason we solved that problem before anyone else in the midwest - we had it ready to go from the beginning. Similarly, type out the code to read input, since this will be required for every problem. Save this "template" someplace so you can quickly copy/paste it into your IDE at the beginning of each problem. There are no rules against programming anything before the competition starts, so get any boilerplate code out of the way. We found it useful to have one person on permanent whiteboard duty. This is usually the person who is best at math and at working out solutions, to get a head start on the problems you will tackle next. One person is on permanent programming duty: your fastest/most skilled "programmer" (most familiar with the language). This will also save debugging time. The last person switches between several roles: assessing the packet for the next "easiest" problem, helping the person at the whiteboard work out solutions, and helping the person programming work out bugs/issues. This person needs to be flexible and able to switch between roles easily.
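
    For illustration, here is the kind of pre-typed contest "template" the update describes - a generic sketch in C# (the asker's own version was a graph structure with Floyd-Warshall and Dijkstra, which is not reproduced here): console input parsing plus a breadth-first-search helper, ready to paste at the start of each problem.

        using System;
        using System.Collections.Generic;

        static class ContestTemplate
        {
            // Shortest path in an unweighted graph via BFS; returns -1 if the goal is unreachable.
            static int Bfs(List<int>[] adj, int start, int goal)
            {
                var dist = new int[adj.Length];
                for (int i = 0; i < dist.Length; i++) dist[i] = -1;
                var queue = new Queue<int>();
                dist[start] = 0;
                queue.Enqueue(start);
                while (queue.Count > 0)
                {
                    int u = queue.Dequeue();
                    if (u == goal) return dist[u];
                    foreach (int v in adj[u])
                    {
                        if (dist[v] == -1)
                        {
                            dist[v] = dist[u] + 1;
                            queue.Enqueue(v);
                        }
                    }
                }
                return -1;
            }

            static void Main()
            {
                // Typical contest input: n nodes and m edges, then m undirected edges, then one query pair.
                var first = Console.ReadLine().Split();
                int n = int.Parse(first[0]), m = int.Parse(first[1]);
                var adj = new List<int>[n];
                for (int i = 0; i < n; i++) adj[i] = new List<int>();
                for (int i = 0; i < m; i++)
                {
                    var e = Console.ReadLine().Split();
                    int a = int.Parse(e[0]), b = int.Parse(e[1]);
                    adj[a].Add(b);
                    adj[b].Add(a);
                }
                var query = Console.ReadLine().Split();
                Console.WriteLine(Bfs(adj, int.Parse(query[0]), int.Parse(query[1])));
            }
        }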

    Read the article
