Search Results

Search found 25198 results on 1008 pages for 'non programmers'.

Page 338/1008

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I was developing a CRM system and wanted to let the user add their own custom fields that are searchable, I could have tables that store the name of the new column, the data type, the value, etc. (which would be less efficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint use, whereby I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and caps how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority would be searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does this make it simpler because I don't have to worry about data types?
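
    For illustration, here is a minimal sketch of the kind of index I have in mind, assuming RavenDB's dynamic index fields (AbstractIndexCreationTask.CreateField) work the way I understand them; the document and index names are made up:

        using System.Collections.Generic;
        using System.Linq;
        using Raven.Client.Indexes;

        public class Customer
        {
            public string Id { get; set; }
            // One entry per user-defined custom field.
            public Dictionary<string, object> CustomFields { get; set; }
        }

        public class Customers_ByCustomFields : AbstractIndexCreationTask<Customer>
        {
            public Customers_ByCustomFields()
            {
                // Emit one dynamic index field per custom field, so each
                // user-defined field becomes individually searchable without
                // a fixed CustomData1..N schema.
                Map = customers => from c in customers
                                   select new
                                   {
                                       _ = c.CustomFields.Select(f => CreateField(f.Key, f.Value))
                                   };
            }
        }

    If dynamic fields do work this way, the data-type worry mostly falls away: each value is indexed as whatever type it carries rather than as one flat string property.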

    Read the article

  • Should you charge clients hours spent on the wrong track?

    - by Lea Verou
    I took up a small CSS challenge to solve for a client, and I'm going to be paid at an hourly rate. I eventually solved it. It took 4 hours, but I spent roughly 30% of the time on the wrong track, trying a CSS3 solution that only worked in recent browsers, and finally discovering that no fallback is possible via JS (as I originally thought there would be). Should I charge the client for that 30%? More details: I didn't provide an estimate; I liked the challenge per se, so I started working on it before giving an estimate (but I have worked with him before, so I know he's not one of those people with unrealistic expectations). At the very worst I will have spent 4 unpaid hours on an intriguing CSS challenge. And I will give the fairest possible estimate for both of us, since I will have already done the work. :)

    Read the article

  • How do you name/brand software?

    - by JasCav
    I am working on an application that is brand new. We have been using an internal name up to this point, but it's not very imaginative and is definitely not good from a branding standpoint. I have been tasked with coming up with a name for the software. I am really struggling with this, as every name I have thought of doesn't seem to do the software justice. I have tried acronyms, mythology names, scientific names, etc. I may be being a bit picky, but I feel like the name should just hit me when it's right. Here are my requirements: The name has to be easy to say (and hopefully catchy). The name has to relate back to what the software does. The name has to be brandable (for example, vivid imagery, tagline, etc.). So, although I can't give the specifics of the software, I was hoping the brains here could provide some insight as to how I should go about naming/branding a piece of software.

    Read the article

  • How do you choose to use a specific programming language?

    - by Jesús Bracamonte
    I was having a small talk with teammates about how you choose a programming language for a project, which led me to think that there are many criteria for choosing one at the beginning of a project, but no real standard. Do you choose a programming language for its syntax and semantics? Or do you choose one because it has the best support for doing certain things? Or because it has better libraries? Or do you choose it for the paradigm? What criteria do you use to choose a language when you are starting a project?

    Read the article

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I would like to look at the software application of that concept. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations using the shortcut vs. traditional CPU multiplication, and I'm curious whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept: by simulating Japanese multiplication, is a program actually capable of improving calculation speed, or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU's arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result in fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
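
    To make the idea concrete, here is a minimal sketch of what such a simulation could look like (my own illustration, assuming "Japanese multiplication" boils down to summing digit products along diagonals, i.e. long multiplication, with each digit product built from repeated addition):

        // Multiplies two numbers given as digit arrays (digits 0-9,
        // least-significant digit first), using only addition for the
        // digit products.
        static int[] JapaneseMultiply(int[] a, int[] b)
        {
            var result = new int[a.Length + b.Length];
            for (int i = 0; i < a.Length; i++)
                for (int j = 0; j < b.Length; j++)
                {
                    int product = 0;              // a[i] * b[j], via repeated addition
                    for (int k = 0; k < b[j]; k++)
                        product += a[i];
                    result[i + j] += product;     // accumulate on the diagonal
                }
            for (int i = 0; i < result.Length - 1; i++)
            {
                result[i + 1] += result[i] / 10;  // carry propagation
                result[i] %= 10;
            }
            return result;
        }

    Timing this against the hardware multiply on the same inputs is exactly the benchmark I have in mind; the inner repeated-addition loop makes the instruction count explicit.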

    Read the article

  • How to do dependency injection and conditional object creation based on type?

    - by Pradeep
    I have a service endpoint initialized using DI. It is of the following style, and this endpoint is used across the app:

        public class CustomerService : ICustomerService
        {
            private IValidationService ValidationService { get; set; }
            private ICustomerRepository Repository { get; set; }

            public CustomerService(IValidationService validationService, ICustomerRepository repository)
            {
                ValidationService = validationService;
                Repository = repository;
            }

            public void Save(CustomerDTO customer)
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    Now, with changing requirements, there are going to be different types of customers (legacy/regular). The requirement is that, based on the type of the customer, I have to validate and persist the customer in a different way (e.g. if it is a legacy customer, persist it to LegacyRepository). The wrong way to do this would be to break DI and do something like:

        public void Save(CustomerDTO customer)
        {
            if (customer.Type == CustomerTypes.Legacy)
            {
                if (LegacyValidationService.Valid(customer))
                    LegacyRepository.Save(customer);
            }
            else
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    My options seem to be: inject all possible IValidationService and ICustomerRepository implementations and switch based on type, which seems wrong; change the service signature to Save(IValidationService validation, ICustomerRepository repository, CustomerDTO customer), which is an invasive change; break DI; or use the Strategy pattern for each type and do something like:

        validation = CustomerValidationServiceFactory.GetStrategy(customer.Type);
        validation.Valid(customer);

    but now I have a static method which needs to know how to initialize different services. I am sure this is a very common problem. What is the right way to solve it without changing service signatures or breaking DI?
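
    For what it's worth, here is a sketch of the strategy-collection approach wired through the container instead of a static factory (my own illustration; the strategy interface and class names are made up, and it assumes the container can inject all registered implementations as an IEnumerable<T>):

        using System.Collections.Generic;
        using System.Linq;

        public interface ICustomerPersistenceStrategy
        {
            bool Handles(CustomerTypes type);
            void Save(CustomerDTO customer);
        }

        public class LegacyCustomerStrategy : ICustomerPersistenceStrategy
        {
            private readonly IValidationService _validation;
            private readonly ICustomerRepository _repository;

            // The container registers the legacy validator/repository
            // implementations against this strategy.
            public LegacyCustomerStrategy(IValidationService validation, ICustomerRepository repository)
            {
                _validation = validation;
                _repository = repository;
            }

            public bool Handles(CustomerTypes type) { return type == CustomerTypes.Legacy; }

            public void Save(CustomerDTO customer)
            {
                if (_validation.Valid(customer))
                    _repository.Save(customer);
            }
        }

        public class CustomerService : ICustomerService
        {
            private readonly IEnumerable<ICustomerPersistenceStrategy> _strategies;

            public CustomerService(IEnumerable<ICustomerPersistenceStrategy> strategies)
            {
                _strategies = strategies;
            }

            public void Save(CustomerDTO customer)
            {
                // Dispatch on customer type; no type switch inside the service,
                // and the public Save signature is unchanged.
                _strategies.First(s => s.Handles(customer.Type)).Save(customer);
            }
        }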

    Read the article

  • Why is C++ often the first language taught in college?

    - by Casey Patton
    My school starts the computer science curriculum with C++ programming courses, meaning this is the first language that many of the students learn. I've seen that many people dislike C++, and I've read a variety of reasons why. It almost seems to be popular opinion that C++ isn't a very good language. I get the impression it's not well liked, based on some questions on StackExchange as well as posts such as: http://damienkatz.net/2004/08/why-c-sucks.html http://blogs.kde.org/node/2298 http://blogs.cio.com/esther_schindler/linus_torvalds_why_c_sucks http://www.dacris.com/blog/2010/02/16/why-c-sucks-part-2/ etc. (Note: it is not my opinion that C++ is a bad language. In fact, it's the main language I use. However, the internet as well as some professors have given me the impression that it's not a very widely liked language. In fact, one of my professors constantly rags on C++, yet it's still the starting language at my college!) With that in mind, why is this the first language taught at many schools? What are the reasons for starting a programming curriculum with C++? Note: this question is similar to "Is C++ suitable as a first language?", but is a little different, since I'm not interested in whether it's suitable, but in why it's been chosen.

    Read the article

  • Relative encapsulation design

    - by taher1992
    Let's say I am doing a 2D application with the following design: there is a Level object that manages the world, and there are world objects which are entities inside the Level. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes property getters; the setters are private (or protected) and are only available to inherited classes. But of course, Level is responsible for these world objects and must somehow be able to manipulate at least some of their private setters. As of now, Level has no access, meaning the world objects would have to make their private setters public (violating encapsulation). How do I tackle this problem? Should I just make everything public? Currently what I'm doing is having an inner class inside the game object that does the set work, so when Level needs to update an object's location it goes something like this:

        void ChangeObject(GameObject targetObject, int newX, int newY)
        {
            // targetObject.SetX and targetObject.SetY cannot be called directly
            var setter = new GameObject.Setter(targetObject);
            setter.SetX(newX);
            setter.SetY(newY);
        }

    This code feels like overkill, but it doesn't feel right to have everything public so that anything can change an object's location, for example.
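
    An alternative that avoids the Setter indirection (a sketch, assuming this is C# and that Level ships in the same assembly as the world objects; the names are mine) is to split accessibility: public getters, internal setters:

        public class GameObject
        {
            public int X { get; internal set; }  // readable everywhere,
            public int Y { get; internal set; }  // writable only inside the engine assembly
        }

        public class Level
        {
            public void ChangeObject(GameObject target, int newX, int newY)
            {
                // Legal here because Level lives in the same assembly;
                // code outside the assembly sees X and Y as read-only.
                target.X = newX;
                target.Y = newY;
            }
        }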

    Read the article

  • Software developer needs Validation for VA Chap 31 to purchase Macbook Pro vs. PC [closed]

    - by David
    I am currently attending college on a software development path, working towards my BS thanks to VA Chapter 31. My old original MacBook Pro is near dead and no longer upgradable on the software or hardware side. The VA has offered to purchase a PC laptop for me (because my syllabi say a computer is required), but I do not want to go backwards. I have a lot invested in OS X software and Mac peripherals, not to mention I prefer to program in an Apple environment. PC vs. Mac costs are so drastically different that I must justify my request for a new MacBook Pro. In my request to the VA, I stated the above and some other points, but they requested more validation. Can anyone recommend issues, reasons, etc. to help me validate this purchase by the VA for school? Thanks in advance for your help, David

    Read the article

  • A programming language that does not allow IO. Haskell is not a pure language

    - by TheIronKnuckle
    (I asked this on Stack Overflow and it got closed as off-topic. I was a bit confused until I read the FAQ, which discourages subjective, theoretical, debate-style questions. The FAQ here doesn't seem to have a problem with it, and it sounds like this is a more appropriate place to post. If this gets closed again, forgive me; I'm not trying to troll.) Are there any 100% pure languages (as I describe in the Stack Overflow post) out there already, and if so, could they feasibly be used to actually do stuff? I.e., do they have an implementation? I'm not looking for raw maths on paper / pure lambda calculus. However, pure lambda calculus with a compiler or a runtime system attached is something I'd be interested in hearing about.

    Read the article

  • Cheerp -- C++ for web: advance or regression?

    - by Henrique Barcelos
    Recently I've run into Cheerp, a C++ to Javascript compiler, which uses a modified version of clang to generate Javascript code from C++ sources. That makes me wonder: why in the seven kingdoms would someone in their right mind do this? I mean: why would you take a language that is not designed for the web at all, that is far more convoluted and bureaucratic, write your code in it, and then compile it into Javascript? Can anybody see any advantages in doing so? We can surely discard performance as a reason, because in the end it generates pure Javascript code. Is there anyone here who has real experience with this? P.S.: I'm not sure if this is an on-topic question, but this is the most general forum about programming that I could find in the StackExchange network. Edit: Although this seems like a subjective question, it is not. I am asking for reasons this tool could be useful. I got interested at first, but started wondering why someone would use it.

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's algorithm manual), i.e. the structure where, each time we run out of space, we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, noting that half the elements move once, a quarter of the elements twice, and so on, the total number of movements M is given by:

        M = \sum_{i=1}^{\lg n} i \cdot \frac{n}{2^i} \le n \sum_{i=1}^{\infty} \frac{i}{2^i} = 2n

    This seems to me to add more copies than actually happen. E.g. if we have the following:

        array of 1 element
        +--+
        |a |
        +--+

        double the array (2 elements)
        +--++--+
        |a ||b |
        +--++--+

        double the array (4 elements)
        +--++--++--++--+
        |a ||b ||c ||c |
        +--++--++--++--+

        double the array (8 elements)
        +--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x |
        +--++--++--++--++--++--++--++--+

        double the array (16 elements)
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x elements copied 4 times, the c elements copied 4 times, the b element copied 4 times and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2) + 2*(16/4) + 3*(16/8) + 4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, is the aim of the formula to provide a rough upper bound, or am I misunderstanding something here?
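
    To check the counting, here is a small simulation (my own sketch) that performs the doublings and tallies every element move:

        // Grow a dynamic array to n = 16 by doubling, counting element copies.
        // Each doubling copies all elements present at that moment.
        int capacity = 1, size = 0, copies = 0;
        for (int inserted = 0; inserted < 16; inserted++)
        {
            if (size == capacity)
            {
                copies += size;     // every existing element moves exactly once
                capacity *= 2;
            }
            size++;
        }
        Console.WriteLine(copies);  // prints 15 = 1 + 2 + 4 + 8

    The direct tally is n - 1 = 15 moves for n = 16, below both the hand count of 17 and the formula's 26: the summation charges each element its worst case ("moved at most once", and so on), so it is an upper bound rather than an exact count.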

    Read the article

  • Is it common to lie in job ads regarding the technologies in use?

    - by Desolate Planet
    Wanted: Experienced Delphi programmer to maintain ginormous legacy application and assist in migration to C#. Later on, as the new hire settles into his role... "Oh, that C# migration? Yeah, we'd love to do that. But management is dead-set against it. Good thing you love Pascal, eh?" I've noticed quite a lot of this where I live (Scotland) and I'm not sure how common it is across IT: a company is using a legacy technology, and they know that most developers will avoid them in order to keep mainstream technology on their resumes. So they will put out an advertisement saying they are looking to move their product to some hip new tech (C#, Ruby, FORTRAN 99) and require someone who has exposure to both - but the migration is just a carrot on a stick, perpetually hung in front of the hungry developer as he spends each day maintaining the legacy app. I've experienced this myself, and heard far too many similar stories, to the point where it seems like common practice. I've learned over time that every company has legacy problems of some sort, but I fail to see why they can't be honest about it. It should be common sense to any developer that the technology in place is there to support the business and not the other way round. Unless the technology is hurting the business in some way, I hardly see any just cause for reworking the software stack to chase whatever is currently in vogue in the industry. Would you say that this is commonplace? If so, how can I detect these kinds of misleading advertisements beforehand?

    Read the article

  • How do I make the jump from developing for Android to Windows Phone 7?

    - by Rob S.
    I'm planning on making the jump over from developing apps for Android to developing apps for Windows Phone 7 as well. For starters, I figured I would port over my simplest app. The code itself isn't much of a problem, as the transition from Java to C# isn't that bad. If anything, this transition is actually easier than I expected. What is troublesome is switching SDKs. I've already compiled some basic Windows Phone 7 apps and run through some tutorials, but I'm still feeling a bit lost. For example, I'm not sure what the equivalent of a ScrollView on Android would be on Windows Phone 7. So does anyone have any advice or any resources they can offer me to help me make this transition? Additionally, any comments on the Windows Phone 7 app market (especially in comparison to the Android market) would also be greatly appreciated. Thank you very much in advance for your time.
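
    To make the ScrollView example concrete, my working assumption is that Silverlight's ScrollViewer is the closest Windows Phone 7 analogue (a guess from browsing the control set, not an official mapping):

        // A scrollable column of content, sketched in code rather than XAML.
        // These types come from System.Windows.Controls in the WP7 SDK.
        var panel = new StackPanel();
        panel.Children.Add(new TextBlock { Text = "Lots of content..." });

        var scroller = new ScrollViewer { Content = panel };
        // e.g. set as a page's content so the panel can scroll:
        // this.Content = scroller;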

    Read the article

  • Converting dates from Visual Basic to C#

    - by sinrtb
    So, as an exercise in utility, I've taken it upon myself to convert one of our poor old VB .NET 1.1 apps to C# .NET 4.0. I used the Telerik code converter for a starting point and ended up with ~150 errors (not too bad considering it's over 20k of code, and rarely can I get it to run without an error using the production source), many of which deal with time/date in VB versus C#. My question is this: how would you represent the following VB statement in C#?

        If oStruct.AH_DATE <> #1/1/1900# Then

    The converter gave me:

        if (oStruct.AH_DATE != 1/1/1900 12:00:00 AM) {

    which is of course not correct, but I cannot seem to work out how to make it correct.
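
    For what it's worth, the literal C# equivalent appears to be a constructed DateTime, since C# has no date literals (oStruct is the type from the original code; everything else is standard System.DateTime):

        // VB:  If oStruct.AH_DATE <> #1/1/1900# Then ...
        if (oStruct.AH_DATE != new DateTime(1900, 1, 1))
        {
            // ...
        }

    The VB literal #1/1/1900# is month/day/year at midnight, which is exactly what new DateTime(1900, 1, 1) constructs.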

    Read the article

  • What do you do about content when someone asks you to build a website?

    - by Jon
    I am an experienced ASP.NET and ASP.NET MVC developer, and I have my own CMS that I have written, but I'm starting to think there should be another approach. When someone asks you to develop them a website, how do you develop it so that they can add pictures, slideshows, content, news items and diary events? On a side note, do you give them a design for the home page and an inner page, and that's it? I'm just thinking: if they turn around six months down the line and say they want a jQuery slideshow on the right-hand side of some page, how do you or your CMS handle it?

    Read the article

  • Is there a Design Pattern for preventing dangling references?

    - by iFreilicht
    I was thinking about a design for custom handles. The idea is to prevent clients from copying around large objects. Now, a regular handle class would probably suffice for that, but it doesn't solve the "dangling reference problem": if a client has multiple handles to the same object and deletes the object via one of them, all the others would be invalid but not know it, so the client could read or write parts of memory he shouldn't have access to. Is there a design pattern to prevent this from happening? Two ideas: An observer-like pattern, where the destructor of an object would notify all handles. "Handle handles" (does such a thing even exist?): all the handles don't really point to the object, but to another handle; when the object gets destroyed, this "master handle" invalidates itself and therefore all that point to it.
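
    The second idea can be sketched in a few lines (illustrative C#, though the problem statement is language-agnostic; all names here are mine):

        using System;

        // Every client handle points at a shared "master" cell rather than
        // at the object itself, so one destruction is visible to all of them.
        public class Master<T> where T : class
        {
            internal T Target;                         // null once the object is destroyed

            public Master(T target) { Target = target; }

            public void Destroy() { Target = null; }   // a C++ version would also free the object
        }

        public class Handle<T> where T : class
        {
            private readonly Master<T> _master;

            public Handle(Master<T> master) { _master = master; }

            public bool IsValid { get { return _master.Target != null; } }

            public void Destroy() { _master.Destroy(); }

            public T Get()
            {
                if (!IsValid)
                    throw new InvalidOperationException("stale handle");
                return _master.Target;
            }
        }

    Destroying through any one handle invalidates every handle built on the same master. In C++, the standard-library relatives of this shape are std::shared_ptr for ownership with std::weak_ptr as the observing, self-invalidating handle.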

    Read the article

  • Searching for a network device on the LAN from my C++ application and changing its IP address

    - by Arun Kumar K S
    I am developing an application in C++ to communicate with my network device. I used UDP classes to search for the device on the network. I wrote the code in such a way that my application sends a broadcast message to the local network; the device responds to the broadcast message, and the application gets the IP address from that response. After establishing communication, the application sends a message telling the device to change its IP address. That works fine if my device's IP address is correct. But when I set a wrong IP address and subnet on the device, my application never gets any messages from the device, so I am unable to communicate with it, unable to find it, and unable to change its IP address. Say, for example, the IP address of the device is 20.1.1.1 with subnet mask 255.0.0.0, and the system that runs the application has IP address 192.168.1.23 with subnet mask 255.255.255.0. I tried the Lantronix device-installation software with a Lantronix device on the network: it listed the device, and I was able to change the IP address from their software. Does anyone know how this is done in this type of software? How can I search the network to find the device and change its IP address when its IP address is not in range? Which protocol do they use to find the device?
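
    For reference, the broadcast-probe part of what I'm doing looks roughly like this (sketched in C# for brevity, since my real code is C++; the payload and port are placeholders, not a real protocol):

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class DiscoveryProbe
        {
            static void Main()
            {
                var udp = new UdpClient();
                udp.EnableBroadcast = true;

                // Placeholder payload and port -- the real discovery protocol
                // (e.g. whatever Lantronix uses) is device-specific.
                byte[] probe = Encoding.ASCII.GetBytes("DISCOVER");
                udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, 9999));

                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] reply = udp.Receive(ref remote);   // blocks until a device answers
                Console.WriteLine("Device answered from " + remote.Address);
            }
        }

    The reason this style of discovery can still find a misconfigured device is that the probe is a link-layer broadcast: it reaches every host on the segment regardless of their configured IP, and discovery tools typically have the device reply by broadcast too, since a unicast reply from a foreign subnet may never arrive.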

    Read the article

  • ASP.NET Web Forms is bad, or what am I missing?

    - by iveqy
    Being a PHP guy myself, I recently had to write a spider for an asp.net site. I was really surprised by the different approach to ajax and form handling. For example, in the PHP sites I've worked with, a deletion of a database entry would be something like: GET delete.php?id=&confirm=yes and get a "success" back in some form (in the ajax case, probably a json reply). In this asp.net application you would instead post a form, including all inputs on the page, with a huge __VIEWSTATE and __EVENTVALIDATION. This would be more than 10 times as big as the above. The reply would be the complete page again, with a footer containing some structured data for javascript to parse and display the result. Again, the whole page is sent, and then thrown away(?), since it's already displayed. Why not just send the footer with the data to parse (it's not json nor xml but a |-separated list)? I really can't see why you would design a system that way. Usually you have a fast client and a somewhat fast server, but a really slow connection. Why not keep the data transfer to a minimum? Why those huge __VIEWSTATE and __EVENTVALIDATION fields? It seems that everything is way too chatty and way too complicated. I really can't see the point, and that usually means that I'm missing something. So please tell me, what are the reasons for this design and what benefits (and weaknesses) does it have? (Yes, I know that __VIEWSTATE is used to tell what kind of form configuration should be sent back to the server. But WHY is this needed?) Please keep this discussion strictly technical and avoid flamewars. Update: Please excuse the somewhat rantish question. I tried to explain my view to be able to get a better answer. I am not saying that asp.net is bad; I am saying that I don't understand the meaning of those concepts. Usually that means that I have things to learn instead of the concepts being wrong. I appreciate the explanations that "you don't have to do it this way in asp.net"; I'll read up on MVC and other .NET technologies. However, there must be a reason for this site (the one I referred to) to be written the way it is. It's written by professionals for a big organisation with far more experience than I have. Any explanation of their (possible) design choices would be welcome.

    Read the article

  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. Here is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team, of course, cannot sit idle for 5 days, and they start creating 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised, and some of them have to be fixed before the next iteration. The catch is that the testers will not accept a new release containing any new features or changes to existing features until all those bugs are fixed. The test team asks how they can guarantee a stable release for testing if we also introduce new features along with the bug fixes. They also cannot run regression tests over all their test cases each iteration. Apparently this is proper testing process according to ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development, and there is more merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?

    Read the article

  • Can anyone help solve this complex algorithmic problem?

    - by Locaaaaa
    I got this question in an interview and I was not able to solve it. You have a circular road with N gas stations. You know the amount of gas that each station has. You know the amount of gas you need to go from one station to the next one. Your car starts with an empty tank. The question: create an algorithm to determine from which gas station you must start driving. As an exercise, I would then translate the algorithm to C#.
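
    For anyone who wants to try it, here is a sketch of the classic one-pass greedy solution (my own illustration, not from the interview): if the total gas is at least the total cost, a valid start exists, and it can be found by restarting whenever the running tank goes negative.

        // gas[i]  = fuel available at station i
        // cost[i] = fuel needed to drive from station i to station i + 1 (circularly)
        static int FindStart(int[] gas, int[] cost)
        {
            int total = 0, tank = 0, start = 0;
            for (int i = 0; i < gas.Length; i++)
            {
                int gain = gas[i] - cost[i];
                total += gain;
                tank += gain;
                if (tank < 0)          // station i + 1 is unreachable from the current start,
                {                      // and no station in [start, i] can do better
                    start = i + 1;
                    tank = 0;
                }
            }
            return total >= 0 ? start : -1;   // -1 means no valid starting station exists
        }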

    Read the article

  • Examples of data-centric LOB applications in Silverlight with creative design?

    - by Alexander Galkin
    I am a database developer and I have rather limited experience with interface design. I am currently working on a free-time project which is mostly data-centric, and I would like to develop an eye-catching interface for it using Silverlight. What I am looking for are examples of nice and interesting LOB applications in Silverlight that don't rely on paid frameworks. So far, I could only find things like the Telerik sample application, but it uses a lot of Telerik controls I can't afford to buy.

    Read the article

  • Help needed on a UI/Developer Interview

    - by AJ Seth
    I have a phone interview with a major Internet company for a mostly front-end developer position. If anyone has experience with UI/developer interviews and can share some advice, typical questions asked, etc., that would be great. Additionally, what resources can be read and reviewed for the following: designing for performance, scalability and availability; Internet and OS security fundamentals. EDIT: Now I am told that the interview will be mostly about coding, data structures, design questions, etc. Anyone?

    Read the article

  • What does the English word "for" exactly mean in "for" loops?

    - by kol
    English is not my first language, but since the keywords in programming languages are English words, I usually find it easy to read source code as English sentences: if (x > 10) f(); = "If variable x is greater than 10, then call function f." while (i < 10) ++i; = "While variable i is less than 10, increase i by 1." But how is a for loop supposed to be read? for (i = 0; i < 10; ++i) f(i); = ??? I mean, I know what a for loop is and how it works. My problem is only that I don't know what the English word "for" exactly means in for loops.
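
    For what it's worth, the nearest I can get is reading it through the equivalent while form (a sketch; f stands for any function being called):

        // for (i = 0; i < 10; ++i) f(i);   behaves the same as:
        int i = 0;
        while (i < 10)
        {
            f(i);
            ++i;
        }

    which would suggest a reading like "for each value of i from 0 up to, but not including, 10, call f(i)".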

    Read the article

  • Should a poll framework be closed source?

    - by samquo
    I was having a chat with a coworker who is working on a polling app and framework. He was asking technical questions, and I suggested he open source the application to get more quality opinions from developers who are interested in this problem and willing to give it heavy thought. He has a different point of view, which I think is still valid, so I want to open this question for discussion here. He says he believes something like a polling framework should not be open sourced, because doing so would reduce its security and validity as people reveal loopholes through which they can cheat. I can't say I completely disagree; I see a somewhat valid point there. But I have always believed that solutions from a group of people are almost always better than a solution thought up by a single person asking a small number of coworkers, no matter how smart that person is. Then again, I'm willing to accept that maybe some types of applications are different. Does anyone have an argument in his favor? I'd really like to present your responses to him.

    Read the article
