Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 148/563 | < Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >

  • The worst anti-patterns you have come across

    - by ?????????
    What are the worst anti-patterns you have come across in your career as a programmer? I'm mostly involved in Java, although it is probably language-independent. I think the worst of them is what I call the main anti-pattern: a program consisting of a single, extremely big class (sometimes accompanied by a pair of little classes) which contains all the logic, typically with one big loop holding all the business logic, sometimes tens of thousands of lines long.
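
    To make the shape of that "main anti-pattern" concrete, here is a deliberately tiny caricature. The asker's context is Java, but the pattern is language-independent; this sketch happens to be C#, and every name in it is invented:

        using System;

        // The entire application: one class, one enormous loop, all business
        // logic inline. Real instances run to tens of thousands of lines.
        public static class Program
        {
            public static void Main()
            {
                while (true)
                {
                    string command = Console.ReadLine();
                    if (command == null) break;

                    // Parsing, validation, persistence, billing, reporting...
                    // all handled as ever-growing branches of one loop.
                    if (command.StartsWith("order"))
                    {
                        // hundreds of lines of order handling
                    }
                    else if (command.StartsWith("invoice"))
                    {
                        // hundreds of lines of invoicing
                    }
                    // ...and so on, for every feature the program has.
                }
            }
        }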

    Read the article

  • What is the best policy for allowing clients to change email?

    - by Steve Konves
    We are developing a web application with a fairly standard registration process which requires a client/user to verify their email address before they are allowed to use the site. The site also allows users to change their email address after verification (with a re-type email field as well). What are the pros and cons of having the user re-verify their email? Is this even needed? EDIT: Summary of answers and comments below:
    - Over-verification annoys people, so don't use it unless critical
    - Use a "re-type email" field to prevent typos
    - Beware of overwriting known good data with potentially good data
    - Send email to the old address for notification; to the new address for verification
    - Don't assume that the user still has access to the old email
    - Identify the impact of an incorrect email if the account is compromised
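
    A minimal, self-contained C# sketch of the flow those bullet points suggest (notify the old address, verify the new one, and only overwrite the known-good value once it is confirmed). All names here are hypothetical:

        using System;
        using System.Collections.Generic;

        public class EmailChangeSketch
        {
            private readonly Dictionary<int, string> verifiedEmails = new Dictionary<int, string>();
            private readonly Dictionary<string, (int UserId, string NewEmail)> pending =
                new Dictionary<string, (int UserId, string NewEmail)>();

            public void Register(int userId, string verifiedEmail) => verifiedEmails[userId] = verifiedEmail;

            public void RequestChange(int userId, string newEmail)
            {
                string token = Guid.NewGuid().ToString("N");
                pending[token] = (userId, newEmail);

                // Notification goes to the old, known-good address; verification to the new one.
                Console.WriteLine($"notify {verifiedEmails[userId]}: your email address is being changed");
                Console.WriteLine($"verify {newEmail}: https://example.invalid/confirm/{token}");
            }

            public void Confirm(string token)
            {
                if (pending.TryGetValue(token, out var change))
                {
                    // The verified address is overwritten only after the new one is confirmed.
                    verifiedEmails[change.UserId] = change.NewEmail;
                    pending.Remove(token);
                }
            }
        }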

    Read the article

  • Eclipse: always keep files updated

    - by AK01
    I keep lots of files/editors open in Eclipse. I also love using git stash and other git commands that essentially change the contents of my open files. Is there an Eclipse feature or plugin that will always keep the contents of my open files up to date and live? Currently, if I put focus in an out-of-sync editor, I get an awkwardly worded dialog that I have to parse carefully every time. I wish it would just keep me synced, like TextMate does.

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the Data layer for a website at work, and I am very interested in the architecture of the code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. In this case it's particularly useful to do this, as the Models may be built from the Database or from a SOAP web service. So it seems to make sense to have Factories in my data access layer which create Model objects. Here's what I have so far (in my made-up pseudocode):

        class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
        class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

        var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
        var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider);
        // Returns an array of Product model objects
        var XmlProducts = xmlProductCreator.Products();
        // Returns an array of Product model objects
        var DbProducts = databaseProductCreator.Products();

    So my question is: is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?

    Read the article

  • Should we write a detailed architecture design or just an outline when designing a program?

    - by EpsilonVector
    When I'm doing design for a task, I keep fighting the nagging feeling that, aside from being a general outline, it's going to be more or less ignored in the end. I'll give you an example: I was writing a frontend for a device that has read/write operations. It made perfect sense in the class diagram to give it a read and a write function. Yet when it came down to actually writing them, I realized they were literally the same function with just one line of code changed (a read vs. a write call), so to avoid code duplication I ended up implementing a do_io function with a parameter that distinguishes between the operations. Goodbye, original design. This is not a terribly disruptive change, but it happens often and can happen in more critical parts of the program as well, so I can't help wondering whether there's a point in designing in more detail than a general outline, at least when it comes to the program's architecture (obviously when you are specifying an API you have to spell everything out). This might just be the result of my inexperience in doing design, but on the other hand we have agile methodologies which sort of say "we give up on planning far ahead, everything is going to change in a few days anyway", which is often how I feel. So, how exactly should I "use" design?
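
    For illustration, the read/write collapse described above looks roughly like this (a C# sketch; the device calls and names are hypothetical stand-ins for whatever the real frontend used):

        public class DeviceFrontend
        {
            private enum IoKind { Read, Write }

            // What the class diagram asked for: two public operations...
            public byte[] Read(int address, int length) => DoIo(IoKind.Read, address, new byte[length]);
            public byte[] Write(int address, byte[] data) => DoIo(IoKind.Write, address, data);

            // ...and what the implementation converged on: one function in which the
            // only difference between the two operations is a single call.
            private byte[] DoIo(IoKind kind, int address, byte[] buffer)
            {
                // open device, validate address, set up retries, etc. (identical for both)
                if (kind == IoKind.Read)
                    DeviceReadCall(address, buffer);   // hypothetical low-level read
                else
                    DeviceWriteCall(address, buffer);  // hypothetical low-level write
                // close device, log, handle errors, etc. (identical for both)
                return buffer;
            }

            private static void DeviceReadCall(int address, byte[] buffer) { /* stub */ }
            private static void DeviceWriteCall(int address, byte[] buffer) { /* stub */ }
        }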

    Read the article

  • Qt question: hard-coded shortcuts

    - by miLo
    I have asked this question in several places but I still can't figure it out. What I am trying to do is to have a QKeySequence(Qt::CTRL + Qt::Key_X, Qt::CTRL + Qt::Key_C) in a MainWindow with a QTextEdit as the central widget. The problem is that I have a shortcut for Cut (Ctrl+X), and when I press Ctrl+X, Ctrl+C the sequence doesn't work. When the focus is on a different widget the shortcut works perfectly. I tried overriding QWidget::keyPressEvent and QWidget::event, but it is the same. I have one more question: if I have these two shortcuts, Ctrl+X and Ctrl+X, Ctrl+C, why don't I receive the activatedAmbiguously() signal when I press Ctrl+X? According to the Qt documentation, when a key sequence is being typed at the keyboard, it is said to be ambiguous as long as it matches the start of more than one shortcut.

    Read the article

  • javac -cp : cannot find symbol problem [migrated]

    - by LivingThing
    I have 3 classes: CustomerAddress, Customer and CustomerMain. Customer has an import statement: import org.abc.customers.CustomerAddress; while CustomerMain has the import statements: import org.abc.customers.CustomerAddress; and import org.abc.customers.Customer; The package declaration for all of these classes is package org.abc.customer. Now, this program works fine in Eclipse, but when I try to compile and run it from the command prompt it will not compile. javac CustomerAddress.java compiles fine; then, since Customer depends on CustomerAddress, I run javac -cp . Customer.java, but the compiler complains: error: cannot find symbol CustomerAddress. Thanks

    Read the article

  • embedding LEFT OUTER JOIN within INNER JOIN

    - by user3424954
    I am having some problems with one of the questions answered in the book "SQL for Mere Mortals". Here is the problem statement. Here is the database structure. Here is the answer which I am unable to comprehend. Here is an answer which looks perfect to me. Now, the problem I have with the first answer is this: we first use a LEFT OUTER JOIN between Recipe_Classes and Recipes, so it selects all Recipe_Classes rows but only matching Recipes, which is perfectly fine, as the question demands. Let's call this result set R. In the next step, when we use an INNER JOIN to bring in Recipe_Ingredients, it should filter out the rows of R whose RecipeID doesn't match a RecipeID in Recipe_Ingredients, and hence filter out the related recipe class and recipe description as well (since it filters out the entire row of R). So this contradicts the problem, which demands that all RecipeID and RecipeDescription values from the Recipe_Classes table be displayed in this very step. How can it be correct? Or am I missing some concept?

    Read the article

  • Developing an internet-enabled application as a Kiosk on Windows 7

    - by maple_shaft
    I am finalizing development of a desktop Java application that communicates with an outside web server, and now I need to start seriously considering deployment. This application will run on a large touchscreen all-in-one workstation running Windows 7. It will be located in a public area and thus must be LOCKED DOWN, Hannibal Lecter style. Early in the project nobody really concerned themselves with this fact, just assuming that we could buy some magical software for Windows 7 that would automatically take care of all this; however, I am finding now that this looks to be a LOT more complicated than my manager ever thought. I need to:
    - Lock down the standard hot-keys (ALT+TAB, ALT+CTRL+DEL, etc...)
    - Prevent the user from opening ANY programs other than the kiosk application and its spawned executables
    - Prevent the user from closing the application
    - Start the kiosk application on startup (this can be done without kiosk software)
    - Auto-login to Windows on reboot (Windows Updates, power failure, bratty kid pressing the power button, etc...)
    - Provide an administrator passcode escape sequence for routine maintenance by desktop support professionals
    To my dismay, I am having a really hard time finding software that covers the whole package, and I am finding numerous swaths of competing information on the best way to do this. I am not necessarily looking for free or open source software and am willing to pay for software that can help me achieve this. Have any of you ever written kiosk software before, and if so, what approaches have you taken?

    Read the article

  • Making a C# w/ WPF multiple frame text / pseudo-calendar GUI application [closed]

    - by Gregor Samsa
    I am editing a recently asked question and making it specific, taking the advice of some people here. I would like to write a program of the following simple form: The user can produce X number of resizable frames (analogous to HTML frames). Each frame serves as a simple text editor which you can type into, and the whole configuration, including resized windows and text, can be saved. The user should be able to alternately "freeze" and present the information, and "unfreeze" and edit the frames. I want to use C# with WPF, in Microsoft's Visual C#. I do not yet know this language. I am sure I can pick up the syntax, but I would like to ask for some general advice on how to structure such a program. I have never made a GUI program, let alone one that interfaces with a notepad or some basic text editor. Can someone either direct me to a good resource that will teach me how to do the above, or outline the basic ingredients that such a program will require, keeping in mind that though I know some C and Python, I have no experience with GUIs or advanced programming generally? In particular, I don't know how to incorporate the "text editor" aspect of the program, or the resizable frames. I would greatly appreciate any help.
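
    As a rough starting point for the basic ingredients (resizable panes, a text box per pane, and a freeze/unfreeze toggle), here is a minimal code-only WPF sketch; persistence, the calendar aspect, and proper XAML are deliberately left out, and every name is made up:

        using System;
        using System.Windows;
        using System.Windows.Controls;

        public class FramesWindow : Window
        {
            [STAThread]
            public static void Main() => new Application().Run(new FramesWindow(3));

            private readonly Grid grid = new Grid();

            public FramesWindow(int frameCount)
            {
                Title = "Frame sketch";
                Content = grid;

                for (int i = 0; i < frameCount; i++)
                {
                    grid.ColumnDefinitions.Add(new ColumnDefinition()); // star-sized, so columns share space

                    var editor = new TextBox
                    {
                        AcceptsReturn = true,
                        TextWrapping = TextWrapping.Wrap,
                        Margin = new Thickness(4)
                    };
                    Grid.SetColumn(editor, i);
                    grid.Children.Add(editor);

                    if (i < frameCount - 1)
                    {
                        // A GridSplitter gives the HTML-frame-like resizing between columns.
                        var splitter = new GridSplitter
                        {
                            Width = 5,
                            HorizontalAlignment = HorizontalAlignment.Right,
                            VerticalAlignment = VerticalAlignment.Stretch
                        };
                        Grid.SetColumn(splitter, i);
                        grid.Children.Add(splitter);
                    }
                }
            }

            // "Freeze" simply makes every editor read-only; "unfreeze" reverses it.
            public void SetFrozen(bool frozen)
            {
                foreach (UIElement child in grid.Children)
                    if (child is TextBox box)
                        box.IsReadOnly = frozen;
            }
        }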

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with the series of variants that fits that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections; but within that period, which section is the inconsistent one varies. For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for to see what other solutions might be out there, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
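
    The project described above is in Python, so purely to make the proposed lookup concrete (file-number range, then per-section variant, then cell mapping), here is a language-neutral sketch written in C#; the field names, offsets and exact ranges are invented for illustration:

        using System;
        using System.Collections.Generic;

        public static class VariantMapperSketch
        {
            // For one section, a variant maps each field name to the (row, column) offset
            // of its cell relative to the section's anchor cell.
            private static readonly Dictionary<string, Dictionary<string, (int Row, int Col)>> SectionFourVariants =
                new Dictionary<string, Dictionary<string, (int Row, int Col)>>
                {
                    ["normal"] = new Dictionary<string, (int Row, int Col)>
                    {
                        ["Voltage"] = (0, 1), ["Current"] = (1, 1), ["Result"] = (2, 1)
                    },
                    ["shifted one row down"] = new Dictionary<string, (int Row, int Col)>
                    {
                        ["Voltage"] = (1, 1), ["Current"] = (2, 1), ["Result"] = (3, 1)
                    },
                };

            // File-number ranges mapped to the variant that section 4 uses in that range.
            private static readonly List<(int First, int Last, string Variant)> SectionFourByFile =
                new List<(int, int, string)>
                {
                    (0, 6999, "normal"),
                    (7000, 7635, "shifted one row down"),
                    (7636, 8000, "normal"),
                };

            // The "proposed function": given a file number, return the cell mapping
            // that applies to section 4 of that file.
            public static Dictionary<string, (int Row, int Col)> SectionFourLayout(int fileNumber)
            {
                foreach (var (first, last, variant) in SectionFourByFile)
                    if (fileNumber >= first && fileNumber <= last)
                        return SectionFourVariants[variant];
                throw new ArgumentOutOfRangeException(nameof(fileNumber));
            }
        }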

    Read the article

  • Should I pass instance fields as parameters to the instance methods that use them?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, since the fields are available to me, or I can add them to the parameter list, thus making my method more "generic" and not reliant on the class. On the other hand, that makes the parameter list longer. Which approach is preferable, and why? Edit: at the moment I don't know if my method will be public or private. Edit 2: clarification: both the method and the fields are instance-level.
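
    In other words, the choice is between these two shapes (a small C# sketch with invented names):

        public class Invoice
        {
            private decimal rate;
            private decimal hours;

            // Variant 1: reads the instance fields directly.
            public decimal Total()
            {
                return rate * hours;
            }

            // Variant 2: takes the same values as parameters, so it no longer
            // depends on the state of this particular instance.
            public decimal Total(decimal rate, decimal hours)
            {
                return rate * hours;
            }
        }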

    Read the article

  • Schedule compliance while keeping up technical support and resolving issues

    - by imays
    I run a small software development company. I developed the flagship product myself, and the company has grown to 14 people. One point of pride is that we've never had to take investment or loans. The core development team is 5 people: 3 seniors and 2 juniors. Since the first release, we've received many issues from our customers. Most of them are bug reports, customization needs, usage questions and upgrade requests. Issues come in from customers many times every day, and they take up anywhere from a little to a lot of our developers' time. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and developers must be involved in resolving bugs. Estimating the time to fix a bug is hard; I fully understand that. However, our developers insist they cannot set a due date for any project because they are busy every day doing technical support and bug fixes for customer issues. Of course, they never work overtime. I suggested dividing the team into two parts: one focused on development by milestones, the other doing technical support and bug fixes without due dates. Then we could announce a release plan officially. After each release, the two parts would exchange roles for the next milestone. However, they said no, because it would be impossible to fully share knowledge and the design documents. They still say they cannot set a release date, and they ask me to keep due dates flexible; they will not fix the due date of each milestone. Fortunately, our company has no loans or investors, so we are not being choked. But I think it is a bad idea to let this situation continue. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates. Companies run on limited time and money. If an endlessly flexible due date is acceptable, would they accept a flexible payday? What is the root cause of our problem? All I want is to set and hit a precise due date for each milestone without losing frequent technical support. I think there must be a solution for this situation. Thanks in advance. PS: Our project management tools and methods are Trello, a Mantis-like issue tracker, shared calendar software and Scrum (cards collected into a series of small, high-completeness projects).

    Read the article

  • The value of an updated specification

    - by Mr. Jefferson
    I'm at the tail end of a large project (around 5 months of my time, 60,000 lines of code) on which I was the only developer. The specification documents for the project were designed well early on, but as always happens during development, some things have changed. For example:
    - Bugs were discovered and fixed in ways that don't correspond well with the spec
    - Additional needs were discovered and dealt with
    - We found areas where the spec hadn't thought far enough ahead and had to change some implementation
    - Etc.
    Through all this, the spec was not kept updated, because we were lean on resources (I know that's a questionable excuse). I'm hoping to get the spec updated soon when we have time, but it's a matter of convincing management to spend the effort. I believe it's a good idea, but I need some good reasoning; I can't speak from very much experience because I'm only a year and a half out of school. So here are my questions:
    - What value is there in keeping an updated spec?
    - How often should the spec be updated? With every change made that might affect it, just at the end of development, or something in between?
    EDIT to clarify: this project was internal, so no external client was involved. It's our product.

    Read the article

  • IL and case-sensitivity

    - by Ali .NET
    Quoted from A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET: "CLS stands for Common Language Specification. It is a subset of CTS. CLS is a set of rules or guidelines which, if followed, ensure that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions with the same name differing only in case, i.e. we should not have add() and Add(). This may work in C# because it is case-sensitive, but if we try to use that C# code from VB.NET it is not possible, because VB.NET is not case-sensitive." Based on the above text I want to confirm two points: Is the case-sensitivity rule a condition for member functions only, and not for member properties? Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of case sensitivity?
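
    The add()/Add() example from the quoted text can be checked directly in C#: with CLS-compliance checking turned on, the compiler accepts the pair but warns that it is not consumable from case-insensitive languages. A minimal sketch:

        using System;

        // Opting the assembly in to CLS-compliance checking.
        [assembly: CLSCompliant(true)]

        public class Calculator
        {
            // Legal C#, but the compiler reports warning CS3005 ("Identifier ...
            // differing only in case is not CLS-compliant"), because a
            // case-insensitive consumer such as VB.NET cannot tell them apart.
            public int add(int a, int b) { return a + b; }
            public int Add(int a, int b) { return a + b; }
        }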

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing object-oriented desktop app work. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast, and it's probably fine to have these big scripts for the most part, but while explaining this to me, people are also insisting that the whole idea of refactoring is bad: that things like the .NET compiler are actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • Is it better to load up a class with methods or extend member functionality in a local subclass?

    - by Calvin Fisher
    Which is better? Class #1:

        public class SearchClass
        {
            public SearchClass(string ProgramName)
            {
                /* Searches LocalFile objects, handles exceptions,
                   and puts results into m_Results. */
            }

            DateTime TimeExecuted;
            bool OperationSuccessful;

            protected List<LocalFile> m_Results;
            public ReadOnlyCollection<LocalFile> Results
            {
                get { return new ReadOnlyCollection<LocalFile>(m_Results); }
            }

            #region Results Filters
            public DateTime OldestFileModified
            {
                get { /* Does what it says. */ }
            }

            public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles()
            {
                return new ReadOnlyCollection<LocalFile>(
                    (from x in m_Results
                     where x.FileTypeID != FileTypeIDs.ProcessFile
                     select x).ToList());
            }
            #endregion
        }

    Or class #2:

        public class SearchClass
        {
            public SearchClass(string ProgramName)
            {
                /* Searches LocalFile objects, handles exceptions,
                   and puts results into m_Results. */
            }

            DateTime TimeExecuted;
            bool OperationSuccessful;

            protected List<LocalFile> m_Results;
            public ReadOnlyCollection<LocalFile> Results
            {
                get { return new ReadOnlyCollection<LocalFile>(m_Results); }
            }

            public class SearchResults : ReadOnlyCollection<LocalFile>
            {
                public SearchResults(IList<LocalFile> iList) : base(iList) { }

                #region Results Filters
                public DateTime OldestFileModified
                {
                    get { /* Does what it says. */ }
                }

                public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles()
                {
                    return new ReadOnlyCollection<LocalFile>(
                        (from x in this
                         where x.FileTypeID != FileTypeIDs.ProcessFile
                         select x).ToList());
                }
                #endregion
            }
        }

    ...with the implication that OperationSuccessful is accompanied by a number of more interesting properties on how the operation went, and that OldestFileModified and ResultsWithoutProcessFiles() also have several more siblings in the Results Filters section.

    Read the article

  • What is the technical reason that so many social media sites don't allow you to edit your text?

    - by Edward Tanguay
    A common complaint I hear about Facebook, Twitter, Ning and other social sites is that once a comment or post is made, it can't be edited. I think this goes against one of the key goals of user experience: giving the user agency, or the ability to control what he does in the software. Even on Stack Exchange sites, you can only edit comments for a certain amount of time. Is the failure of so many web apps to let users edit their writing a technical shortcoming or a "feature by design"?

    Read the article

  • How can we make agile enjoyable for developers who like to personally and independently own large chunks of work from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers who are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people whom I respect on both a personal and a professional level. The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change, but someone who loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people who don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change; they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that has bumped up against this. How have others approached this? If you’re a developer who is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • How should an API use HTTP Basic authentication?

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation. Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic. Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user; generally they get a token back for future requests. My setup: I will have a JSON API that I use as the backend for a mobile and a web app. It seems like good practice for both the mobile and the web app to send along a token and secret so that only these two apps can access the API, blocking any other third party. But the mobile and web apps allow users to log in and submit posts, view their data, etc., so I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both these methods, or only send the end-user credentials (username and token) on each request? If I only send the end-user credentials, do I store them in a cookie on the client?
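
    For what it's worth, here is a minimal C# sketch of the second scenario described above (user credentials sent once via HTTP Basic, with a token used on later requests). The endpoint paths, the response shape and the use of a bearer token are assumptions for illustration, not part of any particular API:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Text;
        using System.Threading.Tasks;

        class ApiClientSketch
        {
            static async Task Main()
            {
                using var client = new HttpClient { BaseAddress = new Uri("https://api.example.invalid/") };

                // HTTP Basic: base64("username:password") in the Authorization header.
                string credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:s3cret"));
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

                // Hypothetical login endpoint that returns a token for future requests.
                HttpResponseMessage login = await client.PostAsync("v1/login", null);
                string token = await login.Content.ReadAsStringAsync();

                // Subsequent requests carry the token instead of the raw credentials.
                client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
                HttpResponseMessage posts = await client.GetAsync("v1/posts");
                Console.WriteLine(await posts.Content.ReadAsStringAsync());
            }
        }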

    Read the article

  • How do I store the OAuth v1 consumer key and secret for an open source desktop Twitter client without revealing it to the user?

    - by Justin Dearing
    I want to make a thick-client, desktop, open source twitter client. I happen to be using .NET as my language and Twitterizer as my OAuth/Twitter wrapper, and my app will likely be released as open source. To get an OAuth token, four pieces of information are required:
    - Access Token (twitter user name)
    - Access Secret (twitter password)
    - Consumer Key
    - Consumer Secret
    The second two pieces of information are not to be shared, like a PGP private key. However, due to the way the OAuth authorization flow is designed, these need to be on the native app. Even if the application was not open source, and the consumer key/secret were encrypted, a reasonably skilled user could gain access to the consumer key/secret pair. So my question is, how do I get around this problem? What is the proper strategy for a desktop Twitter client to protect its consumer key and secret?

    Read the article

  • How do you do ASP.NET performance testing?

    - by John
    Our team is in need of a performance testing process. We use ASP.NET (both Web Forms and MVC), and performance testing is not currently built into our projects. We occasionally do some ad-hoc analysis, such as checking the load on the server or running SQL Server Profiler, but we don't have a true beginning-to-end performance testing methodology built into the project. Where is a good place to start? I'm interested in both:
    - Process - general knowledge, including best practices
    - An essential list of tools
    I'm aware of a few tools, such as what's built into the pricier versions of VS 2010 and the JetBrains products, though I haven't used them.

    Read the article

  • Security of logging people in automatically from another app?

    - by Simon
    I have 2 apps. They both have accounts, and each account has users. These apps are going to share the same users and accounts, and they will always be in sync. I want to be able to log in automatically from one app to the other. So my solution is to generate a login_key, for example 2sa7439e-a570-ac21-a2ao-z1qia9ca6g25, once a day, and provide an automated login link to the other app... for example, if the user clicks on: https://account_name.securityhole.io/login/2sa7439e-a570-ac21-a2ao-z1qia9ca6g25/user/123 they are logged in automatically and a session is created. So here we have 3 things that an intruder has to get right in order to gain access: the account name, the login key, and the user id. Bad idea? Or should I go down the path of making one app an OAuth provider? Or is there a better way?

    Read the article

  • Is your Xcode 4 stable?

    - by Eonil
    I have upgraded to Xcode 4, and I'm experiencing an unbelievable situation: Xcode 4 crashes every 5 minutes. It is incredibly slow, almost impossible to use. Maybe the problem is my hardware configuration. I'm using a 3rd-generation MacBook Air with 2GB of RAM and an SSD. It was just fine with Xcode 3, but now Xcode consumes all the memory and crashes far too often. Is your Xcode 4 stable? If so, please let me know your hardware configuration. I want to know whether this problem is caused by the hardware configuration or not, to decide whether to buy a new Mac.

    Read the article

  • NoSQL as a file metadata database

    - by fga
    I am trying to implement a virtual file system structure in front of an object store (OpenStack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it turned out to be a tree structure, similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered searches based on date, name, etc. as fast as possible. So what path should I take? Stick with the relational model plus some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and a database satisfying that model? P.S.: On the NoSQL side, Cassandra and MongoDB are currently the choices proposed by my colleagues.

    Read the article
