Search Results

Search found 25198 results on 1008 pages for 'non programmers'.

  • How can I bring pace to my learning graph?

    - by MSU
    I have been learning programming, mostly C# and .NET. My target is to become a full-time .NET developer, but my learning graph feels very slow: I have been studying C# and writing some code every day, yet I want to learn faster and build my skills more rapidly. I know there should be a balance of coding and reading -- without reading I can't code, and without coding I can't improve my skills. So I am asking the experts here for suggestions on how to bring more pace to my learning graph. I intend to give this 4-6 hours daily, and 10+ hours on weekends.

  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? What is better / more professional / offers better performance:

    1. Ignore all checking and update the hour, minute and seconds parts each time, every second.
    2. Use an if plus a counter variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments.

    This leads us to the question: which is worse -- unnecessary redraws (the first approach) or extra ifs? I've just spotted a JavaScript digital clock, one of millions of similar ones on billions of pages, and noticed that all three parts (hours, minutes and seconds) are updated every second, even though the first changes its value only once per 3600 seconds and the second only once per 60 seconds. I'm not a very experienced developer, so I might be wrong, but everything I've learned up until now tells me that ifs are far cheaper than executing drawing / refreshing sequences only to draw the same content.
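
    For what it's worth, the second approach doesn't even need a counter: remember the last value drawn for each part and redraw only when it changes. A minimal sketch in Java (the draw* methods are hypothetical stand-ins for whatever rendering calls the clock actually uses):

        import java.time.LocalTime;

        class DigitalClock {
            private int lastMinute = -1;
            private int lastHour = -1;

            // Called once per second by a timer.
            void tick() {
                LocalTime now = LocalTime.now();
                drawSeconds(now.getSecond());         // changes every tick, always redraw
                if (now.getMinute() != lastMinute) {  // true at most once per 60 ticks
                    lastMinute = now.getMinute();
                    drawMinutes(lastMinute);
                }
                if (now.getHour() != lastHour) {      // true at most once per 3600 ticks
                    lastHour = now.getHour();
                    drawHours(lastHour);
                }
            }

            void drawSeconds(int s) { /* hypothetical rendering call */ }
            void drawMinutes(int m) { /* hypothetical rendering call */ }
            void drawHours(int h)   { /* hypothetical rendering call */ }
        }

    Comparing against the last drawn value is also more robust than counting ticks, since a delayed or missed timer event cannot make the display drift. Either way, the two ifs cost nanoseconds, while redrawing an unchanged part costs layout and paint work that is orders of magnitude more expensive.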

  • How to make my Oracle update/insert action through Java faster?

    - by gunbuster363
    I am facing a problem in my company: our program is not fast enough. To be more specific, we are a telecommunications company, and this program handles the call / internet-surfing transactions made by every mobile phone user in our city. Because the amount of download activity from iPhone users is just too much, our program cannot keep up: users generate transactions at double the rate our program can process them. Most of the program's running time is spent in DB transactions. I've searched the internet and browsed some sites (for example http://www.javaperformancetuning.com/tips/rawtips.shtml) about Java DB performance, but I cannot find a suggestion that fits us. The usual advice is either already in use or not applicable:

    1. Use prepared statements / parameterized SQL. Already done; each execution reuses the statement, clearing and setting the parameters.
    2. Tune the SQL to minimize the data returned (e.g. no 'SELECT *'). Already done.
    3. Use connection pooling. We hold a single connection for the program's whole execution, and I doubt pooling would help, because our program acts as one user, so there is no concurrent access to the DB. If you think pooling would help, please tell me why.
    4. Combine queries and batch updates. We cannot: every query/insert/update depends on information already in the database. For example, we look up the client's information; if we cannot find his usage, we insert it, otherwise we update it.
    5. Close resources (Connections, Statements, ResultSets) when finished. Done.
    6. Select the fastest JDBC driver. I don't know; I've searched the internet for the available driver types and I am very confused. We use oracle.jdbc.driver.OracleDriver with the thin driver rather than oci -- that's all I know. In addition, our program is two-tier (Java <-> Oracle).
    7. Turn off auto-commit. Already done.

    Looking forward to any help.
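
    For point 4, note that the look-up / insert-or-update round trips can be folded into a single statement with Oracle's MERGE, and MERGE statements batch like any other. A hedged sketch of the idea in JDBC -- the table and column names (usage, client_id, bytes_used) and the Usage row type are invented for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        class UsageWriter {
            // Hypothetical row type for one pending transaction.
            record Usage(long clientId, long bytesUsed) {}

            private static final String MERGE_SQL =
                "MERGE INTO usage u "
              + "USING (SELECT ? AS client_id, ? AS bytes_used FROM dual) src "
              + "ON (u.client_id = src.client_id) "
              + "WHEN MATCHED THEN UPDATE SET u.bytes_used = u.bytes_used + src.bytes_used "
              + "WHEN NOT MATCHED THEN INSERT (client_id, bytes_used) "
              + "VALUES (src.client_id, src.bytes_used)";

            void write(Connection conn, List<Usage> pending) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(MERGE_SQL)) {
                    for (Usage u : pending) {
                        ps.setLong(1, u.clientId());
                        ps.setLong(2, u.bytesUsed());
                        ps.addBatch();     // queue locally instead of one round trip each
                    }
                    ps.executeBatch();     // ship the whole batch in one round trip
                    conn.commit();         // the asker already runs with auto-commit off
                }
            }
        }

    Whether this helps depends on where the time actually goes, so it is worth profiling first; but if the bottleneck really is per-statement round trips, replacing SELECT-then-INSERT-or-UPDATE with a batched MERGE is usually the biggest single win available without restructuring the application.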

  • How does a programmer know when a new job is not right?

    - by Mysterion
    I believe that the interview process is a mutual sale: what can the employee offer the employer, and vice versa. Assume an individual has been careful in selecting their new employer (via thorough questioning in the interview process), yet on arriving at the job they find the employer has not been honest about certain aspects of it. Examples of this dishonesty could include:

    The employee made it clear that technical excellence is an important factor, which the employer promised, but it is not fully delivered, or a good technical structure does not exist.
    The employee stated they want to work on well-architected and short (let's say less than a year) projects, yet on starting they find themselves placed on a poorly architected, older project.
    The employee was told of a pair-programming environment that would get them up to speed on the project, but was left to their own devices and questions on arrival.
    The employee was promised a culture that encourages innovation and technical excellence, but finds that this is not the case (e.g. using technology for knowledge retention is laughed at).

    I know that a lot of famous developers feel that you make the place you work at. Is it realistic for a new employee with limited experience in the industry (say less than 5 years) to join the company and change attitudes, or even challenge the employer on the perceived dishonesty? Should they stay in this job or cut their losses?

  • What modern alternatives to Numerical Recipes exist?

    - by Stewart
    In the past, the Numerical Recipes book was considered the gold-standard reference for numerical algorithms. The earliest Fortran edition was followed by editions in C, C++ and other languages, keeping it up to date for a time, and through these it provided reference code for the state-of-the-art algorithms of the day. Older editions are available online for free nowadays. Unfortunately, I think it is now useful mostly as a historical tome: its "software engineering" practices seem outdated to me, and the actual content hasn't kept pace with the literature. What comprehensive yet approachable references should the modern programmer look at instead?

  • What is the correct answer to "Why is NoSQL faster than SQL?" in an interview?

    - by Cynede
    It just seems like nonsense to me, personally: I can't see any inherent performance boost from using NoSQL instead of SQL. (Maybe SQL outperforming NoSQL in some cases, yes, but not the premise as stated.) I think that answering "I have no idea" or something like that would be a bad answer, because this doesn't look like a trick question -- it's asked seriously in interviews. So what should the correct answer to this question look like? Or am I missing something about NoSQL?

  • How to determine the right amount of up front design?

    - by Gian
    Software developers occasionally are called upon to write fairly complex bits of software under tight deadlines. Often, it seems like the quickest thing to do is to simply start coding, and solve the problems as they arise. However, this approach can come back to bite you—often costing time or money in the long run! How do we determine the right amount of up front design work? If your work environment actively discourages you from thinking about things up front, how do you handle that? How can we manage risk if we eschew up-front thinking (by choice or under duress) and figure out the problems as they arise? Does the amount of up front design depend entirely on the size or complexity of the task, or is it based on something else?

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that in some places a more intuitive, readable attribute or method call requires some functionally unnecessary encapsulation. Since the final product will not require high performance, I am comfortable favouring ease of use in this project over the most efficient implementation of the code in question. I know not to assume that readability and ease of use are paramount in all expected use cases, such as when performance is a requirement. What I would like to know is whether there are more general reasons that argue for a design preferring the more efficient implementation, even when the gain is only marginal.
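
    A contrived sketch of the trade-off in question, in Java (all names invented): a method named in the domain language that adds nothing functionally, it merely wraps a cheaper, more primitive call.

        import java.util.ArrayList;
        import java.util.List;

        class Order {
            record LineItem(String sku, long priceCents) {}

            private final List<LineItem> items = new ArrayList<>();

            // Domain-language reading at the call site, but functionally
            // just an extra layer over totalCents().
            boolean qualifiesForFreeShipping() {
                return totalCents() >= 50_00;
            }

            long totalCents() {
                long total = 0;
                for (LineItem item : items) total += item.priceCents();
                return total;
            }
        }

    On most runtimes a trivial wrapper like this is inlined by the JIT, so the readable name costs essentially nothing; the general argument for the "efficient" form mainly applies when the friendly wrapper hides extra allocation or an asymptotically worse operation.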

  • Entity Framework and layer separation

    - by Thomas
    I'm trying to work a bit with Entity Framework and I have a question regarding the separation of layers. I usually use the UI - BLL - DAL approach, and I'm wondering how to use EF here. My DAL would usually be something like:

        GetPerson(id)
        {
            // some sql
            return new Person(...);
        }

    BLL:

        GetPerson(id)
        {
            return personDL.GetPerson(id);
        }

    UI:

        Person p = personBL.GetPerson(id);

    My question now is: since EF creates my model and DAL, is it a good idea to wrap EF inside my own DAL, or is that just a waste of time? And if I don't need to wrap EF, should I still place my Model.edmx inside its own class library, or would it be fine to just place it inside my BLL and work from there? I can't really see the reason to wrap EF inside my own DAL, but I want to know what other people are doing. So instead of the above, I would leave out the DAL and just do:

    BLL:

        GetPerson(id)
        {
            using (TestEntities context = new TestEntities())
            {
                var result = from p in context.Persons
                             where p.Id == id
                             select p;
            }
        }

    What to do?

  • EE vs Computer Science: Effect on Developers' Approaches, Styles?

    - by DarenW
    Are there any systematic differences between software developers (engineers, architects, whatever the job title) who have an electronics or other engineering background and those who entered the profession through computer science? By an electronics background I mean an EE degree, self-taught electronics tinkering, another engineering discipline, or experimental physics. I'm wondering whether coming into the software-making professions from a strong knowledge of flip-flops, tristate buffers, clock-edge rise times and so forth usually leads to a distinct approach to problems, a different mindset, or superior skills in certain specialties and weaker skills in others, compared to the computer-science types who are full of concepts like abstract data types, object orientation and database normalization, and who speak of "closures" in programming languages -- things that make little sense to the soldering-iron crowd until they learn enough programming. The real world, I'm sure, offers a wild range of individual exceptions, but for the most part, can you say there are overall differences? Would these have hiring implications, e.g. (to make something up) "never hire an electron wrangler to do database design"? Could knowing about any differences help job seekers find something appropriate more effectively, or provide enlightenment or practical advice for those who find themselves misfits in a particular job role? (By the way, I've never taken any computer science classes; my impression of exactly what they cover is fuzzy. I'm an electronics/physics/art type myself.)

  • Better way to search for text in two columns

    - by David
    Here is the scenario: I am writing custom blogging software for my site and am implementing a search feature. It's not very sophisticated -- basically it just takes the search phrase entered and runs this query:

        $query = "SELECT * FROM `blog` WHERE `title` LIKE '%$q%' OR `post` LIKE '%$q%'";

    which simply searches the title and the post body for the phrase entered. Is there a better way to do this, keeping in mind how long the query would take to run on up to 100 rows, each with a post length of up to 1500 characters? I have considered using a LIMIT clause to (sometimes) restrict the number of rows the query examines. Is that a good idea?
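
    Two things stand out. First, interpolating $q directly into the SQL is an injection hole regardless of speed; a bound parameter fixes that. Second, LIKE '%...%' can never use an ordinary index, so if the table grows, a MySQL FULLTEXT index is the usual alternative. A hedged sketch of that approach (shown here in Java/JDBC, though the same SQL works from PHP's PDO or mysqli; the blog/title/post names come from the question):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        class BlogSearch {
            // One-time setup (InnoDB supports FULLTEXT from MySQL 5.6 on):
            //   ALTER TABLE blog ADD FULLTEXT INDEX ft_title_post (title, post);

            List<String> searchTitles(Connection conn, String q) throws SQLException {
                // The MATCH() column list must equal the FULLTEXT index's columns.
                String sql = "SELECT title FROM blog WHERE MATCH(title, post) AGAINST (?)";
                List<String> titles = new ArrayList<>();
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, q);  // bound parameter: no interpolation, no injection
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) titles.add(rs.getString("title"));
                    }
                }
                return titles;
            }
        }

    Note that FULLTEXT matches whole words rather than arbitrary substrings, so it behaves differently from LIKE '%...%'. For a table capped at around 100 short rows, keeping LIKE but binding the parameter is also a perfectly reasonable answer.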

  • How do you write straight-to-the-point documentation without looking sloppy and informal?

    - by James
    I'm currently in a contract position and am looking to add to the documentation of the projects I worked on, to assist the next hire taking over them. The documentation I received was overly technical (i.e. it references code right away, talks about replacing certain values on certain lines, and has no high-level description at all). How do I write documentation in simple, plain English that is of actual benefit without looking sloppy? I find it difficult in areas such as outlining a system's flaws without coming off as judgmental, while still conveying how detrimental some of those flaws are.

  • Steps after SQL Injection detected

    - by Zukas
    I've come across SQL injection vulnerabilities in my company's ecommerce page; it was fairly poorly put together. I believe I have prevented future attempts, but we are getting calls about fraudulent credit card charges made on our site and on others. This leads me to believe that someone was able to get a list of our credit card numbers. What doesn't make sense is that we don't store that information, and we use Authorize.net for the transactions. If someone was able to get the card numbers, what should I do next? Inform ALL of our customers that someone broke into our system and stole their information? I have a feeling that will be bad for business.

  • Why let, or not let, developers test their own work?

    - by pyvi
    I want to gather some arguments as to why letting a developer test his or her own work as the last step before the product goes into production is a bad idea -- because, unfortunately, my place of work sometimes does this. (The last time this came up, the argument boiled down to most people being too busy with other things, with no time to get another person familiar with that part of the program; it's very specialised software.) There are test plans in this case, though not always, but I am very much in favor of the final testing being done by someone who did not make the changes under test. So could you provide me with a good, solid list of arguments I can bring up the next time this is discussed? Or counter-arguments, in case you think this is perfectly fine, especially when there are formal test cases to test against.

  • Digital magazine publishing engine licensing question

    - by nosarious
    I have a publishing engine that I have been developing for thirty months, but I find myself unable to work on it during my master's degree. I would like to make it open source in the interim so that others can use it and improve how it works. I would like a licensing arrangement that allows multiple instances of the software for individual users (i.e. a newspaper, magazine or zine hosting the code on their own), but prevents it from becoming the basis of a larger magazine-hosting service right now, because it is intended to be an integral part of a much larger publishing ecosystem allowing the creation, dissemination and collection of publications as a free or very inexpensive service. Right now there is no license associated with it, which is why I am not posting a link here. (This system was developed to counter implied censorship of digital magazines and to remove costly and confusing 'barriers to entry' for creators wishing to make interactive digital content. It is intended to be usable for free, but I would like to prevent people from taking the code and using it to take advantage of others. It still needs a bit of work to separate the content from the page itself to allow access to multiple publications, which I cannot develop right now.) Any help or suggestions on how to handle licensing this code for contributions and use would be appreciated, and if anyone would like to see examples or the GitHub repository, I would be happy to send it.

  • Functional requirements - use wording based on verbs?

    - by yas
    Question: Should the functional requirements in a requirements doc use wording based on verbs?

    Context: a school assignment, working in a team, working through the SDLC. The requirements doc has been done and we are now into design.

    Problem: The requirements doc has an enumerated list of what I'd call features of the app -- the functional requirements. In that list are things that I'd think of as "hows" rather than "whats", and now, trying to work on design, I feel as if part of the design has been prematurely dictated. I've not done this before! To me, I should be dealing strictly with things that describe "what". An example of the current style: pretend that the job is to make an omelet. Listing "crack the egg, break into bowl, scramble, etc." crosses the line into the territory of "how", and by that measure so does wording like create, generate, list, calculate, determine, validate, etc. -- verbs, basically. Right now, I have a list of requirements that are partially rooted in verbs. My idea of a requirements doc for an omelet would be more like: has two eggs, x ounces of ham, x ounces of bacon, x ounces of Monterey Jack cheese, x ounces of cilantro, etc. -- nothing but "whats" (nouns). I might have, and could have, spoken up before the requirements doc was finalized, if I'd had any experience.

  • Converting large files in Python

    - by Cenoc
    I have a few files that are ~64GB in size that I would like to convert to HDF5 format, and I was wondering what the best approach would be. Reading line by line seems to take more than 4 hours, so I was thinking of using multiprocessing, but was hoping for some direction on the most efficient way to do this without resorting to Hadoop. Any help would be very much appreciated (and thank you in advance). EDIT: Right now I'm just doing a "for line in fd:" approach, and after that I just check to make sure I'm picking out the right sort of data, which is a very quick check; I'm not writing anywhere, and it still takes around 4 hours to complete. I can't read blocks of data, because the blocks in this weird file format are not standard -- the format switches between three different sizes, and you can only tell which one by reading the first few characters of the block.

  • Why do XML namespace URIs use the http scheme?

    - by svick
    An XML namespace should be a URI, but it can use any URI scheme, including schemes that are not URLs. Why, then, do all widely used XML namespaces use the http scheme (e.g. http://www.w3.org/XML/1998/namespace), considering that treating the URI as a URL and retrieving a document over HTTP is not guaranteed to do anything useful (and often doesn't)? I understand the usefulness of DNS domains in namespace names to help guarantee uniqueness, but that doesn't require the http scheme; there could be a separate scheme for the purpose (something like namespace:w3.org/XML/1998/namespace). This would avoid the confusion between URIs and URLs while maintaining domain-based uniqueness.

  • Are there legitimate reasons for returning exception objects instead of throwing them?

    - by stakx
    This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame).

    Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects?

    This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and I am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors:

        partial interface IValidationError { }

        abstract partial class ValidationException : System.Exception
        {
            public abstract IValidationError[] ValidationErrors { get; }
        }

    (The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows:

        partial interface IFoo { } // an immutable type

        partial interface IFooBuilder // mutable counterpart to prepare instances of the above type
        {
            bool IsValid(out IValidationError[] validationErrors); // true if no validation error occurs
            IFoo Build(); // throws ValidationException if !IsValid(…)
        }

    Now I am wondering, could I not simplify the above to this:

        partial class ValidationError : System.Exception { } // = IValidationError + ValidationException

        partial interface IFoo { } // (unchanged)

        partial interface IFooBuilder
        {
            bool IsValid(out ValidationError[] validationErrors);
            IFoo Build(); // may throw ValidationError or sth. like AggregateException<ValidationError>
        }

    Q: What are the advantages and disadvantages of these two differing approaches?

  • Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?

    - by Aerovistae
    I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a bit of web development technologies, a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these...things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET framework, CLR, F#, etc etc. And I've read their Wikipedia articles, in-depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all is. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What is .NET? Do some software developers dedicate their entire career just to that side of things? Why would I want to get into it, and what advantage does...whatever it is...have over all the other technologies there are? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion.

  • Why Java as a First Language?

    - by dsimcha
    Why is Java so popular as a first language to teach beginners? To me it seems like a terrible choice:

    It's statically typed. Static typing isn't useful unless you care a lot about either performance or scaling to large projects.
    It requires tons of boilerplate to get the simplest code up and running. Try explaining "Hello, world" to someone who's never programmed before.
    It handles only the middle levels of abstraction well and is single-paradigm, thus leaving out a lot of important concepts. You can't program at a very low level (pointers, manual memory management) or at a very high level (metaprogramming, macros) in it.

    In general, Java's biggest strength -- the reason people use it despite the shortcomings of the language per se -- is its libraries and tool support, which is probably the least important attribute for a beginner language. In fact, while useful in the real world, these may be negatives from a pedagogical perspective, as they can discourage learning to write code from scratch.
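
    For reference, the boilerplate point is easy to see in the smallest runnable Java program, which already drags in a class, a static method, visibility modifiers, an array parameter and a qualified method call -- all of which a teacher has to wave away as "magic words" in lesson one:

        public class Hello {
            // Every statement must live inside a class and a method.
            public static void main(String[] args) {
                System.out.println("Hello, world");
            }
        }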

  • What is the formal definition of a meta package?

    - by kojiro
    There are several examples of packaging where an application package is built, named, described, even licensed, but contains only setup code and dependencies -- it has no first-class runtime software of its own. I would call this "meta-packaging". The practice seems particularly popular in the open-source world, with examples like kde-meta (Gentoo Portage), Plone, and I'm sure lots of others. I can see how it's a useful practice, but despite its existence as a practice, I couldn't find a formal definition of either "meta-packaging" or "meta-egg" (Python) by searching the web. Is that not the correct term? If it is, is it such common sense that it needs no formal definition? If not, what is the correct way to put it?

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive: a valid input into the method is "Cat", for which it returns the enum value Animal.Cat. However, the desired behaviour of the method is case-insensitive, so if the method were passed "cat" it might currently return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this particular case work, a more complex issue may take weeks to fix -- yet pinning the bug down with a unit test can be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions about unit tests have found flaws in the code. Some flaws just need explicit implementation documentation (e.g. case sensitive or not), and some code never triggers the bug given how it is currently called, but unit tests can be created that feed in specific, valid inputs under which the bug appears. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build succeeded based on which tests were executed? (Eventually the unit test should run for real, once someone fixes the code.) On the one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones expected to fail from the failures caused by a new check-in would be difficult.
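
    One common middle ground is to check the test in but mark it skipped with a reason, so the build stays green while the bug remains documented and visible in every test report. A sketch with JUnit 4, reconstructing the question's Animal example (the enum and its parse method are stand-ins, not real code from the application):

        import static org.junit.Assert.assertEquals;

        import org.junit.Ignore;
        import org.junit.Test;

        public class AnimalParsingTest {

            // Reconstruction of the question's example, with the bug intact:
            // exact-case match only, so "cat" falls through to Null.
            enum Animal {
                Cat, Dog, Null;

                static Animal parse(String s) {
                    for (Animal a : values()) if (a.name().equals(s)) return a;
                    return Null;
                }
            }

            @Test
            public void parsesExactCase() {
                assertEquals(Animal.Cat, Animal.parse("Cat")); // passes today
            }

            // Documents the known bug without failing every build; the runner
            // reports it as skipped, so it cannot silently disappear.
            @Ignore("Known bug: parsing should be case-insensitive; see issue tracker")
            @Test
            public void parsesLowerCase() {
                assertEquals(Animal.Cat, Animal.parse("cat")); // would fail today
            }
        }

    When the fix lands, deleting the @Ignore line turns the documentation back into an enforced regression test. This also answers the log-noise concern: skipped tests are listed separately from genuine failures, so expected failures never drown out real ones.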

  • Passing variables from PHP to C++

    - by Alex
    I'm new to this, so I'm sorry if my question is trivial. I have the following situation: I need to call a program from PHP and pass some vars and/or sets of key-value pairs to it. Now, my question is: how do I pass these vars -- through arguments to the called program (e.g. exec("/path/to/program flag1 flag2 [key1=A,key2=B]");)? Or is there a better method of achieving this? Somebody suggested writing them to a txt file and passing its path as the argument instead (e.g. exec("/path/to/program path_to_txt_file");), but I'm not too excited about that method.

  • Is moving Entity Framework objects over a web service really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once a service receives such an object, it performs some business logic on the data and then sticks it into its own database, which mirrors the databases on the client machines. The catch is that this data is transmitted over a metered connection, which is very expensive, so optimizing the payload is a serious priority. We already use a custom encoder that compresses the data in transit (and decompresses it on the other end), which reduces the footprint. Even so, the amount of data the clients use seems ridiculously large given the amount of information actually being transmitted. It seems to me that Entity Framework itself may be to blame: I suspect the objects are very large when serialized for the wire, carrying a lot of context information and who knows what else, when all we really need are the new inserts. Is using Entity Framework and WCF services as we have done the correct architectural approach to this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
