Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • How to unit test image processing code?

    - by rold2007
    I'm working in image processing (mainly OCR) and I wonder how I should integrate unit tests into my development. I'm already using unit tests for more "common" types of code, but when dealing with image processing code I'm not sure how to approach it. This kind of code always needs image data as input/output, and mocking that is not obvious. For now I'm mostly doing integration tests, but they take a while to run, and I would like some ideas on how to break down this kind of code into unit tests so that I can run them more quickly.
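    One way to make this testable is to split the pipeline into small pure stages and feed them tiny synthetic images built in code, keeping file I/O out of the tests entirely. A minimal sketch, assuming an xUnit-style test project; Binarize is a hypothetical stand-in for one stage of an OCR pipeline:

        using Xunit;

        public static class ImageOps
        {
            // Hypothetical pipeline stage: binarize a grayscale image.
            public static bool[,] Binarize(byte[,] gray, byte threshold)
            {
                var result = new bool[gray.GetLength(0), gray.GetLength(1)];
                for (int y = 0; y < gray.GetLength(0); y++)
                    for (int x = 0; x < gray.GetLength(1); x++)
                        result[y, x] = gray[y, x] >= threshold;
                return result;
            }
        }

        public class BinarizeTests
        {
            [Fact]
            public void Binarize_SplitsPixelsAtThreshold()
            {
                // A 2x2 synthetic "image" built in code: no files, no I/O.
                var gray = new byte[,] { { 10, 200 }, { 128, 127 } };

                var bw = ImageOps.Binarize(gray, threshold: 128);

                Assert.False(bw[0, 0]); // 10  < 128
                Assert.True(bw[0, 1]);  // 200 >= 128
                Assert.True(bw[1, 0]);  // 128 >= 128
                Assert.False(bw[1, 1]); // 127 < 128
            }
        }

    Real images then only appear in the slower integration tests, which exercise the composed pipeline end to end.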


  • Why say my function is of IFly type rather than of Airplane type?

    - by Vishwas Gagrani
    Say I have two classes, Airplane and Bird, and both of them fly. Both implement the interface IFly, which declares a function StartFlying(). Thus both Airplane and Bird have to define the function and use it as per their requirements. Now, when I write a manual for the class reference, what should I say about the function StartFlying?

    1) StartFlying is a function of type IFly.
    2) StartFlying is a function of type Airplane.
    3) StartFlying is a function of type Bird.

    My opinion is that 2 and 3 are more informative, but what I see is that class references use the first one: they state which interface the function is declared in. The problem is, I really don't get any usable information from knowing that StartFlying is of IFly type. However, knowing that StartFlying is a function inside Airplane and Bird is more informative, as I can decide which instance (Airplane or Bird) to use. Any light on this: how does saying that StartFlying is a function of type IFly help a programmer understand how to use the function?
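    For what it's worth, the practical value of documenting StartFlying at the IFly level shows up as soon as code is written against the interface rather than a concrete class. A minimal sketch:

        public interface IFly
        {
            // Documented once, on the declaration: the contract every implementation must honor.
            void StartFlying();
        }

        public class Airplane : IFly
        {
            public void StartFlying() { /* spool up engines, take off */ }
        }

        public class Bird : IFly
        {
            public void StartFlying() { /* flap wings */ }
        }

        public static class AirTrafficSim
        {
            // This caller neither knows nor cares which concrete type it receives,
            // so the IFly documentation is the only contract it can rely on.
            public static void Launch(IFly flyer) => flyer.StartFlying();
        }

    A caller like Launch can only rely on what the IFly documentation promises, which is why class references document a member where it is declared; implementation-specific notes still belong on Airplane.StartFlying and Bird.StartFlying.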


  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes?

    I'm making an ASP.NET MVC4 application. On every request, the security details of the user need to be checked against the area/controller/action they are accessing, to see if they are allowed to view it. The security information is stored in the database, in these tables: User, Permission, UserPermission, Action, ActionPermission. A "Permission" is a token applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table), they have the token and can therefore access the action.

    I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting the database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem I constantly run into is that I'm using lazy loading and dynamic proxies with Entity Framework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated; for example, User.Permissions is an ICollection, but it's always null. I don't want to Include() everything, because I'm trying to keep things simple and generic (and easy to modify).

    One thing I do know is that an Entity Framework DbContext is a unit of work that acts with 1st-level caching: for the duration of the unit of work, everything that is accessed is cached in memory. So I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing, this worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals, or simply leave it to refresh when the application pool is recycled. Basically, it will behave like a cache. Note that the rest of the application will interact with other per-request contexts as normal.

    Is there any disadvantage to this approach? Could I be doing something different?
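    For reference, a minimal sketch of the long-lived read-only context idea, with entity and type names invented for the example (this shows the shape of the approach, not a drop-in implementation):

        using System.Collections.Generic;
        using System.Data.Entity; // EF 5/6
        using System.Linq;

        // Entity stubs matching the tables named above.
        public class Permission { public int Id { get; set; } public string Token { get; set; } }
        public class User
        {
            public int Id { get; set; }
            public virtual ICollection<Permission> Permissions { get; set; }
        }

        public class PermissionContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

        // Long-lived, read-only "cache" context, refreshed on demand.
        public static class PermissionCache
        {
            private static readonly object Sync = new object();
            private static PermissionContext _context = Warm();

            private static PermissionContext Warm()
            {
                var ctx = new PermissionContext();
                // Eager-load the whole graph once, so lazy loading never fires later.
                ctx.Users.Include("Permissions").ToList();
                return ctx;
            }

            public static bool UserHasPermission(int userId, string token)
            {
                lock (Sync) // a DbContext is not thread-safe, even for reads
                {
                    var user = _context.Users.Find(userId);
                    return user != null && user.Permissions.Any(p => p.Token == token);
                }
            }

            public static void Refresh() // call on a timer, or from an admin action
            {
                lock (Sync)
                {
                    _context.Dispose();
                    _context = Warm();
                }
            }
        }

    The lock is the main caveat: a DbContext is not thread-safe even for concurrent reads, so a shared context either needs serialized access like this or should be swapped for a plain immutable snapshot (e.g., a Dictionary built from an AsNoTracking() query) that threads can read freely.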


  • Are XML Comments Necessary Documentation?

    - by Bob Horn
    I used to be a fan of requiring XML comments for documentation. I've since changed my mind for two main reasons:

    1. Like good code, methods should be self-explanatory.
    2. In practice, most XML comments are useless noise that provide no additional value.

    Many times we simply use GhostDoc to generate generic comments, and this is what I mean by useless noise:

        /// <summary>
        /// Gets or sets the unit of measure.
        /// </summary>
        /// <value>
        /// The unit of measure.
        /// </value>
        public string UnitOfMeasure { get; set; }

    To me, that's obvious. Having said that, if there were special instructions to include, then we should absolutely use XML comments. I like this excerpt from this article:

        Sometimes, you will need to write comments. But, it should be the exception not the rule. Comments should only be used when they are expressing something that cannot be expressed in code. If you want to write elegant code, strive to eliminate comments and instead write self-documenting code.

    Am I wrong to think we should only be using XML comments when the code isn't enough to explain itself on its own? I believe this is a good example where XML comments make pretty code look ugly. It takes a class like this...

        public class RawMaterialLabel : EntityBase
        {
            public long Id { get; set; }
            public string ManufacturerId { get; set; }
            public string PartNumber { get; set; }
            public string Quantity { get; set; }
            public string UnitOfMeasure { get; set; }
            public string LotNumber { get; set; }
            public string SublotNumber { get; set; }
            public int LabelSerialNumber { get; set; }
            public string PurchaseOrderNumber { get; set; }
            public string PurchaseOrderLineNumber { get; set; }
            public DateTime ManufacturingDate { get; set; }
            public string LastModifiedUser { get; set; }
            public DateTime LastModifiedTime { get; set; }
            public Binary VersionNumber { get; set; }
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }

    ... and turns it into this:

        /// <summary>
        /// Container for properties of a raw material label
        /// </summary>
        public class RawMaterialLabel : EntityBase
        {
            /// <summary>
            /// Gets or sets the id.
            /// </summary>
            /// <value>
            /// The id.
            /// </value>
            public long Id { get; set; }

            /// <summary>
            /// Gets or sets the manufacturer id.
            /// </summary>
            /// <value>
            /// The manufacturer id.
            /// </value>
            public string ManufacturerId { get; set; }

            /// <summary>
            /// Gets or sets the part number.
            /// </summary>
            /// <value>
            /// The part number.
            /// </value>
            public string PartNumber { get; set; }

            /// <summary>
            /// Gets or sets the quantity.
            /// </summary>
            /// <value>
            /// The quantity.
            /// </value>
            public string Quantity { get; set; }

            /// <summary>
            /// Gets or sets the unit of measure.
            /// </summary>
            /// <value>
            /// The unit of measure.
            /// </value>
            public string UnitOfMeasure { get; set; }

            /// <summary>
            /// Gets or sets the lot number.
            /// </summary>
            /// <value>
            /// The lot number.
            /// </value>
            public string LotNumber { get; set; }

            /// <summary>
            /// Gets or sets the sublot number.
            /// </summary>
            /// <value>
            /// The sublot number.
            /// </value>
            public string SublotNumber { get; set; }

            /// <summary>
            /// Gets or sets the label serial number.
            /// </summary>
            /// <value>
            /// The label serial number.
            /// </value>
            public int LabelSerialNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order number.
            /// </summary>
            /// <value>
            /// The purchase order number.
            /// </value>
            public string PurchaseOrderNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order line number.
            /// </summary>
            /// <value>
            /// The purchase order line number.
            /// </value>
            public string PurchaseOrderLineNumber { get; set; }

            /// <summary>
            /// Gets or sets the manufacturing date.
            /// </summary>
            /// <value>
            /// The manufacturing date.
            /// </value>
            public DateTime ManufacturingDate { get; set; }

            /// <summary>
            /// Gets or sets the last modified user.
            /// </summary>
            /// <value>
            /// The last modified user.
            /// </value>
            public string LastModifiedUser { get; set; }

            /// <summary>
            /// Gets or sets the last modified time.
            /// </summary>
            /// <value>
            /// The last modified time.
            /// </value>
            public DateTime LastModifiedTime { get; set; }

            /// <summary>
            /// Gets or sets the version number.
            /// </summary>
            /// <value>
            /// The version number.
            /// </value>
            public Binary VersionNumber { get; set; }

            /// <summary>
            /// Gets the lot equipment scans.
            /// </summary>
            /// <value>
            /// The lot equipment scans.
            /// </value>
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }


  • How are requirements determined in open source software projects?

    - by Aron Lindberg
    In corporate in-house software development, it is common for requirements to be determined through a formal process resulting in the creation of a number of requirements documents. In open source software development, this often seems to be absent. Hence my question: how are requirements determined in open source software projects? By "determining requirements" I simply mean "figuring out what features etc. should be developed as part of a specific piece of software".


  • Assignment of roles in communication when sides could try to cheat

    - by 9000
    Assume two nodes in a peer-to-peer network initiating a communication. In this communication, one node has to serve as the "sender" and the other as the "receiver" (role names are arbitrary here). I'd like the nodes to assume either role with approximately equal probability; that is, in N communications with various other nodes, a given node would take the "sender" role roughly N/2 times. Since there's no third-party arbiter available, the nodes have to agree on their roles by exchanging messages. The catch is that we can encounter a rogue node which tries to become the "receiver" in most or all cases, coaxing the other side into always serving as the "sender". I'm looking for an algorithm to assign roles to the two sides of a communication such that neither side can obtain a predetermined role with high probability. It's OK for the side that is trying to cheat to fail to communicate.
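    A standard building block here is a commit-then-reveal coin flip (Blum's protocol): each node picks a secret random bit, sends only a hash commitment to it, and reveals the bit and nonce only after receiving the other side's commitment; the XOR of the two bits then decides the roles. Neither side can steer the result, because changing a bit after seeing the other's reveal would break its own commitment. A sketch of one side's steps (message transport omitted):

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        public static class CoinFlip
        {
            // Step 1: pick a secret bit and nonce; send the peer only the commitment.
            public static (byte bit, byte[] nonce, byte[] commitment) Commit()
            {
                using var rng = RandomNumberGenerator.Create();
                var nonce = new byte[16];
                rng.GetBytes(nonce);
                var bitBuf = new byte[1];
                rng.GetBytes(bitBuf);
                byte bit = (byte)(bitBuf[0] & 1);
                using var sha = SHA256.Create();
                return (bit, nonce, sha.ComputeHash(nonce.Append(bit).ToArray()));
            }

            // Step 2: after both commitments are exchanged, both sides reveal
            // (bit, nonce); each verifies the peer's reveal against the commitment.
            public static bool VerifyReveal(byte[] commitment, byte[] nonce, byte bit)
            {
                using var sha = SHA256.Create();
                return sha.ComputeHash(nonce.Append(bit).ToArray()).SequenceEqual(commitment);
            }

            // Step 3: XOR decides. Pre-agreed convention: XOR == 0 means the
            // node that initiated the connection becomes the "sender".
            public static string RoleFor(byte myBit, byte peerBit, bool isInitiator)
            {
                bool initiatorSends = ((myBit ^ peerBit) & 1) == 0;
                return initiatorSends == isInitiator ? "sender" : "receiver";
            }
        }

    A rogue node can still abort once it sees an outcome it dislikes, but it cannot choose the outcome, which matches the requirement that a cheater may fail to communicate but cannot force a role.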


  • Software design for a non-object-oriented paradigm

    - by Dean
    I'm currently working on a project where I'm writing the firmware for an electronic system in C, and I have been asked to produce documentation on the development/evolution of the software for the embedded devices. Having developed software in the object-oriented paradigm, I know how to use UML to document software, with class diagrams and objects; however, that does not work for documenting the development of my embedded system. So what should I produce to document the development of my firmware?


  • Most effective work habit for coding? [on hold]

    - by Cris
    Working on a big solo project (~15,000 LOC), I am encountering the following phenomenon: I seem to work best when I program in short bursts of 10-15 minutes. Right now I am working on a section which is a complete first for me architecturally, and if any architectural issues emerge during the implementation, I seem to serve them best by taking a total break, then later sketching out the ideas on paper, and, when I feel I have sufficient clarity, going back to the code. This iterates until the architectural issue for that section is resolved. This seems quite counterintuitive: that I can progress more quickly by coding less and taking more breaks. I am nearing the end of the sections which are "firsts" for me and am about to dive into stuff I am much more familiar with, and I wonder whether this counterintuitive efficiency will continue. So my question is: even for regular coding of sections one is familiar with, which don't require constant re-clarification of the best architecture, is more progress to be attained by taking more breaks and coding in bursts?


  • Are remote commit hooks in Subversion possible?

    - by John Hamelink
    Hi there, my current setup is as follows: we have a Linux Samba share that contains all the repository folders (with the hooks folder inside, amongst the others). All the developers have the share mapped as a network drive and check out to a local directory (normally C:\Server\RepositoryName) where they work on their files. All the machines accessing the drive (unfortunately) run Windows. What I'm aiming for is a hook on the Linux server that detects when a commit has been made, by which project, the revision number, the name of the developer who committed, etc. I looked into the hook files, but they seem to be run by the client. Is there a way to monitor SVN changes and collect the relevant information from the Linux server?


  • How do you navigate and refactor code written in a dynamic language?

    - by Philippe Beaudoin
    I love that writing Python, Ruby, or JavaScript requires so little boilerplate. I love simple functional constructs. I love the clean and simple syntax. However, there are three things I'm really bad at when developing large software in a dynamic language:

    - navigating the code
    - identifying the interfaces of the objects I'm using
    - refactoring efficiently

    I have been trying simple editors (e.g., Vim) as well as IDEs (Eclipse + PyDev), but in both cases I feel like I have to commit a lot more to memory and/or constantly "grep" and read through the code to identify the interfaces. As for refactoring, for example changing method names, it becomes hugely dependent on the quality of my unit tests. And if I try to isolate my unit tests by "cutting them off" from the rest of the application, then there is no guarantee that my stub's interface stays up to date with the object I'm stubbing. I'm sure there are workarounds for these problems. How do you work efficiently in Python, Ruby, or JavaScript?


  • Intellectual-Property Question

    - by Roger J. J.
    Like almost everyone here, I have a handful of scripts and software that I have developed and am enthused about. I will be looking for my first job as a software designer/coder. It seems natural that I will be eager to please my employer and use scripts or similar methods that I have developed and that have worked for me in the past. It seems certain that many things I code will look very similar to things I have coded before. I don't understand how to document and articulate to an employer that this code base was mine before I got here and will continue to be mine when I leave. Surely this is a common issue, but none of the various searches I've done on the net have produced an answer to this question. How is this situation commonly dealt with in the industry? I feel like there should be a digital version of sending myself a 'certified letter' containing my code/software/scripts. I'm not trying to protect my code from others using it; I am trying to protect my right to continue using the code base I developed prior to gaining employment with an employer.


  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue).

    The basic dev architecture today is:

    - a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service that queues and routes the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)

    And our hardware architecture today is:

    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing/delivering chats

    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed, so that the work of the backend Windows service is spread across multiple machines for redundancy and uptime purposes? Will I need to rewrite it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances, so maybe it is something I should be less concerned about? Thanks in advance for any help!
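    The usual shape for this is the competing-consumers pattern: the web/Comet tier pushes chat requests into a shared durable queue, and any number of identical routing services pull from it, so losing one machine just means the survivors pick up the work. Below is an in-process sketch of the pattern using BlockingCollection as a stand-in; in production the queue would be a network service (MSMQ, RabbitMQ, or a cloud queue) rather than an in-memory collection:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        public record ChatRequest(Guid Id, string Customer);

        public static class ChatRouterDemo
        {
            public static void Main()
            {
                // Stand-in for a durable network queue shared by all router instances.
                using var queue = new BlockingCollection<ChatRequest>();

                // Two "router service instances" competing for work; in the real
                // system these would be separate processes on separate machines.
                var workers = Array.ConvertAll(new[] { "router-A", "router-B" },
                    name => Task.Run(() =>
                    {
                        foreach (var chat in queue.GetConsumingEnumerable())
                            Console.WriteLine($"{name} routed chat {chat.Id} for {chat.Customer}");
                    }));

                for (int i = 0; i < 10; i++)
                    queue.Add(new ChatRequest(Guid.NewGuid(), $"customer-{i}"));

                queue.CompleteAdding(); // no more chats; let the workers drain and exit
                Task.WaitAll(workers);
            }
        }

    The real rewrite is usually moving the state the Windows service keeps in memory today (queued chats, agent availability) into that shared queue/database, so that every instance is stateless and interchangeable.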


  • C# replacing out parameters with struct

    - by Jonathan
    I'm encountering a lot of methods in my project that have a bunch of out parameters embedded in them, and it's making the methods cumbersome to call, as I have to declare the variables before calling them. As such, I would like to refactor the code to return a struct instead, and I was wondering if this is a good idea. One of the examples from an interface:

        void CalculateFinancialReturnTotals(FinancialReturn fr,
                                            out decimal expenses,
                                            out decimal revenue,
                                            out decimal levyA,
                                            out decimal levyB,
                                            out decimal profit,
                                            out decimal turnover,
                                            out string message);

    If I were to refactor that, I would be putting all the out parameters in the struct, so that the method signature is much simpler, as below:

        [structName] CalculateFinancialReturnTotals(FinancialReturn fr);

    Please advise.
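    Returning a single value object is a common refactoring here. Given the fields in the signature above, it could look like the sketch below (FinancialReturnTotals is a name invented for the example, and FinancialReturn is stubbed out):

        // Stub for the existing domain type referenced in the question.
        public class FinancialReturn { }

        // One return value in place of seven out parameters.
        public class FinancialReturnTotals
        {
            public decimal Expenses { get; set; }
            public decimal Revenue { get; set; }
            public decimal LevyA { get; set; }
            public decimal LevyB { get; set; }
            public decimal Profit { get; set; }
            public decimal Turnover { get; set; }
            public string Message { get; set; }
        }

        public interface IFinancialReturnCalculator
        {
            // Callers no longer declare seven locals up front; they just do:
            // var totals = calc.CalculateFinancialReturnTotals(fr);
            FinancialReturnTotals CalculateFinancialReturnTotals(FinancialReturn fr);
        }

    One caveat: a struct this size (seven fields, including a string) is copied on every assignment, and mutable structs are a well-known source of bugs, so a class is often the safer default unless profiling says otherwise.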


  • Has any language become greatly popular for something other than its intended purpose?

    - by Jon Purdy
    Take this scenario: A programmer creates a language to solve some problem. He then releases this language to help others solve problems like it. Another programmer discovers it's actually much better for some different category of problems. By virtue of this new application, the language then becomes popular for that application primarily. Are there any instances of this actually occurring? Put another way, does the intended purpose of a language have any bearing on how it's actually used, or whether it becomes popular? Is it even important that a language have an advertised purpose?


  • Is it possible to determine whether my web site is being accessed as a trusted site?

    - by Sameer
    I am working on a site which has a lot of configuration and security settings, and I have to check with JavaScript whether the client's browser is treating the site as being in the trusted zone or not. Is it possible to determine whether my web site is being accessed as a trusted site? The reason I'd like to do this is that some functions won't work unless the site is being accessed as a trusted site, and I'd like to be able to warn users. Is there any solution?


  • Language Niches and Niche Libraries

    - by Roman A. Taycher
    "Everyone Knows" ... ... that c is widely used for low level programs in large part because operating system/device apis are usually in c. ... that Java is widely used for enterprise applications in large part because of enterprise libraries and ide support. ... that ruby is widely used for webapps thanks in large part because of rails and its library ecosytem But lets go into to details what are the specific niches and subniches. Especially with respect to libraries. Where might you embed lua for application scripting versus python. Where would you use Java vs C#. Which languages do different scientists use? Also which languages have libraries for these subniches? Things like bioperl/scipy/Incanter. Please no flamewars about how nice each language or environment is. This is where they used. Also no complaints about marketing/PHBs. (Manually migrated) I asked this question again after it was closed on stackoverflow.com


  • Clarification of "avoid if-else" advice [duplicate]

    - by deviDave
    This question already has an answer here: "Elegant ways to handle if(if else) else" (21 answers)

    The experts in clean code advise against if/else, since it creates unreadable code. They suggest using plain if statements and returning early, rather than waiting until the end of a method without real need. Now, this if/else advice confuses me. Are they saying that I should not use if/else at all(!), or that I should avoid if/else nesting? Also, if they are referring to nesting, should I avoid even a single level of nesting, or should I limit it to a maximum of two levels (as some recommend)? When I say single nesting, I mean this:

        if (...) {
            if (...) {
            } else {
            }
        } else {
        }

    EDIT: Also, tools like ReSharper will suggest reformatting if/else statements. They usually convert them to stand-alone if statements, and sometimes even to ternary expressions.
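    In practice the advice usually means "return early instead of nesting", not "never write if/else". A before/after sketch (Customer and its members are invented for the illustration):

        public class Customer
        {
            public bool IsActive { get; set; }
            public int OrderCount { get; set; }
        }

        public static class Discounts
        {
            // Nested if/else: the interesting path is buried two levels deep.
            public static decimal Nested(Customer c)
            {
                if (c != null)
                {
                    if (c.IsActive)
                    {
                        return c.OrderCount > 10 ? 0.15m : 0.05m;
                    }
                    else
                    {
                        return 0m;
                    }
                }
                else
                {
                    return 0m;
                }
            }

            // Guard clauses: each early return removes one level of nesting,
            // and the else branches disappear entirely.
            public static decimal Guarded(Customer c)
            {
                if (c == null) return 0m;
                if (!c.IsActive) return 0m;
                return c.OrderCount > 10 ? 0.15m : 0.05m;
            }
        }

    This is essentially the transformation ReSharper proposes: invert the if, return early, and the else goes away.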


  • Looking for a new language and new technology [closed]

    - by Basim
    Back when Microsoft released .NET in 2002 or whenever it was... When I look at that time, I say to myself: what if I had picked up one of Microsoft's languages back then and had kept working with it? Of course, I would be a professional at it by now. I am looking for a new language that is on the rise and will be a big thing in the next 5-10 years, so that when that time comes I can see the big picture and know that I'm one of the few people who started from the beginning with X programming language or technology. My interest is web development.


  • What's the most useful 10% of UML and is there a quick tutorial on it?

    - by Hanno Fietz
    I want my scribbles of a program's design and behaviour to become more streamlined and to share a common language with other developers. I looked at UML, and in principle it seems to be what I'm looking for, just way overkill. The information I found online also seems very bloated and academic. Is there a no-bullshit, 15-minute introduction to the handful of UML symbols I'll need when discussing the architecture of some garden-variety software on a whiteboard with my colleagues?


  • Hard Copies vs. Soft Copies

    - by Garet Claborn
    Where do you draw the line and say, "OK, I'm actually going to print out this piece of code, spec, formula, or other info and carry it around, but these pieces can stay on disk"? More importantly, why do you draw the line there? I've encountered this a number of times and have some sort of vague conceptions beyond "oh, now I'm REALLY stuck, better print this out." I've also found some quick-sheets of basic specs to be handy. Really, though, I have no particular logic behind what is useful to physically have available in the design and development process. I have a great pile of 'stuff' papers that seemed at least partially relevant at the time, but I only ever really use about a third of them and often end up wishing I had different info on hand.

    Edit: So this is what I'm hearing, in a nutshell:

    - major parts of the design pattern
    - common, fairly static, and prominently useful code (reference or specs)
    - some representation of data useful in collaborating or sharing with the team
    - extreme cases of tough problem solving
    - and, overwhelmingly: almost never print anything.


  • Is an MSc or experience worth more in industrial environments?

    - by Abimaran
    I'm a fresh graduate in the Electronic & Telecommunication field, and in our university we can have major and minor fields in the relevant subjects. I majored in telecommunication and minored in software engineering. As I learned programming long before, I'm now passionate about SE and programming, and I want to move into the SE field. I have come to know that industries expect candidates to have experience or to have an MSc in the related field. [I'm referring to my surrounding environment, not all industries.] My question: how do industries weigh an MSc against experience? Thanks!


  • Current trends in Random Access Memory

    - by Nutel
    As far as I know, because of the laws of physics there will not be any tangible improvement in CPU cycles per second in the near future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?


  • Standard server-to-server and browser-to-server authentication method

    - by jeruki
    I have a server with some resources; until now, all these resources were requested through a browser by a human user, and authentication was done with a username/password method that generates a cookie with a token (to keep the session open for some time). Now the system requires that other servers make GET requests to this resource server, but they have to authenticate to get the resources. We have been using a list of authorized IPs, but having two authentication methods makes the code more complex. My questions are: Is there any standard method or pattern to authenticate human users and servers using the same code? If there is not, are the methods I'm using now the right ones, or is there a better / more standard way to accomplish what I need? Thanks in advance for any suggestions.
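    One common pattern is to normalize both kinds of caller to a bearer token: browsers carry a session token in a cookie, servers carry an API key in an Authorization header, and a single check resolves either one to a principal. A framework-agnostic sketch, with all names invented for the example:

        using System;
        using System.Collections.Generic;

        public sealed class Principal
        {
            public string Name { get; set; }
            public bool IsServer { get; set; }
        }

        public sealed class TokenAuthenticator
        {
            // In production these would be a session store and a key store.
            private readonly Dictionary<string, Principal> _sessionTokens = new Dictionary<string, Principal>();
            private readonly Dictionary<string, Principal> _apiKeys = new Dictionary<string, Principal>();

            public void RegisterSession(string token, string user) =>
                _sessionTokens[token] = new Principal { Name = user, IsServer = false };

            public void RegisterApiKey(string key, string server) =>
                _apiKeys[key] = new Principal { Name = server, IsServer = true };

            // Single entry point for both kinds of caller; returns null for a 401.
            public Principal Authenticate(IDictionary<string, string> headers,
                                          IDictionary<string, string> cookies)
            {
                if (headers.TryGetValue("Authorization", out var auth) &&
                    auth.StartsWith("Bearer ", StringComparison.Ordinal) &&
                    _apiKeys.TryGetValue(auth.Substring(7), out var server))
                    return server;

                if (cookies.TryGetValue("session", out var token) &&
                    _sessionTokens.TryGetValue(token, out var user))
                    return user;

                return null;
            }
        }

    This keeps one authentication code path, replaces the IP allow-list with revocable per-server keys, and leaves room to move to a standard such as OAuth2 client credentials later without changing the resource handlers.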


  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?


  • What exactly is system programming?

    - by kentjh
    I have never understood what system programming means. The usual definition given is "...doing something close to the OS or extending OS features...". Does using the Windows API directly, rather than some library, to do (say) file I/O make it system programming? Was writing the Android OS system programming? If I write something that exposes the Linux kernel through a console-like app on Android, am I doing system programming? If I am writing software to control a washing machine, am I doing system programming? I am a beginner in programming, and this is confusing me to no end. Please explain it by contrasting it with "application programming".

