Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Basics of ERP for dummies

    - by DarenW
    A situation has arisen where (if I don't scream and run away) I will be involved in an ERP system. This project will be using OpenERP specifically. My background is entirely science/engineering/music/games/art/whatever. I've never set foot in the realm of business systems or anything describable with the word "enterprise". What is a good introduction to the whole ERP concept, OpenERP and business systems in general, suitable for those with flat zero experience in that world? The ideal intro would explain, from no assumptions, what the main ideas are, the terminology, the style of work and thinking of people in that world, and maybe give some concrete suggestions for how one can tinker around with a copy of OpenERP to gain basic familiarity.

    Read the article

  • How to add a subclass to a view controller in a storyboard

    - by Ken Barlo
    Here's what I've created: a view controller element (in the storyboard) and a new view controller .h and .m file. Here's my issue: I can't seem to figure out how to get the content I've added to the .m file to show up on that view controller element in the storyboard once the app is launched and that view controller is selected in the iPhone Simulator. I am, however, able to see content that was already added to a .m file on the home screen and activity screen when the selection is made in the iPhone Simulator. Hope that makes sense; if not, please ask and I'll be more than happy to provide more info.

    Read the article

  • How far has a bug pushed you? [closed]

    - by Darknight
    When debugging (hard to find) bugs, I know I've personally gotten so frustrated as to lash out on the keyboard and shout profanities at the monitor. I have repeatedly witnessed co-workers throw their computer mouse off the table in anger and frustration. What is the furthest a bastard of a bug has ever pushed you? EDIT: Hehehe :D it would seem this bug, er I mean post, has pushed the guys to close it... Oh well, very very interesting answers anyway.

    Read the article

  • Best language or tool for automating tedious manual tasks

    - by Jon Hopkins
    We all have tasks that come up from time to time that we think we'd be better off scripting or automating than doing manually. Obviously some tools or languages are better for this than others - no one (in their right mind) would do a one-off job of cross-referencing a bunch of text lists their PM has just given them in assembler, for instance. What one tool or language would you recommend for the sort of general quick-and-dirty jobs you get asked to do, where time (rather than elegance) is of the essence? Background: I'm a former programmer, now a development manager and PM, looking to learn a new language for fun. If I'm going to learn something for fun I'd like it to be useful, and this sort of use case is the most likely to come up.

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's system administrator(s) and attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right?
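
    To illustrate the configuration-switch idea mentioned in the question, here is a minimal C# sketch, assuming a hypothetical ErrorPresenter class and a ShowDetailedErrors flag (neither is from the original post): the full exception is always logged server-side, and the stack trace only reaches the browser when the flag is on.

        using System;

        static class ErrorPresenter
        {
            // Hypothetical switch; in a real application this would be read from configuration.
            public static bool ShowDetailedErrors = false;

            public static string BuildUserMessage(Exception ex, Guid referenceId)
            {
                // Always log the full exception so support can find it later by its reference id.
                Console.Error.WriteLine("[{0}] {1}", referenceId, ex);

                if (ShowDetailedErrors)
                    return "An error has occurred. Please submit the information below to the developers.\n\n" + ex;

                return "An error has occurred. Please contact support and quote reference " + referenceId + ".";
            }
        }

    With something like this in place, development and trusted intranet deployments can keep the full trace on screen, while audited installations only expose a reference id.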

    Read the article

  • Why use an OO approach instead of a giant "switch" statement?

    - by James P. Wright
    I am working in a .NET/C# shop and I have a coworker who keeps insisting that we should use giant switch statements in our code, with lots of cases, rather than more object-oriented approaches. His argument consistently goes back to the fact that a switch statement compiles to a "cpu jump table" and is therefore the fastest option (even though in other matters our team is told that we don't care about speed). I honestly don't have an argument against this... because I don't know what the heck he's talking about. Is he right? Is he just talking out his ass? Just trying to learn here.
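
    For comparison, here is a minimal C# sketch of the two styles, using a made-up shape example that is not from the question. The virtual call in the object-oriented version is also a constant-time indirect jump (through the method table); the practical difference is in how the code grows, since adding a kind means adding a class rather than editing every switch.

        using System;

        // Switch style: every new kind means editing this method (and every other switch like it).
        enum ShapeKind { Circle, Rectangle }

        class ShapeRecord
        {
            public ShapeKind Kind;
            public double Radius, Width, Height;
        }

        static class SwitchStyle
        {
            public static double Area(ShapeRecord s)
            {
                switch (s.Kind)
                {
                    case ShapeKind.Circle:    return Math.PI * s.Radius * s.Radius;
                    case ShapeKind.Rectangle: return s.Width * s.Height;
                    default: throw new ArgumentOutOfRangeException("s");
                }
            }
        }

        // Object-oriented style: each kind carries its own behaviour.
        abstract class Shape
        {
            public abstract double Area();
        }

        class Circle : Shape
        {
            public double Radius;
            public override double Area() { return Math.PI * Radius * Radius; }
        }

        class Rectangle : Shape
        {
            public double Width, Height;
            public override double Area() { return Width * Height; }
        }

    For a handful of cases neither version is measurably faster in a typical business application; the maintenance profile, not the jump table, is usually the deciding factor.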

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In" and in the first chapter I faced with 3 terms: bug, defect and flaw. The author gave a definition for each of them but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what bug is, a bug is a malfunction of a part of system which produces undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong in this? UPDATE To be more precise in the book I mentioned above, they (the words) are presented in a way to make a distinction, that's why I am asking to know more. In that book there are some examples denoting which sample belongs to what and which category. For example: Buffer overflow is said to be a bug and issues in method overriding (subclassing issues) is being related to flaw category. Again race condition handling issues are considered bugs and Error-handling problems (fails open) are told to be flaws! I want more elaboration on these regards.

    Read the article

  • How to avoid a mediocre CV

    - by QriousCat
    Though in every project we (testers) face a different set of challenges, when it comes to the CV we have more or less the same responsibilities. For example, responsibilities like understanding requirements, preparing and executing test cases, creating defects, and liaising with dev and BA teams will be repeated for every project we are involved in. If we keep writing the same responsibilities for every role, the CV becomes mediocre and a yarn. In fact most of the testing resumes I have come across are like that. How do I avoid repetition of responsibilities in my resume and make it more interesting? If this is not the correct forum for this question, let me know. Thanks in advance for your suggestions.

    Read the article

  • How can I share my Python scripts with my less Python-savvy business-person partner?

    - by Alex
    I'm taking financial mathematics as an elective, and I'm working with real-life finance-industry-worker type people. It's actually kind of fun. When I pulled out a MacBook at one of our meetings, I had four lifelong Windows users look at me like I had three heads. Anyway, I'm helping with the design and simulation of our trading strategy, and I wrote a little thing using matplotlib to visualize historical stock data. However, these guys don't know how to use git, or install Python, or deal with path-related package management things. I need to be able to send my scripts to them to use, and I need to do it with absolutely minimal effort on their part. I was thinking something along the lines of py2exe, but I'd like to hear some advice before I go ahead.

    Read the article

  • How do I explain the importance of NUnit test cases to my colleagues [duplicate]

    - by JNL
    This question already has an answer here: How to explain the value of unit testing (6 answers). I am currently working in software development on applications involving a lot of mathematical calculations. As a result there are a lot of test cases that we need to consider. We do not have any NUnit test cases at the moment, and I am wondering how I should present the advantages of introducing NUnit testing to my colleagues and my boss. I am pretty sure it would be of great help for our team. Any help regarding this will be highly appreciated.
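
    One concrete way to make the case is to show a small NUnit test against one of the calculations. A minimal sketch, assuming a hypothetical LoanCalculator class standing in for one of the real formulas:

        using NUnit.Framework;

        public static class LoanCalculator
        {
            // Hypothetical calculation under test.
            public static decimal MonthlyInterest(decimal principal, decimal annualRate)
            {
                return principal * annualRate / 12m;
            }
        }

        [TestFixture]
        public class LoanCalculatorTests
        {
            [Test]
            public void MonthlyInterest_IsOneTwelfthOfAnnualInterest()
            {
                Assert.AreEqual(10m, LoanCalculator.MonthlyInterest(1200m, 0.10m));
            }

            [Test]
            public void MonthlyInterest_IsZeroWhenPrincipalIsZero()
            {
                Assert.AreEqual(0m, LoanCalculator.MonthlyInterest(0m, 0.10m));
            }
        }

    Running a suite like this on every build, and pointing to the first regression it catches, tends to be more persuasive than arguing about methodology in the abstract.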

    Read the article

  • How to detect if an app was already installed before

    - by Dante
    How do software applications keep track of whether the user has already installed the application before on their Windows system? Say you install app X, a trial version, remove it, then reinstall it, and when you run it again it detects that you had already installed it before. If you uninstall and clean all registry information, it shouldn't know you had already installed it before... Disclaimer: I'm not trying to "hack" any application, just thinking about how this is implemented.
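
    A common approach is to leave a marker somewhere the uninstaller does not clean up - a registry value, a file under the user profile or ProgramData, or an activation record on the vendor's server. A minimal C# sketch of the registry-marker idea, with a made-up key path:

        using System;
        using Microsoft.Win32;

        static class TrialMarker
        {
            // Hypothetical location; real products tend to hide this better or check a server instead.
            const string KeyPath = @"Software\ExampleVendor\ExampleApp";

            public static bool WasInstalledBefore()
            {
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(KeyPath))
                {
                    return key != null && key.GetValue("FirstInstall") != null;
                }
            }

            public static void RecordInstall()
            {
                using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
                {
                    if (key.GetValue("FirstInstall") == null)
                        key.SetValue("FirstInstall", DateTime.UtcNow.ToString("o"));
                }
            }
        }

    Wiping the registry defeats this particular marker, which is exactly why trial-protection schemes usually combine several hiding places or move the check server-side.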

    Read the article

  • Different ways of solving problems in code.

    - by Erin
    I now program in C# for a living, but before that I programmed in Python for 5 years. I have found that I write C# very differently than most examples I see on the web. Rather than writing things like: foreach (string bar in foo) { /* bar has something done to it here */ } I write code that looks like this: foo.ForEach( c => c.SomeActionHere() ) or var result = foo.Select( c => { /* some code here to transform the item */ }).ToList(); I think my writing code like the above came from my love of map and reduce in Python - while not exactly the same thing, the concepts are close. Now it's time for my question. What concepts do you take and move with you from language to language that allow you to solve a problem in a way that is not the normal accepted solution in that language?

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation run on them. Each record will have a combination of these functions run on it: Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID

    Question: Assuming that ID is an int of some kind, and that the int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    // ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");
                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    // Thread.Sleep(1000);

                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                // Console.WriteLine("Callback is delayed and returned"); // d.EndInvoke(ar));
            }
        }
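
    One way to sketch the launch map itself, separate from the threading experiment above, is a dictionary from record ID to the list of operations that should run for it. All the names below (Record, LaunchMap, the ID-to-delegate assignments) are made up for illustration:

        using System;
        using System.Collections.Generic;

        class Record
        {
            public DateTime Date;
            public double Data;
            public int Id;
        }

        static class LaunchMap
        {
            // Each ID maps to the list of computations to run for that record.
            static readonly Dictionary<int, List<Action<Record>>> Map =
                new Dictionary<int, List<Action<Record>>>
                {
                    { 1, new List<Action<Record>> { Sum } },
                    { 2, new List<Action<Record>> { Sum, Print } },
                    { 3, new List<Action<Record>>() }            // ignore
                };

            public static void Dispatch(Record r)
            {
                List<Action<Record>> actions;
                if (Map.TryGetValue(r.Id, out actions))
                    foreach (var action in actions)
                        action(r);
            }

            static void Sum(Record r)   { /* accumulate r.Data into a running total */ }
            static void Print(Record r) { Console.WriteLine("{0}: {1}", r.Date, r.Data); }
        }

    Each delegate could then be queued with ThreadPool.QueueUserWorkItem or invoked asynchronously as in the sample above, instead of being called inline.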

    Read the article

  • Actor library / framework for C++

    - by Giorgio
    In the C++ project I am working on, we would like to use something like Scala actors and remote actors (see e.g. this tutorial). Being able to use remote actors (actors living in different processes, possibly on different machines, communicating via TCP/IP) has the higher priority for us, because we have an application consisting of several processes deployed on different machines. Being able to use several actors living in the same process (possibly in different threads) is also interesting, but has lower priority for the moment. On Wikipedia I have found some links to actor libraries for C++ and I have started to look at Theron. Before I dive too deep into the details and build an extended example with Theron, I wanted to ask if anybody has experience with any of these libraries and which one they would recommend.

    Read the article

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser I asked a (possibly silly) question about processors using mathematical shortcuts, and I would like to look at the possibility of applying that concept in software. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept. By utilizing a simulation of Japanese multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU arithmetic unit can. I think this would be a very interesting finding if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result in fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
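
    For reference, the simulation itself can be sketched in a few lines: the "line intersections" of the Japanese method in column k are exactly the digit products a[i]*b[j] with i+j = k, followed by a carry pass. This is only a sketch of the mechanics (C# here, digits stored least-significant-first); whether it can beat the hardware multiplier or a big-integer library is precisely the experiment the question proposes.

        static class LineMultiplication
        {
            // Multiply two numbers given as digit arrays (least significant digit first).
            // Each result column collects the intersection counts of the line method,
            // then a single pass resolves the carries.
            public static int[] Multiply(int[] a, int[] b)
            {
                var columns = new int[a.Length + b.Length];

                for (int i = 0; i < a.Length; i++)
                    for (int j = 0; j < b.Length; j++)
                        columns[i + j] += a[i] * b[j];   // intersections of line group i with line group j

                for (int k = 0; k < columns.Length - 1; k++)
                {
                    columns[k + 1] += columns[k] / 10;   // carry into the next column
                    columns[k] %= 10;
                }
                return columns;
            }
        }

    For example, 12 x 34 with a = {2, 1} and b = {4, 3} gives columns {8, 10, 3, 0}, which the carry pass turns into {8, 0, 4, 0}, i.e. 408. The work is the same O(n*m) as schoolbook long multiplication, just expressed as single-digit products and additions.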

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I got offered a job to build a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he could trust on board to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I could build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP, no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner, more organized. It looked very good. But again the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practices. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all - just spaghetti code where fixing a bug is a real pain; all the logic is in the controllers, and there is little object-oriented design. I'm having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when it is all going to be done. I cannot imagine this code deployed on a server. Plus I still know nothing about code efficiency and the web application's performance. On one hand, the company is waiting for the product and cannot wait anymore. On the other hand I can't see myself going any further with the current code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • Software developer needs validation for VA Chap 31 to purchase MacBook Pro vs. PC [closed]

    - by David
    I am currently attending college on a path of software development, working towards my BS thanks to VA Chap 31. My old original MacBook Pro is near dead and no longer upgradable on the software or hardware side. The VA has offered to purchase a PC laptop for me (because my syllabi say a computer is required), but I do not want to go backwards. I have a lot invested in OS X software and Mac peripherals, not to mention I prefer to program in an Apple environment. PC vs. Mac costs are so drastically different that I must validate my request for a new MacBook Pro. In my request to the VA I stated the above and some other points, but they requested more validation. Can anyone recommend issues, reasons, etc. to help me validate this purchase by the VA for school? Thanks in advance for your help, David

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices at which I used to roll my eyes in despair. I saw the software industry through rose-tinted glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, usually it is more important to produce quickly and cheaply, just to see if the project can succeed at all. Before that, it does not really matter that much whether the design is cheap to maintain; after that, it might be too late to improve things. They need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (And not the software architecture alone.)

    Read the article

  • Write own messaging system vs. utilize existing ones

    - by A.Rashad
    We are trying to get our own startup off the ground, with a middleware application to glue small applications to enterprise legacy systems. For such middleware to function properly, we will need some sort of messaging system to make the different components talk to each other in a reliable way. The alternatives are: use an existing messaging system, such as 0MQ, JBoss, WebSphere MQ, etc., or build our own messaging system the way we see the problem. I am more biased towards the latter option, for the following reasons: to have more control over our final product; to avoid any licensing problems later on; to learn about messaging while writing the code; and to invent something new that might cost us lots of $$$ if we reused an existing system. What would you do in my shoes?

    Read the article

  • MCTS certification (Windows Communication Foundation Development)

    - by Pinchy
    Hi guys! I seriously need some advice on getting MCTS certified (Windows Communication Foundation Development). I just cannot go to MS certification courses, as they are very expensive here and far from my hometown. I want to educate myself and I don't know where to start. My problem is finding good study materials and sample exam questions. I haven't taken any Microsoft exams before, so I have no idea what they would ask me on the exam (70-513). Can anyone give me some ideas on how to start from scratch? Any answer will be much appreciated. Thanks

    Read the article

  • Will dolphins die if I use REST "as CRUD"?

    - by l0l0l0l0l
    Recently I moved to Laravel and I was surprised at how good setting the controllers up as RESTful is; it made my routes and my code cleaner. I'm kinda new to web development and never used REST before, since all my clients' projects are basically CRUD operations. Is there any cool buzzword for this "approach", or am I just stupid for doing it? I don't plan to follow any REST patterns, just to make my life easier and my code cleaner. Basically just GET/POST; the other verbs are not native anyway (they're emulated via a hidden form value).

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and I was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): there are two primary models. In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That is still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or to R&D to:

    - develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) that access the database?
    - establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - design a new communication protocol to allow faster replication of data between two data centers of the company?
    - conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - enhance an existing application by adding gestures on touch screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - create a domain-specific language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think: for example, all methods that accept nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and I've come up with the following solutions so far (a sketch of the first option follows below):

    1. Have an internal and a public class for each helper. The internal class contains the actual code, while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.

    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity.

    3. Have an internal class which is implemented by a public class that overrides any members requiring argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper-class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.

    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first consequence can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
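
    A minimal sketch of solution 1, with illustrative names only (the real library's helpers are obviously different): the internal class does the work without checks, and the public facade validates and forwards.

        using System;

        // Internal worker: no argument checking, used freely inside the assembly.
        internal static class StringHelperCore
        {
            internal static string Truncate(string value, int maxLength)
            {
                return value.Length <= maxLength ? value : value.Substring(0, maxLength);
            }
        }

        // Public access point: validates arguments, then forwards to the internal worker.
        public static class StringHelper
        {
            public static string Truncate(string value, int maxLength)
            {
                if (value == null) throw new ArgumentNullException("value");
                if (maxLength < 0) throw new ArgumentOutOfRangeException("maxLength");
                return StringHelperCore.Truncate(value, maxLength);
            }
        }

    The "Core" suffix here is exactly the prefix/ambiguity cost the question describes; moving the internal class into another namespace, as noted at the end of the question, is one way around it.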

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a GetBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped over how best to package the library. Should the methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of the digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or anything.
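
    The two packaging options from the question can be put side by side in a small sketch (bit indexing within a single byte only; method bodies and names are illustrative, not from the library):

        public enum BitOrder { LsbFirst, MsbFirst }

        // Option A: one method, the caller picks the ordering via an enum parameter.
        public static class Bits
        {
            public static bool GetBit(byte value, int index, BitOrder order)
            {
                int shift = order == BitOrder.LsbFirst ? index : 7 - index;
                return ((value >> shift) & 1) != 0;
            }
        }

        // Option B: identically named methods in differently named static classes.
        public static class Lsb
        {
            public static bool GetBit(byte value, int index) { return ((value >> index) & 1) != 0; }
        }

        public static class Msb
        {
            public static bool GetBit(byte value, int index) { return ((value >> (7 - index)) & 1) != 0; }
        }

    Option A keeps a single discoverable entry point at the cost of a parameter that is almost always a compile-time constant at the call site; option B reads well where it is used (Msb.GetBit(x, 0)) but doubles the public surface to maintain.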

    Read the article
