Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 348/605

  • Can I minify Javascript that requires copyright notice?

    - by Nathan Long
    I guess this is actually a legal question, but it relates to software. I'm about to include a JS plugin in a project. The comments include: "Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution." My questions: 1) Is using this on my web site "redistribution"? 2) If I minify the file to conserve bandwidth, I assume it will strip all comments. If the answer to #1 is yes, doesn't that imply I'm legally not allowed to minify it? (That would stink, since I was planning to auto-minify all JS as part of the deploy process.)

    Read the article

  • Is programming as a profession in a race to the bottom?

    - by q303
    It seems to me that the programming industry is in a race to the bottom. Consider these practices:
      - Not taking time to implement best practices
      - Using other people's code as much as possible (custom code as a liability)
      - Using increasingly higher-level languages to improve productivity
      - GUI-based development "tools" that greatly simplify "programming" and do not require people to understand the plumbing behind the code
    These things imply to me that we are in a race to becoming like any other office worker. It is in the employer's interest for things not to require skill (easier to replace) and for things to be prebuilt (less project time). My point here is: a) Is there a misalignment between skill and the economic interests of the employer? b) If there is, how do you mitigate it to enforce professional standards?

    Read the article

  • What significant lost advances on side tracks should be revived in the main stream of software?

    - by C.W.Holeman II
    In reading Alan Kay's question on Significant new inventions, I kept coming up with answers that were not new ideas but old ones that have been passed by in the mainstream. So, what significant lost advances that happened on a side track should be revived in the mainstream of software? These would be ideas that worked well in their context, but then a different context appeared and became dominant, even though it lacked the idea. It may be that in the new dominant context one simply cannot do a given thing, or can do it only with great effort.

    Read the article

  • Published Windows Phone 7 apps: good for the Resume/CV?

    - by pearcewg
    I'm a long-time Microsoft developer who has recently started publishing Windows Phone 7 apps to beef up my current C#/.NET skills, get more direct exposure to WPF/Silverlight, and, of course, because it is new and cool. So far I've published over 10 apps successfully. Is this a good thing to put on my resume? Does it appear to show a grasp of the latest Microsoft technologies? Any downside seen by potential employers? Would you put this on your resume if looking for a full-time professional Software Engineering position?

    Read the article

  • What is the single most effective thing you did to improve your programming skills?

    - by Oded
    Looking back at my career and life as a programmer, there were plenty of different ways I improved my programming skills - reading code, writing code, reading books, listening to podcasts, watching screencasts and more. My question is: What is the most effective thing you have done that improved your programming skills? What would you recommend to others that want to improve? I do expect varied answers here and no single "one size fits all" answer - I would like to know what worked for different people. Edit: Wow - what great answers! Keep 'em coming people!!!

    Read the article

  • Open Source Project all dressed up but nowhere to go...

    - by Calanus
    Over the past 2 years a colleague and I have built an online statistical analysis application using a mixture of Silverlight, WCF and R. I (a C# programmer) wrote all the Silverlight and WCF stuff, whilst my colleague (a statistician) came up with the stats algorithms and wrote the R code. Now we think that this app is fairly unique - a rich-GUI online statistics application that is much more intuitive than all the other online stat apps I've seen. But despite this we don't really know where to go with the project, mainly for the following reasons: 1) It's fairly complicated stuff - without the mix of programming and stats skills it would be difficult for anyone to "get into" the project and contribute. 2) We are stalled by the lack of a proper place to host the site. Currently it sits on the family Windows 7 media centre, not exactly the best place to host it as it could interfere with the missus trying to watch Corrie/Friends/Oprah etc. So, anyone got any ideas on how to move forward with this? I guess that my strength is programming, not marketing, so despite working hard at this for the past couple of years I feel that I've reached a dead end! Also, does anyone know of any free Windows hosting for open source projects? If I could find a proper place to put the app I might feel re-energised about the whole thing. The source code is on CodePlex at http://silverstats.codeplex.com, whilst the app is currently hosted at http://silverstats.co.uk

    Read the article

  • What open source POSIX compliance test suites are available?

    - by Richard Pennington
    I'm working on a small open source project, ELLCC, that uses clang/LLVM as a cross compiler for various target processors. For the runtime environment, I'm using the NetBSD libraries and porting them to target Linux and standalone systems. I want to run a POSIX compliance test suite on the code. I've found the Open POSIX Test Suite, which looks like a good start, but it hasn't been updated since 2005. I've done some preliminary testing (with gcc and ecc under Linux), and it looks like it needs a few updates for modern compilers. My questions are: Does the Open POSIX Test Suite live on somewhere in a more up to date form? Are there other open source alternatives?

    Read the article

  • Why did the web win the space of remote applications and X not?

    - by Martin Josefsson
    The X Window System is 25 years old; it had its birthday yesterday (on the 15th). As you are probably aware, one of its most important features is the separation of the server side and the client side in a way that neither Microsoft's, Apple's nor Wayland's windowing systems have. Back in the day (sorry for the ambiguous phrasing), many believed X would dominate over other ways to make windows because of this separation of server and client, allowing the application to be run on a server somewhere else while the user clicks and types on her own computer at home. This use obviously still exists, but it is marginalized at best. When we write and use programs that run on a server we almost always use the web, with its HTML/CSS/JS. Why did the web win, and not X? The technologies used for the web (said HTML/CSS/JS) are a mess. Combined with all the back-end frameworks (Rails, Django and all), it really is a jungle to navigate through. Still the web thrives with creativity and progress, while remote X apps do not.

    Read the article

  • How to decide how backward-compatible my new Mac OS X application should be?

    - by haimg
    I'm currently contemplating writing an OS X version of my Windows software. My Windows application still supports Windows XP, and I know that if I drop support for it now, our customers will cry bloody murder. I'm new to OS X development, and as I learn the technology, APIs, etc., I realized that if I'm going to provide a comparable level of backward compatibility (e.g. down to OS X 10.5), I would not be able to use many things that look very useful and relevant in my case (ARC, XPC communications, many others). This is quite different from Windows, in my opinion, where very little changed between Windows XP and Windows 7 from a desktop application developer's standpoint. So, on one hand, it seems like a complete waste to stick to a 2007- or 2009-level API in 2012. On the other hand, according to the NetMarketShare report and the Stat Owl report, Mac OS X 10.5 and 10.6 market share is still 11% and 35%-40% respectively. However, I'm not sure these older-OS users are my target audience (buyers of software utilities) if they didn't bother to upgrade their OS... My question: Are there any other reasons I should take into account when deciding whether to target 10.5, 10.6 or 10.7 for a new application?

    Read the article

  • Query something and return the reason if nothing has been found

    - by Daniel Hilgarth
    Assume I have a Query - as in CQS - that is supposed to return a single value. Let's assume that the case where no value is found is not exceptional, so no exception will be thrown in this case. Instead, null is returned. However, if no value has been found, I need to act according to the reason why no value has been found. Assuming that the Query knows the reason, how would I communicate it to the caller of the Query? A simple solution would be to return not the value directly but a container object that contains the value and the reason:

        public class QueryResult<TValue, TReason>
        {
            public TValue Value { get; private set; }
            public TReason ReasonForNoValue { get; private set; }
        }

    But that feels clumsy, because if a value is found, ReasonForNoValue makes no sense, and if no value has been found, Value makes no sense. What other options do I have to communicate the reason? What do you think of one event per reason? For reference: this is going to be implemented in C#.

    Read the article

  • Make a flowchart to demonstrate closure behavior

    - by thomas
    I saw the test question below the other day, in which the authors used a flow chart to represent the logic of loops. And I got to thinking it would be interesting to do this with some more complex logic. For example, the closure in this IIFE sort of boggles me:

        while (i <= qty_of_gets) {
            // needs an IIFE
            (function(i) {
                promise = promise.then(function() {
                    return $.get("queries/html/" + product_id + i + ".php");
                });
            }(i++));
        }

    I wonder if seeing a flowchart representation of what happens in it could be more elucidating. Could such a thing be done? Would it be helpful? Or just messy? I haven't the foggiest clue where to start, but thought maybe someone would like to take a stab. Probably all the ajax could go and it could just be a simple return within the IIFE.

    Read the article

  • Service Layer - how broad should it be, and should it also be on the local application?

    - by BornToCode
    Background: I need to build a desktop application with some operations (CRUD and more) (= WinForms), and I need to make another application which will re-use some of the functions of the main application (= WebForms). I understood that using a service layer is the best approach here. If I understood correctly, the service should be calling the functions on the BL layer (correct me if I'm wrong). The dilemma: 1) In my main WinForms UI, should I call the functions from the BL or from the service? (Please explain why.) 2) Should I create a service for every single function in the BL, even if I need some of the functions in only one UI? For example, should I create services for all the CRUD operations, even though I need to re-use only the update operation in the WebForms app? YOUR HELP IS MUCH APPRECIATED

    Read the article

  • How to efficiently protect part of an application with a license

    - by Patrick
    I am working on an application that has many functional parts. When a customer buys the application, he buys the standard functionality, but he can also buy some additional elements of the application for an additional price. All of the elements are part of the same application executable; a license key is used to indicate which of the elements should be accessible in the application. Some of the elements can easily be disabled if the user didn't pay for them - typically the modules that you access via the application's menu. However, other elements give more problems:
      - What if a part of the data model is related to an optional part? Do I build up these data structures in my application so the rest of my application can just assume they're always there? Or do I not build them, and add checks throughout the rest of my application?
      - What if some optional part is still useful for performing some internal tasks, but I don't want to expose it to the user externally?
      - What if marketing wants to make a formerly standard part an optional part? Everywhere in my application I assume that that part is present, but if it becomes optional, I have to add checks on it throughout the application.
    I have some ideas on how to solve some of the problems (e.g. interfaces with dual implementations: one working implementation, and one that is activated if the optional part is not activated - see the sketch below). Do you know of any patterns that can be used to solve this kind of problem? Or do you have any suggestions on how to handle this licensing problem? Thanks.
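
    A minimal C++ sketch of the dual-implementation idea mentioned above. The ReportingFeature interface, the licensed() flag and the factory are hypothetical names chosen for illustration, not anything from the original poster's codebase; the point is only that the rest of the application talks to the interface and never branches on the license itself:

        #include <iostream>
        #include <memory>
        #include <string>

        // The rest of the application only ever sees this interface.
        class ReportingFeature {
        public:
            virtual ~ReportingFeature() = default;
            virtual void exportReport(const std::string& name) = 0;
        };

        // Working implementation, used when the license covers the module.
        class RealReporting : public ReportingFeature {
        public:
            void exportReport(const std::string& name) override {
                std::cout << "Exporting report: " << name << '\n';
            }
        };

        // No-op implementation, used when it doesn't; callers need no checks.
        class DisabledReporting : public ReportingFeature {
        public:
            void exportReport(const std::string&) override {
                std::cout << "Reporting is not included in this license.\n";
            }
        };

        std::unique_ptr<ReportingFeature> makeReporting(bool licensed) {
            if (licensed) return std::make_unique<RealReporting>();
            return std::make_unique<DisabledReporting>();
        }

        int main() {
            auto reporting = makeReporting(/*licensed=*/false);  // decided from the license key
            reporting->exportReport("Q3 sales");                 // safe either way
            return 0;
        }

    This is essentially the Null Object pattern applied per optional module; it does not by itself answer the data-model questions above, but it keeps license checks out of the calling code.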

    Read the article

  • Web Services and code lists

    - by 0x0me
    Our team is heavily discussing how to handle code lists in a web service definition. The design goal is to describe a provider API for querying a system using various values. Some of them are catalogs, i.e. code lists. A catalog or code list is a set of key-value pairs. There are different systems (at least 3) maintaining possibly different code lists. Each system should implement the provider API, but each system might have a different code list for the same business entity - think of colors: one system knows [(1,'red'),(2,'green')] and another one knows [(1,'lightgreen'),(2,'darkgreen'),(3,'red')], etc. Access to the different provider API implementations will be encapsulated by a query service, but there is already one candidate which might use at least one provider API directly. The current options for designing the API are:
      - Use an abstract code list in the interface definition: the web service interface defines a well-known set of code lists which are expected to be used for querying and returning data. Each provider API implementation has to map the request and response values from those abstract code lists to the system-specific ones.
      - Let the query component handle the code lists: the encapsulating query service knows the code list set of each provider API implementation and takes care of mapping the input and output to the system-specific code lists of the queried system.
      - Do not use code lists in the query definition at all: just query by a plain string and let the provider API implementation figure out the right value. This might lead to a loss of information and possibly many false positives, because the input string may not map canonically to a single code list value (e.g. green - lightgreen, or green - darkgreen, or both).
    What are your experiences with, or solutions to, such a problem? Could you give any recommendations?

    Read the article

  • What is use of universal character names in identifiers in C++11

    - by Jan Hudec
    The new C++ standard specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with Unicode code points NNNN/NNNNNNNN. This is useful with string literals, especially since explicit UTF-8, UTF-16 and UCS-4 string literals are also defined. However, universal character names are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker, and it's not like there was any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character names in it?
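
    For illustration, a minimal sketch of what such an identifier looks like. The variable name caf\u00E9_count is invented for the example, and compiler support for universal character names in identifiers varies even though the syntax is part of C++11:

        #include <iostream>

        // \u00E9 is U+00E9 ('e' with acute accent). Under the C++11 rules the escaped
        // spelling and the literal spelling denote the same identifier, so a tool that
        // can only emit ASCII can still refer to a non-ASCII name declared elsewhere.
        int caf\u00E9_count = 3;

        int main() {
            std::cout << caf\u00E9_count << '\n';  // prints 3
            return 0;
        }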

    Read the article

  • What are some good courses to take my programming to the next level?

    - by absentx
    I am in search of either some in-person or online training that could take my coding to the next level. I am looking to attack two specific areas:
      - JavaScript: While I have been getting by with JavaScript for three or four years, I still feel like it takes a back seat to my other programming. I use jQuery a lot but would prefer to be proficient in pure JS as well.
      - PHP: I feel pretty proficient at PHP, but I know there is always room to improve. Here I am interested in something that can teach me the more advanced aspects of the language, improve my code writing, and perhaps cover object-oriented PHP in depth as well.
    I have looked into Netcom's training courses before, but I can't tell whether their advanced webmaster professional course would be a good fit or not. It seems kind of like a force-fed course, but I am interested in it because I am looking for something in the one-to-two-week range that is targeted at what I am after. I have zero experience with any type of online course for programming. It appears lots are available, but I am not sure about the quality.

    Read the article

  • Removing occurrences of characters in a string

    - by DmainEvent
    I am reading the book Programming Interviews Exposed (John Wiley and Sons), and in chapter 6 they discuss removing all instances of characters in a src string using a removal string, i.e. removeChars(string str, string remove). In their writeup they say the steps to accomplish this are: have a boolean lookup array with all values initially set to false, then loop through each character in remove, setting the corresponding value in the lookup array to true (note: this could also be a hash if the possible character set were huge, like UTF-16 or something like that, or if str and remove are both relatively small... < 100 characters I suppose). You then iterate through str with a source and destination index, copying each character only if its corresponding value in the lookup array is false. Which makes sense. I don't understand the code that they use, however. They have:

        for(src = 0; src < len; ++src) {
            flags[r[src]] == true;
        }

    which is turning the flag value at the remove string indexed at src to true... so if you start out with PLEASE HELP as your str and LEA as your remove, you will be setting your flag table at 0,1,2 to t|t|t, but after that you will get an out-of-bounds exception because r doesn't have anything beyond index 2 in it. Even using their example you get an out-of-bounds exception. Is their code example unworkable? The entire function:

        string removeChars(string str, string remove) {
            char[] s = str.toCharArray();
            char[] r = remove.toCharArray();
            bool[] flags = new bool[128]; // assumes ASCII!
            int len = s.Length;
            int src, dst;

            // Set flags for characters to be removed
            for(src = 0; src < len; ++src) {
                flags[r[src]] = true;
            }

            src = 0;
            dst = 0;

            // Now loop through all the characters,
            // copying only if they aren't flagged
            while(src < len) {
                if(!flags[(int)s[src]]) {
                    s[dst++] = s[src];
                }
                ++src;
            }

            return new string(s, 0, dst);
        }

    As you can see, r comes from the remove string. So in my example the remove string has a size of only 3 while my str string has a size of 11. len is equal to the length of the str string, so it would be 11. How can the loop index into the r string when it is only size 3? I haven't compiled the code so I can't step through it, but just looking at it I know it won't work. I am thinking they wanted to loop through the r string... in other words they got the length of the wrong string here.
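
    For comparison, a sketch (not the book's code, and written in C++ rather than the book's syntax) of what the flag-setting loop was presumably meant to do - iterate over remove, bounded by remove's own length:

        #include <string>

        // Flag every character that appears in 'remove', then copy the
        // characters of 'str' that are not flagged.
        std::string removeChars(const std::string& str, const std::string& remove) {
            bool flags[256] = {false};             // one slot per possible char value
            for (unsigned char c : remove)         // loop over 'remove', not 'str'
                flags[c] = true;

            std::string result;
            result.reserve(str.size());
            for (unsigned char c : str)
                if (!flags[c])
                    result.push_back(static_cast<char>(c));
            return result;
        }

        // Example: removeChars("PLEASE HELP", "LEA") yields "PS HP"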

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:
      - user accounts
      - groups
      - SSL certificates
      - access rights
      - databases
      - software license provisionings
      - storage
      - list-serve accounts
    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as:
      - What objects does user X currently own/manage?
      - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
      - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
      - How many active (expired, about-to-expire) objects of type P are there?
      - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
      - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
      - Delete or disable a set of objects based on expiration (or some other criteria).
    These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:
      - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
      - creation of objects
      - association of an object with a specific user and/or group
      - association with an expiration date
      - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
      - integration with an external object "provider" via a plug-in
    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?

    Read the article

  • Am I an idealist?

    - by ereOn
    This is not only a question, it is also a call for help. Since I started my career as a programmer, I have always tried to learn from my mistakes. I worked hard to learn best practices, and while I don't consider myself a C++ expert, I still believe I'm not a beginner either. I was recently hired into a company for C++ development. There I was told that my way of working was "against the rules" and that I would have to change my mind. Here are the points where I disagree with my hierarchy (their words):
      - "You should not use separate header files for your different classes. One big header file is both easier to read and faster to compile."
      - "Trying to use different headers is counter-productive: use the same super-set of headers everywhere, and enforce the use of #pragma hdrstop to hasten compilation."
      - "You may not use Boost or any other library that uses nested directories to organize its files. Our build machine doesn't work with nested directories. Moreover, you don't need Boost to create great software."
    One might think I'm somehow exaggerating things, but the sad truth is that I'm not. Those are their actual words. I believe that having separate files enhances maintainability and code correctness, and can speed up compilation through the use of proper includes. Have you been in a similar situation? What should I do? I feel like it's actually impossible for me to work that way, and day after day my frustration grows.

    Read the article

  • Should I go back to the same company ?

    - by vinoth
    Hi, I quit the company I was working for (let's call it XYZ) and joined another company. When I quit I had very little software development experience, and I thought the rest of the world was a better place, so I complained about the work quality and all that when I left. One year has taught me a lot of things, and I now feel XYZ is a much better place (in terms of freedom and decision-making at work). Is it OK to go back? I am thinking a lot about whether to go or not, because I quit complaining about the nature of the work and now I would be going back to the same thing. I am also not very sure about going to other places, because the work and quality there are not predictable (I might become disappointed again). Have any of you been in the same situation before?

    Read the article

  • returning a heap block by reference in c++

    - by basicR
    I was trying to brush up my C++ skills. I have 2 functions:
      - concat_HeapVal() returns the output heap variable by value
      - concat_HeapRef() returns the output heap variable by reference
    When main() runs it will be on the stack, and s1 and s2 will be on the stack; I pass the values by reference only, and in each of the functions below I create a variable on the heap and concat them. When concat_HeapVal() is called it returns me the correct output. When concat_HeapRef() is called it returns me some memory address (wrong output). Why? I use the new operator in both functions, hence the allocation is on the heap. So when I return by reference, the heap will still be VALID even when my main() stack memory goes out of scope, and it's left to the OS to clean up the memory. Right?

        string& concat_HeapRef(const string& s1, const string& s2) {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return *temp;
        }

        string* concat_HeapVal(const string& s1, const string& s2) {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return temp;
        }

        int main() {
            string s1, s2;
            string heapOPRef;
            string *heapOPVal;

            cout << "String Concat Experimentations\n";
            cout << "Enter s-1 : ";
            cin >> s1;
            cout << "Enter s-2 : ";
            cin >> s2;

            heapOPRef = concat_HeapRef(s1, s2);
            heapOPVal = concat_HeapVal(s1, s2);

            cout << heapOPRef << " " << heapOPVal << " " << endl;
            return -9;
        }
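
    As a side note on what the printed output means - a minimal sketch, not the poster's code: streaming a std::string* prints the pointer's value (an address), while streaming the dereferenced pointer or a plain std::string prints the characters, and returning by value avoids the raw pointer and the manual delete entirely:

        #include <iostream>
        #include <string>

        // Returning by value: no 'new', no pointer, no ownership to track.
        std::string concat_by_value(const std::string& a, const std::string& b) {
            return a + b;
        }

        int main() {
            std::string* p = new std::string("foobar");
            std::cout << p << '\n';    // prints an address: operator<< receives a pointer
            std::cout << *p << '\n';   // prints "foobar": operator<< receives a string
            delete p;                  // heap memory must be released explicitly

            std::cout << concat_by_value("foo", "bar") << '\n';  // prints "foobar"
            return 0;
        }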

    Read the article

  • What's typical in terms of royalties? [closed]

    - by Matt Phillips
    I'm a developer negotiating compensation for a commercialized version of some data analysis software I wrote (see my profile if you like). This is a completely new experience for me. I want per-unit royalties, but I don't have the slightest idea what the standard amount is. I also want to be compensated for my time, so that's an upfront R&D cost for the company I'm negotiating with, but distribution cost to them is presumably virtually nothing once it's out there. But then there's support costs. What sorts of deals have you folks negotiated?

    Read the article

  • Lesser-known Github features that I'm missing out on with Bitbucket? [closed]

    - by Ghopper21
    I've been using Bitbucket for my small-team development projects, with the assumption that it is more-or-less a Github clone with pricing that is better for my situation and support for Mercurial (which I don't need). However, I'm seeing there are material-if-not-overwhelming differences, e.g. Github's appealing and useful branches page versus Bitbucket's overly simple branch drop-down list. This makes me wonder: what else am I missing out on? What are the lesser known Github features that folks like me using Bitbucket to save money are missing out on? EDIT: following closure, I've asked for advice on making this question productive over at meta. See here.

    Read the article

  • A list of the most important areas to examine when moving a project from x86 to x64?

    - by aking1012
    I know to check for/use asserts and carefully examine any assembly components, but I didn't know if anyone out there has a fairly comprehensive or industry standard check-list of specific things at which to look? I am looking more at C and C++. note: There are some really helpful answers, I'm just leaving the question open for a couple days in case some folks only check questions that don't have accepted answers.
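
    Two hazards that tend to appear on such checklists, shown as a small illustrative C++ sketch (not tied to any particular codebase): pointers stored in 32-bit integers, and assumptions about sizeof(long):

        #include <cstdint>
        #include <cstdio>

        int main() {
            int value = 7;

            // Old 32-bit habit: storing a pointer in an int. On 64-bit targets the
            // pointer no longer fits, so the cast loses precision (error or warning).
            // int addr = (int)&value;
            std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(&value);  // portable

            // Data-model differences: sizeof(long) is 8 on LP64 (Linux/macOS x64) but
            // 4 on LLP64 (Windows x64), so serialized layouts and format strings that
            // assume 4-byte longs deserve a second look.
            std::printf("sizeof(long) = %zu, sizeof(void*) = %zu\n",
                        sizeof(long), sizeof(void*));
            return addr == 0 ? 1 : 0;
        }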

    Read the article

  • Just getting started in Spring and my preference is XML config over annotations. Correct or not?

    - by John Munsch
    After having read through some of the Spring docs, my inclination is towards using an XML config file rather than annotations on the classes themselves. My reasoning is that by doing so you avoid tying your POJOs to a particular framework. Based on your experience with Spring, are there any advantages that XML configuration has over an annotation-based configuration, and if not, what are the disadvantages?

    Read the article
