Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 215/663

  • How many questions is it appropriate to ask as an intern?

    - by Casey Patton
    So, I just started an internship, and I'm worried that I'm asking too many questions. I've been assigned a mentor who has been assigning me projects and helping me learn the company's technologies and methodologies. However, there's so much new material for me to learn while doing this project that I have a lot of questions. I generally ask questions over instant message or email (those are the primary modes of communication at my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions is it appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities.

    Read the article

  • C# String.Format extension method

    - by Paul Roe
    With the addition of extension methods to C# we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one:

    ```csharp
    public static class StringExt
    {
        /// <summary>
        /// Shortcut for string.Format.
        /// </summary>
        /// <param name="str"></param>
        /// <param name="args"></param>
        /// <returns></returns>
        public static string Format(this string str, params object[] args)
        {
            if (str == null)
                return null;
            return string.Format(str, args);
        }
    }
    ```

    Does this extension method break any programming best practices that you can name? Would you use it anyway? If not, why? If I renamed the function to "F" but left the XML comments, would that be epic fail or just a wonderful savings of keystrokes?
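
    For context, here is how the extension reads at the call site; a minimal sketch with invented variable names, not part of the original question:

    ```csharp
    string name = "Ada";
    int count = 3;

    // Conventional call:
    string a = string.Format("Hello, {0}! You have {1} messages.", name, count);

    // With the extension method in scope, the format string is the receiver:
    string b = "Hello, {0}! You have {1} messages.".Format(name, count);

    // The proposed one-letter rename would read:
    // string c = "Hello, {0}!".F(name);
    ```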

    Read the article

  • Opinions on logging in multiprocess applications

    - by chkorn
    We have written an application that spawns at least 9 parallel processes. All processes generate a lot of logging information. Currently we are using Python's QueueHandler to consolidate all logs into one file. Unfortunately this sometimes results in very messy files which makes them hard to read (e.g., tracking what exactly is going on in one thread). Do you think it is a viable option to separate all messages into dedicated files, or is this going to make things even messier due to the high number of files? What are your general experiences when writing log files for multiprocess/multithreaded applications?
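
    For reference, a minimal sketch of the QueueHandler arrangement described (the worker function and file name are invented for illustration); tagging each record with %(processName)s is one way to keep a single consolidated file traceable per process:

    ```python
    import logging
    import logging.handlers
    import multiprocessing

    def worker(queue):
        # Each worker process sends its records to the shared queue.
        handler = logging.handlers.QueueHandler(queue)
        logger = logging.getLogger()
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        logger.info("doing some work")

    if __name__ == "__main__":
        queue = multiprocessing.Queue()
        # A single listener drains the queue into one file; tagging each
        # line with the process name makes the file greppable per process.
        file_handler = logging.FileHandler("consolidated.log")
        file_handler.setFormatter(logging.Formatter(
            "%(asctime)s %(processName)s %(levelname)s %(message)s"))
        listener = logging.handlers.QueueListener(queue, file_handler)
        listener.start()

        procs = [multiprocessing.Process(target=worker, args=(queue,),
                                         name=f"worker-{i}")
                 for i in range(9)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        listener.stop()
    ```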

    Read the article

  • Why is using C++ libraries so complicated?

    - by Pius
    First of all, I want to note I love C++ and I'm one of those people who thinks it is easier to code in C++ than Java. Except for one tiny thing: libraries. In Java you can simply add some jar to the build path and you're done. In C++ you usually have to set multiple paths for the header files and the library itself, and in some cases you even have to use special build flags. I have mainly used Visual Studio, Code::Blocks, and no IDE at all; none of the three differs much when it comes to using external libraries. I wonder why no simpler alternative has been made for this, like a special .zip file that has everything you need in one place, so the IDE can do all the work of setting up the build flags for you. Is there any technical barrier to this?

    Read the article

  • Why not use unmanaged/unsafe code in C#?

    - by user613326
    There is an option in C# to execute code unchecked (unsafe). It's generally not advised to do so, as managed code is much safer and it overcomes a lot of problems. However, I am wondering: if you're sure your code won't cause errors, and you know how to handle memory, then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera which required some extremely fast bitmap manipulation. I made some fast graphical algorithms myself, and they work excellently on the bitmaps using unmanaged code. Now I wonder in general: if you're sure you don't have memory leaks, or risks of crashes, why not use unmanaged code more often? PS: About my background, I kind of rolled into this programming world and I work alone (and have done so for a few years), so I hope this software design question isn't that strange. I don't really have other people, like a teacher, to ask such things.
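
    To illustrate the kind of code in question: a minimal sketch (not the poster's actual algorithm) that inverts a 32bpp bitmap through a raw pointer via LockBits, avoiding the much slower GetPixel/SetPixel. It must be compiled with the /unsafe flag:

    ```csharp
    using System.Drawing;
    using System.Drawing.Imaging;

    static class FastBitmap
    {
        // Inverts a 32bpp ARGB bitmap in place using a raw pointer.
        public static unsafe void Invert(Bitmap bmp)
        {
            Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                           PixelFormat.Format32bppArgb);
            try
            {
                byte* scan0 = (byte*)data.Scan0;
                for (int y = 0; y < data.Height; y++)
                {
                    byte* row = scan0 + y * data.Stride;
                    for (int x = 0; x < data.Width * 4; x += 4)
                    {
                        row[x]     = (byte)(255 - row[x]);     // B
                        row[x + 1] = (byte)(255 - row[x + 1]); // G
                        row[x + 2] = (byte)(255 - row[x + 2]); // R
                        // row[x + 3] is alpha; left unchanged
                    }
                }
            }
            finally
            {
                bmp.UnlockBits(data);
            }
        }
    }
    ```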

    Read the article

  • Develop security first or as a later step?

    - by MattyD
    The question Do you actively think about security when coding? asks about security mindset while programming. Obviously, a developer does need to think about security while coding: SQL injection, password security, etc. However, as far as real, fully-formed security goes, especially the tricky problems that may not be immediately obvious, should I be concerned with tackling these throughout the development process, or should it be a step of its own later in development? I was listening to a podcast on Security Now and they mentioned how many of the security problems found in Flash exist because when Flash was first developed it wasn't built with security in mind (because it didn't need to be), and therefore Flash has major security flaws at its core. I know that no one would want to actively disagree with "think security first" as a best practice, but many companies do not follow best practices. So, what is the correct approach to balancing the need to get the product done against developing it securely?

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions: My application uses the Festival app 'text2wave'. Will it work on the cloud? I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work? If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.
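
    For reference, the kind of call in question: a minimal sketch (file names invented) of shelling out to Festival's text2wave with subprocess.call(), which is exactly the sort of native-process spawn a restricted sandbox may disallow:

    ```python
    import subprocess

    def synthesize(text, wav_path="output.wav"):
        # Write the text to a file and have Festival's text2wave
        # render it to a WAV file.
        with open("input.txt", "w") as f:
            f.write(text)
        # Spawns a native process -- the crux of question (2).
        return subprocess.call(["text2wave", "input.txt", "-o", wav_path])
    ```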

    Read the article

  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the Microsoft-supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things, it lags Microsoft. And Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (and it might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem like they could port Silverlight and/or buy Mono and quickly be there. No need for native Visual Studio; cross-compiling and remote debugging is fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand new projects are choosing C++. Nobody wants to tell management "sorry" 18 to 24 months from now should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.

    Read the article

  • Do I deserve a promotion/higher salary?

    - by anonCoder
    I'm a software developer and have been working at my current employer for almost 2 years. I joined straight out of university, so this is my first real full-time job. I was employed as a junior developer with no real responsibilities. In the last year, I have been given more responsibility: I am the official contact person at my company for a number of clients; I have represented the company by myself in off-site meetings with clients; my software development role has grown; and I now have specialised knowledge of certain tools/products/technologies that no one else here does. My problem is that I am still officially a junior developer, and still earning less than I feel I am worth. Am I being taken advantage of? How long should I reasonably expect to stay a junior developer before I expect a promotion of some kind? What would you do in my situation?

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays; in the case of many, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario) I'm attempting to take all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL. The intended structure:

    array of years(
        year => array of types(
            type => array of information(
                total => value,
                table => array of data(
                    index => db array
                )
            )
        )
    )

    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
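
    For illustration, a minimal PHP sketch of the five levels as described (keys and values are invented):

    ```php
    <?php
    $report = [
        2012 => [                                          // level 1: year
            'revenue' => [                                 // level 2: type
                'total' => 1500.00,                        // level 3: information
                'table' => [                               // level 4: array of data
                    0 => ['id' => 7, 'amount' => 500.00],  // level 5: db row
                    1 => ['id' => 9, 'amount' => 1000.00],
                ],
            ],
        ],
    ];

    // Populating deepest-first means building each db-row array as it
    // comes back from the query, then wrapping upward into the
    // information, type, and year levels.
    ```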

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, where I'm more interested in what the databases are. I think this category really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):

    - Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
    - Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
    - Assume that each database has the best support possible for free.
    - Assume you have 100% buy-in from management.
    - Assume you have an infinite amount of money to throw at the problem.

    Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • How important is my job title?

    - by Relayer
    Hi all, I work on two internal, mission-critical applications. Let's keep it simple and call them "Foo" and "Bar". Nobody outside of the company has ever heard of them; like I said, they're internal apps. Until now my job title has just been "Software Developer". I've recently discovered that my job title is being changed to "Foo and Bar Developer". I'm a little worried that, should I leave the company, I'll have trouble finding a new job because of my weird job title. My question is this: how important is my job title compared to everything else on my CV (or resume, if you're American)? Am I likely to be rejected by box-ticking HR people who don't realise that "Foo and Bar Developer" is the same as "Software Developer"? Thanks in advance.

    Read the article

  • History of open source software

    - by Victor Sorokin
    I've always been interested, purely for self-amusement, in the history of the open source software used today: who were the people who started it, what were their reasons for starting, what were the design decisions at the start, and how the software evolved over time. Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources on this site. I hope some visitors of this site have a similar interest and can share a link or two they found particularly amusing/interesting. To make this entry more question-like, here's the straight question: what are the most interesting/amusing links about the history of open source software?

    Read the article

  • Is it typical for a provider of a web services to also provide client libraries?

    - by HDave
    My company is building a corporate Java web app and we are leaning towards using GWT-RPC as the client-server protocol for performance reasons. However, in the future, we will need to provide an API for other enterprise systems to access our data as well. For this, we were thinking of a SOAP-based web service. In my experience it is common for commercial providers of enterprise web applications to provide client libraries (Java, .NET, C#, etc.). Is this generally the case? I ask because if so, then why bother using SOAP or REST or any standard web services protocol at all? Why not just create client libraries that communicate via GWT-RPC?

    Read the article

  • What version control system can manage all aspects?

    - by Andy Canfield
    A few months ago I dug into Subversion and Git and was disappointed. They handle SOURCE CODE fine, but not other aspects. For example, a web site under version control needs to manage file/directory ownership, file/directory read and write access, access control lists, timestamps, database contents, and external links. Is there a version control system that can do as perfect a reversion as reloading from a month-old backup?

    Read the article

  • Software Life-cycle of Hacking

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking/security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like:

    1. Find gap in security
    2. Exploit gap in security
    3. Procure payload
    4. Utilize payload

    So my question is: what kind of formal definitions (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • In this slow job market, I have no choice but to take a job that uses VB.NET and will move to C#. Advice?

    - by Xaisoft
    I really don't want to do VB.NET, but I need a job and I need one fast. The two positions I am looking at both have existing apps in VB.NET, but are looking to convert them to C# and do new development in C#. As we all know, though, sometimes this doesn't happen for a while and you get stuck with the main language. My background is in C#, and after looking at VB.NET, my head is hurting. Any advice as I tackle a job like this? As I said, I would prefer to stick with C#, but today one may have no choice and has to take what one can get. I am looking for advice from those who have experienced this, are experiencing it, and even those who have not.

    Read the article

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP+JSON) to sync with my iPhone app. The main goals are: backup; providing a web view for printing, sorting, and manipulating; and allowing a group to sync up and down. I am aware of the logic problems with all of these items, i.e., if one person deletes something, do you persist this change to other users? Collisions, etc. I am looking for any book or scholarly work, or even words of wisdom, that addresses common issues:

    - when to detect changes of data with hashes vs. modified dates, or a combination
    - how to address consolidation of sequential IDs originating on different client nodes (this can be sidestepped in my context, but it would be interesting)
    - dealing with collisions (is there a universally safe way to do so?)
    - general best practices
    - how to structure the actual data transaction (ask for the whole list, then detect changes...)
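
    On the hashes-versus-modified-dates point, a minimal PHP sketch (function and variable names invented) of detecting changed records by content hash:

    ```php
    <?php
    // $lastSyncHashes would come from the previous sync; invented data here.
    $lastSyncHashes = ['42' => 'stale-hash-from-last-sync'];
    $records = [
        '42' => ['title' => 'Milk', 'done' => true],
        '43' => ['title' => 'Eggs', 'done' => false],
    ];

    // Fingerprint a record by hashing its canonical JSON form; if the
    // hash differs from the one recorded at the last sync, the record
    // has changed, regardless of whether a modified date was kept.
    function record_hash($record) {
        ksort($record); // canonical key order so equal records hash equally
        return hash('sha256', json_encode($record));
    }

    $changed = [];
    foreach ($records as $id => $record) {
        if (!isset($lastSyncHashes[$id])
                || $lastSyncHashes[$id] !== record_hash($record)) {
            $changed[$id] = $record; // new or modified since last sync
        }
    }
    ```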

    Read the article

  • Long term plan of attack to learn math?

    - by zhenka
    I am a web developer with a desire to expand my skill set into mathematics relevant to programming. As a second career, I am stuck in college doing some of the requirements while working. I was hoping that my education would teach me the needed skills to apply math; however, I am quickly finding its easily-testable, breadth-based approach very inefficient for the time invested. For example, in my calculus 2 class, the only remotely useful, mind-expanding experience I had was volumes and areas under the curve. The rest was just monotonous, glorified algebra, which, while it comes easily to me, could be done by software like Wolfram Alpha within seconds. This is not my idea of learning math. So here I am, a frustrated student looking for a way to improve my understanding of math in a way that focuses on application and understanding, with needless tedium maximally removed. However, I cannot find a good long-term study strategy with this approach in mind. So, for those of like mind, how would you go about learning the necessary math without worrying too much about stuff a computer can do much better?

    Read the article

  • What programming languages/technology stack to use when building a Facebook-like website?

    - by Blaze Boy
    I'm developing a website idea that will offer the same functionality as Facebook, without the application extensibility. The site will have a client application that performs a task similar to dropbox.com; it will be a social network with some sort of professional focus; it will highlight code in almost all languages; and it needs a high-speed backend database. Now, what languages/techniques do I need to use to achieve that?

    Read the article

  • Dealing with the customer / developer culture mismatch on an agile project

    - by Eric Smith
    One of the tenets of agile is "Customer collaboration over contract negotiation"; another is "Individuals and interactions over processes and tools". But the way I see it, at least when it comes to interaction with the customer, there is a fundamental problem: how the customer thinks is fundamentally different from how a software engineer thinks. That may be a bit of a generalisation, yes. Arguably, there are business domains where this is not necessarily true, though these are few and far between. In many domains, the typical customer is:

    - interested in daily operational concerns: short-range tactics, not strategy;
    - only concerned with the immediate solution;
    - generally a one-dimensional, non-abstract thinker;
    - primarily interested in "getting the job done" as opposed to coming up with a lasting, quality solution.

    On the other hand, software engineers who practice agile are:

    - professionals who value quality;
    - individuals who understand the notion of "more haste, less speed", i.e., spending a little more time to do things properly will save lots of time down the road;
    - generally very experienced analytical thinkers.

    So very clearly, there is a natural culture discrepancy that tends to inhibit "customer collaboration". What's the best way to address this?

    Read the article

  • Objective-C Lesson in Class Design

    - by Pota Onasys
    I have the following classes: Teacher, Student, and Class (as in a school class). They all extend from KObject, which provides initWithKey, send, and processKey. Teacher, Student, and Class all use processKey and initWithKey from the KObject parent class, and each implements its own version of send. The problem I have is that KObject should never be instantiated. It is more like an abstract class, but there is no abstract class concept in Objective-C. It is only useful for giving subclasses access to one property and two methods. What can I do so that KObject cannot be instantiated, while still allowing subclasses access to the functions and properties of KObject?
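
    For reference, a compact Objective-C sketch of the hierarchy as described, using one common guard idiom for "abstract" classes (the assertion and the doesNotRecognizeSelector: trick are conventions, not the only approach; the key property is an assumption):

    ```objectivec
    #import <Foundation/Foundation.h>

    @interface KObject : NSObject
    @property (nonatomic, copy) NSString *key;
    - (instancetype)initWithKey:(NSString *)key;
    - (void)send;       // subclasses provide their own version
    - (void)processKey; // shared implementation
    @end

    @implementation KObject
    - (instancetype)initWithKey:(NSString *)key {
        // Guard idiom: refuse to instantiate the "abstract" base directly.
        NSAssert(![self isMemberOfClass:[KObject class]],
                 @"KObject is abstract; instantiate a subclass.");
        if ((self = [super init])) {
            _key = [key copy];
        }
        return self;
    }
    - (void)send {
        // "Abstract" method: crash loudly if a subclass forgets to override.
        [self doesNotRecognizeSelector:_cmd];
    }
    - (void)processKey { /* shared implementation */ }
    @end

    @interface Teacher : KObject
    @end

    @implementation Teacher
    - (void)send { NSLog(@"Teacher-specific send for key %@", self.key); }
    @end
    ```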

    Read the article

  • Game Trees Conceptual Question

    - by Chris Corbin
    I am struggling to conceptually understand a question in a programming assignment for an algorithms class. The problem deals with a fictitious two-player game named Easy. The rules of the game are simple: each player may choose one of four integers {0-3}, after which that integer is not available to the other player. The catch is that if a player picks {0}, it means they quit. The objective is for Player 1 to get {1} and Player 2 to get {2}, in which case they may win; however, if both or neither succeed, then the game ends in a draw. I have been asked to draw the game tree for Easy, showing all nodes, which they explained as 4! = 24, labeling the edges, which represent moves (selecting a number), and the leaves with who won (1 means Player 1 won, -1 means Player 2 won, and 0 means a tie). I have drawn out a game tree, which I believe is correct, however I am not 100% certain, hence I am asking the question. My game tree only has 16 leaves. I am thinking that when a player picks {0} and quits, the game tree stops there? I don't see how it is possible to get to 24 leaves. Any help would be greatly appreciated, and if you need more information I would be happy to provide it. Thanks

    Read the article

  • Resources for up-to-date Delphi programming

    - by Dan Kelly
    I'm a developer in a small department and have been using Delphi for the last 10 years. Whilst I've tried to keep up to date with movements, there are a lot of changes that have occurred between Delphi 7 and (current for us) 2010. Stack Exchange and here have been great for answering the "how do you" questions, but what I'd like is a resource that shows great examples of larger-scale programming. For example, is there anywhere that hosts examples of well-written, multi-form applications? Something that can be looked at as a whole to illustrate why things should be done a certain way?

    Read the article
