Search Results

Search found 25198 results on 1008 pages for 'non programmers'.

Page 196/1008

  • Has anyone used these database publish tools in Visual Studio 2012/2013?

    - by punkouter
    I am trying to figure out if they are actually useful. I have a DBProject, and it would be nice if, when I publish from DEV to TEST, it would automatically copy the schema and data from DEV to TEST. Is that what this tab in Visual Studio is for? On PROD I would never want to copy the data over, but maybe I would change the schema and want to move that schema to PROD; is that what this tab is for? For now we are using Web Deploy but none of these database options. We are manually using Data Compare to sync changes between the various databases and the DBProject. Any advice?

  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
    We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently, this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering getting her set up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VS Express will work properly with what we do. So: Does VWD 2010 Express support automated tests? If no to the above, what happens when it opens a solution file that includes a test project, modifies it, and saves it? Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?

  • Always keep files updated in Eclipse

    - by AK01
    I keep lots of files/editors open in Eclipse. I also love using git stash and other git commands that essentially change the contents of my open files. Is there an Eclipse feature or plugin that will always keep the contents of my open files up to date and live? Currently if I put focus in an out of sync editor, I get an awkwardly worded dialog that I have to parse carefully every time. I wish it would just keep me synced like Textmate does.

  • What do you think about starting a reading group?

    - by Imran Omar Bukhsh
    Greetings everyone. I know we as software engineers / developers always need to keep learning. I really want to start a reading group where people read books and then discuss what they have read, for example at the end of every chapter once everyone has finished it by a particular time. We could also set a minimum of n pages per day for every person who participates in the group. What do you think? Please advise.

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud"; specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. The current architecture (all hosted on a single server) is: GitHub, configured with a webhook pointing at an ASP.NET MVC application that accepts the webhook from git and pushes a message onto an Azure Service Bus queue, which is drained by a Windows Service that pulls the message from the queue, fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk, and finally operates on the data in git to produce a static HTML website hosted/served by IIS. The system works really well, actually, but I would like to get out of the business of managing the VM and move to using some combination of Azure web and worker roles. Because the system relies so heavily on the git repository on the local filesystem, though, I'm finding it difficult to figure out how to architect it in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk, but the performance/responsiveness of the system depends on the repository being available and only having to fetch diffs, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) git repository if the web or worker role was recycled. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective). Please feel free to ask any clarifying questions if there's something I omitted. Thanks!
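
    To make the clone-versus-fetch distinction concrete, here is a minimal sketch (in Python, driving the plain git CLI rather than LibGit2Sharp; the URL, paths and branch name are made up): on a cold start after a role recycle the repository has to be cloned in full, while the warm path only fetches new objects, which is the cheap operation the current design depends on.

        import os
        import subprocess

        REPO_URL = "https://github.com/example/content-repo.git"  # illustrative
        WORK_DIR = "/local/cache/content-repo"                    # role-local disk

        def ensure_repo():
            """Clone on first use, otherwise fetch only the new objects (cheap)."""
            if not os.path.isdir(os.path.join(WORK_DIR, ".git")):
                # Cold start (e.g. after a role recycle): full clone, the slow path.
                subprocess.run(["git", "clone", REPO_URL, WORK_DIR], check=True)
            else:
                # Warm path: fetch just the diffs and move the working copy forward.
                # Assumes the content lives on the 'master' branch.
                subprocess.run(["git", "-C", WORK_DIR, "fetch", "origin"], check=True)
                subprocess.run(["git", "-C", WORK_DIR, "reset", "--hard", "origin/master"], check=True)

        # A queue-drainer would call ensure_repo() before regenerating the static site.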

  • Web Site Development Software for VirtualBox Ubuntu 11.04

    - by Paul Sonier
    I'm developing a website on a VirtualBox guest running Ubuntu 11.04 (the host runs Windows, but I want to do the web development in a Linux environment). My development languages are primarily PHP and JavaScript (using Apache and node.js). The question is this: is there a good IDE for this kind of work that handles running virtualized well? I tried Eclipse and was not particularly thrilled with the performance; I'm wondering if there's some other way to do this than to do all my text editing in emacs.

  • Should I migrate to MVC3?

    - by eestein
    Hi everyone. I have an MVC2 project; my question is: should I migrate to MVC3? Why? I'd like the opinion of people who have already migrated, or at least used both MVC3 and MVC2. I have already read http://weblogs.asp.net/scottgu/archive/2011/01/13/announcing-release-of-asp-net-mvc-3-iis-express-sql-ce-4-web-farm-framework-orchard-webmatrix.aspx and I already know about the migration tool described here: http://blogs.msdn.com/b/marcinon/archive/2011/01/13/mvc-3-project-upgrade-tool.aspx What I'd really appreciate is your valuable insight. Best regards.

  • Project Manager that wants to lock in time estimate with a signed contract

    - by sunpech
    At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and the delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them as best I could to requirements that kept on changing. Over the iterations the requirements grew by 2-3 times, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working with some very new technologies that no one else on the entire development team could really help out on, because they weren't familiar with them, at least not easily. It seemed to me then that the PM wanted his numbers to add up, and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me and didn't let this happen. My question is: should this raise a red flag about the manager? Is it common practice for a manager to lock in the time estimates of a software developer with a signed contract, or in this case, to try to? Please note, I was a full-time employee, not an independent consultant. Update: I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on.

  • Transitioning to asynchronous programming model

    - by Simone
    Our team is maintaining and developing a .NET web service written in C#. We have stress tested the web service's farm and we have evidence that the current architecture doesn't scale well, as the number of requests is constantly increasing. We analyzed Martin Fowler's conclusions in this article, and our team feels that migrating to an asynchronous programming model such as the one described could be the right direction for our service too. My question is: do you think that this "switch" needs a complete rewrite of the application? Has anyone been able to adopt APM without rewriting everything, and do you have some insight to share? Thank you in advance.
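
    As a point of reference for what the asynchronous model buys you, here is a minimal illustration using Python's asyncio (chosen only for brevity; the question itself is about .NET, and the names are illustrative): many slow I/O waits overlap on a single worker instead of each one holding a thread.

        import asyncio

        async def handle_request(request_id: int) -> str:
            # Simulate a slow I/O call (database, downstream service) without
            # blocking the worker: the event loop runs other requests meanwhile.
            await asyncio.sleep(1)
            return f"request {request_id} done"

        async def main():
            # 100 concurrent "requests" finish in roughly 1 second instead of 100,
            # because the waits overlap rather than each occupying a thread.
            results = await asyncio.gather(*(handle_request(i) for i in range(100)))
            print(len(results), "requests served")

        asyncio.run(main())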

  • Implementing a "state-machine" logic for methods required by an object in C++

    - by user827992
    What I have: one hypothetical object/class, plus other classes and related methods that give me functionality. What I want: to link this object to 0 to N methods at runtime, on request, when an event is triggered. Each event is related to a single method or a class, so a single event does not necessarily mean "connect this one method only"; it can also mean "connect all the methods from that class, or a group of methods". I want to avoid linked lists, because I would have to walk the entire list to know which methods are linked, because a list does not guarantee that the linked methods are kept in a particular order (say, alphabetical order by their names or classes), and also because it involves a massive amount of pointer usage. Example: I have an Employee object, Jon. Jon acquires knowledge and forgets things pretty easily, so his skills may vary over time, and I'm responsible for what Jon can add to or remove from his CV. How can I implement this logic?
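
    One possible shape for this, sketched in Python purely for brevity (in C++ the analogous structure would be something like a std::map from event/skill name to std::function<void()> callbacks, which also keeps keys ordered); all names below are illustrative:

        class Employee:
            """Holds named skills (callables) that can be attached or removed at runtime."""

            def __init__(self, name):
                self.name = name
                self.skills = {}  # skill name -> callable

            def learn(self, skill_name, method):
                # Event: link a method (or several, by calling this repeatedly).
                self.skills[skill_name] = method

            def forget(self, skill_name):
                # Event: unlink a method; missing names are ignored.
                self.skills.pop(skill_name, None)

            def cv(self):
                # Deterministic, alphabetically ordered view of what is linked now.
                return sorted(self.skills)

        def write_report():
            return "report written"

        jon = Employee("Jon")
        jon.learn("reporting", write_report)  # Jon acquires a skill
        print(jon.cv())                       # ['reporting']
        jon.forget("reporting")               # Jon forgets it
        print(jon.cv())                       # []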

  • How is technical debt best measured? What metric(s) are most useful?

    - by throp
    If I wanted to help a customer understand the degree of technical debt in his application, what would be the best metric to use? I've stumbled across Erik Doernenburg's code toxicity and also Sonar's technical debt plugin, but was wondering what others exist. Ideally, I'd like to say "system A has a score of 100 whereas system B has a score of 50, so system A will most likely be more difficult to maintain than system B". Obviously, I understand that boiling down complex concepts like "technical debt" or "maintainability" into a single number might be misleading or inaccurate (in some cases); however, I need a simple way to convey to a customer (who is not hands-on in the code) roughly how much technical debt is built into their system (relative to other systems), with the goal of building a case for refactoring/unit tests/etc. Again, I'm looking for one single number/graph/visualization, and not a comprehensive list of violations (e.g. CheckStyle, PMD, etc.).

  • Advice on selecting programming languages to concentrate on? (2nd year IT security student)

    - by Tyler J Fisher
    I'm in the process of deciding which programming languages I should devote the majority of my coding studies to. I'm a 2nd year CS student, majoring in IT security. What I want to do/work with: intelligence gathering, relational databases, virus design, Snort network IPS. Current coding experience (what I'm going to keep): Java - intermediate; HTML5 - intermediate; SQL (MySQL, Oracle 11g) - basic; BASH - basic. I'm going to need to learn at least one of the following languages in order to be successful in my field: Ruby (+Metasploit); C++ (virus design, low-level driver interaction, computationally intensive applications); Python (import ALL the things). My dilemma: if I diversify too broadly, I won't be able to focus on, and improve in, a specific niche. Does anyone have any advice on how I should select a language? What I'm considering and why: I'm leaning towards Ruby because of Metasploit support, despite lower efficiency compared to Python. Any suggestions based on real-world experience? Should I focus on Ruby, Python, or C++? Both Ruby and Python have been regarded as syntactically similar to Java, which my degree is based around. I'm going to be studying C++ in two years as a component of my malicious code class. Thanks, Tyler

  • 32-bit / 64-bit processors - what is that feature officially called?

    - by JW01
    I see talk of CPUs being either 32-bit or 64-bit processors, information which is often required on download pages. But what is that feature officially called? That is, what's the inverse of saying "I have a 64-bit processor"? I want to say: "The ??? of my processor is 64 bit." What is the correct term to use for ???  I have looked at a random product on the Intel site and I suspect the correct word for this is "instruction set", but I'm not sure.

  • Selecting the (right?) technology and environment

    - by Tor
    We are two developers on the verge of starting new web product development. We are both fans of the lean start-up approach and would like to practice continuous deployment. Here comes the dilemma: we both come from a C# / Windows background and we need to decide between: (1) sticking with .NET and Windows, so we will not waste time learning new technologies and can put all our effort into the development; or (2) switching to Ruby on Rails and Linux, which has a good reputation for fast ramp-up and vast open source support; the negative side is that we would need to put a lot of effort into learning Ruby, Rails, and Linux. What would you do? What other considerations should we take into account?

  • Forking a dual licensed app: How to license on my end?

    - by TheLQ
    I forked a project that was dual licensed under the GPL and a commercial license. Since my code was open source and the GPL being what it is, I started by releasing my app under the GPL. But now I'm thinking about dual licensing the project and can't figure out what to do. Since I have copyright on a majority of the code (most of the code was either rewritten or new), can I just pick a commercial license or do I have to buy the upstream commercial license since I'm technically a "derivative" of the project?

  • Were you a good programmer when you first left university?

    - by dustyprogrammer
    I recently graduated from university. I have since joined a development team where I am by far the least experienced developer, with maybe a couple of work terms under my belt, while the rest of the team is rocking 5-10 years of experience. I was a very good student and a pretty good programmer when it came to bottled assignments and tests, and I have worked on some projects with success. But now I'm working with a much bigger code-base, and the learning curve is much higher. I was wondering how many other developers started out their careers on teams and felt like they sucked. When does this change? How can I speed up the process? My seniors are helping me, but I want to be great and show my value now. I don't want to start a flame war; this is just a question I have been having, and I was hoping to get some advice from other experienced developers, as well as other beginners like me.

  • What is the meaning of the sentence "we wanted it to be compiled so it’s not burning CPU doing the wrong stuff"?

    - by user2434
    I was reading this article. It has the following paragraph: "And did Scala turn out to be fast? Well, what’s your definition of fast? About as fast as Java. It doesn’t have to be as fast as C or Assembly. Python is not significantly faster than Ruby. We wanted to do more with fewer machines, taking better advantage of concurrency; we wanted it to be compiled so it’s not burning CPU doing the wrong stuff." I am looking for the meaning of the last sentence. How would an interpreted language make the CPU do the "wrong" stuff?

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script does a small unit of processing (for example, parse the email and store some database fields), then an item is placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should run it in a live environment. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts, but one large application? Specific questions: How should I run each of these scripts - as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this? Or is the idea of many small scripts not a good one - would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue? What is the current preferred approach to daemonising Python scripts? Broadly, I would welcome any comments or opinions on any aspect of this. Thanks.
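
    For the "each script is a daemon polling its inbound queue" option, a minimal sketch using boto3 long polling might look like the following (the queue URL and handler are placeholders; a real deployment would add logging, error handling, and a supervisor such as systemd or supervisord to restart the process):

        import boto3

        sqs = boto3.client("sqs")
        QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/parse-email"  # placeholder

        def handle(body: str) -> None:
            # One small unit of work, e.g. parse the email and store some fields.
            print("processing:", body[:60])

        def main() -> None:
            while True:
                # Long polling: wait up to 20s for messages instead of busy-looping.
                resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                           MaxNumberOfMessages=10,
                                           WaitTimeSeconds=20)
                for msg in resp.get("Messages", []):
                    handle(msg["Body"])
                    # Delete only once the work (and any hand-off to the next queue) succeeded.
                    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

        if __name__ == "__main__":
            main()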

  • Should we exclude code for the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks we have had recurring discussions with the Bangalore teams (the main part of the development is done offshore in India) regarding the exclusion of packages or classes from Cobertura (our code coverage tool, even though we are currently migrating to JaCoCo). Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefits - purely in terms of code coverage - go unnoticed in a large application; reducing the scope of the code coverage measure would make this kind of effort more visible. The interest of this approach is that we would have a code coverage measure that indicates the current status of the part of the application we consider testable. However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell whether this good news is due to the good work of the developers, or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested? As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low. What is your opinion on this subject? How do you do it on your projects? Thanks. (1) These layers are generally related to the UI / Java beans, etc. (2) I know that's not true; in fact, it only means that 40% of my code base is executed by tests.
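
    A small worked example of the concern about exclusions, using hypothetical numbers in the spirit of the 10,000-line case above: a higher reported percentage on a smaller base can correspond to fewer lines actually covered.

        total_lines = 10_000

        # Without exclusions: 40% of the whole code base is exercised by tests.
        covered_without_exclusions = 0.40 * total_lines                    # 4,000 lines

        # With 4,000 lines excluded as "untestable", coverage is reported as 60%,
        # but that is 60% of the remaining 6,000 lines only.
        excluded_lines = 4_000
        covered_with_exclusions = 0.60 * (total_lines - excluded_lines)    # 3,600 lines

        print(covered_without_exclusions)  # 4000.0
        print(covered_with_exclusions)     # 3600.0 -> the "better" 60% covers fewer lines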

  • Zend - unable to catch exception [closed]

    - by coder3
    This still throws an uncaught exception. Any insight into why this isn't working?

        protected function login()
        {
            $cart = $this->getHelper('GetCurrentCart');
            $returnValue = false;
            if ($this->view->form->isValid($this->_getAllParams())) {
                $values = $this->view->form->getValues();
                try {
                    $this->goreg = $this->goregFactory->create($this->config->goreg->service_url);
                    if ($this->goreg->login($values['username'], $values['password']) && $this->goregSession->isLoggedIn()) {
                        $returnValue = true;
                    } else {
                        echo 'success 1';
                    }
                } catch (Exception $e) {
                    echo 'error 1';
                } catch (Zend_Exception $e) {
                    echo 'error 2';
                } catch (Zend_Http_Client_Exception $e) {
                    echo 'error 3';
                }
            }
            return $returnValue;
        }

  • How do you use blank lines in your code?

    - by Matthieu M.
    There have been a few remarks about white space already in the discussion about curly brace placement. I myself tend to sprinkle my code with blank lines in an attempt to segregate things that go together into "logical" groups and hopefully make it easier for the next person who comes along to read the code I just produced. In fact, I would say I structure my code like I write: I make paragraphs, no longer than a few lines (definitely shorter than 10), and try to make each paragraph self-contained. For example: in a class, I will group methods that go together, separating them by a blank line from the next group; if I need to write a comment, I'll usually put a blank line before the comment; in a method, I make one paragraph per step of the process. All in all, I rarely have more than 4-5 lines clustered together, meaning very sparse code. I don't consider all this white space a waste, because I actually use it to structure the code (as I use indentation, in fact), and therefore I feel it is worth the screen real estate it takes. For example:

        for (int i = 0; i < 10; ++i) {
            if (i % 3 == 0) continue;

            array[i] += 2;
        }

    I consider that the two statements have clearly distinct purposes and thus deserve to be separated to make that obvious. So, how do you actually use (or not use) blank lines in code?

  • Programmer performance

    - by RSK
    I am a PHP programmer with 1 year of experience. As I am just starting my career, I am learning a lot of things now. I can say I am a little bit of a perfectionist. When I am assigned a problem I start off by Googling. Then, even when I find a solution, I keep trying for a better one until I find 2-3 options. Then I start learning them and choose the best-performing solution. Even though I am learning a lot, this process gets me labeled as a low performer. My questions: As a novice, should I continue to use this learning process and not worry about my performance? Should I focus more on my performance and less on how the code performs?

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save onto both drives two files, named alice_to_peter.key (2GB) and peter_to_alice.key (2GB), which consist of randomly generated bits. They never meet again and communicate electronically. Alice also maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero.

    When Alice needs to send a message to Peter, they do: encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n], where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and, for maximum security, the used part of the key can be erased). Peter receives the encrypted message, reads the alice_pointer stored at its beginning and does: message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]. And for maximum security, after reading the message he also erases the used part of the key. EDIT: In fact this step, with this simple algorithm (without integrity check and authentication), decreases security; see Paulo Ebermann's post below. When Peter needs to send a message to Alice, they do the analogous steps with peter_to_alice.key and peter_pointer.

    With this trivial scheme they can send, every day for the next 50 years, 2GB / (50 * 365) = circa 115kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2TB hard discs (1TB keys) it is possible to exchange about 60MB/day for the next 50 years! (That's practically a lot of data; for example, with compression it's more than an hour of high-quality voice communication.)

    It seems to me there is no way for an attacker to read the encrypted message without the keys, even with an infinitely fast computer: brute force on an infinitely fast computer yields every possible message that fits the length of the message, but that is an astronomical number of messages and the attacker doesn't know which of them is the actual message. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application that uses large storage on both communicating sides for the keys? I humbly expect that this application was invented by someone before me :-) ) Note: If it is absolutely secure, then it's amazing, because with today's low-cost large storage it is practically a much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!

    EDIT: I think this will become more and more practical in the future as storage costs keep falling. It could solve secure communication forever. Today you have no certainty whether someone will successfully attack existing ciphers a year later and render their often expensive implementations insecure. In many cases, before communication starts there is a step where the communicating sides meet personally; that's the time to generate the large keys. I think it's perfect for military communication, for example with submarines, which can have a hard drive with large keys installed, while military command keeps a hard drive for each submarine it owns. It could also be practical in everyday life, for example for controlling your bank account, because when you create your account you meet with the bank, etc.
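
    A minimal sketch of the mechanics described above (plain Python, illustrative names only); it shows just the XOR-and-advance-pointer step and deliberately ignores the integrity and authentication issues mentioned in the EDIT:

        def xor_with_key(message: bytes, key: bytes, pointer: int) -> tuple[int, bytes]:
            """XOR the message with key material starting at `pointer`.

            Returns the pointer that was used (sent in the clear alongside the
            ciphertext) and the result. Decryption is the identical operation.
            """
            if pointer + len(message) > len(key):
                raise ValueError("key material exhausted - key bytes must never be reused")
            out = bytes(b ^ key[pointer + n] for n, b in enumerate(message))
            return pointer, out

        # Toy demonstration with a short shared key instead of a 2GB file.
        shared_key = bytes(range(256))
        alice_pointer = 0

        ptr, ciphertext = xor_with_key(b"meet at dawn", shared_key, alice_pointer)
        alice_pointer += len(ciphertext)            # advance so key bytes are never reused

        _, recovered = xor_with_key(ciphertext, shared_key, ptr)  # XOR again with the same slice
        print(recovered)                            # b'meet at dawn'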

  • Is it possible for a beginner to learn and develop an application in Rails in 4 months?

    - by Parth
    I want to develop a web application or a website using Rails. My current knowledge includes: 1. HTML, 2. CSS, 3. C, 4. Java. I am currently going through the 5th chapter of The Well-Grounded Rubyist by David A. Black. I came to know that learning Ruby is beneficial for a good understanding of Rails, so currently I am going through the basics of Ruby and learning Rails in parallel. I want to know whether, in this scenario, it is practically feasible to understand Rails and develop an application/website with it within a time frame of 4 months. I need to develop an application which has at least 3 complex functionalities. Any ideas for good applications for Rails beginners are welcome, but the application should be large, or if it is small then it should have some complexity. Time is a constraint for me. I have to develop the application for college work, but Rails is my choice of technology because I want to learn it.

  • How to write a user story specific to tasks in this case

    - by vignesh
    We have planned to take up a user story, say: "As a player, I want to view the game map to know the current standings of my team." The sprint is two weeks. We will be able to complete only the HTML in two weeks' time; this user story will take 4-6 weeks to complete, as we have a shortage of content-design resources. How can we change this user story so that completing the HTML can be considered done for this user story, and we can take up the integration of this user story in the next sprint? Is it possible to create two different user stories, one for the HTML and another for integration, testing, bug fixing, etc.?
