Search Results

Search found 25198 results on 1008 pages for 'non programmers'.


  • How much Ruby should I learn before moving to Rails?

    - by Kevin
    Just a quick question; I can never get a definitive answer when googling this, either. Some people say you can learn Rails without knowing any Ruby, but that at some point you'll run into a brick wall, wish you knew Ruby, and have to go back and learn it. Others say to learn the "basics" of Ruby before learning Rails, and that it will make your life that much easier. My current knowledge is low. I'm not a beginner, but I'm not a pro either. I went through the Learn Python The Hard Way online book in about a month, but I stopped once I got to the OOP side of Python (I know booleans, if/elif/else statements, for loops, while loops, and functions). I agree with learning the "basics" of Ruby before learning Rails, but what exactly are the "basics" of Ruby? Would I need to learn the whole OOP side of Ruby before moving on to Rails, or would I just need to learn Ruby syntax up to where I got with Python (booleans, if/elif/else statements, for loops, while loops, functions)? Thanks!

    Read the article

  • Why is Python slower than Java but faster than PHP

    - by good_computer
    I have seen many benchmarks that show how a bunch of languages perform on a given task. These benchmarks always reveal that Python is slower than Java and faster than PHP, and I wonder why that is the case.

    - Java, Python, and PHP all run inside a virtual machine.
    - All three languages convert their programs into their own byte codes that run on top of the OS, so none of them runs natively.
    - Both Java and Python can be "compiled" (.pyc for Python), but the __main__ module in Python is not compiled.
    - Python and PHP are dynamically typed and Java is statically typed. Is this the reason Java is faster? If so, please explain how that affects speed.

    Even if the dynamic-vs-static argument is correct, it does not explain why PHP is slower than Python, because both are dynamic languages. You can see some benchmarks here and here, and here.
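
    Claims like these usually come from timing the same CPU-bound task in each runtime. A minimal sketch of the measurement side, in Python only (the workload and names here are made up for illustration; a real comparison would port the identical task to Java and PHP and time those the same way):

        import timeit

        def count_primes(limit):
            # deliberately naive, CPU-bound work for the timer to measure
            count = 0
            for n in range(2, limit):
                if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                    count += 1
            return count

        # take the best of 5 repetitions to smooth out OS noise
        best = min(timeit.repeat(lambda: count_primes(10000), number=10, repeat=5))
        print(f"10 calls, best of 5: {best:.3f}s")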

    Read the article

  • Cheap, Awesome, Programmer-friendly City in Europe for 1 year Study Hiatus?

    - by Gonjasufi
    Next year I'll be 21. I'll have 3 years of professional experience under my belt (with a one-year break as a soldier). I'm planning to take 2 to 3 years off. Instead of going to a university, I plan to work on personal projects and learn on my own. I'm looking for suggestions of great, cheap, programmer-friendly cities (e.g. lots of cafes, ordered food, parks, blazing fast internet connection, wifi, lots of people who speak English) around the world, and specifically in Europe, as I also have European citizenship. If you can supply an estimated cost of living for that city, or a site for comparisons, that would also be great. Edit: I'm living in Tel Aviv, roughly the 20th highest cost-of-living city in the world, so statistically speaking almost all cities are cheaper.

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation, respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a support vector machine (SVM) for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in Matlab?

    To the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. My GLCM code, as far as I have tried, is:

        I = imread('fzliver3.jpg');
        GLCM = graycomatrix(I, 'Offset', [2 0; 0 2]);
        stats = graycoprops(GLCM, 'all')
        t1 = struct2array(stats)

        I2 = imread('fzliver4.jpg');
        GLCM2 = graycomatrix(I2, 'Offset', [2 0; 0 2]);
        stats2 = graycoprops(GLCM2, 'all')
        t2 = struct2array(stats2)

        I3 = imread('fzliver5.jpg');
        GLCM3 = graycomatrix(I3, 'Offset', [2 0; 0 2]);
        stats3 = graycoprops(GLCM3, 'all')
        t3 = struct2array(stats3)

        t = [t1; t2; t3]
        xmin = min(t);        % per-feature minimum (row vector)
        xmax = max(t);        % per-feature maximum (row vector)
        scale = xmax - xmin;
        tf = (x - xmin) / scale   % x is not defined in this script (the
                                  % intended matrix is t), which is what
                                  % triggers the error quoted below

    Was this a correct implementation? Also, I get an error at the last line. My output is:

        stats =
            Contrast: [0.0510 0.0503]
            Correlation: [0.9513 0.9519]
            Energy: [0.8988 0.8988]
            Homogeneity: [0.9930 0.9935]
        t1 =
            0.0510    0.0503    0.9513    0.9519    0.8988    0.8988    0.9930    0.9935
        stats2 =
            Contrast: [0.0345 0.0339]
            Correlation: [0.8223 0.8255]
            Energy: [0.9616 0.9617]
            Homogeneity: [0.9957 0.9957]
        t2 =
            0.0345    0.0339    0.8223    0.8255    0.9616    0.9617    0.9957    0.9957
        stats3 =
            Contrast: [0.0230 0.0246]
            Correlation: [0.7450 0.7270]
            Energy: [0.9815 0.9813]
            Homogeneity: [0.9971 0.9970]
        t3 =
            0.0230    0.0246    0.7450    0.7270    0.9815    0.9813    0.9971    0.9970
        t =
            0.0510    0.0503    0.9513    0.9519    0.8988    0.8988    0.9930    0.9935
            0.0345    0.0339    0.8223    0.8255    0.9616    0.9617    0.9957    0.9957
            0.0230    0.0246    0.7450    0.7270    0.9815    0.9813    0.9971    0.9970

        ??? Error using ==> minus
        Matrix dimensions must agree.

    The images are attached to the original post.

    Read the article

  • What are must have tools for web development?

    - by Amir Rezaei
    Which are the must-have tools for web development under Windows? They can include tools for design, coding, etc. Update: Please post only one tool, the best in your opinion, for each type of tool. For instance, post only the name of the best design tool and not a list of them. Update: By tools I don't necessarily mean small programs. I didn't find this question elsewhere; I hope no one has already asked it.

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save on both drives two files named alice_to_peter.key (2GB) and peter_to_alice.key (2GB), which are randomly generated bits. They never meet again and communicate only electronically. Alice also maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do:

        encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and for maximum security, the used part of the key can be erased). Peter receives encrypted_message, reads the alice_pointer stored at the beginning of the message, and does:

        message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    And for maximum security, after reading the message he also erases the used part of the key. EDIT: In fact, with this simple algorithm (without integrity checking and authentication) this step decreases security; see Paulo Ebermann's post below. When Peter needs to send a message to Alice, they do the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, every day for the next 50 years, 2GB / (50 * 365) = circa 115kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2TB hard disks (1TB of keys) it is possible to exchange 60MB/day for the next 50 years! That is practically a lot of data; with compression it is more than an hour of high-quality voice communication per day.

    It seems to me there is no way for an attacker to read the encrypted messages without the keys, even with an infinitely fast computer: brute force would yield every possible message that fits the length of the actual message, which is an astronomical number of messages, and the attacker cannot know which of them is the real one. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application, using large storage on both communicating sides for the keys? I humbly expect that someone invented this application before me. :-))

    Note: If it is absolutely secure, that is amazing, because with today's low-cost large storage it is a practically much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!

    EDIT: I think this will become more and more practical in the future as the cost of storage keeps dropping. It could solve secure communication forever: today you have no certainty that someone won't successfully attack an existing cipher a year later, making its often expensive implementations insecure. In many cases, communication is preceded by a step where the communicating sides meet in person; that is the time to generate large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed while the military headquarters keeps a hard drive for each submarine it has. It could also be practical in everyday life, for example for controlling your bank account, because you meet the bank in person when you create your account, etc.
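
    For concreteness, a minimal sketch of the scheme in Python (the names pad, pointer, encrypt, and decrypt are mine, not an established API, and, like the scheme above, it deliberately has no authentication or integrity protection):

        import os

        pad = os.urandom(2 * 1024)   # stand-in for alice_to_peter.key (tiny here)
        pointer = 0                  # stand-in for alice_pointer

        def encrypt(message):
            global pointer
            key = pad[pointer:pointer + len(message)]
            cipher = bytes(m ^ k for m, k in zip(message, key))
            header, pointer = pointer, pointer + len(message)  # never reuse pad bytes
            return header, cipher

        def decrypt(header, cipher):
            key = pad[header:header + len(cipher)]
            return bytes(c ^ k for c, k in zip(cipher, key))

        h, c = encrypt(b"hello Peter")
        assert decrypt(h, c) == b"hello Peter"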

    Read the article

  • How much should I charge an hour for freelance iOS development?

    - by Tyler Bell
    I am a fairly competent developer who already holds a job developing iOS applications. This job is through the university I attend. The producer of the apps I develop is always trying to set me up with freelance opportunities to get my work out there and to get me more work and experience. What is a reasonable price to charge (either hourly or per app)? I'd be working by myself, on my own equipment, handling everything from start to finish including the design process. Just wondering what a reasonable price is. I've heard up to $30? Thanks

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code, using PHPUnit 3.7:

        // set up the result object that I want returned from the call to the parse method
        $parserResult = new ParserResult();
        $parserResult->setSegment('some string');

        // set up the stub Parser object
        $stubParser = $this->getMock('Parser');
        $stubParser->expects($this->any())
                   ->method('parse')
                   ->will($this->returnValue($parserResult));

        // inject the stub into my client class
        $fileWriter = new FileWriter($stubParser);
        $output = $fileWriter->writeStringToFile();

    Inside my writeStringToFile() method I'm using $parserResult like this:

        function writeStringToFile() {
            // Some code...
            $parserResult = $this->parser->parse();
            $segment = $parserResult->getSegment(); // that's why I set the segment in the test
        }

    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to all this?
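
    For comparison, the same two shapes expressed with Python's unittest.mock, just to separate the design question from PHPUnit specifics (Parser and ParserResult are stand-ins here):

        from unittest.mock import Mock

        # a stub parser whose parse() returns a real result object...
        class ParserResult:
            def __init__(self, segment):
                self.segment = segment

            def get_segment(self):
                return self.segment

        stub_parser = Mock()
        stub_parser.parse.return_value = ParserResult("some string")

        # ...versus a mock whose parse() returns another mock
        deep_stub = Mock()
        deep_stub.parse.return_value.get_segment.return_value = "some string"

        assert stub_parser.parse().get_segment() == "some string"
        assert deep_stub.parse().get_segment() == "some string"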

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully? If so, did you work programmatically or through the Module Builder? Were you successful? If not, why not? I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, and other errors that I can't recall now. Two years later, I'm picking it up for a project once again. I'm feeling like I should have developed the whole thing from scratch myself. Some examples:

    - I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file.
    - I constantly get deployment errors from the Module Builder. One reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt. I get other deployment errors that I have not figured out, and then I have to roll back all files and the database to try again.
    - I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, producing PHP warnings all over the place.
    - Quick-create for custom modules does not appear; a hack is needed.
    - Its whole cache directory is a joke: permanent data/files are stored there.
    - The Module Builder interface loses required fields.
    - Edit the wrong thing and the Module Builder won't deploy again; then pray that Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low-quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote:

        I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!

    I figured that perhaps some of you have had similar experiences and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were, or what your current situations are, to make up my own mind, since the project I'm working on is long-term and I'm feeling SugarCRM will be more an obstacle than an aid. After further failed attempts to continue using this software, I kept stumbling into dead ends when using the module editor; I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we had been using SugarCRM with only its out-of-the-box modules we would have stuck with it.

    Read the article

  • Visual Studio 2013 - Express for Web vs Professional [duplicate]

    - by TimS
    This question already has an answer here: "Visual Studio 2012 - Express vs Professional" (2 answers).

    What are the main differences and limitations between Visual Studio 2013 Express and Visual Studio 2013 Professional? I'm specifically interested in information related to the Web edition. I need to be able to develop ASP.NET applications, Windows services, and console applications, not desktop or phone apps. Microsoft seems to hide this information well, and I can only find information relating to the 2012 products and earlier.

    Read the article

  • How to become a better programmer in 2011?

    - by Anish Patel
    Not strictly a Stack Overflow thing, but I thought I'd get it out there and ask the question: what are you, as a programmer, going to do to improve in 2011? The things I am planning to do are as follows:

    - Learn functional programming
    - Write 100 blog posts
    - Take a bunch of Microsoft exams (70-433, 70-511, 70-513, 70-515, 70-516, 70-518, 70-519)
    - Contribute to an open source project

    Let's hope the motivation lasts all year!

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that holds 1 million customer invoice records. Using the database, our client wants its customers to be able to log into a front-end web site and then view, modify, and download their company's invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web-page invoice rendering performance. The 1 million invoices cover just 90 days of sales, so we will remove invoices more than 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats; for example, it is easy for us to convert between SQL and XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in each customer's home directory in XML format? And secondly, what would you suggest as the best method to render customer invoices on a web page, and to allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.
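
    Whatever the storage layout, the web tier typically asks for only one customer's page of invoices at a time, so the shape of that one query drives most of the rendering cost. A hedged sketch of that access path (Python with pyodbc; the DSN, table, and column names are invented for illustration):

        import pyodbc  # assumes an ODBC data source named "invoices" exists

        customer_id, page, page_size = 42, 0, 50  # illustrative values

        # Hypothetical single invoices table indexed on
        # (customer_id, invoice_date): one customer's 90-day window is
        # then a single index range scan, with paging done server-side.
        conn = pyodbc.connect("DSN=invoices")
        rows = conn.execute(
            "SELECT invoice_id, invoice_date, total"
            " FROM invoices"
            " WHERE customer_id = ?"
            " ORDER BY invoice_date DESC"
            " OFFSET ? ROWS FETCH NEXT ? ROWS ONLY",
            customer_id, page * page_size, page_size,
        ).fetchall()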

    Read the article

  • How to detect two moving shapes overlapped?

    - by user1389813
    Given a list of circles with coordinates (x and y) that move every second in different directions (south-east, south-west, north-east, and north-west), where a circle changes direction when it hits a wall, sort of like bouncing: how do we detect whether any of them collide or overlap with each other? I am not sure we can use a data structure like a binary search tree, because all the coordinates change every second, so the tree would have to be rebuilt accordingly. Or can we use a vertical sweep line algorithm each time? Any ideas on how to do this efficiently?
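
    For what it's worth, a sketch of one common approach in Python: the exact overlap test is a plain distance check, and a uniform grid (spatial hash) rebuilt every tick keeps the number of pairwise tests down without maintaining a tree between frames. The function names are illustrative, not a library API:

        import math

        def circles_overlap(x1, y1, r1, x2, y2, r2):
            # two circles overlap when the distance between their centers
            # is at most the sum of their radii
            return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

        def colliding_pairs(circles):
            # circles: list of (x, y, r); a cell size of 2 * max radius
            # bounds the center distance of any overlapping pair, so only
            # the 3x3 neighbourhood of each cell needs checking
            cell = 2 * max(r for _, _, r in circles)
            grid = {}
            for i, (x, y, r) in enumerate(circles):
                grid.setdefault((int(x // cell), int(y // cell)), []).append(i)
            pairs = set()
            for (cx, cy), members in grid.items():
                candidates = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              for j in grid.get((cx + dx, cy + dy), [])]
                for i in members:
                    for j in candidates:
                        if i < j and circles_overlap(*circles[i], *circles[j]):
                            pairs.add((i, j))
            return pairs

        circles = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (9.0, 9.0, 1.0)]
        print(colliding_pairs(circles))  # {(0, 1)}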

    Read the article

  • How could RDBMSes be considered a fad?

    - by StuperUser
    Completing my Computing A-level in 2003, getting a degree in Computing in 2007, and learning my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on "Is the LinqPad site quote 'Tired of querying in antiquated SQL?' accurate?") that said: [Some devs] despise [SQL] and think that it and RDBMS are a fad. Obviously, a competent dev will use the right tool for the job and won't create a relational database when, e.g., a flat file or another storage solution is appropriate, but RDBMSes are useful in a massive number of circumstances, so how could they be considered a fad?

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04, running on a laptop with 4 GB RAM and an Intel Core Duo CPU at 2 GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i such that start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:

    - Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds.
    - For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s.
    - For 1000000 strings, merge sort takes 1.287 s against 5.233 s for quick sort.
    - For 5000000 strings, merge sort takes 7.582 s against 118.240 s for quick sort.
    - For 10000000 strings, merge sort takes 16.305 s against 1202.918 s for quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic becomes evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively:

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge, the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the form

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory, so it is more memory-efficient than the merge sort implementation. So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort relative to merge sort?

    EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I found on Wikipedia in the section "In-place version":

        function partition(array, 'left', 'right', 'pivotIndex')

    where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the possibility that I am using the wrong implementation, I have uploaded the source code to github (in case you would like to take a look at it). Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report back as soon as I have some results.
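
    On the "better quick sort" question, one widely used change is to stop always picking the last element: a random (or median-of-three) pivot defends against unlucky orderings, while heavily duplicated keys additionally call for three-way partitioning. A sketch of the random-pivot variant in Python rather than the C used above, with illustrative names:

        import random

        def quick_sort(lines, start, end):
            # Lomuto partition with a random pivot; sorts lines[start..end] in place
            if start >= end:
                return
            p = random.randint(start, end)               # random pivot index
            lines[p], lines[end] = lines[end], lines[p]  # move pivot to the end
            pivot = lines[end]
            store = start
            for i in range(start, end):
                if lines[i] <= pivot:
                    lines[i], lines[store] = lines[store], lines[i]
                    store += 1
            lines[store], lines[end] = lines[end], lines[store]
            quick_sort(lines, start, store - 1)
            quick_sort(lines, store + 1, end)

        data = ["pear", "fig", "apple", "fig", "kiwi"]
        quick_sort(data, 0, len(data) - 1)
        print(data)  # ['apple', 'fig', 'fig', 'kiwi', 'pear']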

    Read the article

  • Best Books of C

    - by Patrick
    Hi, I really want to gain strong skills in C programming, and I know that the best and only way is hard work and lots of practice. I have found so many tutorials and books on the net about learning the C language, but I'm just looking for one or two good books on C that I can learn from and build strong skills with. Does anyone know such a great book (or books) for C programming? (Sorry for the duplication if the question already exists in the forum.) Regards!

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the GitFlow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We use Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch). The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers work together on the same feature, say on the front end and back end, and they are working on interdependent code in separate branches, one developer ends up blocking the other. We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches each developer can pull into from multiple branches, to test the integration as they develop. Finally, and this seems to work possibly the best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right off the bat. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those changesets into this feature integration branch. Then all interested developers can pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for a branch they depend on to pass code review. I know this isn't necessarily a Git issue; it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, if back-end developer 2 finishes and his/her pull request sits in code review for a week, then front-end developer 1 technically can't create his/her pull request and code review, because the reviewer can't test, since back-end developer 2's code hasn't been merged into develop yet. The bottom line is that we find ourselves taking a much more serial rather than parallel approach in these instances, depending on which route we go, and we would like to find a process that avoids this. The last thing I'll mention: we realize that by sharing code across branches that haven't been code-reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc. are greatly appreciated. Thanks!

    Read the article

  • Breaking down CS courses for freshmen

    - by Avinash
    I'm a student putting together a slide geared toward freshman-level students who are trying to understand the importance of various classes in the CS curriculum. Would it be safe to say that this list is fairly accurate?

    - Data structures: how to store stuff in programs
    - Discrete math: how to think logically
    - Bits & bytes: how to 'speak' the machine's language
    - Advanced data structures: how to store stuff in more ways
    - Algorithms: how to compute things efficiently
    - Operating systems: how to manage different processes/threads

    Thanks!

    Read the article

  • What's the best book for coding conventions?

    - by Joschua
    What's the best book about coding conventions (and perhaps design patterns) that you highly recommend, ideally with code samples in Python, C++, or Java? It would be good if the book (or another one) also covered project management and agile software development where appropriate (for example, how projects fail through spaghetti code). I will accept the answer with the book(s) that look the most interesting (maximum two books per answer, please), because the reading might take a while. :)

    Read the article

  • libssh2 and simultaneous connections

    - by Florian Margaine
    I'm writing a node.js C++ module using the C library libssh2. The module is supposed to be a bridge for connecting to SSH over HTTPS. Right now I'm still in the design/learning phase of the v8 API and C++, and I have a design question: libssh2 is a C library, and all its methods are global. From what I see in the examples, libssh2 can only handle one connection at a time. If I want to allow simultaneous connections to different SSH servers, do I have to fork a process to completely separate the libssh2 "instances", or is spawning a thread enough? I don't know enough about the isolation boundary involved. Any idea on how to handle this is appreciated.

    Read the article

  • Recursion VS memory allocation

    - by Vladimir Kishlaly
    Which approach is more popular in real-world code: recursion or iteration? For example, a simple preorder tree traversal with recursion:

        void preorderTraversal(Node root) {
            if (root == null) return;
            root.printValue();
            preorderTraversal(root.getLeft());
            preorderTraversal(root.getRight());
        }

    and with iteration, using an explicit stack:

        void preorderTraversal(Node root) {
            Deque<Node> stack = new ArrayDeque<>();
            if (root != null) stack.push(root);  // push the root node on the stack
            while (!stack.isEmpty()) {           // while the stack is not empty
                Node node = stack.pop();         // pop a node
                node.printValue();               // print its value
                if (node.getRight() != null) stack.push(node.getRight()); // push the right child if it exists
                if (node.getLeft() != null) stack.push(node.getLeft());   // push the left child if it exists
            }
        }

    In the first example we have recursive method calls; in the second, a new ancillary data structure. Time complexity is similar in both cases, O(n). So the main question is: what is the memory footprint requirement of each?

    Read the article

  • I need help with some terminology

    - by Christine
    I'm not a programmer; I'm a freelance writer and researcher. I have a client who's looking for stats on certain "threats" to the apps market. One of them is cowboy coding. (I know what that means; that's not my question.) Specifically, he wants to see numbers on how many apps have failed, crashed, or been removed because of errors made by, in essence, sloppy coding. (I'm not here to debate the merits of cowboy coding and whether or not it is sloppy; work with me here.) I've used every possible search term and phrase I can think of, but I can't find any hard numbers, just anecdotal evidence. Have any of you seen any reports that have this kind of data?

    Read the article

  • Selling your services when you use uncommon technologies

    - by speeder
    I took a look at the most popular profiles on Stack Overflow, then did the same on several other sites, and then looked at job postings on several boards, mostly out of curiosity, because I noticed this: if you work with Java, .NET, or other managed languages, or with stuff that is popular for web development (Ruby, JavaScript, etc.), you can get lots of points on Stack Overflow and find lots of jobs and clients, forums, friends, colleagues, etc. But how does a programmer of uncommon languages (Lua, pure C, Lisp, D, Ada, Haskell, etc.) find information, sell his services, and so on? EDIT: This also applies to fields: if you work with the web, corporate software, databases, etc., it is great; if you dislike those three, no one will ever hire your services...

    Read the article

  • Using Scrum on small projects where Owner doesn't want to be involved

    - by Andrej Mohar
    Recently I've been reading and learning quite a lot about Scrum, and I like it a lot. However, I have a couple of likely scenarios in my head to which I don't know the solution. Let's say I want to organize an agile team of (for instance) four web developers (one of them a UI/UX designer). This team would operate on Scrum principles. Initially we would probably be working on projects like landing pages for ordinary people's small businesses, like renting apartments or selling cookies... Such customers simply can't take the Product Owner role (IMHO), because they usually expect to hire a company, give them the overall project goal with some details, and then expect the job to be done (including a lot of decision making) with as little of their involvement as possible (in their opinion, they have more important things to do). Let's say I'd like to take a developer/Scrum Master role myself (I know that even that is debatable, being a team member and Scrum Master at once), so I simply shouldn't take the Product Owner role as well. So, my questions: If I'm my company's business owner, do I simply need to be the Product Owner as well (do these roles include each other)? Can I employ a salesperson to take the Product Owner role? Would it be better if it were an experienced developer instead of a salesperson? Is this even a smart move? Lastly, is there another agile approach that might better suit my position? EDIT: Thank you everyone for the good input. I added some comments; any additional info will be greatly appreciated.

    Read the article

  • Can I configure a visual difference view with the notifications provided by TFS?

    - by John Kaster
    I have TFS sending me alerts whenever someone on my team checks in code. (I had to create notification rules for every project, but that's just a sidebar complaint in this question.) These alerts provide some information on who checked in which files and when, with URLs to view details in a browser. The thing that baffles me is that I can't just click on a source file and see a visual diff of the changes. There's no link that will auto-launch a diff in Visual Studio (using a custom protocol) from there either. Is there a way to configure TFS to provide a visual diff of the changes to a file checked in via this notification UI?

    Read the article
