Search Results

Search found 14169 results on 567 pages for 'parallel programming'.

Page 5/567

  • Game programming : C# or C++?

    - by Chronix
    OK, so the FAQ says: "What language should I learn next? (Unless you have a specific requirement and don't know which language meets that requirement.)", so I guess this question is not against the rules. I've decided that what I really want to do is game programming. So the question is: as an 18-year-old teaching himself to program, which language is better suited, C# or C++? (I should state that I don't care about Unix, because I believe Windows will remain the most widely used OS.) I know the basics of C++, but nothing about C#. I know that C++ has more tutorials, guides, DLLs and so on, while C# has fewer; on the other hand, C# is far easier to learn and to use for a single person developing a program, though I've read that even an obfuscated C# program is easy to decompile back to source. I want to focus heavily on the basics first. I have a preference for C++ just because it feels more natural to me, but I have to consider that I work alone, so even if I like it, it may not be the best choice. I'm really not sure which one to go for; I've read a lot of threads on various websites and it seems C# is becoming more popular than C++. That said, I specified game programming for a reason, so I'm looking for better arguments than just "C# is easier because of .NET memory handling while C++ isn't", because I couldn't find any. I hope the thread won't be closed, since I've read the FAQ and I do have a specific requirement. Thank you, and if you need more details feel free to ask, but I think saying that it's game programming and that I'd work alone should be enough!

    Read the article

  • Why is verbosity bad for a programming language?

    - by frowing
    I have seen many people complain about verbosity in programming languages. I find that, within reasonable bounds, the more verbose a programming language is, the easier it is to understand. I also think verbosity encourages writing clearer APIs for that language. The only disadvantage I can think of is that it makes you type more, but most people use IDEs that do much of that work for you. So, what are the possible downsides of a verbose programming language?

    Read the article

  • How to start learning Programming? [on hold]

    - by user107080
    Most of the time people ask which programming language to learn, which I know is not a valid question to ask here, and I don't care about the 'which' here; I care about the 'how'. So, how do I start learning programming from scratch? In other words, what are the steps for an absolute beginner to learn programming the right way? I'm sure buying a book and reading it is not the only way. Again, I don't want to ask which language, but I would also like to know whether there are specific languages that will make my start as solid as possible, because I know that some can hurt the way I think about programming.

    Read the article

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very, let's say, particular about certain things, programming being one of them), or it may be the fact that I graduated college and still feel "meh" about my programming. Reading this made me think "Oh, that's me!", but that's not really my main problem. My big problem is that any time I'm using a high-level language/API/etc., I think to myself that I'm not really "programming". I know it sounds stupid, but I feel like if I can't figure out how to do something at the lowest level, then I don't really "understand" it. I do this with just about every new technology I learn: I look at the lowest level and try to understand it. Sometimes I do; most of the time I don't. I've only really been programming for four years (at college, if you even call that programming; our university's program was "meh"). For instance, I do a little embedded programming (with the Atmel AVR 8-bit chips/Arduino), and I can't bring myself to use the C compiler, even though it's vastly easier than using assembly. It's stupid, I know. Does anyone else feel like this? I think it's just my OCD that makes me feel this way, but has anyone else ever felt they needed to go down to the lowest level of a language just to be satisfied using it? I apologize for the very odd question, but I think this really hinders me from getting deeply into a programming language and building a real application of my own. (It's silly, I know.)

    Read the article

  • How difficult is it to switch from embedded programming to high-level programming? [on hold]

    - by anudeep shetty
    I have a background in computer science. After finishing my Bachelor's degree, I worked for over a year on embedded programming on Linux file systems. After that I pursued my Master's, where most of my course choices involved working on the web, Java and databases. Now I have an offer from a company for a job working at the OS level. The company is pretty good, but I feel that my Master's would go to waste. I wanted to know: is it common for a computer science major to work on low-level coding, and is there a possibility that I can work at this company for a few years and then move on to an opportunity where I work on high-level coding? Also, is low-level programming a safe choice in terms of job opportunities?

    Read the article

  • Ways to earn money through programming and/or programming knowledge [closed]

    - by Jason Swett
    It occurred to me today that it might be useful to make a list of all the ways to earn money through either actual programming or just programming knowledge. I imagine it's probably a finite list as long as you stick to a reasonable level of granularity. Here's what I have so far: Trading your time for money (i.e. having a job or being a freelancer) Building your own software product (a full-fledged startup or a tiny mobile app or whatever) Giving talks at conferences and meetups Teaching students in a classroom Writing a book or blog (these are products, but non-software products) I've probably missed at least a few. What else is there? (I'm not sure whether this is an appropriate question, by the way. I think I would select the best answer based on how practical/original/interesting/numerous your suggestions are.)

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single- or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages: C++ -- the most viable solutions appear to be the Boost Graph Library and the Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT. C -- igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems. Java -- I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are the leading contenders in the non-parallel space. Python -- igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, from 2005, looks stale. Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages, but none of them focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).
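
    For a concrete sense of what the C++ option looks like, here is a minimal, single-threaded Boost Graph Library sketch (assuming Boost is installed); the toy edge list and the connected-components pass are only illustrative stand-ins for the much larger graphs and custom analyses described above.

        // Minimal BGL usage sketch: build a small undirected graph and label its
        // connected components. The edge list is a toy stand-in for graphs with
        // millions of nodes and up to billions of edges.
        #include <boost/graph/adjacency_list.hpp>
        #include <boost/graph/connected_components.hpp>
        #include <iostream>
        #include <vector>

        int main() {
            typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                          boost::undirectedS> Graph;

            Graph g(6);                  // 6 vertices, indexed 0..5
            boost::add_edge(0, 1, g);
            boost::add_edge(1, 2, g);
            boost::add_edge(3, 4, g);    // vertex 5 stays isolated

            std::vector<int> component(boost::num_vertices(g));
            int count = boost::connected_components(g, &component[0]);

            std::cout << "components: " << count << "\n";
            for (std::size_t v = 0; v < component.size(); ++v) {
                std::cout << "vertex " << v << " -> component " << component[v] << "\n";
            }
            return 0;
        }

    The Parallel BGL provides distributed analogues of the same graph concepts over MPI, which is the natural next step for the cluster configuration mentioned above.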

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I was researching the subject further and ended up at the following white paper written by Microsoft: Understanding and Controlling Parallel Query Processing in SQL Server. Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them. To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: The abstract above has been taken from here. The real question is: have parallel queries made the DBA's life much simpler, or are they viewed as a potential source of performance degradation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Parallel port blocking

    - by asalamon74
    I have a legacy Java program which drives a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program worked correctly on the client's old computer: the Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish printing the card, but the user was able to continue working with the program. After replacing the client's computer (but not the printer or the Java program), the program no longer finishes until the card is ready; it is blocked until the last second. It seems to me that LPT1 behaves differently now than it did before. Is it possible to change this in Windows? I've checked the BIOS for parallel port settings: the parallel port is set to EPP+ECP (but I also tried the other two options, Bidirectional and Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?

    Read the article

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel, and I would like to merge these fifos. The naïve solution is:

        cat fifo* > output

    But this requires the first fifo to complete before the first byte is read from the second fifo, and that will block the parallel running programs. Another way is:

        (cat fifo1 & cat fifo2 & ... ) > output

    But this may mix the output, so you get half-lines in output. When reading from multiple fifos, there must be some rule for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does:

        parallel_non_blocking_cat fifo* > output

    which will read from all fifos in parallel and merge the output a full line at a time. I can see it is not hard to write that program. All you need to do is:

    1. open all fifos
    2. do a blocking select on all of them
    3. read non-blocking from a fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record), print out the line
    5. if all fifos are closed/at EOF: exit
    6. goto 2

    So my question is not: can it be done? My question is: is it done already, and can I just install a tool that does this?
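
    Since the steps are already spelled out above, here is a minimal sketch of them in C++ against the POSIX poll()/read() interface. It is an illustration of the approach, not an existing packaged tool: the fifo paths come from the command line, the 4 KB read size is arbitrary, and it assumes the producing programs have already opened their fifos for writing.

        // Line-merging reader: open all fifos, poll, read whatever is ready,
        // and emit only whole lines so output from different fifos never mixes
        // mid-line.
        #include <cerrno>
        #include <cstdio>
        #include <string>
        #include <vector>
        #include <fcntl.h>
        #include <poll.h>
        #include <unistd.h>

        int main(int argc, char** argv) {
            std::vector<pollfd> fds;
            std::vector<std::string> buffers(argc > 1 ? argc - 1 : 0);

            // Step 1: open every fifo non-blocking so no single fifo can stall us.
            for (int i = 1; i < argc; ++i) {
                int fd = open(argv[i], O_RDONLY | O_NONBLOCK);
                if (fd < 0) { std::perror(argv[i]); return 1; }
                pollfd p;
                p.fd = fd;
                p.events = POLLIN;
                p.revents = 0;
                fds.push_back(p);
            }

            std::size_t still_open = fds.size();
            char chunk[4096];

            while (still_open > 0) {
                // Step 2: block until at least one fifo has data or was closed.
                if (poll(fds.data(), fds.size(), -1) < 0) { std::perror("poll"); return 1; }

                for (std::size_t i = 0; i < fds.size(); ++i) {
                    if (fds[i].fd < 0 || fds[i].revents == 0) continue;

                    // Step 3: non-blocking read into this fifo's private buffer.
                    ssize_t n = read(fds[i].fd, chunk, sizeof chunk);
                    if (n > 0) {
                        buffers[i].append(chunk, static_cast<std::size_t>(n));
                        // Step 4: print only complete lines.
                        std::size_t nl;
                        while ((nl = buffers[i].find('\n')) != std::string::npos) {
                            std::fwrite(buffers[i].data(), 1, nl + 1, stdout);
                            buffers[i].erase(0, nl + 1);
                        }
                    } else if (n == 0) {
                        // EOF: flush a trailing partial line, stop watching this fifo.
                        if (!buffers[i].empty()) {
                            std::fwrite(buffers[i].data(), 1, buffers[i].size(), stdout);
                            std::fputc('\n', stdout);
                        }
                        close(fds[i].fd);
                        fds[i].fd = -1;   // poll() ignores negative descriptors
                        --still_open;
                    } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
                        std::perror("read");
                        return 1;
                    }
                }
            }
            return 0;                     // Step 5: every fifo reached EOF
        }

    The per-fifo buffers are what keep the merge line-granular: bytes reach the shared output only once a newline has arrived on that fifo, which corresponds to step 4 in the list above.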

    Read the article

  • Finding a new programming language for web development?

    - by Xeoncross
    I'm wondering if there are any unbiased resources that give good, specific overviews of programming languages and their intended goals. I would like to learn a new language, but visiting the site of each language isn't working. Each one talks about how great it is without much mention of its weaknesses or specific goals. Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. Python is a programming language that lets you work more quickly and integrate your systems more effectively. Having been a PHP developer for years, Vic Cherubini sums up my plight well: I knew PHP well, had my own framework, and could work quickly to get something up and running. I programmed like this throughout the MVC revolution. I got better and better jobs (read: better paying, better title) as a PHP developer, but all along the way realizing that the code I wrote on my own time was great, and the code I worked with at work was horrible. Like, worse than horrible. Atrocious. OS Commerce level bad. Having side projects kept me sane, because the code I worked with at work made me miserable. This is why I'm retiring from PHP for my side projects and new programming ventures. I'm spent with PHP. Exhausted, if you will. I've reached a level where I think I'm at the top with it as a language and if I don't move on to a new language soon, I'll be done completely with programming and I do not want that. Languages I've looked at include JavaScript (for node.js), Ruby, Python, & Erlang. I've even thought about Scala or C++. The problem is figuring out which ones are built to handle my needs the best. So where can I go to skip the hype and get real information about the maturity of a platform, the size of the community, and the strengths & weaknesses of each language? If I know these things, picking a language to continue my web development with should be easy.

    Read the article

  • Parallel computing in .NET

    - by HotTester
    Since the launch of .NET 4.0, a term that has come into the limelight is parallel computing. Does parallel computing provide us real benefits, or is it just another concept or feature? Is .NET really going to make use of it in applications? Also, is parallel computing different from parallel programming? Kindly throw some light on the issue from a .NET perspective; some examples would be helpful. Thanks...

    Read the article

  • pitfalls/disadvantages of functional programming

    - by CrazyJugglerDrummer
    When would you NOT want to use functional programming? What is it not so good at? I am looking more for disadvantages of the paradigm as a whole, not things like "not widely used" or "no good debugger available". Those answers may be correct as of now, but they deal with FP being a new concept (an unavoidable issue) and not with any inherent qualities. Related: pitfalls of object-oriented programming; advantages of functional programming

    Read the article

  • Evolution of mainstream programming languages: simplicity versus complexity.

    - by Giorgio
    I had posted this question on http://stackoverflow.com, but it was suggested that it may be more appropriate to post it on this forum. I did a quick search on this site and it seems to me that this question has not been asked yet; please give me a hint if the topic has already been raised by someone else. Update: I have rephrased this question, removed personal opinions and made it shorter. I hope in this way it is better suited for this forum. Looking at the recent development of Java (Java 7) and C++ (C++0x), I see that new features are being added to these languages. This certainly makes it easier to use certain programming idioms, adding to the productivity of developers. On the other hand, there might be the following risks: a language becomes too big, complex, and difficult to understand; it lacks coherence in its design, e.g. if it mixes different paradigms like object orientation and functional programming, which might not fit well together. Questions: What is more important to you as a developer: to have a rich language that captures a large collection of programming idioms, or to have a small language that aims at coherence and simplicity (of course, with a good deal of libraries and tools accompanying it)? Or is it possible to have both? With respect to these issues, how do you judge the current evolution of mainstream programming languages like Java and C++? Are they becoming too complex, less intuitive? Do they have enough features? Do they need more? Are they still easy enough to understand and use?

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone, can any of the STL algorithms/container operations, like std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.
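
    As far as I know, simply turning on OpenMP does not make the standard library run std::fill or std::transform in parallel on its own (GCC's libstdc++ has an optional parallel mode for that; the MSVC 2008 library does not). A common workaround is to express the same element-wise work as an explicit OpenMP loop. Here is a minimal sketch, assuming an OpenMP-enabled build (e.g. /openmp on MSVC, -fopenmp on GCC):

        // Hand-rolled parallel equivalent of a std::transform call, using an
        // OpenMP worksharing loop. Without OpenMP the pragma is ignored and the
        // loop simply runs serially.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t count = 1000000;
            std::vector<double> input(count, 2.0);
            std::vector<double> output(count);

            // Serial equivalent: std::transform(input.begin(), input.end(),
            //                                   output.begin(), op);
            // The OpenMP 2.0 support in MSVC requires a signed loop index,
            // hence the cast to long.
            const long n = static_cast<long>(count);
        #pragma omp parallel for
            for (long i = 0; i < n; ++i) {
                output[i] = std::sqrt(input[i]) * 3.0;   // independent per-element work
            }

            std::printf("output[0] = %f\n", output[0]);
            return 0;
        }

    On newer toolchains the same thing can be written directly with the C++17 parallel algorithms, e.g. std::transform(std::execution::par, ...), but that is not available on MSVC 2008.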

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint. Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

        Parallel.Invoke(
            () => {
                Console.WriteLine("Action 1 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () => {
                Console.WriteLine("Action 2 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () => {
                Console.WriteLine("Action 3 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            }
        );

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster). On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...

            SwapElements(array, left, last);

            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right)
            );
        }

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here: by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we’re “over-parallelizing” this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how “deep” in the “tree” we currently are, and if we are below some number of levels, stop parallelizing. This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...

            SwapElements(array, left, last);

            if (maxDepth < 1)
            {
                QuickSortInternal(array, left, last - 1, maxDepth);
                QuickSortInternal(array, last + 1, right, maxDepth);
            }
            else
            {
                --maxDepth;
                Parallel.Invoke(
                    () => QuickSortInternal(array, left, last - 1, maxDepth),
                    () => QuickSortInternal(array, last + 1, right, maxDepth));
            }

    We no longer allow this to parallelize indefinitely, only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better. On average, I get the following timings:

        Framework via Array.Sort: 7.3 seconds
        Serial Quicksort Implementation: 9.3 seconds
        Naive Parallel Implementation: 14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.

    Read the article

  • Is there a “P” programming language? [closed]

    - by Synetech
    I’m wondering if anybody has made a programming language based on BCPL and named P. There was a language named B that was based on BCPL, followed of course by C, also based on BCPL. I’ve seen plenty of whimsically named programming languages, so I can’t help but be surprised if nobody made one called P. I checked Wikipedia’s (not exactly comprehensive) list of programming languages, and while there are three languages named L (none of which are related to BCPL), there are none called P; in fact, it is one of the few letters not used as a name. (Google is useless for one-letter query terms.) Does anybody know if a P has been made, even as a lark? (Yes, I know about P#, but that is based on Prolog, not BCPL; there is one called P, but it is also not related to BCPL.)

    Read the article

  • Are there any non-english programming languages? [closed]

    - by samarudge
    Possible Duplicate: Non-English-based programming languages Not sure if this is the right place to ask this, but I'm going to anyway. Without fail, every programming language I've ever seen, used or heard of has its keywords based on English. if, else, while, for, query, foreach, image, path, extension, the list goes on, are all based on English words. Are there any languages, or ports of languages, that base their core keywords on non-English words to lower the barrier for non-English-speaking programmers? This is mostly out of interest (English is my first language, so it's no problem for me). Are these languages popular locally (e.g., might a software development house in Germany use a programming language based on German over one based on English)?

    Read the article

  • Teaching programming to a non-CS graduate

    - by Shahzada
    I have a couple of friends interested in computer programming, but they're non-CS graduates; some of them have very little experience in the software testing field (some took basic software testing courses). I am going to work with them on teaching basic computer programming and computer science fundamentals (data structures, etc.). My questions are: What language should I start with? What are the essential computer science topics that I should cover before jumping into computer programming? What readings can I incorporate to make the topic interesting and non-overwhelming? If we want to spend a year on it, what topics should take priority and must be covered in 12 months? Again, these are non-computer-science folks, and I want to keep the learning as fun as possible. Thanks, everyone.

    Read the article

  • When will programming become deprecated? [closed]

    - by Vibeeshan
    Possible Duplicate: Will programmers be around in a few years? Looking at the history of programming, each new generation of software engineering seems to make it easier and easier: machine code, then assembly, then high-level programming languages, each easier than the last. If this trend continues, anyone will be able to program (even complex systems). Even now, many kids study programming at school (Pascal, VB, etc.). Will there be any jobs called software engineering in the future, if everyone knows how to program? And what do you think about the future of software development?

    Read the article

  • Which programming language for text editing?

    - by Ali
    I need a programming language for text editing and processing (replacement, formatting, regex, string comparison, word processing, text analysis, etc.). Which programming language is more powerful and has more functions for this purpose? Since I use PHP for my web projects, I am currently using PHP here as well; but PHP is primarily a scripting language for web applications, and my current project is offline. I am curious whether other programming languages such as Perl, Python, C, C++ or Java have more functionality for this purpose, and whether it is worth shifting the project.

    Read the article

  • How important is Programming for a Level Designer?

    - by WryGrin
    I'm currently attending school in a Level Design program, and I was wondering how important programming really is to being a Level Designer. I'm apparently incapable of learning programming (despite my best efforts), yet I tend to do very well in all the other courses: 3D modelling, story/character design, narrative and dialogue writing, environmental and conceptual design, and so on. I'm wondering whether my strengths in the other areas are enough (with practice) to let me become a Level Designer, or whether I'm wasting my time if I can't program. I really want to be a Designer, but I just can't seem to wrap my head around the "language" of programming in general (Java kicks my teeth in even with tutoring and additional work on my own).

    Read the article

  • Pair Programming: Pros and Cons

    - by O.D
    I would like some experience reports from those who have done pair programming. I've noticed that lots of people recommend it, but my experience was that at some point it's more efficient to sit alone, think, and then write code than to talk with the other programmer (which can also be very annoying to other programmers in the same office). Do you agree with this? And if so, can you mention situations where pair programming is less efficient than traditional programming? Actually, I'm more interested in the cons than in the pros, but if it's your own experience I would like to read both. I would also like to read what you think about the programmer who doesn't have the keyboard: what can he do in the meantime other than talk about the concept or check the code on the screen?

    Read the article
