Search Results

Search found 18084 results on 724 pages for 'graphics programming'.

Page 19/724

  • Dual-headed graphics card choice: DVI & VGA with 512MB _or_ DVI & DVI with 256MB

    - by TimH
    Which would you choose? Some more detail: I can choose between a dual-headed card with both heads DVI but only 256MB of memory, or a dual-headed card with one VGA head and one DVI head and 512MB of memory. Both monitors are 1600x1200. I'll be doing mostly business app development on the computer; no gaming or advanced graphics work. It's running Win7 on a quad-core i5. I'm thinking of going with the 256MB one, just so both displays are DVI and I don't have to shift between sharp and blurry when I look from one screen to the other. But I'm not sure whether the additional RAM would be a huge boon for some reason (Win7 GPU acceleration, for example? But with a quad-core, who cares?).

    Read the article

  • Graphics card failure, anything I could try...

    - by ILMV
    My gaming PC has decided to die. It's not the first time, but usually a quick ATX reset brings it back to life; today it didn't. I disconnected all unnecessary devices so I had only the case button/LED cables, GPU, CPU, RAM and power connected, and the computer still didn't turn on. My motherboard has no speaker, so I found a spare one I keep for testing, and when the machine starts up I get one long beep and two short beeps from my Award BIOS, which apparently means a video card error. I swapped in the GPU from another machine and all works well. Q: So I have a faulty graphics card (an nVidia 8800GT OC); is there anything I can try to resurrect it?

    Read the article

  • Laptop monitor goes black under intense graphics

    - by drahcir
    I have a Dell Inspiron 1520 with an Nvidia GeForce 8600M GT video card, running Windows 7 Professional. Yesterday while I was using it, the screen started to flicker somewhat and then went completely black. I restarted it: in the boot screen everything seemed fine, on the login screen it became flickery again, and once I logged in and a couple of windows popped up it went black again. I hooked it up to an external TFT monitor, which works fine, and some further tests confirmed that higher graphics load is what causes the laptop monitor to go black. Is there a reason why this is happening?

    Read the article

  • Graphics Card Compatibility

    - by Aaron
    I am a first-time builder, and while I have a basic understanding of how to put a computer together, one final thing eludes me: what are the requirements for installing 2 or more graphics cards in a computer? Example:

    EVGA 015-P3-1589-AR GeForce GTX 580 (Fermi) FTW Hydro Copper 2 1536MB 384-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card
    ECS NGTX580-1536PI-F GeForce GTX 580 (Fermi) 1536MB 384-bit GDDR5 PCI Express 2.0 x16 HDCP Ready SLI Support Video Card

    I am not planning on buying either of these cards in the near future; I am simply using them as an example because of the number of differences between them. As I understand it, these two cards will work together (given the proper motherboard) because the only thing that matters is that they are both GeForce GTX 580. Is my assumption correct? If not, why?

    Read the article

  • My Graphics Card Isn't Working Properly

    - by Dan
    I have just upgraded from a Sapphire AMD 6670 2GB graphics card to an Nvidia GTX 650 Ti 2GB SSC, and my Windows Experience Index has gone from 6.8 to 7.7, but when playing games I am seeing no improvement. I cannot play Saints Row: The Third on even the lowest settings, yet according to many people and benchmarks on the web I should be able to play it comfortably. I want to know why this is happening to me. I have installed the latest drivers, and I have DirectX 10 and 11 installed. I am 15 and it's my birthday today; I got the card as a present, but it's not doing what I want. Also, I am using a DVI cable rather than HDMI, because I need a new one and it's in the post. Is it possible that using DVI will affect performance?

    Read the article

  • Problem with fitting graphics card

    - by Gary Robinson
    I have recently purchased a new Radeon HD 5670 graphics card. The problem is that it is what I might call a double-height card, so the bracket where you fix the card to the case is twice the size of the one on the card it is replacing, which means it is blocked by the rear of the network card. The only way I can see around this is to cut that part of the bracket out, and then I won't have a problem, but has anyone any other suggestions? To be clear, it is not the actual card but the bracket at 90 degrees to the card that is causing the problem. Unless someone knows of another solution, some kind of PCI-e extender that I could use, as I could easily

    Read the article

  • Latest XP Drivers for Intel Integrated Graphics

    - by John
    On the Intel site I'm struggling to find what the latest driver version is for certain chipsets. From a system info tool, I have several PCs (laptops) with the following reported graphics:

    Mobile Intel(R) 4 Series Express Chipset Family
    Intel(R) Q35 Express Chipset Family
    Intel(R) Q45/Q43 Express Chipset

    All use the same driver, igxprd32.dll, at version 6.14.0010.4xxx (the last 3 digits vary). All PCs are XP 32-bit. I am not certain, but I think these all use the same basic chipset, and more than likely the drivers were never updated, so I wondered what the latest version might be. Any help tracking down the latest driver version (I just need the number, so I can see how out of date they are) and figuring out which chips these cards are would be great.

    Read the article

  • Ultrabook graphics card for gaming

    - by Francisc
    I want to get a Windows 8 "ultrabook" that I can also use for gaming. Most seem to have Intel HD Graphics 4000 (integrated) or an NVIDIA GeForce GT 620M (dedicated). From experience, are they any good? Things I enjoy playing are Call of Duty, FIFA and Skyrim, for example. I'm looking for a device that will also handle the next generation of those games, so it shouldn't just barely run the current versions. So, would those two cards suffice? Any advice is much appreciated. Thank you.

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:

    1. I create a new branch (with Git, in my case)
    2. I write a few unit tests (with JUnit, for example)
    3. I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant, so I cannot have my preferred language/platform/toolkit; I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by "trial and error"), and this makes me anxious. Please note that, at some point in the learning process, I usually have already:

    - read one or more five-star books
    - followed one or more web tutorials (writing working code a line at a time)
    - created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well)

    Despite all my efforts, at this point I usually have only a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for that small tool. In the same way, I can no longer use an independent small project for my experiments, because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices, or something like that?
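
    For what it's worth, one technique that fits this exact stage is the "learning test": each assumption about the unfamiliar toolkit is pinned down as a unit test in the experiment branch, so the experiments live in the test tree instead of being intertwined with production code. A minimal JUnit 4 sketch of the idea, using the JDK's own java.time as a stand-in for whatever API is being explored:

        import static org.junit.Assert.assertEquals;

        import java.time.LocalDate;
        import org.junit.Test;

        // A "learning test": each assumption about the unfamiliar API is
        // encoded as an assertion, so a later surprise fails loudly in the
        // test suite instead of silently corrupting production code.
        public class ToolkitLearningTest {
            @Test
            public void plusMonthsClampsToEndOfShorterMonth() {
                // Does adding a month to Jan 31 overflow into March? No: it clamps.
                assertEquals(LocalDate.of(2014, 2, 28),
                             LocalDate.of(2014, 1, 31).plusMonths(1));
            }
        }

    The tests stay in the branch as executable documentation of what was learned, and production code only receives calls whose behavior has already been confirmed.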

    Read the article

  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. Stuff that I've done so far isn't too fancy, since I'm still a beginner. My biggest concern so far has been that I don't really know what the best way to code is, since I don't know the pros and cons of the different methods, nor have I found any good explanations of them. So far I've been using the worst (and probably easiest) method of all (I think), since I'm just starting out, for example like this:

        var canvas = document.getElementById("canvas");
        var ctx = canvas.getContext("2d");
        var width = 640;
        var height = 480;
        var player = new Player("pic.png", 100, 100, ...);
        // also some other global vars...

        function Player(imgSrc, x, y, ...) {
            this.sprite = new Image();
            this.sprite.src = imgSrc;
            this.x = x;
            this.y = y;
            ...
        }

        Player.prototype.update = function() {
            // blah blah...
        }

        Player.prototype.draw = function() {
            // yada yada...
        }

        function GameLoop() {
            player.update();
            player.draw();
            setTimeout(GameLoop, 1000/60);
        }

    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples, but hopefully you'll get the point:

    1:

        Game = {
            variables: {
                width: 640,
                height: 480,
                stuff: value
            },
            init: function(args) {
                // some stuff here
            },
            update: function(args) {
                // some stuff here
            },
            draw: function(args) {
                // some stuff here
            },
        };
        // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/

    2:

        function Game() {
            this.Initialize = function () {
            }
            this.LoadContent = function () {
                this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
            }
            this.RunGameLoop = function (game) {
                this.Update();
                this.Draw();
            }
            this.Update = function () {
                // update
            }
            this.Draw = function () {
                // draw game frame
            }
        }
        // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/

    3:

        var engine = {};
        engine.canvas = document.getElementById('canvas');
        engine.ctx = engine.canvas.getContext('2d');
        engine.map = {};
        engine.map.draw = function() {
            // draw map
        }
        engine.player = {};
        engine.player.draw = function() {
            // draw player
        }
        // from http://that-guy.net/articles/

    So I guess my questions are: Which is most CPU-efficient; is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least hardest to hack? Are there any good websites where stuff like this is explained? Or does it all come down to personal preference? :)

    Read the article

  • How to get better at solving dynamic programming problems

    - by newbie
    I recently came across this question: "You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to true."

    I knew it is a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D where '.' represents any of the operations and, or, xor, and the capital letters represent true or false. Let's say the number of ways for this expression of size K to produce true is N. When a new boolean value E is added to this expression, there are 2 ways to parenthesize the new expression:

    1. ((A.B.C.....D).E), i.e. with all possible parenthesizations of A.B.C.....D, we add E at the end.
    2. (A.B.C.(D.E)), i.e. evaluate D.E first, and then find the number of ways this expression of size K can produce true.

    Suppose T[K] is the number of ways the expression of size K produces true; then T[K] = val1 + val2 + val3, where val1, val2 and val3 are calculated as follows.

    1) When E is grouped with D, either (i) it does not change the value of D, or (ii) it inverts the value of D. In the first case val1 = T[K] = N (as this reduces to the initial A.B.C.....D expression). In the second case, re-evaluate T[K] with the value of D reversed, and that is val1.

    2) When E is grouped with the whole expression:
    // val2 counts the 'true's E will produce with expressions which gave 'true' among all parenthesized instances of A.B.C.......D
    i) if true.E = true then val2 = N
    ii) if true.E = false then val2 = 0
    // val3 counts the 'true's E will produce with expressions which gave 'false' among all parenthesized instances of A.B.C.......D
    iii) if false.E = true then val3 = (2^(K-2) - N) = M, i.e. the number of ways the expression of size K produces false [2^(K-2) is the number of ways to parenthesize an expression of size K]
    iv) if false.E = false then val3 = 0

    This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf the approach there was completely different. Can someone tell me what I am doing wrong, and how I can get better at solving DP, so that I can come up with solutions like the one given there myself? Thanks in advance.
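
    For reference, a common bottom-up formulation tracks, for every sub-expression, both the number of ways it evaluates true and the number of ways it evaluates false, then combines the two sides over each possible last operator. A minimal Java sketch of that idea, with symbols given as 'T'/'F' and operators as '&', '|', '^':

        public class BooleanParenthesization {
            // T[i][j] / F[i][j] = number of ways the sub-expression spanning
            // symbols i..j can be parenthesized to evaluate true / false.
            static long countTrue(char[] symbols, char[] ops) {
                int n = symbols.length;
                long[][] T = new long[n][n];
                long[][] F = new long[n][n];
                for (int i = 0; i < n; i++) {
                    T[i][i] = symbols[i] == 'T' ? 1 : 0;
                    F[i][i] = symbols[i] == 'F' ? 1 : 0;
                }
                for (int len = 2; len <= n; len++) {
                    for (int i = 0; i + len - 1 < n; i++) {
                        int j = i + len - 1;
                        for (int k = i; k < j; k++) { // ops[k] is the last operator applied
                            long lt = T[i][k], lf = F[i][k];
                            long rt = T[k + 1][j], rf = F[k + 1][j];
                            long total = (lt + lf) * (rt + rf);
                            switch (ops[k]) {
                                case '&': T[i][j] += lt * rt;           F[i][j] += total - lt * rt;   break;
                                case '|': F[i][j] += lf * rf;           T[i][j] += total - lf * rf;   break;
                                case '^': T[i][j] += lt * rf + lf * rt; F[i][j] += lt * rt + lf * rf; break;
                            }
                        }
                    }
                }
                return T[0][n - 1];
            }

            public static void main(String[] args) {
                // 'true or true and false': only T|(T&F) evaluates true, so this prints 1
                System.out.println(countTrue(new char[]{'T', 'T', 'F'}, new char[]{'|', '&'}));
            }
        }

    The innermost loop considers every operator as the last one applied, which is the "group E with D vs. group E with everything" choice described above, generalized to all split points.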

    Read the article

  • Remote Graphics Diagnostics with Windows RT 8.1 and Visual Studio 2013

    - by Michael B. McLaughlin
    Originally posted on: http://geekswithblogs.net/mikebmcl/archive/2013/11/12/remote-graphics-diagnostics-with-windows-rt-8.1-and-visual-studio.aspx

    This blog post is a brief follow-up to my What’s New in Graphics and Game Development in Visual Studio 2013 post on the MVP Award blog. While writing that post I was testing out various features to make sure everything worked as expected. I had some trouble getting Remote Graphics Diagnostics (a/k/a remote graphics debugging) working on my first-generation Surface RT (upgraded to Windows RT 8.1). It was all the stranger since I could use remote debugging when doing CPU debugging; it was just graphics debugging that was causing trouble. After some discussions with the great folks who work on the graphics tools in Visual Studio, they were able to repro the problem and recommend a solution: my Surface RT needed the ARM Kits policy installed on it. Once I followed the instructions at the previous link, I could successfully use Remote Graphics Diagnostics on my Surface RT.

    Please note that this requires Windows RT 8.1 RTM (i.e. not Preview) and that Remote Graphics Diagnostics on ARM only works when you are using Visual Studio 2013, as it is a new feature (it should work just fine using the Express for Windows version). Also, when I installed the ARM Kits policy I needed to do two things to get it to work properly. First, when following the "How to install the Kits policy" instructions, I needed to copy the SecureBoot folder into Program Files on my Surface RT (specifically, I copied the SecureBoot folder to "C:\Program Files\Windows Kits\8.1\bin\arm\", creating any necessary directories). It may work from any system folder; I didn't test any others after I got it working. I had initially put it in my Downloads folder and tried installing it from there; when the machine restarted it displayed a worrisome error message. I repeatedly pressed the button that would allow me to retry, and eventually the machine rebooted and managed to recover itself to its previous state. Second, I needed to install it as an Administrator. The instructions say that this might be necessary; for me it was.

    Remote Graphics Diagnostics is a great new feature in Visual Studio 2013, so I definitely encourage all of you to check it out!

    Read the article

  • Best direction for displaying game graphics in C# App

    - by Mike Webb
    I am making a small game as sort of a test project, nothing major. I just started and am working on the graphics piece, but I'm not sure of the best way to draw the graphics to the screen. It is going to be sort of like the old Zelda, so pretty simple, using bitmaps and such. I started thinking that I could just paint to a PictureBox control using Drawing.Graphics with the Handle from the control, but this seems cumbersome, and I'm not sure I can use double buffering with this method either. I looked at XNA, but for now I wanted a simple method to display everything. So, my question: using the current C# Windows controls and framework, what is the best approach to displaying game graphics (i.e. a PictureBox, building a custom control, etc.)?
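
    For what it's worth, the "build a custom control" route is usually the clean one: give the control its own paint override and let a timer drive updates. Below is a minimal sketch of that structure in Java/Swing, purely for illustration of the shape; the WinForms equivalent is a control with DoubleBuffered = true, an OnPaint override, and a Timer calling Invalidate().

        import javax.swing.*;
        import java.awt.*;

        // A custom component that owns its own painting, driven by a timer at
        // roughly 60 FPS. Swing panels are double-buffered by default, so the
        // redraw does not flicker.
        public class GamePanel extends JPanel {
            private int x = 0; // toy state: a square sliding across the screen

            public GamePanel() {
                setPreferredSize(new Dimension(640, 480));
                // Game loop: update state, then request a repaint.
                new Timer(1000 / 60, e -> { x = (x + 2) % 640; repaint(); }).start();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g); // clears the previous frame
                g.setColor(Color.GREEN);
                g.fillRect(x, 200, 32, 32);
            }

            public static void main(String[] args) {
                JFrame frame = new JFrame("Sprite demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new GamePanel());
                frame.pack();
                frame.setVisible(true);
            }
        }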

    Read the article

  • Icons/graphics for your applications

    - by rein
    I need to source some graphics/icons for the application I'm writing (Windows WinForms). I need graphics for toolbar buttons, icons for form headers, and graphics for wizard dialog boxes. As a last resort I'm willing to create them myself, a move that might make them (and my whole application) look amazingly bad. What are some good sources (free or not) for these kinds of programmer icons? I'd like the icons to be at least 32x32 at 256 colors.

    Read the article

  • ActionScript Aligning Graphics Line Style Stroke?

    - by TheDarkIn1978
    Is it possible to align the stroke of a graphic with ActionScript? For example, the following code creates a black rounded rect with a grey stroke that is automatically centre-aligned:

        var t:Sprite = new Sprite();
        t.graphics.lineStyle(5, 0x555555);
        t.graphics.beginFill(0, 1);
        t.graphics.drawRoundRect(25, 25, 200, 75, 25, 25);
        t.graphics.endFill();

    The lineStyle function doesn't seem to offer any built-in functionality for aligning the stroke. In Adobe Illustrator, you can align a stroke to be either centre (the default), inside or outside.

    Read the article

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVidia graphics cards:

    - Quadro NVS 295 (PCI Express, dual DisplayPort outputs)
    - GeForce FX 5200 (PCI, DVI and VGA outputs)

    I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf:

        Section "ServerLayout"
            Identifier "Default Layout"
            Screen "PCI-Express Screen" 0 0
            # adding this makes X fail to start:
            Screen "PCI Screen" 0
            Inputdevice "Generic Keyboard"
            Inputdevice "Configured Mouse"
        EndSection

        Section "Module"
            Load "glx" # not sure why/if this is needed
        EndSection

        Section "Monitor"
            Identifier "DELL 2408WFP"
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "NVIDIA Quadro NVS 295"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 0
            BusID "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier "NVIDIA GeForce FX 5200"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 1
            BusID "PCI:6:4:0"
        EndSection

        Section "Screen"
            Identifier "PCI-Express Screen"
            Device "NVIDIA Quadro NVS 295"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0"
        EndSection

        Section "Screen"
            Identifier "PCI Screen"
            Device "NVIDIA GeForce FX 5200"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+0"
        EndSection

    I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows:

        02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd
        06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200]

    That is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed... and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU, so I can then configure its monitor?

    Read the article

  • Graphics Artifacts / Texture Flickering

    - by Cerin
    Hey, I've been having some problems with artifacts in games. Sometimes textures flicker, and artifacts of various shapes and sizes show up, usually after a couple of games of Dota 2. I built my computer almost exactly one month ago and it has been doing this pretty much from the start, except that at first the artifacts just flashed on screen fast enough that I couldn't tell what they were, though I still noticed them. In Dota I've seen green triangular artifacts, among other things. I've tried running FurMark for a while, but even though it pushes the GPU much harder than Dota 2, there are still no artifacts: it maxes out in FurMark at about 60C, and sits at about 40C in every game I've tried. CPU and system temps don't usually get higher than 40C either. These are my system specs:

    - Gigabyte Z68 Intel motherboard
    - 16 GB G.Skill Ripjaws DDR3 SDRAM
    - Sapphire Radeon HD 7770 GHz Edition
    - Intel Core i5-2500K (with built-in GPU)
    - Corsair 750 Watt PSU
    - Windows 7 64-bit

    I have the latest drivers for everything. What should I do about this? Try to RMA my graphics card? Are there other things that could be causing it?

    Read the article

  • Graphics artifacts/distortion with Win7 and nVidia

    - by Gepard
    The problem I encounter is rather hard to describe, so I provide a screenshot: http://dl.dropbox.com/u/1732760/video-distortion.png As you can see, there are horizontal stripes in random colors. These stripes appear sometimes in all windowed apps, in games and on the desktop too. They tend to stay in place until I refresh the window (or force it to redraw, for example by minimizing and maximizing it again). They also tend to appear in the same place and shape multiple times; even if they disappear, it's very likely they will be in the same place again after a while. These artifacts do not blink or change if the computer is idling: if I don't touch anything and do not use the mouse, they will stay in place forever (unless some app redraws its window on its own). I first encountered this problem some weeks ago. Back then I thought it might be a cooling problem, so I took out the graphics card, removed the dust from the radiator and fan, and put it back into the PC. I also ran a stress test using FurMark (peak temperature was ~65C) to see if the problem became more intense as the card got hotter, but surprisingly no artifacts whatsoever appeared during the stress test. The graphics card is a Galaxy GeForce 7300 GT with DDR3 memory, never overclocked. Drivers are the latest, from the Nvidia site. The OS is Windows 7 64-bit, updated. AMD64 3000+, 2GB RAM. I'm running a dual-monitor setup with two 19'' Samsung LCDs, and the problem appears on both, so I assume it's not a monitor or cable issue.

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone, can any STL algorithms or container operations, such as std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.

    Read the article

  • Are we in a functional programming fad?

    - by TraumaPony
    I use both functional and imperative languages daily, and it's rather amusing to see the surge of adoption of functional languages from both sides of the fence. It strikes me, however, that it looks rather like a fad. Do you think that it's a fad? I know the reasons for using functional languages at times and imperative languages at others, but do you really think this trend will continue due to the cliched "many-core" revolution that has been only "18 months from now" since 2004 (sort of like communism's Radiant Future), or do you think it's only temporary: a fascination of the mainstream developer that will be quickly replaced by the next shiny idea, like Web 3.0 or GPGPU? Note that I'm not trying to start a flamewar or anything (sorry if it sounds bitter); I'm just curious as to whether people think functional or functional/imperative languages will become mainstream. Edit: By mainstream, I mean attracting a number of programmers comparable to, say, Python, Java or C#.

    Read the article

  • Is this possible in Java or any other programming language?

    - by drake
        public abstract class Master {
            public void printForAllMethodsInSubClass() {
                System.out.println("Printing before subclass method executes");
            }
        }

        public class Owner extends Master {
            public void printSomething() {
                System.out.println("This printed from Owner");
            }

            public int returnSomeCals() {
                return 5 + 5;
            }
        }

    Without messing with the methods of the subclass... is it possible to execute printForAllMethodsInSubClass() before each method of the subclass gets executed?
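
    For reference, one standard way to run code before every method of a class without editing each method is a JDK dynamic proxy; AOP tools such as AspectJ achieve the same effect without the interface requirement. A minimal sketch, assuming Owner is refactored behind an interface:

        import java.lang.reflect.Proxy;

        interface Owner {
            void printSomething();
            int returnSomeCals();
        }

        class OwnerImpl implements Owner {
            public void printSomething() { System.out.println("This printed from Owner"); }
            public int returnSomeCals() { return 5 + 5; }
        }

        public class ProxyDemo {
            public static void main(String[] args) {
                Owner raw = new OwnerImpl();
                // The invocation handler intercepts every call on the proxy,
                // runs the "before" logic, then delegates to the real object.
                Owner proxied = (Owner) Proxy.newProxyInstance(
                        Owner.class.getClassLoader(),
                        new Class<?>[] { Owner.class },
                        (proxy, method, methodArgs) -> {
                            System.out.println("Printing before subclass method executes");
                            return method.invoke(raw, methodArgs);
                        });
                proxied.printSomething();
                System.out.println(proxied.returnSomeCals());
            }
        }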

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming; it's worth a look. I would like to hear opinions about these points from programmers experienced in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer; some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a,b,c) have not been introduced. Actually, as the authors admit, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)
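
    For reference, practices 5a and 5c are small enough to show in a few lines. A minimal sketch in Java, purely for illustration; MATLAB's assert and its unit-testing framework play the same roles:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class DefensiveAnalysisTest {
            // Practice 5a: assert preconditions so bad data fails loudly at the
            // call site instead of silently corrupting results downstream.
            // (Run the JVM with -ea to enable assertions.)
            static double meanOfPositives(double[] values) {
                assert values != null && values.length > 0 : "empty input";
                double sum = 0;
                for (double v : values) {
                    assert v > 0 : "non-positive value: " + v;
                    sum += v;
                }
                return sum / values.length;
            }

            // Practice 5c: turn a bug into a test case, so the same mistake
            // cannot quietly come back after the next refactoring.
            @Test
            public void meanOfSingleElementIsThatElement() {
                assertEquals(2.5, meanOfPositives(new double[]{2.5}), 1e-12);
            }
        }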

    Read the article

  • What Functional features are worth a little OOP confusion for the benefits they bring?

    - by bonomo
    After learning functional programming in Haskell and F#, the OOP paradigm seems ass-backwards with classes, interfaces and objects. Which aspects of FP can I bring to work that my co-workers can understand? Are any FP styles worth talking to my boss about, so we can retrain the team to use them? Possible aspects of FP:

    - Immutability
    - Partial Application and Currying
    - First-Class Functions (function pointers / functional objects / strategy pattern)
    - Lazy Evaluation (and monads)
    - Pure Functions (no side effects)
    - Expressions (vs. statements: each line of code produces a value instead of, or in addition to, causing side effects)
    - Recursion
    - Pattern Matching

    Is it a free-for-all where we can do whatever the programming language supports, to the limit the language supports it? Or is there a better guideline?
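
    For reference, several of the items above map directly onto plain Java, which a typical team already knows: first-class functions, currying via nested lambdas, and pure expressions over immutable data. A minimal sketch, assuming Java 16+ for Stream.toList():

        import java.util.List;
        import java.util.function.Function;

        public class FpInJava {
            public static void main(String[] args) {
                // First-class functions: behavior passed around as a value
                // (the strategy pattern in one line).
                Function<Integer, Integer> square = x -> x * x;

                // Currying: a two-argument function as a chain of one-argument functions.
                Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;
                Function<Integer, Integer> addFive = add.apply(5);

                // Pure expressions over immutable data: no loops, no mutation.
                List<Integer> squares = List.of(1, 2, 3).stream().map(square).toList();

                System.out.println(addFive.apply(3)); // 8
                System.out.println(squares);          // [1, 4, 9]
            }
        }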

    Read the article
