Search Results

Search found 13534 results on 542 pages for 'gpu programming'.

Page 3/542

  • Recommendations for Open Source Parallel programming IDE

    - by Andrew Bolster
    What are the best IDEs, IDE plugins, tools, etc. for programming with CUDA / MPI etc.? I've been working in these frameworks for a short while, but I feel like the IDE could be doing more heavy lifting in terms of scaling and job-processing interactions. (I usually use Eclipse or NetBeans, usually in C/C++ with occasional Java; it's a vague question, but I can't think of any more specific way to put it.)

    Read the article

  • Does your programming knowledge decrease if you don't practice?

    - by Codereview
    I'm a beginner programmer studying languages such as C/C++/Python and Java (mainly focused on C++). I'm what you'd call "young and inexperienced", and I admit that because I can't claim otherwise. As a student I have many other problems besides programming. I practice programming as often as I can, especially because my teacher gives me a lot more exercises than the rest of the class (it's at a very low level), but oftentimes I spend weeks doing something else, such as school projects, sports, or travelling; anything besides programming. Don't get me wrong though, I love programming: I love to build functional code, to watch a program come alive at the push of a button, and to learn as much as I can. I simply don't have much time for it. Straight to the question, now: does your programming knowledge decrease as time passes if you don't practice? You may ask, "How much time do you mean?" I don't mean a specific amount of time, but for reference you could take a month or two, or even a year, as an example. By knowledge I mean anything, from syntax to language functionality.

    Read the article

  • Programming Constructs History

    - by kunjaan
    I need some help figuring out which language introduced the constructs that we use every day. For example, constructs introduced by Lisp: If-else block: "The ubiquitous if-then-else structure, now taken for granted as an essential element of any programming language, was invented by McCarthy for use in Lisp, where it saw its first appearance in a more general form (the cond structure). It was inherited by Algol, which popularized it." (Wikipedia) Function type: functions as first-class citizens. Garbage collection.

    Read the article

  • What is the difference between declarative and imperative programming?

    - by Brad
    I have been searching the web looking for a definition of declarative and imperative programming that would shed some light for me. However, the language used at some of the resources I have found is daunting, for instance at Wikipedia. Does anyone have a real-world example they could show me that might bring some perspective to this subject, perhaps in C#? Thanks.
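
    A minimal sketch of the contrast (in C++ rather than the requested C#, but the idea carries over; the names are illustrative): the imperative version spells out how to compute the result step by step, while the declarative version states what is wanted and leaves the iteration to the library.

        #include <algorithm>
        #include <iostream>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<int> nums{1, 2, 3, 4, 5, 6};

            // Imperative: describe HOW to get the result, step by step.
            int sumOfEvens = 0;
            for (std::size_t i = 0; i < nums.size(); ++i) {
                if (nums[i] % 2 == 0) {
                    sumOfEvens += nums[i];
                }
            }

            // Declarative: describe WHAT you want; the iteration is
            // the library's job, not yours.
            int sumOfEvens2 = std::accumulate(nums.begin(), nums.end(), 0,
                [](int acc, int n) { return n % 2 == 0 ? acc + n : acc; });

            std::cout << sumOfEvens << " " << sumOfEvens2 << "\n";  // 12 12
        }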

    Read the article

  • Overclocked GPU quantum problem

    - by Thrawn
    Hi all, I overclocked my nVidia GPU, and now it is much faster, but beyond a ~40% overclock I start getting "mistakes" on the screen: wrongly coloured pixels, glitches and the like. Temperature is still within limits, as I added extra coolers. So my question is: is this a permanent problem that is damaging the GPU, or is it only related to the intrinsic quantum error rate of processing calculations? Thanks for your opinion :-)

    Read the article

  • Reinitialize GPU on RADEON HD 7970 under linux

    - by user1610662
    I have a RADEON HD 7970 Sapphire on Debian Squeeze. Since I often use it to run GPU code, the performance sometimes decreases sharply; testing with the "glxgears" tool, I get only 20 FPS in fullscreen. So I would like to be able to reinitialize the GPU without rebooting the system. I know the "clinfo" tool, which displays the features of the graphics card. Is there a tool that allows this reinitialization?

    Read the article

  • What's the difference between a GPU and a frame grabber?

    - by user261002
    I am working on a project to monitor whether human tissue has been fused with radio frequency during surgery. We are using a very fast camera (1800 fps), laser illumination on the tissue, and a frame grabber (1 GB memory). I have noticed that, instead of a frame grabber, I could use a GPU as well, but I am not sure what the difference between them is. Can anybody explain the difference between a frame grabber and a GPU?

    Read the article

  • Tellago && Tellago Studios 2010

    - by gsusx
    With 2011 around the corner, we at Tellago and Tellago Studios have been spending a lot of time evaluating our successes and failures (yes, those too ;)) of 2010 and delineating some of our goals and strategies for 2011. When I look at 2010, here are some of the things that quickly jump off the page: growing Tellago by 300%; launching a brand new company, Tellago Studios; expanding our customer base; establishing our business intelligence practice http://tellago.com/what-we-say/events/business-intelligence...(read more)

    Read the article

  • Is there a "golden ratio" in coding?

    - by badallen
    My coworkers and I often come up with silly ideas, such as adding entries to Urban Dictionary that are inappropriate but make complete sense if you are a developer, or making rap songs about delegates, reflection or closures in JS... Anyhow, here is what I brought up this afternoon, which was immediately dismissed as a stupid idea, so I want to see if I can get redemption here. My idea is coming up with a golden ratio (or something in the neighborhood of one) between the number of classes per project versus the number of methods/functions per class versus the number of lines per method/function. I know this is silly and borderline, if not completely, useless, but just think of all the legacy methods or classes you have encountered that are absolutely horrid, like methods with 10000 lines or classes with 10000 methods. So, a golden ratio, anyone? :)

    Read the article

  • When/why (if ever) should I think about doing generic programming/metaprogramming?

    - by hotadvice
    Hi there. To me, OOP and design patterns make sense, and I have been able to apply them practically. But when it comes to generic programming / metaprogramming of the Modern C++ kind, I am left confused. Is it a new programming/design paradigm? Is it just limited to library development? If not, what design/coding situations call for using metaprogramming/generic programming? Does using templates mean I am doing generic programming? I have googled a lot on this topic but do not grasp the big picture fully. Also see this post. From the discussions here, up until now, I am fairly sure (though I might still be wrong) that: a) generic programming and metaprogramming are two different concepts.
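
    One way to see that distinction, as a minimal C++ sketch (illustrative names, not a definitive taxonomy): the function template below is generic programming, one algorithm written against a to-be-specified type; the Factorial template underneath is metaprogramming, a computation the compiler itself performs by instantiating templates.

        #include <iostream>

        // Generic programming: one algorithm, parameterized over a type T.
        // Works for any T supporting operator< (int, double, std::string, ...).
        template <typename T>
        const T& maxOf(const T& a, const T& b) {
            return (a < b) ? b : a;
        }

        // Template metaprogramming: the compiler computes 5! at compile
        // time by recursively instantiating Factorial<N>.
        template <unsigned N>
        struct Factorial {
            static const unsigned value = N * Factorial<N - 1>::value;
        };
        template <>
        struct Factorial<0> {
            static const unsigned value = 1;
        };

        int main() {
            std::cout << maxOf(3, 7) << "\n";          // instantiated for int
            std::cout << maxOf(2.5, 1.5) << "\n";      // instantiated for double
            std::cout << Factorial<5>::value << "\n";  // 120, known at compile time
        }

    So yes: using templates to write type-parameterized algorithms is generic programming, while using templates to compute at compile time is metaprogramming; the same language feature serves both.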

    Read the article

  • Is the “jQuery programming style” a kind of Reactive programming?

    - by Peter Krauss
    jQuery is a Javascript library and framework, but when we program with jQuery on DOM problems/solutions, we can practice a style quite different from ordinary programming... We can read about jQuery at Wikipedia: "The set of jQuery core features — DOM element selections, traversal and manipulation —, enabled by its selector engine (...), created a new 'programming style', fusing algorithms and DOM data structures." This question is similar to "subquestion 3" of this question, but not so generic. The focus here is on this new kind of "programming style"... So, the question: is the "jQuery programming style in the DOM context" a new paradigm? Or is it one more example of reactive programming (not "cell-oriented" but "DOM-node-oriented"), or something else? We have no "standard taxonomy of paradigms", so please, in your answer, also indicate your best choice of Wikipedia paradigm. Example: if you understand "jQuery programming on the DOM" to be like "awk filtering data", your choice could be event-driven.
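
    For concreteness, a toy sketch of the style the excerpt describes, in C++ rather than Javascript (all names hypothetical): select a set of nodes once, then chain operations that implicitly loop over the whole set. Whether that counts as a separate paradigm is exactly what the question asks.

        #include <iostream>
        #include <string>
        #include <vector>

        // A toy "node" and a jQuery-like wrapper: select a SET of nodes,
        // then chain operations that implicitly loop over the whole set.
        struct Node { std::string tag, cls, text; bool hidden = false; };

        class Selection {
            std::vector<Node*> nodes;
        public:
            explicit Selection(std::vector<Node*> ns) : nodes(std::move(ns)) {}

            // Each method applies to EVERY selected node and returns the
            // selection itself, so calls can be chained jQuery-style.
            Selection& addClass(const std::string& c) {
                for (Node* n : nodes) n->cls += " " + c;
                return *this;
            }
            Selection& text(const std::string& t) {
                for (Node* n : nodes) n->text = t;
                return *this;
            }
            Selection& hide() {
                for (Node* n : nodes) n->hidden = true;
                return *this;
            }
        };

        // A stand-in for $("p"): select every node with a given tag.
        Selection select(std::vector<Node>& dom, const std::string& tag) {
            std::vector<Node*> hits;
            for (Node& n : dom)
                if (n.tag == tag) hits.push_back(&n);
            return Selection(hits);
        }

        int main() {
            std::vector<Node> dom{{"p", "", ""}, {"div", "", ""}, {"p", "", ""}};
            // Reads like $("p").addClass("note").text("hi").hide();
            select(dom, "p").addClass("note").text("hi").hide();
            std::cout << dom[0].text << (dom[0].hidden ? " (hidden)" : "") << "\n";
        }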

    Read the article

  • What exactly is system programming?

    - by kentjh
    I have never understood what system programming means. The usual definition given is "...doing something close to the OS or extending OS features...". Does using the Windows API directly, rather than some library, to, say, do file I/O make it system programming? Was writing the Android OS system programming? If I write something that exposes the Linux kernel through a console-like app on Android, am I doing system programming? If I am writing software to control a washing machine, am I doing system programming? I am a beginner in programming and this is confusing me to no end. Please explain it by contrasting it with "application programming".
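
    A minimal sketch of the contrast on a POSIX system (illustrative, not a definition): the first function talks to the kernel directly through the open/read/close system calls, which is closer to system programming; the second goes through the portable standard library, which wraps those same calls, which is ordinary application programming.

        #include <fcntl.h>    // open
        #include <unistd.h>   // read, close
        #include <fstream>
        #include <iostream>

        // "Close to the OS": raw POSIX system calls, no library buffering.
        long countBytesSyscall(const char* path) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) return -1;
            char buf[4096];
            long total = 0;
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0) total += n;
            close(fd);
            return total;
        }

        // Application programming: the standard library hides the system calls.
        long countBytesLibrary(const char* path) {
            std::ifstream in(path, std::ios::binary);
            if (!in) return -1;
            in.seekg(0, std::ios::end);
            return static_cast<long>(in.tellg());
        }

        int main() {
            std::cout << countBytesSyscall("/etc/hostname") << "\n";
            std::cout << countBytesLibrary("/etc/hostname") << "\n";
        }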

    Read the article

  • Which is the next dominant programming paradigm? [closed]

    - by Kugathasan Abimaran
    What will the next dominant programming paradigm be if OOP loses its hold on the market, or will OOP be around forever? What is your advice for future developers? Which paradigm should we be aware of? I ask because, before OOP, the structured programming paradigm was there with C. Please don't close this, because I need to know which paradigm has the ability to withstand the future: aspect-oriented programming, declarative programming, functional programming, object-oriented programming, or any others? This describes programming paradigms according to their kernel language.

    Read the article

  • Intro to GPU programming

    - by Adam Davis
    Everyone has a huge, massively parallel supercomputer on their desktop in the form of a graphics card GPU. What is the "hello world" equivalent of the GPU community? What do I do, and where do I go, to get started programming the GPU for the major GPU vendors? -Adam
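
    For NVidia hardware, the usual "hello world" is a vector addition written in CUDA; a minimal sketch follows (compile with nvcc, error handling omitted for brevity). AMD's Stream SDK and the vendor-neutral OpenCL offer close equivalents.

        #include <cstdio>

        // A kernel: this function runs on the GPU, once per thread.
        __global__ void add(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1024;
            float ha[n], hb[n], hc[n];
            for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2.0f * i; }

            // Allocate GPU memory and copy the inputs over.
            float *da, *db, *dc;
            cudaMalloc(&da, n * sizeof(float));
            cudaMalloc(&db, n * sizeof(float));
            cudaMalloc(&dc, n * sizeof(float));
            cudaMemcpy(da, ha, n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(db, hb, n * sizeof(float), cudaMemcpyHostToDevice);

            // Launch 4 blocks of 256 threads: one thread per element.
            add<<<4, 256>>>(da, db, dc, n);

            cudaMemcpy(hc, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
            printf("hc[10] = %f (expect 30.0)\n", hc[10]);

            cudaFree(da); cudaFree(db); cudaFree(dc);
            return 0;
        }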

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and since they are Turing complete, this must be doable too. I learned C, Java and Python, and I use assignments a lot in every program I write, so it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on them. The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance with set!. To me it looks a lot like a class method changing the class member balance. As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
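
    The short answer is that pure languages return new values instead of overwriting old ones; the "current" balance is whichever value you choose to pass along. A minimal sketch of the book's account translated into that style, in C++ for concreteness (illustrative only, not how Scheme itself works):

        #include <iostream>

        // No assignment to shared state: withdraw takes the current balance
        // and returns the NEW balance, leaving the old value untouched.
        struct Account { double balance; };

        Account withdraw(Account acct, double amount) {
            if (acct.balance >= amount)
                return Account{acct.balance - amount};  // a fresh value
            std::cout << "Insufficient funds\n";
            return acct;  // old state, unchanged
        }

        int main() {
            Account a{100.0};
            // Each step's output becomes the next step's input: state is
            // threaded through arguments and results, never mutated.
            Account b = withdraw(a, 25.0);   // b.balance == 75
            Account c = withdraw(b, 50.0);   // c.balance == 25
            Account d = withdraw(c, 60.0);   // insufficient; d.balance == 25

            std::cout << a.balance << " " << d.balance << "\n";  // 100 25
        }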

    Read the article

  • Overclocking a nVIDIA GTX 660M GPU?

    - by heron1000
    I have an MSI GE60 0ND laptop with a GTX 660M GPU. When I play games like Minecraft or Portal 2, the core clock is stable at 835 MHz. Recently I tried to overclock it using MSI Afterburner, but it wouldn't let me change the voltages or the clock speed no matter what I tried. Various Google searches yielded solutions, none of which worked. Is there any way I can overclock the GPU? Further info: I have the nVidia 310.70 drivers and Windows 8.

    Read the article

  • I want to upgrade my GPU to an MSI NVIDIA N630GT-MD4GD3 4 GB DDR3 graphics card

    - by jatin singh
    Hello everybody, this is my first post here. I want to know my motherboard's PCI Express version. As I don't have 10 reputation, I am providing an image link here: http://i.stack.imgur.com/6PWV9.png I have a PCI Express slot and I want to upgrade to an MSI NVIDIA N630GT-MD4GD3 4 GB DDR3 graphics card, which uses PCI Express version 2. A shopkeeper in my city said that we have to check whether my board's PCI Express slot is compatible with that GPU, but I want to purchase it from Flipkart because they sell it for less money (sorry for the bad English :/). I emailed my computer's manufacturer (Acer), but they didn't reply to any of my mails, so my friend told me about Super User.

    Read the article

  • Chipset GPU causes a massive slowdown

    - by zyboxenterprises
    My AMD Radeon HD 7700 recently broke (the fan stopped working and the GPU overheated), and now I'm running on internal chipset graphics, which causes a massive slowdown of the whole PC. I've changed the graphics memory from 32 MB (the minimum) to 256 MB (the highest), and it hasn't made any difference whatsoever. I'm using Windows Aero, and disabling it should have made a small difference, but it didn't; the whole PC is still slow. I know it's not the computer build, because I built it myself, and it was a lot faster with the AMD Radeon HD 7700 in it, which is why I believe the internal chipset graphics are causing the problem. Is this behavior normal? I don't have the cash right now to go out and buy a new dedicated GPU. I'm using an ASRock N68C-GS FX motherboard with an AMD FX 4100 (overclocked to 4.3 GHz) and 4 GB RAM. The overclock was an attempt to resolve this issue; it is not the cause of the slowdown the integrated graphics produce.

    Read the article

  • How to resolve CGDirectDisplayID changing issues on newer multi-GPU Apple laptops in Core Foundation

    - by Dave Gallagher
    In Mac OS X, every display gets a unique CGDirectDisplayID number assigned to it. You can use CGGetActiveDisplayList() or [NSScreen screens] to access them, among others. Per Apple's docs: "A display ID can persist across processes and system reboot, and typically remains constant as long as certain display parameters do not change."
    On newer mid-2010 MacBook Pros, Apple started using auto-switching Intel/nVidia graphics. These laptops have two GPUs: a low-powered Intel and a high-powered nVidia. Previous dual-GPU laptops (2009 models) didn't have auto-GPU switching, and required the user to make a settings change, log off, and then log on again to make a GPU switch occur. Even older systems had only one GPU.
    There's an issue with the mid-2010 models where CGDirectDisplayIDs don't remain the same when a display switches from one GPU to the next. For example:
    1. Laptop powers on. Built-in LCD screen is driven by the Intel chipset. Display ID: 30002
    2. External display is plugged in. Built-in LCD screen switches to the nVidia chipset. Its display ID changes: 30004
    3. External display is driven by the nVidia chipset. (At this point, the Intel chipset is dormant.)
    4. User unplugs the external display. Built-in LCD screen switches back to the Intel chipset. Its display ID changes back to the original: 30002
    My question is: how can I match an old display ID to a new display ID when they change due to a GPU switch?
    Thought about: I've noticed that the display ID only changes by 2, but I don't have enough test Macs available to determine if this is common to all new MacBook Pros, or just mine. Kind of a kludge if "just check for display IDs which are +/-2 from one another" works, anyway.
    Tried: CGDisplayRegisterReconfigurationCallback(), which notifies before and after displays are going to change, has no matching logic. Putting something like this inside a method registered with it doesn't work:

        // Run before display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef oldInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        // ...display settings change...

        // Run after display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef newInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        BOOL match = IODisplayMatchDictionaries(oldInfoDict, newInfoDict, 0);
        if (match)
            NSLog(@"Displays are a match");
        else
            NSLog(@"Displays are not a match");

    What's happening above is that I'm caching oldInfoDict before the display settings change, letting them change, and then comparing it to newInfoDict using IODisplayMatchDictionaries(), which will say either "yes, both displays are the same!" or "no, both displays are not the same." Unfortunately, it does not return YES if the GPU has changed for a monitor. An example of the dictionaries it's comparing:

        // oldInfoDict (Display ID: 30002)
        oldInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/IGPU@2/AppleIntelFramebuffer/display0/AppleBacklightDisplay";
        }

        // newInfoDict (Display ID: 30004)
        newInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/P0P2@1/IOPCI2PCIBridge/GFX0@0/NVDA,Display-A@0/NVDA/display0/AppleBacklightDisplay";
        }

    As you can see, the IODisplayLocation key changes when GPUs are switched, hence IODisplayMatchDictionaries() doesn't work. I can, theoretically, compare just the DisplayProductID and DisplayVendorID keys, but I'm writing end-user software, and am worried about situations where users have two or more identical monitors plugged in. Any help is greatly appreciated! :)

    Read the article

  • Port scientific software to GPU and publish it

    - by Werner
    Hi, let's say that I am a physicist and that I am the master of the universe when it comes to porting already existing software to GPUs with 100x or greater speedups. Let's say that I find that some other scientist, who does not know how to program GPUs, publishes on his/her website the open-source code of a physical simulation program in the field I am expert in. Let's say that I realize "I can port that code to GPU" and I suggest it to him, but he shows no interest. My interest here is 1) to port it to GPU, and 2) to publish this result in a scientific journal related to physics and/or computer science. My questions for you: 1. Would you proceed to port the code to GPU (or another new architecture) and publish it? 2. How would you do it, and which journal do you suggest? Thanks

    Read the article

  • Should functional programming be taught before imperative programming?

    - by Zifre
    It seems to me that functional programming is a great thing. It eliminates state and makes it much easier to make code run in parallel automatically. Many programmers who were first taught imperative programming styles find it very difficult to learn functional programming, because it is so different. I began to wonder whether programmers who were taught functional programming first would find it hard to begin imperative programming. It seems like it would not be as hard as the other way around, so I thought it would be a good thing if more programmers were taught functional programming first. So, my question is: should functional programming be taught in school before imperative programming, and if so, why is it not more common to start with it?

    Read the article

  • How to join the World of Programming? [closed]

    - by litebread
    Name's Vlad and I am currently in my third year of community college, studying computer science with an emphasis on programming in C++ and networking. I have completed a few programming courses with general ease, but have not gained an advanced understanding of programming through school. None of my friends are serious programmers working in the industry. Being an active lurker on many programming and general tech-oriented sites, I have noticed how little I know about the industry and its lingo and terminology. (I have no clue how GitHub works, but I generally understand what it's for.) So I am looking for help on where I should look for information on the programming world and the industry I am very interested in. By that I mean: what sites should I use to gain information on programming practices, introductions to advanced C++, and resources that simply bring a twenty-something programming noob up to speed? I like programming, but I haven't dug my hands deep into it yet, and I want to start to do so before I transfer to a university. All in all, where do I find information on becoming an actual programmer (information that lays out a path)? Thank you for reading. Have a great day!

    Read the article

  • ATI GPU (video accel, decode, encode, ATI Stream, DXVA)

    - by Shiki
    Okay, it's a long question title for sure. I'm looking for a new video card (yes, SU is not a page for that, but wait). I've been a loyal NVidia customer ever since, now using an 8600GTS. Old but still somewhat good, though a bit slow. I want an upgrade because the 8600GTS won't support better VDPAU and new features. I checked the prices and the documents; I would need a GTX260 card, which costs... well... a lot. ATI performs much better at that price (at least it outperforms the GTX260 on every test). However, as far as I know there is no GPU acceleration with ATI; the only thing you can use is DXVA, no other method. Could you correct me on that? Will there be GPU acceleration for ATI as well, or is one available already? (DXVA is not bad, but kind of slow compared to NVidia's CUDA.) What about OpenCL? How does ATI support that? (I'm talking about the ATI 5850 card at the minute; I would buy that instead of the NVidia.)

    Read the article

  • What could be my path? Networking, programming, or something else?

    - by momong
    Well, first and foremost, I would like to give my brief description: I was an aviation student, but I didn't pursue that path because I lost my interest. Now I'm an I.T. student who has currently stopped schooling because of confusion. I don't know which path I should choose: programming or networking? Someone told me that in networking the money is easy and the job is easy; others told me that programming is best suited for me because I'm very skilled and excellent with figures. I want to choose networking, but I can't find my passion for it; my mind tells me to, but my heart doesn't... And with programming, I don't know which language I should pick, or whether I'd like it or not. A good mentor, even if only online, would be a very big plus for me, but I don't think there are many who could spend their time teaching a nobody... though I'm very eager to learn. My real passion is gaming! I want to work in the gaming industry; I want to be one of the people behind those games! I've been a gaming freak since birth. But I don't know how to get into that industry. I don't know what to do. I don't know which path would really suit me. Sorry if some of you find this a pointless question, but please bear with me; this could be the turning point of my life.

    Read the article

  • First languages with generic programming support

    - by oluies
    Which was the first language with generic programming support, and which was the first major (widely used) statically typed language with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": "First-order parametric polymorphism is now a standard element of statically typed programming languages. Starting with System F [20,42] and functional programming languages, the constructs have found their way into mainstream languages such as Java and C#. In these languages, first-order parametric polymorphism is usually called generics." (from "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky) "Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983..." (from Wikipedia, Generic Programming)

    Read the article
