Search Results

Search found 13738 results on 550 pages for 'compilation errors'.

  • AMD FX CPU drivers/patch

    - by Mubeen Shahid
    I have been using Ubuntu since 2009 on notebooks with Intel CPUs. Now that I am using AMD's FX 6300, I am interested in knowing whether Ubuntu offers anything (specifically kernel enhancements/drivers/patches) for AMD's FX "family 15h" Piledriver CPUs. Reason: I would like a kernel that uses the hardware to its full capacity and can use the latest instruction sets, for maximum performance. I did some tests, starting with compiling stable 3.9.7 on my 12.04 LTS box; during configuration I chose processor vendor AMD (unchecked Intel/VIA/etc.), and when I booted Ubuntu with this compiled kernel, in "System Settings - Additional Drivers" I found that, in addition to the graphics card drivers, there were also AMD family 15h drivers. However, I would prefer something in this regard tested/signed by the Ubuntu developers. P.S.: 1) The kernel that I compiled has some issues with the Nvidia graphics drivers, so I deleted kernel 3.9.7 and installed a signed 3.8.xx from the Ubuntu repositories. 2) In case somebody is planning to advise me to install "AMD64": I am not talking about AMD64, which refers to the 64-bit platform.
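
    One quick way to confirm what the running kernel actually sees (a minimal sketch; family 15h is reported as the decimal value 21 in /proc/cpuinfo):
      $ grep -m1 'cpu family' /proc/cpuinfo   # family 15h prints as the decimal value 21
      $ grep -m1 'flags' /proc/cpuinfo        # lists the instruction set extensions the kernel exposes (e.g. avx, fma)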

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment. But with the release of OJWC (CLDC) and OJEC (CDC) last year that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor requiring a slow startup time and seamless upgrades, in a cost-sensitive hardware environment, anywhere from 3.5 MB to 8 MB. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VOIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of Connected Device Configuration (CDC v1.1, JSR-218), Foundation Profile (FP v1.1, JSR-219), and Personal Basis Profile (PBP v1.1, JSR-217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR-927). New to this release is support for the XML API (JSR 280) and a number of bug fixes and performance enhancements, including improved Just-in-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past I would encourage you to look into the updated version, OJEC 1.1.

    Read the article

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Just today (as of the date of this post) I did a sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:
      Setting up crossplatformui (1.0.38) ...
       * Stopping ACPI services... [ OK ]
       * Starting ACPI services... [ OK ]
      package libqtgui4 exist QT_VERSION = 4
      make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
      make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
        CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
      /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
       #include <linux/smp_lock.h>
                                  ^
      compilation terminated.
      make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
      make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
      make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
      make: *** [modules] Error 2
      dpkg: error processing package crossplatformui (--configure):
       subprocess installed post-installation script returned error exit status 2
      Errors were encountered while processing:
       crossplatformui
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    From then on, whatever apt-get command I use (so far as I know, except apt-get update) I keep getting the above error at the end of the process. But whichever apt-get command I use does what it has to without fail. (For example, I tried installing blender with sudo apt-get install blender and it installed fine, though it showed the above error.) After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29 via the Software Updater), but even now the issue persists. How do I solve this issue?
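
    For context, the failing script belongs to the crossplatformui package (the ZTE modem dashboard, judging by the ztemtApp paths in the log), not to apt itself. A hedged sketch of one way to stop it from re-running on every apt operation, assuming the modem tool is not needed:
      $ sudo apt-get purge crossplatformui   # remove the package whose post-install script keeps failing
      $ sudo apt-get install -f              # let dpkg finish configuring anything left half-installed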

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server side scripting languages over client side languages when it comes to creating web sites that are more compelling than a standard static site. Server side scripts are scripts that are executed on a web server during the compilation of the data to return to a client. These scripts allow developers to modify the content that is being sent to the user prior to the return of the data, as well as store information about the user. In addition, server side scripts allow for a controllable environment in which they can be executed. The same cannot be said for client side languages, because the developer cannot control the users' environment the way they can control a web server. Some users may turn off client scripts, some may only allow limited access on the system, and others may be able to gain full control of the environment. I have been developing web applications for over 9 years, and I have used server side languages for most of the applications I have built. Here is a list of common things I have developed with server side scripts.
    List of Common Generic Functionality:
      Send Email
      FTP Files
      Security/Access Control
      Encryption
      URL rewriting
      Data Access
      Data Creation
      I/O Access
    The one important feature server side languages will help me with on my website is Data Access, because my component will be backed by a SQL Server database. I believe that form validation is one instance where I might see server-side scripts and JavaScript used interchangeably, because it does not matter how or where the data is validated as long as the data that gets inserted is valid. However, I would have to say that my personal experience would sway me in deciding what type of language to use for form validation, because both have advantages and disadvantages based on each situation.

    Read the article

  • GCC: assembly listing for IA64 without an Itanium machine

    - by KD04
    I need to try the following thing: I would like to compile some simple C code samples and see the assembly listing generated by GCC for the IA64 architecture, i.e. I just want to run GCC with the -S switch and see the resultant .s file. I don't have an Itanium machine, so in order to do it myself I'll probably need a cross-compiling version of GCC built for x86 RedHat. I'm not interested in full cross-compilation, meaning that I don't need to generate the binaries at all. The easiest way, of course, would be to find an Itanium machine with GCC and just try it there. Unfortunately, I don't seem to have access to any. Another option is to build a cross-compiling version of GCC on my RedHat box, but apparently that's quite an endeavor for someone who hasn't done it before (I assume that the fact that I only need .s output doesn't make it simpler). What other options are there, if any? Maybe there's some sort of a web front end to an Itanium GCC compiler on the Net (something like Comeau Online or ideone.com, but with .s output)? Anything else? I would appreciate any help.
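
    For illustration, once any IA-64-targeting GCC is on the PATH, producing only the listing looks something like the following (the ia64-unknown-linux-gnu prefix is a placeholder; the exact triplet depends on how the cross compiler was built):
      $ ia64-unknown-linux-gnu-gcc -S -O2 sample.c -o sample.s   # stop after generating assembly; no linking, no target libc needed
      $ less sample.s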

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and prompts the user to compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually, the dialog of VMware Kernel Module Updater pops up, asks for root access authentication, and completes the compilation. VMware Player or Workstation checks if modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as super user (root) the dialog stalls like so: [screenshot: VMware Kernel Module Updater stalls while stopping the services]. Prior to version 5.x of VMware Player or version 7.x of VMware Workstation you would run a command like:
      $ sudo vmware-config.pl
    to resolve the module version conflict, but this doesn't work anymore. Solution: Instead, you have to execute the following line in a terminal or console window:
      $ sudo vmware-modconfig --console --install-all
    Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware has already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411). Update: As of today I had the first kernel upgrade, to version 3.8.0-22, in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include: new enterprise-focused administration features available in Oracle Java SE Advanced; products offering greater control of Java version compatibility; security updates; and a very useful new feature, the MSI compatible installer. Download | Release Notes | Java SE 8 Documentation. New tools, features and enhancements highlighted in JDK 8 Update 20 are:
      Advanced Management Console - The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise and for ISVs with Java-based applications and solutions.
      MSI Enterprise JRE Installer - Available for Windows 64- and 32-bit systems in the Oracle Java SE Advanced products, the MSI compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements.
      Performance - String de-duplication resulting in a reduced footprint, and improved support in G1 Garbage Collection for long running apps.
      A new 'force' feature in DRS (Deployment Rule Set) which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications, so end users don't need to approve security exceptions to run them.
      Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4
      JavaFX on ARM
      Nashorn performance improvement by persisting bytecode after initial compilation
    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons?

    - by Oak
    This is a dilemma about which I have been thinking for quite a while. I'm a graduate student and my topics of interest are programming language design, code analysis, compilation, etc. So far this field has been very interesting and rewarding for me, so I was thinking about finding a job in it and continuing to specialize in it. I feel like it's a relatively solid field which won't "get out of style" anytime soon. I've always thought that in such complex fields it's better to be a real expert than just another guy who superficially understands what the experts are talking about. On the other hand, I feel that by specializing this way I really limit my future options. I have always been a strong believer in multidisciplinary approaches to problems. Maybe I should go search for a general programming job in which I could gain experience in other fields, as well as occasionally apply my favorite field to solving problems. Specializing in only one or two fields can prevent me from thinking outside the box and cause stagnation. I would really like to hear more opinions about this choice. The truth is I'm already leaning towards one of the choices, so basic psychology says nothing will change my mind, but I would still love to hear some feedback.

    Read the article

  • Understanding Application binary interface (ABI)

    - by Tim
    I am trying to understand the concept of an Application Binary Interface (ABI). From The Linux Kernel Primer: An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers. From Wikipedia: an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. I was wondering whether the ABI depends on both the instruction set and the OS. Are those two all that an ABI depends on? What kind of role does the ABI play in the different stages of compilation: preprocessing, conversion of code from C to Assembly, conversion of code from Assembly to machine code, and linking? From the first quote above, it seems to me that the ABI is needed only for the linking stage, not the other stages. Is that correct? When does the ABI need to be considered? Does the ABI need to be considered while programming in C, Assembly or other languages? If yes, how are an ABI and an API different? Or does it matter only to the linker or compiler? Is an ABI specified for/in machine code, Assembly language, and/or C?
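
    One concrete place to see ABI information is the header of a compiled object file, which records the word size, OS/ABI and target machine the code was built for. A minimal sketch using standard GNU tools (hypothetical file names):
      $ echo 'int add(int a, int b) { return a + b; }' > abi_demo.c
      $ gcc -c abi_demo.c -o abi_demo.o
      $ readelf -h abi_demo.o | grep -E 'Class|OS/ABI|Machine'   # e.g. ELF64, UNIX - System V, X86-64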

    Read the article

  • Function calls to calls in windows api

    - by Apeee
    I am a beginner learning C, and I find it hard to grasp the whole programming concept, so hopefully this will help clear up some things along the way. When programming on Windows, which is my aim for the time being, it is really hard for me to understand how Windows communicates with the programs that run on it. A question I have been pondering is: when you call a function that lives in another memory location on disk or in memory (not a function you wrote yourself and included in the compilation), especially a Windows API function, how does the compiler know where that function is located, so that the program can call it when it is run? For example, consider a very simple program that displays a window which reads hello world. You would have to call Windows API functions to achieve such features as creating the window, setting its size, colors and so on... So basically what I am struggling to grasp is how the programs I write communicate with the platform/framework they run on (generally Windows for the Windows API). Apart from clarification on the question above, I would love a resource that explains this concept further. Thanks for your time!
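
    In short, the compiler does not know the address: it records the call against a named external symbol, the linker matches that symbol to an import library (on Windows, the .lib that accompanies a DLL), and the loader fills in the real address when the program starts. A minimal sketch of the same mechanism using Linux tools (hypothetical file names; the Windows toolchain differs in detail but not in principle):
      $ printf 'int triple(int x) { return 3 * x; }\n' > mylib.c
      $ printf 'int triple(int x);\nint main(void) { return triple(14); }\n' > main.c
      $ gcc -shared -fPIC mylib.c -o libmylib.so   # build a small shared library
      $ gcc main.c -L. -lmylib -o demo             # the linker matches the unresolved symbol to libmylib.so
      $ nm -D demo | grep triple                   # shows 'U triple': still undefined, patched in by the loader at run time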

    Read the article

  • How do you maintain focus when a particular aspect of programming takes 10+ seconds to complete?

    - by Jer
    I have a very difficult time focusing on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally it seems that threshold is about 10 seconds (and I recall reading about a study that said the same thing, though I can't find it now). So what typically happens is I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed, and then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch of getting back into programming. It's not an exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious about what other programmers do to combat this tendency (or am I unique and they don't have this tendency?). Suggestions of any type at all are welcome - anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!"

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL 4.0 GLSL cookbook. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you can see from the text in the image, runs perfectly on the built-in Intel HD graphics of my processor, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics), and when I run the program on the graphics card, the OpenGL shader compilation fails. It fails on the following instruction: pos.y = sin.waveAmp * sin(u); giving the error: Error C1105: Cannot call a non-function. I know this error is coming from the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card. It runs fine with sin(u) on the Intel HD 3000 graphics. Also, if you notice, the program is almost unusable with the Intel HD 3000 graphics; I am getting only 9 FPS, which is not enough. It is too much load for the Intel HD 3000. So, is the sin(x) function not defined in the GLSL implementation shipped with the Nvidia drivers, or is it something else?

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that SELECT id, name, address FROM persons JOIN addresses ON persons.id = addresses.person_id; could be better written as / is better written than SELECT persons.id, persons.name, addresses.address FROM persons JOIN addresses ON persons.id = addresses.person_id; while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git you can revert to the first commit ever, and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history? Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
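
    For completeness, a rough sketch of the reformat-all-of-history approach (sql-format is a hypothetical formatter that rewrites files in place; git filter-branch rewrites every commit hash, so all collaborators have to re-clone or hard-reset afterwards):
      $ git filter-branch --tree-filter 'find . -name "*.sql" -exec sql-format --in-place {} +' -- --all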

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see: to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++); C++ type safety (when using modern C++) makes everything easier to get "correct"; it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code; potential gain in execution speed (but I don't think it will really be perceptible). However, I think there might be some drawbacks and I'm not sure of the real impact, as I haven't tried it yet: it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write) (compilation time shouldn't be a problem for this case); I must assume that all the text files I'll read as input are in UTF-8, and I'm not sure this can be easily checked at runtime in C++, since the language will not check it for you; libraries in C++ are harder to manage than in scripting languages; I lack experience and foresight, so maybe I'm missing advantages and drawbacks. So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • Acer Aspire One 725 - missing graphic card driver for Radeon HD 6290?

    - by Melon
    Recently I bought an Acer Aspire One 725 netbook and installed Ubuntu 12.10 on it. I bought it because it can run HD movies and has Full HD output on the external VGA port. However, movies from YouTube have a really slow framerate. If you open three tabs in Opera (for example Gmail, YouTube and Ask Ubuntu) it gets really laggy. My suspicion is that the graphics card driver is missing. When I check System->Details->Graphics, the driver is unknown. After running lspci | grep VGA I get this output: 00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Device 980a. From what I see, I have an AMD C-70 processor (or something similar) with integrated AMD Radeon HD 6290 graphics. Has anyone had the same problem? Do you know which drivers need to be installed for the graphics to work properly? On the official Acer page there are only drivers for Win7 and Win8... Update: I have tried installing fglrx but I get the following error (either I don't have the libraries or someone didn't make a clean build before release ;)
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c: In function ‘KCL_MEM_AllocLinearAddrInterval’:
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:2124:5: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration]
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:2124:13: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c: In function ‘kasInitExecutionLevels’:
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: error: ‘cpu_possible_map’ undeclared (first use in this function)
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: note: each undeclared identifier is reported only once for each function it appears in
      /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: warning: left-hand operand of comma expression has no effect [-Wunused-value]
    Update 2: After fixing the compilation errors, Ubuntu acts bizarre and unstable (no left icon panel, no top panel, cannot run any programs, I only see the desktop)

    Read the article

  • Error: kernel headers not found. (But they are in place)

    - by Guandalino
    I'm trying to install the Guest Additions in VirtualBox 4.04. The host OS is Ubuntu desktop 11.04 64-bit, the guest OS is Ubuntu server 11.10 64-bit.
      $ sudo ./VBoxLinuxAdditions.run
    After some output this line is printed: The headers for the current running kernel were not found. But the headers are installed, at least according to dpkg:
      $ dpkg --get-selections | grep linux-headers
      linux-headers-3.0.0-12            install
      linux-headers-3.0.0-12-server     install
      linux-headers-server              install
    The running kernel is:
      $ uname -a
      Linux foobar 3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    How do I fix things so that the Guest Additions installer is able to find the kernel headers? Update: added the full output.
      The headers for the current running kernel were not found. If the module compilation fails then this could be the reason.
      Building the main Guest Additions module ...done.
      Building the shared folder support module ...fail!
      (Look at /var/log/vboxadd-install.log to find out what went wrong)
      Installing the Window System drivers ...fails!
      (Could not find the X.Org or XFree86 Window System).
    I don't care about fail #2, because that's a server and I don't need an X server. But I do need shared folder support. Some further detail:
      $ tail /var/log/vboxadd-install.log
      ..........
      cc1: some warnings being treated as errors
      make[2]: *** [/tmp/vbox.0/vfsmod.o] Error 1
      make[1]: *** [_module_/tmp/vbox.0] Error 2
      make: *** [vboxsf] Error 2
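
    A sketch of the usual first check in this situation: confirm that the headers package matching the exact running kernel is present, then re-run the installer:
      $ sudo apt-get install linux-headers-$(uname -r)   # installs (or confirms) the headers for precisely this kernel
      $ sudo ./VBoxLinuxAdditions.run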

    Read the article

  • Run Grunt task in Visual Studio Release Build with a bat file

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/19/run-grunt-task-in-visual-studio-release-build-with-a.aspx
    1. Add a BeforeBuild target in your csproj file. Edit the XML with a text editor.
      <Target Name="BeforeBuild">
        <Exec Condition="'$(Configuration)' == 'Release'" Command="script-optimize.bat" />
      </Target>
    2. Create the script-optimize.bat
      REM "%~dp0" maps to the directory where this file exists
      cd %~dp0\..\YourProjectFolder
      call npm uninstall grunt
      call npm uninstall grunt
      call npm install --cache-min 604800 -g grunt-cli
      call npm install --cache-min 604800
      grunt typescript requirejs copy less:compile less:min
    This grunt command will compile TypeScript, run the requireJs optimizer, and compile and minimize the less.
    3. Make it use the minified code when the Web.config compilation debug is set to false
      <!-- These CustomCollectFiles actions are used so that the Scripts-Release folder/files are included
           when publishing even though they are not project references -->
      <Target Name="CustomCollectFiles">
        <ItemGroup>
          <_CustomFiles Include="Scripts-Release\**\*" />
        </ItemGroup>
      </Target>
    That should be all you need to get a Grunt task to minify and combine JS (plus other tasks) in a Visual Studio Release build with debug = false. This is a great video of Steve Sanderson talking about SPAs, npm, Knockout, Grunt, Gulp, etc. I highly recommend it.

    Read the article

  • F# Application Entry Point

    - by MarkPearl
    Up to now I have been looking at F# for modular solutions, but have never considered writing an end to end application. Today I was wondering how one would even start to write an end to end application and realized that I didn't even know where the entry point is for an F# application. After browsing MSDN a bit I got a basic example of an F# application with an entry point:
      [<EntryPoint>]
      let main args =
          printfn "Arguments passed to function : %A" args
          // Return 0. This indicates success.
          0
    Pretty simple stuff… but what happens when you have a few modules in a program – so I created an F# project with two modules and a main module as illustrated in the image below… When I try to compile my program I get a build error… A function labeled with the 'EntryPointAttribute' attribute must be the last declaration in the last file in the compilation sequence, and can only be used when compiling to a .exe… What does this mean? After some more reading I discovered that Program.fs needs to be the last file in the F# application – the order of the files in an F# solution is important. How do I move a source file up or down? I tried dragging the Program.fs file below ModuleB.fs but it wouldn't allow me to. Then I thought to right click on a source file and got the following menu. Voilà… to move a source file to the bottom of the solution you can select the "Move Up" or "Move Down" option. Now that I got this right I decided to put some code in ModuleA & ModuleB and I have the start of a basic application structure.
    ModuleA Code
      namespace MyApp
      module ModuleA =
          let PrintModuleA =
              printf "hello a \n"
              ()
    ModuleB Code
      namespace MyApp
      module ModuleB =
          let PrintModuleB =
              printf "hello b \n"
              ()
    Program Code
      // Learn more about F# at http://fsharp.net
      #light
      namespace MyApp
      module Main =
          open System
          [<EntryPoint>]
          let main args =
              ModuleA.PrintModuleA
              let endofapp = Console.ReadKey()
              0

    Read the article

  • Material System

    - by Towelie
    I'm designing a Material/Shader System (target API DX10+, and maybe OpenGL 3+ later, but for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of script compilation/parsing at run time. So there is some artist-created material, written in some analog of Cg. After that it is compiled to HLSL code, and then to the final shader. Also there are some hard-coded ConstantBuffers, like
      cbuffer EveryFrameChanging
      {
          float4x4 matView;
          float time;
          float delta;
      }
    and the shader uses shared constant buffers to get its parameters. For each mesh in the scene, I gather what it needs and what it can provide (normals, binormals etc.) and find the corresponding shader permutation or calculate the missing parts. Also, during the build I calculate the render states and the permutations or a hash for this shader, which will later be used for sorting, or even assign it an ID from 0 to ShaderCount without gaps for sorting. A FinalShader has only one technique and one pass. After that, each mesh gets some shader assigned and is ready to render. Some pseudo code:
      SetConstantBuffer(ConstantBuffer::PerFrame);
      foreach (shader in FinalShaders)
          SetConstantBuffer(ConstantBuffer::PerShader, shader);
          SetRenderState(shader);
          foreach (mesh in shader.GetAllMeshes)
              SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
              SetBuffers(mesh);
              Draw();
      class FinalShader
      {
      public:
          UUID m_ID;
          RenderState m_RenderState;
          CBufferBindings m_BufferBindings;
      }
    But I have no idea how to create this Cg-like language, and do I really need it?

    Read the article

  • is wisdom of what happens 'behind scenes' (in compiler, external DLLs etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, even from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though ... as I try to write code (I'm pretty fluent at C++) I always sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every Assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and to all of the other hardware being used as well. For example ... I would write cout << "Before memory changed" << endl; and run the debugger to get the Assembly for this, and then try to reverse-disassemble the Assembly to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, linker source code of the program, the make file, the steps the kernel I'm using takes to process this compilation, and the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling convention use, DDR3 RAM and disk drive, filesystem functioning and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English-like syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

    Read the article

  • Layers - Logical seperation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include: faster compilation time, simpler deployment, faster startup time for your program, and fewer assemblies to reference. I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000... then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons. A. People like to work in their own environment so they don't step on each other's toes. B. Microsoft tends to create new assemblies. E.g. ASP.NET has its own DLL, and so does WinForms, etc. C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies. My personal view: I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. C: First, that is a human problem and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minimizing the size of the file, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also some functions in a page that don't really make sense to put into one master page. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS and putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
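
    As a rough sketch of the one-master-file approach (file names are placeholders, and UglifyJS is just one option; exact flags vary by tool and version):
      $ cat common.js ajax.js page-specific.js > all.js         # concatenate in dependency order
      $ npx uglifyjs all.js --compress --mangle -o all.min.js   # minify the combined file
      $ gzip -k all.min.js                                      # optionally pre-compress a copy for the web server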

    Read the article

  • Isn't Java a quite good choice for desktop applications?

    - by tactoth
    At present most applications are still developed with C++, painfully: lack of portability, incompatible libraries, memory leaks, slow compilation, and poor productivity. Even if you pick only a single one of these shortcomings, it's still a big headache. However, the surprising truth is that C++ remains the first choice for desktop applications. Compared to C++, Java has lots of advantages. Its success in server-side development shows that the language itself is good, and Swing is also thought to be as programmer-friendly as the highly recognized Qt framework (no, never say even a single word about MFC!). All the disadvantages of C++ listed above have a solution in Java. "Performance!" Well, that might still be the problem, but in my experience it's a slight one. I'd been using Java to decode some screen video and generate key frames. The video has a duration of more than 1 hour; the time spent on an average machine is just 1 minute. With C++ I wouldn't expect even faster speed. In recent days there has been a lot of news about JIT performance improvements, which makes me feel Java is gradually becoming very suitable for desktop development without people realizing it. Isn't it?

    Read the article

  • Problem installing qtbase

    - by teucer
    I am getting the following error when installing "qtbase" package for R: [ 68%] Building CXX object smoke/qt/CMakeFiles/smokeqt.dir/x_1.cpp.o /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp: In static member function ‘static void __smokeqt::x_QAbstractPrintDialog::x_8(Smoke::StackItem*)’: /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4893: error: cannot allocate an object of abstract type ‘__smokeqt::x_QAbstractPrintDialog’ /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4834: note: because the following virtual functions are pure within ‘__smokeqt::x_QAbstractPrintDialog’: /usr/include/qt4/QtGui/qabstractprintdialog.h:89: note: virtual int QAbstractPrintDialog::exec() /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp: In constructor ‘__smokeqt::x_QAbstractPrintDialog::x_QAbstractPrintDialog()’: /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4896: error: no matching function for call to ‘QAbstractPrintDialog::QAbstractPrintDialog()’ /usr/include/qt4/QtGui/qabstractprintdialog.h:116: note: candidates are: QAbstractPrintDialog::QAbstractPrintDialog(const QAbstractPrintDialog&) /usr/include/qt4/QtGui/qabstractprintdialog.h:113: note: QAbstractPrintDialog::QAbstractPrintDialog(QAbstractPrintDialogPrivate&, QPrinter*, QWidget*) /usr/include/qt4/QtGui/qabstractprintdialog.h:86: note: QAbstractPrintDialog::QAbstractPrintDialog(QPrinter*, QWidget*) make[3]: *** [smoke/qt/CMakeFiles/smokeqt.dir/x_1.cpp.o] Error 1 make[3]: Leaving directory `/home/mroot/qtbase/kdebindings-build' make[2]: *** [smoke/qt/CMakeFiles/smokeqt.dir/all] Error 2 make[2]: Leaving directory `/home/mroot/qtbase/kdebindings-build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/mroot/qtbase/kdebindings-build' make: *** [all] Error 2 ERROR: compilation failed for package ‘qtbase’ * removing ‘/home/mroot/R/i686-pc-linux-gnu-library/2.12/qtbase’ Any ideas?

    Read the article
