Search Results

Search found 1058 results on 43 pages for 'compute'.

  • How do I go about removing all the language packs I don't need?

    - by knotech
    I just noticed that /usr/share/help contains the Ubuntu help files in 70 different languages. I only speak two, and I only really compute in one. I also noticed that it is full of broken symbolic links to /usr/share/help-langpack. I just want to get rid of all the languages I don't need. How can I do this without getting all rm -r happy? I'd prefer to do this without installing any new packages, since my main goal is to get rid of excess stuff on my machine, ideally using dpkg or apt.

  • Multithreading problem with Nvidia PhysX

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions on a new thread) and FetchResults() (which waits until the physics computations are done). Between Simulate() and FetchResults() you may not 'compute new physics'. One of the samples proposes a game loop like this: Logic (where you may calculate physics and other things), then Render, calling Simulate() at the start of the Render call and FetchResults() at its end. However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. I wonder if there's a way around this? I've been trying and trying, but I can't think of a solution...
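
    One common workaround, sketched below under the assumption of the Simulate()/FetchResults() contract described in the question (the Scene type and the stubbed calls are illustrative stand-ins, not the actual PhysX API): run the simulation one frame ahead. FetchResults() is called at the top of the frame, logic then operates on the freshly fetched state, and Simulate() for the next step is kicked off just before rendering, so the physics step overlaps the draw without a mismatch between what logic saw and what gets rendered.

        #include <cstdio>

        // Minimal stand-ins for the PhysX-style calls named in the question.
        // Simulate() would launch the physics step on a worker thread;
        // FetchResults(true) would block until that step completes.
        struct Scene {
            void Simulate(float dt)       { /* start asynchronous physics step */ }
            void FetchResults(bool block) { /* wait for the running step */ }
        };

        void UpdateLogic(Scene&, float) { /* physics state is current here */ }
        void Render(const Scene&)       { /* draws the state fetched this frame */ }

        int main() {
            Scene scene;
            const float dt = 1.0f / 60.0f;
            scene.Simulate(dt);                // prime the pipeline once
            for (int frame = 0; frame < 3; ++frame) {
                scene.FetchResults(true);      // results of the previous Simulate()
                UpdateLogic(scene, dt);        // logic reads fresh physics state
                scene.Simulate(dt);            // next step runs while we render
                Render(scene);                 // renders the state just fetched
            }
            scene.FetchResults(true);          // drain the last step before shutdown
            return 0;
        }

    The price is one frame of physics latency relative to input, which is usually acceptable; the win is that logic and rendering always see the same, current physics state.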

  • What's the most efficient way to find barycentric coordinates?

    - by bobobobo
    In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck. I am looking to make it more efficient. It follows the method in Shirley, where you compute the areas of the triangles formed by embedding the point P inside the triangle. Code:

        Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
        {
            Vector bary ;

            // The area of a triangle is ...
            real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ;
            real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ;
            real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ;

            bary.x = areaPBC / areaABC ; // alpha
            bary.y = areaPCA / areaABC ; // beta
            bary.z = 1.0f - bary.x - bary.y ; // gamma

            return bary ;
        }

    This method works, but I'm looking for a more efficient one!
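
    One commonly cited alternative is the dot-product formulation from Ericson's Real-Time Collision Detection, adapted here as a hedged sketch rather than a drop-in for the class above: it replaces the cross products with dot products, and, more importantly, everything that depends only on the triangle can be precomputed once and reused for every P.

        #include <cmath>

        struct Vector { float x, y, z; };
        static Vector operator-(const Vector& u, const Vector& v) {
            return { u.x - v.x, u.y - v.y, u.z - v.z };
        }
        static float DOT(const Vector& u, const Vector& v) {
            return u.x * v.x + u.y * v.y + u.z * v.z;
        }

        struct Triangle {
            Vector a, b, c;
            // Triangle-only terms, computed once (e.g. in the constructor).
            Vector v0, v1;
            float d00, d01, d11, invDenom;

            void precompute() {
                v0 = b - a;  v1 = c - a;
                d00 = DOT(v0, v0);  d01 = DOT(v0, v1);  d11 = DOT(v1, v1);
                invDenom = 1.0f / (d00 * d11 - d01 * d01);
            }

            Vector getBarycentricCoordinatesAt(const Vector& P) const {
                Vector v2 = P - a;            // the only P-dependent vector
                float d20 = DOT(v2, v0);
                float d21 = DOT(v2, v1);
                Vector bary;
                bary.y = (d11 * d20 - d01 * d21) * invDenom;  // beta
                bary.z = (d00 * d21 - d01 * d20) * invDenom;  // gamma
                bary.x = 1.0f - bary.y - bary.z;              // alpha
                return bary;
            }
        };

    Per query this is one vector subtraction, two dot products, and a few multiply-adds, with no cross products and no division, which usually profiles noticeably better when the same triangle is queried many times.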

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying.

    Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guarantees of any sort. This info is provided “as is”.

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1.

    To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do it once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages.

    There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (the path might vary slightly if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

        <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change. Note that this time $ShaderType$ is “Pixel”, not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result.

    Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are already set to default to their appropriate minimums (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums; note that not all 4.0 cards support Compute shaders, as they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).

    In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU, and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx) such that Japanese is 1041, for example, though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files on the drive to which you installed VS 2012 (D:\ in my case), but you will find the fxc.xml file on the C:\ drive.

    You should not edit fxc.xml. You will almost certainly break things by doing that; it’s just something you can look through to see all the other options the FXC task takes, such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though, so I don’t have any advice on how to set them.

  • Arbitrary projection matrix from 6 arbitrary frustum planes

    - by Doub
    A projection matrix represents a transformation from the camera view space to the rendering system's clip space. In other words, it defines the transformation from a 6-sided frustum to the clip cube. glOrtho and glFrustum take only 6 parameters to define such a projection, but impose several constraints on the frustum that gets projected to the clip cube: the near and far planes are parallel, the left and right planes intersect in a vertical line, and the top and bottom planes intersect in a horizontal line, both lines being parallel to the near and far planes. I'd like to lift these restrictions. So, from the definitions of the 6 frustum side planes (in whatever representation you see fit), how can I compute a general projection matrix?
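
    A hedged sketch of the standard algebra behind this (API-agnostic; the notation is mine): let $\mathbf{m}_1,\dots,\mathbf{m}_4$ be the rows of the projection matrix $M$, and let $p$ be a view-space point in homogeneous coordinates. The clip conditions $-w \le x \le w$ (and likewise for $y$ and $z$) pull back to six view-space half-space tests:

        \[
        (\mathbf{m}_4 + \mathbf{m}_1)\,p \ge 0 \ \text{(left)}, \qquad
        (\mathbf{m}_4 - \mathbf{m}_1)\,p \ge 0 \ \text{(right)},
        \]
        \[
        (\mathbf{m}_4 + \mathbf{m}_2)\,p \ge 0 \ \text{(bottom)}, \qquad
        (\mathbf{m}_4 - \mathbf{m}_2)\,p \ge 0 \ \text{(top)},
        \]
        \[
        (\mathbf{m}_4 + \mathbf{m}_3)\,p \ge 0 \ \text{(near)}, \qquad
        (\mathbf{m}_4 - \mathbf{m}_3)\,p \ge 0 \ \text{(far)}.
        \]

    So if the six given planes are written as inward-pointing 4-vectors $L, R, B, T, N, F$, the rows must satisfy

        \[
        \mathbf{m}_4 + \mathbf{m}_1 = s_L L, \quad
        \mathbf{m}_4 - \mathbf{m}_1 = s_R R, \quad \dots, \quad
        \mathbf{m}_4 - \mathbf{m}_3 = s_F F
        \]

    for some positive scales $s_L,\dots,s_F$. That is a linear system in the 16 matrix entries and the 6 scales. It is overdetermined, which is the algebraic face of the fact that not every set of 6 planes bounds a frustum that a projective map can carry onto the clip cube; when the planes are compatible, solving the system (fixing one scale to remove the global scale freedom) yields $M$.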

  • Is there any advantage in using DX10/11 for a 2D game?

    - by David Gouveia
    I'm not entirely familiar with the feature set introduced by DX10/11-class hardware. I'm vaguely familiar with the new stages added to the programmable graphics pipeline, such as the geometry shader, the compute shader, and the new tessellation stages. I don't see how any of these make much of a difference for a 2D game, though. Is there any compelling reason to make the switch to DX10/11 (or the OpenGL equivalents) for a 2D game, or would it be wiser to stick with DX9, considering that a significant share of the market still runs on older technologies (e.g. the February 2012 Steam survey lists around 17% of users as still running Windows XP)?

  • How do I properly use multithreading with Nvidia PhysX?

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions on a new thread) and FetchResults() (which waits until the physics computations are done). Between Simulate() and FetchResults() you may not "compute new physics". One of the samples proposes a game loop like this: Logic (where you may calculate physics and other things), then Render, calling Simulate() at the start of the Render call and FetchResults() at its end. However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. Does anyone have a solution to this?

  • Q&amp;A: Does it make sense to run a personal blog on the Windows Azure Platform?

    - by Eric Nelson
    I keep seeing people wanting to do this (or something very similar) and then being surprised at how much it might cost them if they went with Windows Azure. Time for a Q&A. Short answer: No, definitely not. Madness, sheer madness. (Hopefully that was clear enough.) Longer answer: No, because it would cost you a heck of a lot more than just about any other approach to running a blog. A site that can easily be run on a shared hosting solution (as many blogs are today) does not require the rich capabilities of Windows Azure: capabilities such as simplified deployment and management, dedicated resources, elastic resources, "unlimited" storage, etc. It is simply not the type of application the Windows Azure Platform has been designed for. Related Links: Q&A - How can I calculate the TCO and ROI when considering the Windows Azure Platform? Q&A - When do I get charged for compute hours on Windows Azure? Q&A - What are the UK prices for the Windows Azure Platform?

  • Windows Azure XDrive

    - by kaleidoscope
    This allows your Windows Azure compute applications running in our cloud to use the existing NTFS APIs to store their data on a durable drive. The drive is backed by a Windows Azure Page Blob formatted as a single NTFS volume VHD. The Page Blob can be mounted as a drive within the Windows Azure cloud, where all non-buffered/flushed NTFS writes are made durable to the drive (Page Blob). If the application using the drive crashes, the data is kept persistent via the Page Blob, and the drive can be remounted when the application instance is restarted, or remounted elsewhere for a different application instance to use. Since the drive is an NTFS-formatted Page Blob, you can also use the standard blob interfaces to upload and download your NTFS VHDs to and from the cloud. More details can be found at: http://microsoftpdc.com/Sessions/SVC14 Anish, S

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content inside the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is a popular example. This means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine this simple ray-marching pseudocode:

        t = 0.f;
        while (t < maxDist)
        {
            p = rayStart + rayDir * t;
            d = DistanceFunc(p);
            t += d;
            if (d < epsilon)
            {
                ... emit p
                return;
            }
        }

    How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?
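
    For intuition, here is a minimal CPU-side C++ reference of the same loop with the usual mitigations applied: a fixed iteration bound (so the loop has a known range) and early exits on a hit or on leaving the scene. The unit-sphere distance function is only an example.

        #include <cmath>
        #include <cstdio>

        struct Vec3 { float x, y, z; };
        static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

        // Example signed distance function: a unit sphere at the origin.
        static float DistanceFunc(Vec3 p) {
            return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
        }

        // Returns the hit distance along the ray, or a negative value on a miss.
        static float RayMarch(Vec3 rayStart, Vec3 rayDir, float maxDist) {
            const int maxIterations = 128;    // fixed bound: loop range is known
            const float epsilon = 1e-4f;
            float t = 0.0f;
            for (int i = 0; i < maxIterations; ++i) {
                Vec3 p = add(rayStart, mul(rayDir, t));
                float d = DistanceFunc(p);
                if (d < epsilon) return t;    // hit: early out
                t += d;
                if (t >= maxDist) break;      // left the scene: early out
            }
            return -1.0f;                     // miss, or ran out of iterations
        }

        int main() {
            float t = RayMarch({ 0, 0, -3 }, { 0, 0, 1 }, 100.0f);
            std::printf("hit at t = %f\n", t);  // ~2.0 for the unit sphere
            return 0;
        }

    On a GPU the same shape matters for a different reason: threads in a warp/wavefront run in lockstep, so every pixel in the group pays for the slowest pixel's iteration count, and a hard maxIterations bounds that worst case.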

  • How does a BSP tree work for Z sorting?

    - by Jenko
    I'm developing a 3D engine in software, and so I must compute Z sorting manually. I'm currently using the painter's algorithm to sort triangles and then drawing them back to front. This causes artifacts that I'm trying to correct. Would using a dynamic BSP tree ensure "correct Z sorting" of triangles? Why? Because the bounding volumes of triangles would be similar? Since I would have a single "world" BSP tree, would I have to remove and re-add any moved/scaled/rotated object into the tree? Is it possible to add triangles into a BSP tree without the expensive cutting process? Why do you need to cut triangles on the axis planes anyway? Is it faster to traverse a BSP tree from any angle than to sort all tris each draw like the painter's algorithm?

  • Scaling point sprites with distance

    - by Will
    How can you scale a point sprite by its distance from the camera? In the GLSL vertex shader:

        gl_PointSize = size / gl_Position.w;

    seems along the right tracks: for any given scene, all sprites seem nicely scaled by distance. Is this correct? How do you compute the proper scaling for my vertex attribute size? I want each sprite to be scaled by the modelview matrix. I had played with arbitrary values, and it seems that size is the radius in pixels at the camera, and is not in modelview scale. I've also tried:

        gl_Position = pMatrix * mvMatrix * vec4(vertex, 1.0);
        vec4 v2 = pMatrix * mvMatrix * vec4(vertex.x, vertex.y + 0.5 * size, vertex.z, 1.0);
        gl_PointSize = length(gl_Position.xyz - v2.xyz) * gl_Position.w;

    But this makes the sprites bigger in the distance, rather than smaller.
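
    For reference, a hedged derivation of the usual scaling (assuming a symmetric perspective projection with vertical field of view $\theta$ and a viewport $h$ pixels tall): a sphere of world-space radius $r$ at eye-space depth $z_e$ covers roughly

        \[
        \text{gl\_PointSize} \;\approx\; \underbrace{\frac{h\,r}{2\tan(\theta/2)}}_{\text{constant per sprite}} \cdot \frac{1}{|z_e|}.
        \]

    For a standard perspective matrix, $|z_e|$ is exactly the clip-space $w$, so dividing a precomputed constant by gl_Position.w (the first snippet) has the right shape once size bakes in the $h\,r/(2\tan(\theta/2))$ factor; that also explains why size behaves like a radius in pixels rather than a modelview-scale length. The second snippet goes the wrong way because it multiplies the clip-space extent by $w$ where it should divide by it.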

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to look up the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a RenderTarget with 6 flat faces. So my question is: given the 3D world position of an arbitrary 3D object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
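
    Not an XNA-specific answer, but a hedged sketch of the standard face-camera setup, written here in C++ with GLM (XNA's Matrix.CreateLookAt and Matrix.CreatePerspectiveFieldOfView take the same ingredients): each face uses a 90-degree field of view, an aspect ratio of 1, and a fixed look/up pair, with the camera placed at the reflective object's world position. The up vectors below follow one common cube-face convention; the exact set depends on your API, so treat them as illustrative.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // View-projection matrix for one cubemap face, centered on the
        // reflective object. face is 0..5 in the order +X, -X, +Y, -Y, +Z, -Z.
        glm::mat4 CubeFaceViewProj(const glm::vec3& objectPos, int face,
                                   float nearPlane, float farPlane)
        {
            static const glm::vec3 dirs[6] = {
                {  1, 0, 0 }, { -1, 0, 0 },
                {  0, 1, 0 }, {  0,-1, 0 },
                {  0, 0, 1 }, {  0, 0,-1 }
            };
            static const glm::vec3 ups[6] = {
                { 0,-1, 0 }, { 0,-1, 0 },
                { 0, 0, 1 }, { 0, 0,-1 },
                { 0,-1, 0 }, { 0,-1, 0 }
            };
            // 90-degree FOV with square aspect makes the six frusta tile
            // all directions around objectPos with no gaps or overlap.
            glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f,
                                              nearPlane, farPlane);
            glm::mat4 view = glm::lookAt(objectPos, objectPos + dirs[face],
                                         ups[face]);
            return proj * view;
        }

    Rendering the scene once per face with these matrices is what keeps an object at a given world position on the correct face, so the texCUBE lookup with the reflection vector then returns it from the right direction.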

  • AMD Catalyst diskless cluster

    - by Nathan Moos
    I'm using Ubuntu 13.10 to set up a diskless compute cluster. Using the procedure detailed in https://help.ubuntu.com/community/DisklessUbuntuHowto, I am able to boot all four nodes successfully. However, once I install Catalyst, I immediately have problems: only one diskless node boots properly, with the other two hanging while attempting to start X. My assumption is that my Catalyst build was somehow specific to the node I booted from first, which somehow prevents the other nodes from loading Catalyst. Can anyone provide hints to help solve this? Thank you in advance!

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    Q: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    A: You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    Q: How does this benefit companies that have their own data center?
    A: It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and quicker response to changes in business requirements.

    Q: From a deployment perspective, is DBaaS solely the DBA's job?
    A: The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    Q: The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
    A: Most customers we have worked with define a standardized service catalog with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Q: Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    A: There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter, and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    Q: How does Database as a Service impact or help with "Click to Compute" or deploying a "database in cloud infrastructure"?
    A: DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    Q: How exactly is DBaaS different from a traditional database? Do storage, OS, and database all work together to "transparently" provide service to applications? Will there be across-database access by applications/users?
    A: Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (in under 15 minutes). 3) The services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    Q: With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the differing needs and patching downtime of the applications the databases might be serving?
    A: You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching and upgrading, Oracle Multitenant makes both very efficient: you can pre-provision a new, patched multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

  • What are your intentions with Java technology, Big Red?

    - by hinkmond
    Here's another article (this time from TechCentral) giving the roadmap of what's intended to be done with Java technology moving forward toward Java SE 8, 9, 10 and beyond. See: Oracle outlines Java Intentions Here's a quote: Under the subheading, "Works Everywhere and With Everything," Oracle lists goals like scaling down to embedded systems and up to massive servers, as well as support for heterogeneous compute models. If our group is going to get Java working "Everywhere and With Everything", we'd better get crackin'! We have to especially make more room in our lab, if we need to fit "Everything" in there to test... "Everything" takes up a lot of room! Hinkmond

  • Useful versioning scheme for a git project?

    - by Oliver Weiler
    I have a small GitHub project to which I need to add an option that outputs a version number on the command line. The problem is I have no idea how to "compute" the version number. Is this some random process? Should I just start at 1.0 (probably creating a tag or something) and bump the number after the dot for fixes? I know this question is a bit vague... I've just never had to deal with this, and I want to use some sane versioning scheme. EDIT: I'm also interested in how to update this version number automatically, maybe using something like a git hook.

  • How do I optimize searching for the nearest point?

    - by Rootosaurus
    For a little project of mine I'm trying to implement a space colonization algorithm in order to grow trees. The current implementation of the algorithm works fine, but I have to optimize the whole thing to make it generate faster. I work with anywhere from 1 to 300K random attraction points per tree, and it takes a lot of time to compute and compare the distances between attraction points and tree nodes in order to keep only the closest tree node for each attraction point. So I was wondering whether solutions exist (I know they must) to avoid looping over every tree node for every attraction point to find the closest, and so on until the tree is finished.
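
    The standard family of fixes is a spatial index (uniform grid, k-d tree, or octree) so that each attraction point is compared only against nearby tree nodes instead of all of them. A hedged sketch of the uniform-grid variant in C++ (all names are illustrative; the cell size should be on the order of the attraction points' influence radius):

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct P3 { float x, y, z; };

        static float dist2(const P3& a, const P3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        }

        // Tree nodes hashed into cells of size `cell`; a nearest query only
        // scans the 27 cells around the query point. Hash collisions between
        // distant cells only add candidates; they never give a wrong answer,
        // because dist2 filters them out.
        class NodeGrid {
            float cell;
            std::unordered_map<int64_t, std::vector<P3>> cells;
            int64_t key(int ix, int iy, int iz) const {
                return (int64_t(ix) * 73856093) ^ (int64_t(iy) * 19349663)
                     ^ (int64_t(iz) * 83492791);
            }
            int coord(float v) const { return int(std::floor(v / cell)); }
        public:
            explicit NodeGrid(float cellSize) : cell(cellSize) {}

            void insert(const P3& p) {
                cells[key(coord(p.x), coord(p.y), coord(p.z))].push_back(p);
            }

            // Nearest node within one ring of cells; returns false if that
            // neighborhood is empty (caller falls back to a wider search).
            bool nearest(const P3& q, P3& out) const {
                int cx = coord(q.x), cy = coord(q.y), cz = coord(q.z);
                float best = INFINITY; bool found = false;
                for (int dx = -1; dx <= 1; ++dx)
                for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = cells.find(key(cx + dx, cy + dy, cz + dz));
                    if (it == cells.end()) continue;
                    for (const P3& p : it->second) {
                        float d = dist2(q, p);
                        if (d < best) { best = d; out = p; found = true; }
                    }
                }
                return found;
            }
        };

    In the space colonization loop you would insert each new tree node as it is created and run one nearest() per attraction point, which turns the per-iteration cost from nodes times points into roughly points times a few nearby nodes.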

  • Is Azure Compatible with JPEG XR?

    - by Shawn Eary
    I just put an F#/MVC app into a Windows Azure solution as a Web Role. Before migration, my JPEG XR (*.WDP) files were displayed on the client in IE9 without issue, via both my local and hosted sites. Now, after migration into Windows Azure, my JPEG XR files are displayed neither by my local Windows Azure compute emulator nor when deployed to http://*.cloudapp.net. Is there some sort of conflict between Windows Azure and (JPEG XR) *.wdp files? If so, what is the accepted best practice for overcoming it?

  • Developing an AI opponent for Monopoly

    - by Bernhard Zürn
    I want to develop an AI opponent for the board game Monopoly, and I want to implement the whole game in Prolog (XPCE). The probability of a field on the board being hit can be computed with Markov chains. I already know some "best practices", like "after 50% of the playing time it no longer makes sense to buy your way out of jail, because in jail you still collect rent on your fields but don't have to pay rent on other players' fields as long as you stay in prison". The interesting questions are always: buy a street field? buy houses or hotels? how many? So I think I would have to compute some kind of future liquidity. Does anyone know how to pack that into an algorithm, or how to translate it to Prolog?
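
    As a hedged sketch of the "future liquidity" idea: once the Markov chain yields a long-run probability $\pi_i$ of field $i$ being hit per opponent turn, the expected income of a property and its payback horizon follow directly (rents and prices come from the game data; jail-phase effects like the rule quoted above would adjust $\pi_i$ over time):

        \[
        E[\text{income per round}] = \sum_{i \in \text{owned}} n_{\text{opp}} \,\pi_i\, \text{rent}_i,
        \qquad
        \text{payback}(i) \approx \frac{\text{price}_i}{n_{\text{opp}}\,\pi_i\,\text{rent}_i}.
        \]

    Buying (or building on) field $i$ is then attractive when $\text{payback}(i)$ is comfortably shorter than the expected remaining game length, and these per-field numbers can be precomputed and stored as Prolog facts for the AI to query.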

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters) as well as the fact that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this can not be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is also properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

  • Rotation matrix for a 3D vector

    - by Shashwat
    I have a direction vector to which I have to apply some rotation to align it with the positive z-axis. To use XNA's Matrix.CreateRotationX(angle), I need the angle, which I'd have to obtain with an inverse cosine or tangent. I think this is a complex way to do it, and those values are eventually converted back to sin(angle) and cos(angle) inside the matrix anyway. Is there any built-in way to create a rotation matrix from a 3D vector? I can write the function myself, but I'm asking in case one already exists.
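
    Not an XNA built-in as far as I know, but the standard trick avoids computing the angle at all, since the axis-angle pair falls out of a cross and a dot product and the matrix only ever needs the sine and cosine. A hedged, self-contained C++ sketch (Rodrigues' formula in its trig-free form; v and t are assumed to be unit length, column-vector convention so that R * v == t):

        struct V3 { float x, y, z; };
        struct M3 { float m[3][3]; };

        static V3 cross(V3 a, V3 b) {
            return { a.y * b.z - a.z * b.y,
                     a.z * b.x - a.x * b.z,
                     a.x * b.y - a.y * b.x };
        }
        static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Rotation matrix taking unit vector v onto unit vector t
        // (for this question, t = {0, 0, 1}). Uses R = I + K + K^2 / (1 + c)
        // with K the cross-product matrix of k = cross(v, t) and c = dot(v, t),
        // so no acos/atan is ever evaluated.
        static M3 alignTo(V3 v, V3 t) {
            V3 k = cross(v, t);           // axis scaled by sin(angle)
            float c = dot(v, t);          // cos(angle)
            // Caveat: v close to -t (a 180-degree turn) makes 1 + c approach 0;
            // that case needs an arbitrary perpendicular axis and is omitted here.
            float f = 1.0f / (1.0f + c);
            return {{
                { c + f * k.x * k.x,    f * k.x * k.y - k.z,  f * k.x * k.z + k.y },
                { f * k.y * k.x + k.z,  c + f * k.y * k.y,    f * k.y * k.z - k.x },
                { f * k.z * k.x - k.y,  f * k.z * k.y + k.x,  c + f * k.z * k.z   }
            }};
        }

    If you'd rather stay inside XNA, Matrix.CreateFromAxisAngle with axis = normalize(cross(v, t)) and angle = acos(dot(v, t)) produces the same rotation, at the cost of the one inverse-trig call you were trying to avoid.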

  • Q&amp;A: Where does high performance computing fit with Windows Azure?

    - by Eric Nelson
    Answer: I have been asked a couple of times this year about taking compute-intensive operations to Windows Azure, and about High Performance Computing on Windows Azure. It is an interesting (if slightly niche) area. The good news is we have a great paper from David Chappell on HPC Server and Windows Azure integration. As a taster: a SOA application running entirely on Windows Azure runs its WCF services in Azure Worker nodes. Download now. Related Links: Other Q&A posts on my team blog. Don't forget to connect with the UK team if you stumbled across this post by accident/bing/google.

  • WP: Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency

    - by uwes
    Consolidation in the data center is the driving factor in reducing capital and operational expense in IT today. This is particularly relevant as customers invest more in cloud infrastructure and associated service delivery. Database consolidation is a strategic component in this effort. Oracle Database 12c introduces Oracle Multitenant, a new database consolidation model in which multiple Pluggable Databases (PDBs) are consolidated within a Container Database (CDB). While keeping many of the isolation aspects of single databases, it allows PDBs to share the system global area (SGA) and background processes of a common CDB. The white paper recently published on OTN, Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency, analyzes and quantifies savings in compute resources, efficiencies in transaction processing, and consolidation density of Oracle Multitenant compared to consolidated single instance databases (SIDBs) running in a bare-metal environment.

  • Converting Celsius Processor Temperature to Fahrenheit

    - by WindowsEscapist
    I'm editing a Conky theme, and I would like it to output the processor temperatures in degrees Fahrenheit instead of Celsius. In the ~/.conkyrc file, the command sensors | grep 'Core 0' | cut -c18-19 is used to find the temperature in Celsius for the first processor core. I want to use bc to compute the conversion (feeding it outputvalue*9/5+32). The problem is that bc wants literal values, and I see no way to pass it program output. If I try something like temp=$(sensors | grep 'Core 0' | cut -c18-19) & echo 'temp*9/5+32' | bc, it ends up giving me 32, because bc treats "temp" as a 0.
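
    For reference, a hedged fix in the same shell as the snippet above: the single quotes stop the variable from expanding, and the & runs the two commands in parallel rather than in sequence, so bc literally receives the word temp (which it evaluates as 0). Double quotes plus ; cure both, and scale= keeps a decimal place if you want one:

        temp=$(sensors | grep 'Core 0' | cut -c18-19); echo "scale=1; $temp * 9 / 5 + 32" | bc

    With an input of 40 this prints 104.0, as expected.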
