Search Results

Search found 6679 results on 268 pages for 'cube processing'.


  • Run-Time Check Failure #2 - Stack around the variable 'indices' was corrupted.

    - by numerical25
    I think I know what the problem is, but I am having a hard time debugging it. I am working with the DirectX API and I am trying to generate a plane along the x and z axes according to a book I have. The problem happens when I am creating my indices: I think I am setting values out of the bounds of the indices array, and I am having a hard time figuring out what I did wrong. I am unfamiliar with this method of generating a plane, so it is a little difficult for me. Below is my code; pay particular attention to the indices loop.

    #include "MyGame.h"
    //#include "CubeVector.h"

    /* This code sets a projection and shows a turning cube. What has been added is
       the projection, rotation and a rasterizer to change the rasterization of the
       cube. The issue that was going on was something with the effect file which
       was causing the vertices not to be rendered correctly. */

    typedef struct
    {
        ID3D10Effect* pEffect;
        ID3D10EffectTechnique* pTechnique;

        // vertex information
        ID3D10Buffer* pVertexBuffer;
        ID3D10Buffer* pIndicesBuffer;
        ID3D10InputLayout* pVertexLayout;

        UINT numVertices;
        UINT numIndices;
    } ModelObject;

    ModelObject modelObject;

    // World Matrix
    D3DXMATRIX WorldMatrix;
    // View Matrix
    D3DXMATRIX ViewMatrix;
    // Projection Matrix
    D3DXMATRIX ProjectionMatrix;
    ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL;

    // grid information
    #define NUM_COLS 16
    #define NUM_ROWS 16
    #define CELL_WIDTH 32
    #define CELL_HEIGHT 32
    #define NUM_VERTSX (NUM_COLS + 1)
    #define NUM_VERTSY (NUM_ROWS + 1)

    bool MyGame::InitDirect3D()
    {
        if(!DX3dApp::InitDirect3D())
        {
            return false;
        }

        D3D10_RASTERIZER_DESC rastDesc;
        rastDesc.FillMode = D3D10_FILL_WIREFRAME;
        rastDesc.CullMode = D3D10_CULL_FRONT;
        rastDesc.FrontCounterClockwise = true;
        rastDesc.DepthBias = false;
        rastDesc.DepthBiasClamp = 0;
        rastDesc.SlopeScaledDepthBias = 0;
        rastDesc.DepthClipEnable = false;
        rastDesc.ScissorEnable = false;
        rastDesc.MultisampleEnable = false;
        rastDesc.AntialiasedLineEnable = false;

        ID3D10RasterizerState *g_pRasterizerState;
        mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState);
        mpD3DDevice->RSSetState(g_pRasterizerState);

        // Set up the World Matrix
        D3DXMatrixIdentity(&WorldMatrix);

        // Set up the View Matrix
        D3DXMatrixLookAtLH(&ViewMatrix,
                           new D3DXVECTOR3(0.0f, 10.0f, -20.0f),
                           new D3DXVECTOR3(0.0f, 0.0f, 0.0f),
                           new D3DXVECTOR3(0.0f, 1.0f, 0.0f));

        // Set up the projection matrix
        D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f,
                                   (float)mWidth/(float)mHeight, 0.1f, 100.0f);

        if(!CreateObject())
        {
            return false;
        }

        return true;
    }

    // These are actions that take place after the clearing of the buffer and before the present
    void MyGame::GameDraw()
    {
        static float rotationAngle = 0.0f;

        // create the rotation matrix using the rotation angle
        D3DXMatrixRotationY(&WorldMatrix, rotationAngle);
        rotationAngle += (float)D3DX_PI * 0.0f;

        // Set the input layout
        mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout);

        // Set vertex buffer
        UINT stride = sizeof(VertexPos);
        UINT offset = 0;
        mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset);
        mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0);

        // Set primitive topology
        mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        // Combine and send the final matrix to the shader
        D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix);
        pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix);

        // make sure modelObject is valid

        // Render a model object
        D3D10_TECHNIQUE_DESC techniqueDescription;
        modelObject.pTechnique->GetDesc(&techniqueDescription);

        // Loop through the technique passes
        for(UINT p = 0; p < techniqueDescription.Passes; ++p)
        {
            modelObject.pTechnique->GetPassByIndex(p)->Apply(0);

            // draw the cube using all 36 vertices and 12 triangles
            mpD3DDevice->DrawIndexed(modelObject.numIndices, 0, 0);
        }
    }

    // Render actually encapsulates GameDraw, so you can call data before you actually
    // clear the buffer or after you present data
    void MyGame::Render()
    {
        DX3dApp::Render();
    }

    bool MyGame::CreateObject()
    {
        VertexPos vertices[NUM_VERTSX * NUM_VERTSY];
        for(int z = 0; z < NUM_VERTSY; ++z)
        {
            for(int x = 0; x < NUM_VERTSX; ++x)
            {
                vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH;
                vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT;
                vertices[x + z * NUM_VERTSX].pos.y = 0.0f;
                vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f);
            }
        }

        DWORD indices[NUM_VERTSX * NUM_VERTSY];
        int curIndex = 0;
        for(int z = 0; z < NUM_ROWS; ++z)
        {
            for(int x = 0; x < NUM_COLS; ++x)
            {
                int curVertex = x + (z * NUM_VERTSX);
                indices[curIndex] = curVertex;
                indices[curIndex + 1] = curVertex + NUM_VERTSX;
                indices[curIndex + 2] = curVertex + 1;
                indices[curIndex + 3] = curVertex + 1;
                indices[curIndex + 4] = curVertex + NUM_VERTSX;
                indices[curIndex + 5] = curVertex + NUM_VERTSX + 1;
                curIndex += 6;
            }
        }

        // Create layout
        D3D10_INPUT_ELEMENT_DESC layout[] = {
            {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
            {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
        };
        UINT numElements = (sizeof(layout)/sizeof(layout[0]));
        modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);

        // Create buffer desc
        D3D10_BUFFER_DESC bufferDesc;
        bufferDesc.Usage = D3D10_USAGE_DEFAULT;
        bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
        bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        bufferDesc.CPUAccessFlags = 0;
        bufferDesc.MiscFlags = 0;

        D3D10_SUBRESOURCE_DATA initData;
        initData.pSysMem = vertices;

        // Create the vertex buffer
        HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
        if(FAILED(hr))
            return false;

        // Create the index buffer
        modelObject.numIndices = sizeof(indices)/sizeof(DWORD);
        bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices;
        bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
        initData.pSysMem = indices;
        hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer);
        if(FAILED(hr))
            return false;

        // Set up fx files
        LPCWSTR effectFilename = L"effect.fx";
        modelObject.pEffect = NULL;
        hr = D3DX10CreateEffectFromFile(effectFilename,
                                        NULL, NULL,
                                        "fx_4_0",
                                        D3D10_SHADER_ENABLE_STRICTNESS,
                                        0,
                                        mpD3DDevice,
                                        NULL, NULL,
                                        &modelObject.pEffect,
                                        NULL, NULL);
        if(FAILED(hr))
            return false;

        pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix();

        // Don't sweat the technique. Get it!
        LPCSTR effectTechniqueName = "Render";
        modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName);
        if(modelObject.pTechnique == NULL)
            return false;

        // Create vertex layout
        D3D10_PASS_DESC passDesc;
        modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc);
        hr = mpD3DDevice->CreateInputLayout(layout, numElements,
                                            passDesc.pIAInputSignature,
                                            passDesc.IAInputSignatureSize,
                                            &modelObject.pVertexLayout);
        if(FAILED(hr))
            return false;

        return true;
    }
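
    For anyone hitting the same run-time check: the indices loop above writes six entries per grid cell, which is NUM_ROWS * NUM_COLS * 6 = 16 * 16 * 6 = 1536 DWORDs, but the array is declared with only NUM_VERTSX * NUM_VERTSY = 17 * 17 = 289 elements, so the tail of the loop tramples the stack. Sizing the array to the index count the loop actually produces should clear the error:

    // 6 indices (2 triangles) per grid cell, not one per vertex
    DWORD indices[NUM_ROWS * NUM_COLS * 6];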

  • What is the proper way to change the UINavigationController transition effect

    - by Felipe Sabino
    I have seen lots of people asking how to push/pop UINavigationControllers using animations other than the default one, like flip or curl. The problem is that the questions/answers are either relatively old, so they rely on things like [UIView beginAnimations:] (example here), or they use two very different approaches.

    The first is to use UIView's transitionFromView:toView:duration:options:completion: selector before pushing the controller (with the animated flag set to NO), like the following:

    UIViewController *ctrl = [[UIViewController alloc] init];
    [UIView transitionFromView:self.view
                        toView:ctrl.view
                      duration:1
                       options:UIViewAnimationOptionTransitionFlipFromTop
                    completion:nil];
    [self.navigationController pushViewController:ctrl animated:NO];

    The other is to use Core Animation explicitly with a CATransition, like the following:

    // remember you will have to add the QuartzCore framework to your project for
    // this approach, and also import <QuartzCore/QuartzCore.h> in the class where
    // this code is used
    CATransition *transition = [CATransition animation];
    transition.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
    transition.duration = 1.0f;
    transition.type = @"flip";
    transition.subtype = @"fromTop";
    [self.navigationController.view.layer removeAllAnimations];
    [self.navigationController.view.layer addAnimation:transition forKey:kCATransition];
    UIViewController *ctrl = [[UIViewController alloc] init];
    [self.navigationController pushViewController:ctrl animated:NO];

    There are pros and cons to both approaches. The first gives me much cleaner code but restricts me from using animations like "suckEffect", "cube" and others. The second feels wrong just by looking at it. It starts by using undocumented transition types (i.e. not present in the Common transition types documentation in the CATransition Class Reference), which might get your app rejected from the App Store; I say might because I could not find any reference to apps actually being rejected for using these transitions, and I would appreciate any clarification on this matter too. On the other hand, it gives you much more flexibility with your animations, as I can use other animation types such as "cameraIris", "rippleEffect" and so on.

    Given all that, do I really need to appeal to QuartzCore and Core Animation whenever I need a fancier UINavigationController transition? Is there any other way to accomplish the same effect using only UIKit? If not, will the use of string values like "flip" and "cube" instead of the pre-defined constants (kCATransitionFade, kCATransitionMoveIn, etc...) be an issue for my app's approval in the App Store? Also, are there other pros and cons of the two approaches that could help me decide between them?

  • How to connect to SQL Server cubes using .NET (C#)

    - by prince23
    Hi. I am new to the cube concept in SQL Server. I need to connect to a cube, query it, and display the result in a GridView. Any help would be great: how to connect to a cube, articles on it, or any code that can help me achieve the result. Thank you.
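
    A minimal sketch of the usual route, ADOMD.NET (the Analysis Services client library); the server, catalog, cube and MDX below are placeholders to adapt:

    // Requires a reference to Microsoft.AnalysisServices.AdomdClient.dll
    using System.Data;
    using Microsoft.AnalysisServices.AdomdClient;

    public static class CubeQuery
    {
        public static DataTable QueryCube()
        {
            // Server and catalog names are illustrative
            string connStr = "Data Source=localhost;Catalog=AdventureWorksDW";
            using (AdomdConnection conn = new AdomdConnection(connStr))
            {
                conn.Open();

                // Cubes are queried with MDX rather than T-SQL
                string mdx = @"SELECT [Measures].[Sales Amount] ON COLUMNS,
                                      [Date].[Calendar Year].MEMBERS ON ROWS
                               FROM [Adventure Works]";

                using (AdomdCommand cmd = new AdomdCommand(mdx, conn))
                using (AdomdDataAdapter adapter = new AdomdDataAdapter(cmd))
                {
                    DataTable table = new DataTable();
                    adapter.Fill(table);   // the DataTable binds directly to a grid
                    return table;
                }
            }
        }
    }

    From there, GridView1.DataSource = CubeQuery.QueryCube(); followed by GridView1.DataBind(); (assuming an ASP.NET GridView named GridView1) completes the display side.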

  • What's the name of the storage paradigm that uses cubes subdivided into 8 smaller cubes ad infinitum?

    - by Eric
    If I have a cube divided into 8 smaller cubes, each of which may be subdivided into a further 8 cubes, ad infinitum, what is the name of my system? I know that it's a special case of a tree where each branch contains exactly 8 other leaves/branches. I remember the name starting with "Oct", and there was a Wikipedia article on it, but I honestly can't find it! Does anyone know what such a data structure is actually called?
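
    The shape being described, for reference (an illustrative C++ sketch, not from the question): a node either holds data or splits its cube into eight child octants, which is where the "Oct" the poster remembers comes from.

    // Each node covers a cube of space; internal nodes split it into 8 octants.
    template <typename T>
    struct OctreeNode {
        T value;                      // payload stored at a leaf
        OctreeNode* children[8];      // all nullptr for a leaf; one child per octant
    };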

  • Need a program in Visual C++/OpenGL [closed]

    - by HAREEFML
    I am new to OpenGL programming in Visual C++. I need a program with rotating objects: a cube, a cylinder, a triangle and a cone, where the objects rotate as the keyboard arrow keys are pressed, and each object starts rotating when the keyboard numbers 1, 2, 3, 4 are pressed. Can I get a program like this? Thanks in advance.
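
    Not the full program being asked for, but a minimal GLUT sketch of the two mechanics involved (keys 1-4 select an object, arrow keys rotate it); it assumes a working GLUT install and the usual OpenGL link libraries:

    #include <GL/glut.h>

    static int currentObject = 1;           // 1 = cube, 2 = cylinder, 3 = triangle, 4 = cone
    static float angleX = 0.0f, angleY = 0.0f;
    static GLUquadric* quadric = 0;         // used for the cylinder

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRotatef(angleX, 1.0f, 0.0f, 0.0f);  // arrow-key driven rotation
        glRotatef(angleY, 0.0f, 1.0f, 0.0f);

        switch (currentObject)
        {
        case 1: glutWireCube(1.5); break;
        case 2: gluCylinder(quadric, 0.7, 0.7, 1.5, 24, 4); break;
        case 3:                               // a plain triangle
            glBegin(GL_TRIANGLES);
            glVertex3f(-1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f( 0.0f,  1.0f, 0.0f);
            glEnd();
            break;
        case 4: glutWireCone(0.8, 1.5, 24, 4); break;
        }
        glutSwapBuffers();
    }

    static void onSpecialKey(int key, int x, int y)   // arrow keys rotate
    {
        if (key == GLUT_KEY_LEFT)  angleY -= 5.0f;
        if (key == GLUT_KEY_RIGHT) angleY += 5.0f;
        if (key == GLUT_KEY_UP)    angleX -= 5.0f;
        if (key == GLUT_KEY_DOWN)  angleX += 5.0f;
        glutPostRedisplay();
    }

    static void onKey(unsigned char key, int x, int y)  // 1-4 pick the object
    {
        if (key >= '1' && key <= '4')
        {
            currentObject = key - '0';
            glutPostRedisplay();
        }
    }

    static void reshape(int w, int h)
    {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double)w / (h ? h : 1), 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("Rotating objects");
        quadric = gluNewQuadric();
        glEnable(GL_DEPTH_TEST);
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutSpecialFunc(onSpecialKey);
        glutKeyboardFunc(onKey);
        glutMainLoop();
        return 0;
    }

    Making each object spin continuously once its number key is pressed, as the request describes, would add a glutTimerFunc or glutIdleFunc that increments the angles per frame.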

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: finding a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words, always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but I ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method:

    /// <summary>
    /// Returns a relative path string from a full path based on a base path
    /// provided.
    /// </summary>
    /// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
    /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
    /// <returns>
    /// String of the relative path.
    ///
    /// Examples of returned values:
    /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
    /// </returns>
    public static string GetRelativePath(string fullPath, string basePath)
    {
        // Force basePath into a directory path
        if (!basePath.EndsWith("\\"))
            basePath += "\\";

        Uri baseUri = new Uri(basePath);
        Uri fullUri = new Uri(fullPath);

        Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

        // Uris use forward slashes, so convert back to backslashes
        return relativeUri.ToString().Replace("/", "\\");
    }

    You can then call it like this (note the verbatim strings; with plain string literals the backslashes would have to be escaped):

    string relPath = FileUtils.GetRelativePath(@"c:\temp\templates\subdir\test.txt", @"c:\temp\templates");

    It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for the existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp.

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, the entry-level Workgroup edition, and the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis.

    To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, a fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product.) All core functions will be available, but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options, allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system.

    The advanced transforms only available with Enterprise edition:
    • Data Mining Training Destination
    • Data Mining Query Component
    • Fuzzy Grouping
    • Fuzzy Lookup
    • Term Extraction
    • Term Lookup
    • Dimension Processing Destination
    • Partition Processing Destination

    The advanced tasks only available with Enterprise edition:
    • Data Mining Query Task

    So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in the SQL Server Express or Workgroup editions.

  • GLSL - one-pass gaussian blur

    - by martin pilch
    Is it possible to implement a fragment shader that does a one-pass Gaussian blur? I have found lots of implementations of two-pass blurs (Gaussian and box blur): http://callumhay.blogspot.com/2010/09/gaussian-blur-shader-glsl.html http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/ http://www.geeks3d.com/20100909/shader-library-gaussian-blur-post-processing-filter-in-glsl/ and so on. I have been thinking of implementing Gaussian blur as a true convolution (in fact, it is a convolution; the examples above are just approximations): http://en.wikipedia.org/wiki/Gaussian_blur
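
    It is possible; a literal (non-separable) convolution samples the whole neighbourhood in a single pass, at the cost of N*N texture fetches per fragment instead of N+N, which is why the two-pass versions linked above are the usual choice. An illustrative fragment shader, assuming GLSL 1.20 and a 5x5 binomial approximation of the Gaussian kernel:

    #version 120
    uniform sampler2D image;
    uniform vec2 texelSize;   // 1.0 / texture resolution, supplied by the application

    // 1D binomial kernel (1 4 6 4 1); the 2D weight is the outer product,
    // normalised by 16 * 16 = 256 so all weights sum to 1.
    const float kernel1D[5] = float[5](1.0, 4.0, 6.0, 4.0, 1.0);

    void main()
    {
        vec4 sum = vec4(0.0);
        for (int i = -2; i <= 2; ++i)
            for (int j = -2; j <= 2; ++j)
            {
                float w = kernel1D[i + 2] * kernel1D[j + 2] / 256.0;
                sum += w * texture2D(image, gl_TexCoord[0].st + vec2(i, j) * texelSize);
            }
        gl_FragColor = sum;   // 25 fetches here vs. 5 + 5 for the separable version
    }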

  • Python and Ruby in Oracle Tuxedo

    - by christopher.jones
    Did you know you can now develop services and applications in Python or Ruby with Oracle Tuxedo? The Tuxedo team have a blog post about it at Python and Ruby in Tuxedo. I used to think of Tuxedo as a Transaction Processing Monitor but it has evolved into much more.

  • T-SQL Equivalents for Microsoft Access VBA Functions

    If you need to migrate your Access application to SQL Server, don't count on the SQL Server Upsize Wizard in Microsoft Access to automatically convert your VBA functions. If you want to push the complex query processing done by your Access queries to the back end, you'll have to rewrite them in T-SQL.
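
    As a taste of what the rewrite involves (my mapping, not the article's): common VBA helpers such as Nz() and IIf() have direct T-SQL equivalents in COALESCE and CASE.

    -- Access (VBA):  SELECT Nz(Bonus, 0), IIf(Sales > 1000, 'High', 'Low') FROM Reps
    -- T-SQL rewrite (table and column names are illustrative):
    SELECT COALESCE(Bonus, 0) AS Bonus,                        -- Nz(Bonus, 0)
           CASE WHEN Sales > 1000 THEN 'High'
                ELSE 'Low' END AS SalesBand                    -- IIf(...)
    FROM dbo.Reps;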

  • How to connect a WordPress contact form to another database which uses a form script on a static site?

    - by eirlymeyer
    Static Site B has two separate contact form scripts. One processes leads via a script developed in ColdFusion; the other processes leads via a script using a MySQL database. New Site A is being developed using WordPress. How do I use a WordPress contact form plug-in to integrate these two scripts (one built on ColdFusion, the other using the existing MySQL database) and ensure the same functionality and processing of leads?

  • Transactions in LINQ to SQL applications

    - by nikolaosk
    In this post I would like to talk about LINQ to SQL and transactions. Whenever I have a LINQ to SQL class, I always get asked this question: "How does LINQ treat transactions?" When we use the DeleteOnSubmit() method or the InsertOnSubmit() method, all of those commands are at some point translated into T-SQL commands and then executed against the database. All of those commands live in transactions and they follow the basic rules of transaction processing: they succeed together or fail together... (read more)
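
    A minimal sketch of both halves of that statement (the DataContext and entity names are illustrative, not from the post): SubmitChanges() commits all pending changes in one implicit transaction, and a System.Transactions.TransactionScope widens that boundary when several operations must succeed or fail together.

    using System.Transactions;   // reference the System.Transactions assembly

    using (var scope = new TransactionScope())
    using (var db = new NorthwindDataContext())          // hypothetical DataContext
    {
        db.Customers.InsertOnSubmit(
            new Customer { CustomerID = "ACME1", CompanyName = "Acme" });
        db.SubmitChanges();      // the generated T-SQL runs inside the ambient transaction

        // ... other work that must commit or roll back with the insert ...

        scope.Complete();        // commit; leaving the block without this rolls back
    }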

  • Unable to set background image using python (2.7.3), bash and gnome3

    - by malon
    #!/usr/bin/env python
    import os

    bashCommand = "gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/" + pic_name
    print bashCommand
    os.system(bashCommand)

    Print result:

    gsettings set org.gnome.desktop.background picture-uri file:///home/malon/autowallpaperchanger/wallpaper-1252048.jpg

    Copying and pasting the print result into a terminal makes the change successfully, so the command is correct, but os.system isn't processing the request correctly for some reason. In the full script (posted below), I use os.system for a different reason immediately before (wget) and that works fine. Thank you! Full script: http://pastebin.com/R90GTmBZ
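
    One thing worth trying (an assumption on my part, not a confirmed fix): os.system hands the string to a shell, so quoting and environment differences can change its behaviour; passing an explicit argument list to subprocess avoids shell parsing entirely.

    import subprocess

    # Same command, but as an argument list: no shell involved.
    # pic_name comes from the full script linked above.
    subprocess.call(["gsettings", "set",
                     "org.gnome.desktop.background", "picture-uri",
                     "file:///home/malon/autowallpaperchanger/" + pic_name])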

  • Best of "The Moth" 2010

    - by Daniel Moth
    It is the time again (like in 2004, 2005, 2006, 2007, 2008, 2009) to look back at my blog for the past year and identify areas of interest that seem to be more prominent than others. After doing so, representative posts follow in my top 5 list (in random order).

    1. This was the year where I had to move, for the first time since 2004, my blog engine (blogger.com –> dasBlog), host provider (zen –> godaddy), web server technology and OS (apache on Linux –> IIS on Windows Server). My goal was not to break any permalinks or the look and feel of this website. A series of posts covered how I achieved that goal, culminating in a tool for others to use if they wanted to do the same: Tool to convert blogger.com content to dasBlog. Going forward I aim to be sharing more small code utilities like that one…

    2. At work I am known for being fairly responsive on email, and more importantly never dropping email balls on the floor. This is due to my email processing system, which I shared here: Processing Email in Outlook. I will be sharing more tips with regards to making the best of the Office products.

    3. There is no doubt in my mind that this is the year people will remember as the one where Microsoft finally fights back in the mobile space. Even though the new platform means my Windows Mobile book sales will dwindle :-), I am ecstatic about Windows Phone 7 both as a consumer and as a developer. On the release day, to get you started I shared the top 10 Windows Phone 7 developer resources. I will be sharing my tips from my experience in writing code for and consuming this new platform…

    4. For my HPC developer friends using Visual Studio, I shared Slides and code for MPI Cluster Debugger and also gave you all the links you need for getting started with Dryad and DryadLINQ from MSR. Expect more from me on cluster development in the coming year…

    5. Still in the HPC space, but actually also in the game and even mainstream development, the big disruption and opportunity comes in the form of GPGPU and, on the Microsoft platform, (currently) DirectCompute. Expect more from me on gpgpu development in the coming year…

    Subscribe via the link on the left to stay tuned for 2011… I wish you a very Happy New Year (with whatever definition of happiness works for you)! Comments about this post welcome at the original blog.

  • New DMV… not yet

    - by Michael Zilberstein
    Downloaded and installed the new toy. And while reading BOL, I stumbled upon a new, extremely useful DMV: sys.dm_exec_query_profiles. This DMV enables a DBA to monitor query progress while the query is being executed. Counters in the DMV are per operation per thread, so we'll be able to monitor in real time which thread (even for parallel processing) processes which node in the plan, or find heavy operations "post mortem". We all know the uncomfortable feeling when some heavy query runs and the boss starts asking... (read more)
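
    A sketch of the kind of query this enables (column names from the DMV's documentation; the session id is an example). Note that in SQL Server 2014 the monitored session has to opt in first, e.g. by running SET STATISTICS PROFILE ON, before rows show up here.

    -- Per-operator, per-thread progress of a running query
    SELECT session_id, node_id, physical_operator_name,
           thread_id, row_count, estimate_row_count
    FROM sys.dm_exec_query_profiles
    WHERE session_id = 57            -- spid being watched (example)
    ORDER BY node_id, thread_id;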

  • Thread Synchronization - UI thread and Worker thread

    This article describes how a worker thread can take control of the UI and update it, even though the UI was created by the UI thread. This is useful when a worker thread needs to update the UI in the middle of background processing, or on completion, without relying on the UI thread to synchronize the work.
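
    One common shape of this (a WinForms sketch; control and method names are illustrative, and the article may use a different mechanism): the worker thread checks InvokeRequired and marshals the update through Control.Invoke so it executes on the thread that owns the control.

    using System;
    using System.Threading;
    using System.Windows.Forms;

    public partial class MainForm : Form
    {
        private void StartWork()
        {
            var worker = new Thread(() =>
            {
                string result = DoLongRunningWork();        // runs off the UI thread

                // Invoke executes the delegate on the UI thread that owns statusLabel
                if (statusLabel.InvokeRequired)
                    statusLabel.Invoke(new Action(() => statusLabel.Text = result));
                else
                    statusLabel.Text = result;
            });
            worker.IsBackground = true;
            worker.Start();
        }

        private string DoLongRunningWork()
        {
            Thread.Sleep(2000);                             // stand-in for real work
            return "Done";
        }
    }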

  • SQLBits 8 – Conor’s back

    - by simonsabin
    I recently announced the awesome line-up for SQLBits 8, in which I mentioned Conor Cunningham. Yes, we have Conor coming back. Conor is the most popular SQLBits speaker ever. Conor Cunningham is a Principal Software Architect at Microsoft on the SQL Server Query Processor Team. He's worked on database technologies for Microsoft for over 10 years and holds numerous patents related to Query Optimization and Query Processing. Conor is the author of a number of peer-reviewed articles... (read more)

  • 'Photo editor' and 'RAW editor' in Shotwell

    - by Chris Wilson
    The preference menu in Shotwell allows the user to specify both an 'External photo editor' and an 'External RAW editor', but I'm confused as to why two external editors would be required. I'm not a photographer, so this confusion may simply be a result of my ignorance, but I thought RAW images were unprocessed photographs, in which case two editors would be kinda redundant. Am I simply missing one of the finer details of photograph processing?

  • Daily tech links for .net and related technologies - Apr 8-10, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 8-10, 2010

    Web Development
    • Using RIA DomainServices with ASP.NET and MVC 2 - geekswithblogs
    • Using AntiXss As The Default Encoder For ASP.NET - Phil Haack
    • New Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2) - Scott Gu
    • Multi-Step Processing in ASP.NET - Dave M. Bush
    • MvcContrib - Portable Area – Visual Studio project template - erichexter
    • Encoding/Decoding URIs and HTML in the .NET 4 Client Profile - Pete Brown
    • Jon Takes Five...(read more)

  • Azure Flavor for the Sharepoint Media Component

    - by spano
    Some time ago I wrote about a Media Processing Component for SharePoint that I was working on. It is a Media Assets list for SharePoint that lets you choose where to store the blob files. It also provides intelligence for encoding videos, generating thumbnail and poster images, obtaining media metadata, etc. That first post explained the component in detail, with the original 3 storage flavors: SharePoint list, Virtual Directory or FTP. The storage manager is extensible, so a new flavor was...(read more)

  • Don’t Miss The Top Exastack ISV Headlines – Week Of June 5

    - by Roxana Babiciu
    Kerridge achieves Oracle Exadata Optimized status with K8, an ERP solution for the distribution, merchant and wholesale/retail sectors. Online transaction processing saw a 12x increase in volume throughput over previous benchmarks – Watch video. Accenture achieves Oracle Exalogic Optimized status with AFPO, a unique accelerator for customer-facing solutions. Over 125 clients cut their implementation costs by up to thirty percent – Read more.

  • When do I use Apache Kafka, Azure Service Bus, vs Azure Queues?

    - by makerofthings7
    I'm trying to understand the situations in which I'd use Apache Kafka, Azure Service Bus, or Azure Queues for high-scale message processing. Which is better for standard pub/sub situations, where multiple clients each get a copy of the same message? Which is better for low-latency pub/sub with no durability? Which is better for "cooperating producers" and "competing consumers" (and what do those terms mean)? I see a bit of overlap in function between Kafka, Service Bus and Azure Queues.

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are based upon system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    • The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    • The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    • The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    • The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    • The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    • The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). [*]
    • The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*]
    • The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems, including data loading, business questions and refreshing data. Each phase usually has a time constraint and the SPARC T4-4 server shows superior performance during each phase. [*]

    [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table lists the leading TPC-H @3000GB results for non-clustered systems.

    TPC-H @3000GB, Non-Clustered Systems

    System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
    SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
    SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
    SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
    IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

    P/C/T = Processors, Cores, Threads
    QphH = the Composite Metric (bigger is better)
    $/QphH = the Price/Performance metric in USD (smaller is better)
    QppH = the Power Numerical Quantity
    QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems: Database Load & Database Refresh

    System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
    SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
    IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

    Data Loading = database load time
    RF1 = power test first refresh transaction
    RF2 = power test second refresh transaction
    T4 Advan = the ratio of the system's time to the SPARC T4-4 server's time

    Complete benchmark results are found at the TPC benchmark website http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    SPARC T4-4 server
    • 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
    • 1024 GB memory
    • 8 x internal SAS (8 x 300 GB) disk drives

    External Storage:
    • 12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

    Software Configuration:
    • Oracle Solaris 11 11/11
    • Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    • Database Size: 3000 GB (Scale Factor 3000)
    • TPC-H Composite: 205,792.0 QphH@3000GB
    • Price/performance: $4.10/QphH@3000GB
    • Available: 05/31/2012
    • Total 3 year Cost: $843,656
    • TPC-H Power: 190,325.1
    • TPC-H Throughput: 222,515.9
    • Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

    TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

    The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    • Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone IO tests.
    • The peak IO rate measured from the Oracle database was 17 GB/sec.
    • Oracle Solaris 11 11/11 required very little system tuning.
    • Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks – so this is not an important metric in which to compare systems.
    • The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    • Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    • The TPC-H Refresh Function (RF) simulates the periodical refresh portion of a Data Warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also
    • Transaction Processing Performance Council (TPC) Home Page
    • Ideas International Benchmark Page
    • SPARC T4-4 Server oracle.com OTN
    • Oracle Solaris oracle.com OTN
    • Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
    • Sun Storage 2540-M2 Array oracle.com OTN

    Disclosure Statement

    TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.
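
    As a sanity check on the headline numbers (the arithmetic is mine, using the standard TPC-H definition of the composite as the geometric mean of the Power and Throughput metrics):

    QphH@3000GB = sqrt(QppH x QthH) = sqrt(190,325.1 x 222,515.9) ≈ 205,792

    which matches the published 205,792.0 QphH@3000GB for the SPARC T4-4 result.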
