Search Results

Search found 4320 results on 173 pages for 'vertex arrays'.

  • Fit an ellipse in Python given a set of points (xi, yi)

    - by Gianni
    I am computing a series of index from a 2D points (x,y). One index is the ratio between minor and major axis. To fit the ellipse i am using the following post when i run these function the final results looks strange because the center and the axis length are not in scale with the 2D points center = [ 560415.53298363+0.j 6368878.84576771+0.j] angle of rotation = (-0.0528033467597-5.55111512313e-17j) axes = [0.00000000-557.21553487j 6817.76933256 +0.j] thanks in advance for help import numpy as np from numpy.linalg import eig, inv def fitEllipse(x,y): x = x[:,np.newaxis] y = y[:,np.newaxis] D = np.hstack((x*x, x*y, y*y, x, y, np.ones_like(x))) S = np.dot(D.T,D) C = np.zeros([6,6]) C[0,2] = C[2,0] = 2; C[1,1] = -1 E, V = eig(np.dot(inv(S), C)) n = np.argmax(np.abs(E)) a = V[:,n] return a def ellipse_center(a): b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] num = b*b-a*c x0=(c*d-b*f)/num y0=(a*f-b*d)/num return np.array([x0,y0]) def ellipse_angle_of_rotation( a ): b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] return 0.5*np.arctan(2*b/(a-c)) def ellipse_axis_length( a ): b,c,d,f,g,a = a[1]/2, a[2], a[3]/2, a[4]/2, a[5], a[0] up = 2*(a*f*f+c*d*d+g*b*b-2*b*d*f-a*c*g) down1=(b*b-a*c)*( (c-a)*np.sqrt(1+4*b*b/((a-c)*(a-c)))-(c+a)) down2=(b*b-a*c)*( (a-c)*np.sqrt(1+4*b*b/((a-c)*(a-c)))-(c+a)) res1=np.sqrt(up/down1) res2=np.sqrt(up/down2) return np.array([res1, res2]) if __name__ == '__main__': points = [(560036.4495758876, 6362071.890493258), (560036.4495758876, 6362070.890493258), (560036.9495758876, 6362070.890493258), (560036.9495758876, 6362070.390493258), (560037.4495758876, 6362070.390493258), (560037.4495758876, 6362064.890493258), (560036.4495758876, 6362064.890493258), (560036.4495758876, 6362063.390493258), (560035.4495758876, 6362063.390493258), (560035.4495758876, 6362062.390493258), (560034.9495758876, 6362062.390493258), (560034.9495758876, 6362061.390493258), (560032.9495758876, 6362061.390493258), (560032.9495758876, 6362061.890493258), (560030.4495758876, 6362061.890493258), (560030.4495758876, 6362061.390493258), (560029.9495758876, 6362061.390493258), (560029.9495758876, 6362060.390493258), (560029.4495758876, 6362060.390493258), (560029.4495758876, 6362059.890493258), (560028.9495758876, 6362059.890493258), (560028.9495758876, 6362059.390493258), (560028.4495758876, 6362059.390493258), (560028.4495758876, 6362058.890493258), (560027.4495758876, 6362058.890493258), (560027.4495758876, 6362058.390493258), (560026.9495758876, 6362058.390493258), (560026.9495758876, 6362057.890493258), (560025.4495758876, 6362057.890493258), (560025.4495758876, 6362057.390493258), (560023.4495758876, 6362057.390493258), (560023.4495758876, 6362060.390493258), (560023.9495758876, 6362060.390493258), (560023.9495758876, 6362061.890493258), (560024.4495758876, 6362061.890493258), (560024.4495758876, 6362063.390493258), (560024.9495758876, 6362063.390493258), (560024.9495758876, 6362064.390493258), (560025.4495758876, 6362064.390493258), (560025.4495758876, 6362065.390493258), (560025.9495758876, 6362065.390493258), (560025.9495758876, 6362065.890493258), (560026.4495758876, 6362065.890493258), (560026.4495758876, 6362066.890493258), (560026.9495758876, 6362066.890493258), (560026.9495758876, 6362068.390493258), (560027.4495758876, 6362068.390493258), (560027.4495758876, 6362068.890493258), (560027.9495758876, 6362068.890493258), (560027.9495758876, 6362069.390493258), (560028.4495758876, 6362069.390493258), (560028.4495758876, 6362069.890493258), (560033.4495758876, 
6362069.890493258), (560033.4495758876, 6362070.390493258), (560033.9495758876, 6362070.390493258), (560033.9495758876, 6362070.890493258), (560034.4495758876, 6362070.890493258), (560034.4495758876, 6362071.390493258), (560034.9495758876, 6362071.390493258), (560034.9495758876, 6362071.890493258), (560036.4495758876, 6362071.890493258)] a_points = np.array(points) x = a_points[:, 0] y = a_points[:, 1] from pylab import * plot(x,y) show() a = fitEllipse(x,y) center = ellipse_center(a) phi = ellipse_angle_of_rotation(a) axes = ellipse_axis_length(a) print "center = ", center print "angle of rotation = ", phi print "axes = ", axes from pylab import * plot(x,y) plot(center[0:1],center[1:], color = 'red') show() each vertex is a xi,y,i point plot of 2D point and center of fit ellipse
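
    A possible explanation for the odd center and axis values (a suggestion on my part, not something stated in the post): the raw easting/northing coordinates are in the hundreds of thousands to millions, so the design matrix D holds terms ranging from about 1 up to roughly 4e13 and the eigen-solve on inv(S)·C becomes badly conditioned. A common remedy is to center the points before fitting and shift the recovered center back afterwards; the angle and axis lengths are unaffected by the translation. A minimal sketch reusing the question's own functions:

```python
# Sketch only: center the data, fit, then undo the shift on the recovered
# center. fit_ellipse and ellipse_center are the functions from the question.
import numpy as np

def fit_ellipse_centered(x, y, fit_ellipse, ellipse_center):
    x0, y0 = x.mean(), y.mean()
    a = fit_ellipse(x - x0, y - y0)                   # fit in a well-conditioned frame
    center = ellipse_center(a) + np.array([x0, y0])   # shift the center back
    return a, center                                  # angle/axes are unchanged by the shift
```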

    Read the article

  • Dijkstra's Algorithm explanation java

    - by alchemey89
    Hi, I have found an implementation for dijkstras algorithm on the internet and was wondering if someone could help me understand how the code works. Many thanks private int nr_points=0; private int[][]Cost; private int []mask; private void dijkstraTSP() { if(nr_points==0)return; //algorithm=new String("Dijkstra"); nod1=new Vector(); nod2=new Vector(); weight=new Vector(); mask=new int[nr_points]; //initialise mask with zeros (mask[x]=1 means the vertex is marked as used) for(int i=0;i<nr_points;i++)mask[i]=0; //Dijkstra: int []dd=new int[nr_points]; int []pre=new int[nr_points]; int []path=new int[nr_points+1]; int init_vert=0,pos_in_path=0,new_vert=0; //initialise the vectors for(int i=0;i<nr_points;i++) { dd[i]=Cost[init_vert][i]; pre[i]=init_vert; path[i]=-1; } pre[init_vert]=0; path[0]=init_vert; pos_in_path++; mask[init_vert]=1; for(int k=0;k<nr_points-1;k++) { //find min. cost in dd for(int j=0;j<nr_points;j++) if(dd[j]!=0 && mask[j]==0){new_vert=j; break;} for(int j=0;j<nr_points;j++) if(dd[j]<dd[new_vert] && mask[j]==0 && dd[j]!=0)new_vert=j; mask[new_vert]=1; path[pos_in_path]=new_vert; pos_in_path++; for(int j=0;j<nr_points;j++) { if(mask[j]==0) { if(dd[j]>dd[new_vert]+Cost[new_vert][j]) { dd[j]=dd[new_vert]+Cost[new_vert][j]; } } } } //Close the cycle path[nr_points]=init_vert; //Save the solution in 3 vectors (for graphical purposes) for(int i=0;i<nr_points;i++) { nod1.addElement(path[i]); nod2.addElement(path[i+1]); weight.addElement(Cost[path[i]][path[i+1]]); } }
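
    One thing that helps when reading the routine above: the path array simply records the order in which vertices are settled, and that visiting order is then closed into a cycle, so despite the relaxation step it is being used as a tour-building heuristic (hence the dijkstraTSP name) rather than to recover shortest paths. For comparison, a textbook Dijkstra with a priority queue looks like the following sketch (my own reference code, not a translation of the Java above):

```python
# A minimal reference implementation of Dijkstra's shortest-path algorithm.
import heapq

def dijkstra(cost, source=0):
    """cost is a square matrix; cost[u][v] is the edge weight from u to v."""
    n = len(cost)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[source] = 0
    heap = [(0, source)]                      # (distance so far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                       # stale queue entry, skip it
            continue
        for v in range(n):
            nd = d + cost[u][v]
            if nd < dist[v]:                  # relax the edge u -> v
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev
```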

    Read the article

  • OpenGL, how to set a monochrome texture to a colored shape?

    - by Santiago
    I'm developing on Android with OpenGL ES. I draw some cubes and change their colors with glColor4f. Now I want to give the cubes a more realistic effect, so I created a monochromatic, 8-bit depth, 64x64 pixel PNG file and loaded it into a texture. Here is my problem: what is the way to combine the color and the texture to get colorized and textured cubes onto the screen? I'm not an expert on OpenGL, but I tried this: On create: public void asignBitmap(GL10 gl, Bitmap bitmap) { int[] textures = new int[1]; gl.glGenTextures(1, textures, 0); mTexture = textures[0]; gl.glBindTexture(GL10.GL_TEXTURE_2D, mTexture); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE); gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_ALPHA, bitmap, 0); ByteBuffer tbb = ByteBuffer.allocateDirect(texCoords.length * 4); tbb.order(ByteOrder.nativeOrder()); mTexBuffer = tbb.asFloatBuffer(); for (int i = 0; i < 48; i++) mTexBuffer.put(texCoords[i]); mTexBuffer.position(0); } And OnDraw: public void draw(GL10 gl, int alphawires) { gl.glColor4f(1.0f, 0.0f, 0.0f, 0.5f); //RED gl.glBindTexture(GL10.GL_TEXTURE_2D, mTexture); gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); gl.glEnable(GL10.GL_TEXTURE_2D); gl.glEnable(GL10.GL_BLEND); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexBuffer); //Set the face rotation gl.glFrontFace(GL10.GL_CW); //Point to our buffers gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); //Enable the vertex and color state gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); //Draw the vertices as triangles, based on the Index Buffer information gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, indexBuffer); //Disable the client state before leaving gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glDisable(GL10.GL_BLEND); gl.glDisable(GL10.GL_TEXTURE_2D); } I'm not even sure if I have to use a blend option, because I don't need transparency, but it is a plus :)
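
    A hedged suggestion (not confirmed anywhere in the post): for tinting a greyscale image with the current glColor4f color, the fixed-function pipeline is usually set up with the bitmap uploaded as a luminance texture and the texture environment set to GL10.GL_MODULATE instead of GL10.GL_REPLACE, so each texel is multiplied by the current color rather than overriding it. The combine step itself is just a per-channel multiply, as this small Python illustration of a single fragment shows:

```python
# Illustrative only: what GL_MODULATE does for one fragment. Each channel of
# the (greyscale) texel is multiplied by the current color, so the texture
# shades the red from glColor4f instead of wiping it out.
def modulate(texel, color):
    return tuple(t * c for t, c in zip(texel, color))

# mid-grey texel combined with the red used in the question:
print(modulate((0.5, 0.5, 0.5, 1.0), (1.0, 0.0, 0.0, 0.5)))  # -> (0.5, 0.0, 0.0, 0.5)
```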

    Read the article

  • iPhone OpenGL scrolling background jumps when texture is drawn for first time

    - by Magnum39
    I have been fighting a problem for a while now and would appreciate any help anybody could give. I have a sprite that moves within a landscape. The sprite remains in the center of the screen and the background moves to simulate that the sprite is moving within the landscape. I have split the landscape into sections so that I only draw the sections of the landscape that I need (are on screen). The Problem: As a new texture section of the screen appears on the screen (is drawn for the first time) the movement jumps. Almost like a frame is missed. I have done some timing experiments and I do not thinks a frame is missed. My processing is well below the 30fps that I have the animation set to. It only happens the first time the texture section is drawn. Is there something extra that is done the first time a texture is drawn? Here is the code: - (void) render{ // Sets up an array of values to use as the sprite vertices. const GLfloat sVerts[] = { -1.6f, -1.6f, 1.6f, -1.6f, -1.6f, 1.6f, 1.6f, 1.6f, }; static const GLfloat sTexCoords[] = { 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0 }; glDisableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Setup opengl to draw the object in correct orientation, size, position, etc glLoadIdentity(); // Enable use of the texture glEnable(GL_TEXTURE_2D); glVertexPointer(2, GL_FLOAT, 0, sVerts); glTexCoordPointer(2, GL_FLOAT, 0, sTexCoords); // draw the texture // set the position of the first tile float xOffset = -4.8; float yOffset = 4.8; int i; int y; int currentTexture = textureA; for(i=0; i<2; i++) { for(y=0; y<2; y++) { // test for the texture tile on the screen if not on screen then do not draw float localX = xOffset+(3.21*y); float localY = yOffset-(3.21*i); float xDiff = monkeyX - localX; float yDiff = monkeyY - localY; if(((xDiff < 3.2) && (xDiff > -3.2)) && ((yDiff <2.7) && (yDiff > -2.7))) { // bind the texture and set the vertex data pointers glBindTexture(GL_TEXTURE_2D, spriteTexture[currentTexture]); // move to draw position for the texture glLoadIdentity(); glTranslatef((localX+self.positionX), (localY+self.positionY), 0.0); //draw the texture glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } currentTexture++; } } }

    Read the article

  • Windows Phone 7 Prototype 001: Speech Recognition on WP7

    At some point in the future it will be awesome when you can just tell your computer what to do and it does it - without typing, to help those of us with a blistering 11 WPM hunt-and-peck technique. Siri, a mobile digital assistant using speech recognition, was voted best tech at SXSW. I don't know about that one. Although I'm sure it will get better when Apple rebuilds it and bundles it on the iPhone 5. So how would you do that on WP7? There have been some videos floating around showing Bing with some voice control, so obviously the phone has speech recognition. So what options are there? System.Speech? Not included in WP7/SL. Nuance software like Siri? No WP7/SL version yet. Invoking the SAPI DLLs on the phone? No automation factory in WP7 SL. Web services using System.Speech and the mic on the phone? YES! The last one was my least favorite, but it works for now. I built a quick sample app to show how to do text-to-speech and speech recognition on WP7. @eklimczak will not be happy with the developer-designed UI. In this sample there is a web service which provides access to the System.Speech APIs in .NET. Basically it's just passing around byte arrays. On the phone it's using the XNA audio frameworks to play the text-to-speech stream and to record using the microphone. The code is pretty simple and you can download it from the link at the end of this post. The only things to note are adjusting the WCF config to handle larger byte uploads, and that the Microphone API is a little weird with its 1-second buffer. It would be nice if you could just do mic.start and mic.end, which would return an array of bytes, instead of managing your own stream inside the buffer-ready callback. A couple of downsides to this approach: Recording from the phone has some static - could be my code, or my mic is bad / not calibrated right. Having to make web service calls instead of local access is not ideal (Microsoft, please add an API for the SAPI DLLs), although in the context of an app like Siri it's not so bad, since you need to do web service lookups to get data back anyway. Speech recognition quality really depends on either a) a limited grammar set like the pizza grammar in the sample or b) training the recognizer. For the latter it would be annoying to have users train the system; using the System.Speech stuff you'd have to have a profile for each user. So until Microsoft adds some speech client APIs on the phone or Nuance releases a WP7 product, this is a decent workaround. In the future I'd like to build something similar to Siri. I shall call it Iris in homage. I'm a big fan of mobile speech apps because frankly it's just not safe to Google while driving. Since some of my designer co-workers have been posting UI sketches for WP7, I'd like to start posting some code prototypes for things I try out on the phone. That will probably last 2 weeks, but for the moment I have like 10 posts in the queue. Sample code: 100% guaranteed to work on my emulator.
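
    The gripe about "managing your own stream inside the buffer-ready callback" comes down to accumulating fixed-size chunks until recording stops. Purely as an illustration of that wrapper pattern (the names are made up; this is not the XNA Microphone API), it amounts to something like:

```python
# Sketch of a start/stop recorder that hides the buffer-ready callback by
# accumulating each fixed-size chunk into one byte stream.
class SimpleRecorder:
    def __init__(self):
        self._chunks = bytearray()
        self._recording = False

    def start(self):
        self._chunks.clear()
        self._recording = True

    def on_buffer_ready(self, chunk: bytes):
        # called by the audio layer roughly once per buffer interval
        if self._recording:
            self._chunks.extend(chunk)

    def stop(self) -> bytes:
        self._recording = False
        return bytes(self._chunks)   # the whole recording as one byte array

# usage: rec.start(); ...feed on_buffer_ready(...) from the mic...; data = rec.stop()
```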

    Read the article

  • Knockout.js - Filtering, Sorting, and Paging

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/knockout.js---filtering-sorting-and-paging.aspxKnockout.js is fantastic! Maybe I missed it but it appears to be missing flexible filtering, sorting, and pagination of its grids. This is a summary of my attempt at creating this functionality which has been working out amazingly well for my purposes. Before you continue, this post is not intended to teach you the basics of Knockout. They have already created a fantastic tutorial for this purpose. You'd be wise to review this before you continue. http://learn.knockoutjs.com/ Please view the full source code and functional example on jsFiddle. Below you will find a brief explanation of some of the components. http://jsfiddle.net/JTimperley/pyCTN/13/ First we need to create a model to represent our records. This model is a simple container with defined and guaranteed members. function CustomerModel(data) { if (!data) { data = {}; } var self = this; self.id = data.id; self.name = data.name; self.status = data.status; } Next we need a model to represent the page as a whole with an array of the previously defined records. I have intentionally overlooked the filtering and sorting options for now. Note how the filtering, sorting, and pagination are chained together to accomplish all three goals. This strategy allows each of these pieces to be used selectively based on the page's needs. If you only need sorting, just sort, etc. function CustomerPageModel(data) { if (!data) { data = {}; } var self = this; self.customers = ExtractModels(self, data.customers, CustomerModel); var filters = […]; var sortOptions = […]; self.filter = new FilterModel(filters, self.customers); self.sorter = new SorterModel(sortOptions, self.filter.filteredRecords); self.pager = new PagerModel(self.sorter.orderedRecords); } The code currently supports text box and drop down filters. Text box filters require defining the current 'Value' and the 'RecordValue' function to retrieve the filterable value from the provided record. Drop downs allow defining all possible values, the current option, and the 'RecordValue' as before. Once defining these filters, they are automatically added to the screen and any changes to their values will automatically update the results, causing their sort and pagination to be re-evaluated. var filters = [ { Type: "text", Name: "Name", Value: ko.observable(""), RecordValue: function(record) { return record.name; } }, { Type: "select", Name: "Status", Options: [ GetOption("All", "All", null), GetOption("New", "New", true), GetOption("Recently Modified", "Recently Modified", false) ], CurrentOption: ko.observable(), RecordValue: function(record) { return record.status; } } ]; Sort options are more simplistic and are also automatically added to the screen. Simply provide each option's name and value for the sort drop down as well as function to allow defining how the records are compared. This mechanism can easily be adapted for using table headers as the sort triggers. That strategy hasn't crossed my functionality needs at this point. var sortOptions = [ { Name: "Name", Value: "Name", Sort: function(left, right) { return CompareCaseInsensitive(left.name, right.name); } } ]; Paging options are completely contained by the pager model. Because we will be chaining arrays between our filtering, sorting, and pagination models, the following utility method is used to prevent errors when handing an observable array to another observable array. 
function GetObservableArray(array) { if (typeof(array) == 'function') { return array; }   return ko.observableArray(array); }
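
    The chaining described above - filtered records feed the sorter, whose output feeds the pager, with each stage optional - is easy to see independently of Knockout. A minimal Python sketch of the same composition over plain dictionaries (illustrative only; the real models use observable arrays):

```python
# Minimal sketch of filter -> sort -> page chaining over plain data.
def filter_records(records, predicate):
    return [r for r in records if predicate(r)]

def sort_records(records, key):
    return sorted(records, key=key)

def page_records(records, page, page_size):
    start = page * page_size
    return records[start:start + page_size]

customers = [
    {"id": 1, "name": "Acme", "status": "New"},
    {"id": 2, "name": "Zenith", "status": "Recently Modified"},
    {"id": 3, "name": "apex", "status": "New"},
]

filtered = filter_records(customers, lambda c: c["status"] == "New")
ordered = sort_records(filtered, key=lambda c: c["name"].lower())  # case-insensitive, like CompareCaseInsensitive
print(page_records(ordered, page=0, page_size=10))
```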

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow, and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads. Netapp's Fundamental Problem: The fundamental problem you have running these workloads on Netapp is the back-end block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has demonstrated this lack of large-block performance in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECSFS, both recently; and in 2011 they purchased Engenio to try and fill this gap in their portfolio. Block Size Matters: So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics and Microsoft SQL, and Hadoop HDFS blocks are even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will again have to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MSSQL database on a ZS3, you can set the specific LUNs for this database to 64k, and then when you do an application read/write it requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now, all arrays will have some fancy prefetch algorithm and some nice cache, and maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly you will see why it matters. Here are a couple of READ examples using SAS and MSSQL. Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks - notice the number of drives on the Netapp! Here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU well before you get to these drive numbers, so you'd be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today. ZS3 World Record Price/Performance in the SPC-2 benchmark: the ZS3-2 is #1 in price-performance at $12.08, and #3 in overall performance at 16,212 MBPS. Note: the number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price-performance of $29.79. A customer could purchase 2 x ZS3-2 systems from the benchmark with roughly the same performance and walk away with $600,000 in their pocket.
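
    The 16x figure is straight arithmetic on the block sizes, and the "simple math in Excel" is easy to reproduce. A short sketch with hypothetical IOPS numbers (not taken from the article's charts):

```python
# Back-of-the-envelope version of the block-size math described above.
# With a fixed 4k back-end block, every front-end I/O of size B costs
# ceil(B / 4k) disk operations; with a matching variable block it costs 1.
import math

def backend_iops(frontend_iops, block_size_kb, backend_block_kb=4):
    return frontend_iops * math.ceil(block_size_kb / backend_block_kb)

app_iops = 10_000                       # hypothetical application read IOPS
for block in (64, 128):
    print(f"{block}k blocks: {backend_iops(app_iops, block):,} 4k disk IOPS "
          f"vs {app_iops:,} with a matching {block}k block")
# 64k  -> 160,000 vs 10,000  (the 16x amplification from the post)
# 128k -> 320,000 vs 10,000  (32x)
```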

    Read the article

  • How to make the constructor for the following exercise in C++?

    - by user40630
    This is the exercise I?m trying to solve. It's from C++, How to program book from Deitel and it's my homework. (Card Shuffling and Dealing) Create a program to shuffle and deal a deck of cards. The program should consist of class Card, class DeckOfCards and a driver program. Class Card should provide: a) Data members face and suit of type int. b) A constructor that receives two ints representing the face and suit and uses them to initialize the data members. c) Two static arrays of strings representing the faces and suits. d) A toString function that returns the Card as a string in the form “face of suit.” You can use the + operator to concatenate strings. Class DeckOfCards should contain: a) A vector of Cards named deck to store the Cards. b) An integer currentCard representing the next card to deal. c) A default constructor that initializes the Cards in the deck. The constructor should use vector function push_back to add each Card to the end of the vector after the Card is created and initialized. This should be done for each of the 52 Cards in the deck. d) A shuffle function that shuffles the Cards in the deck. The shuffle algorithm should iterate through the vector of Cards. For each Card, randomly select another Card in the deck and swap the two Cards. e) A dealCard function that returns the next Card object from the deck. f) A moreCards function that returns a bool value indicating whether there are more Cards to deal. The driver program should create a DeckOfCards object, shuffle the cards, then deal the 52 cards. The problem I'm facing is that I don't know exactly how to make the constructor for the second class. See description commented in the code bellow. #include <iostream> #include <vector> using namespace std; /* * */ //Class card. No problems here. class Card { public: Card(int, int); string toString(); private: int suit, face; static string faceNames[13]; static string suitNames[4]; }; string Card::faceNames[13] = {"Ace","Two","Three","Four","Five","Six","Seven","Eight","Nine","Ten","Queen","Jack","King"}; string Card::suitNames[4] = {"Diamonds","Clubs","Hearts","Spades"}; string Card::toString() { return faceNames[face]+" of "+suitNames[suit]; } Card::Card(int f, int s) :face(f), suit(s) { } /*The problem begins here. This class should create(when and object for it is created) a copy of the vector deck, right? But how exactly are these vector cards be initialized? I'll explain better in the constructor definition bellow.*/ class DeckOfCards { public: DeckOfCards(); void shuffleCards(); Card dealCard(); bool moreCards(); private: vector<Card> deck(52); int currentCard; }; int main(int argc, char** argv) { return 0; } DeckOfCards::DeckOfCards() { //This is where I'm stuck. I can't figure out how to set each of the 52 cards of the vector deck to have a specific suit and face every one of them, by using only the constructor of the Card class. //What you see bellow was one of my attempts to solve this problem but I blocked pretty soon in the middle of it. for(int i=0; i<deck.size(); i++) { deck[i]//....There is no function to set them. They must be set when initialized. But how?? } } For easier reading: http://pastebin.com/pJeXMH0f
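
    The constructor being asked about is conceptually just two nested loops - one over suits, one over faces - creating a Card for each combination and appending it to the deck, plus the random-swap shuffle from part d. Since the exercise itself must be written in C++, the sketch below only illustrates that loop structure in Python; it deliberately ignores the C++-specific detail that a member cannot be declared as vector<Card> deck(52) inside the class body:

```python
# Python sketch of the DeckOfCards constructor logic and shuffle from the
# exercise (illustrative only; the homework itself has to be done in C++).
import random

FACES = ["Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
         "Eight", "Nine", "Ten", "Jack", "Queen", "King"]
SUITS = ["Diamonds", "Clubs", "Hearts", "Spades"]

def build_deck():
    # one Card per (suit, face) pair -> 52 entries, like the push_back loop
    return [(face, suit) for suit in SUITS for face in FACES]

def shuffle(deck):
    # part d: for each card, swap it with another randomly chosen card
    for i in range(len(deck)):
        j = random.randrange(len(deck))
        deck[i], deck[j] = deck[j], deck[i]

deck = build_deck()
shuffle(deck)
print(len(deck), deck[0])   # 52 cards, e.g. ('Nine', 'Hearts')
```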

    Read the article

  • Drawing transparent glyphs on the HTML canvas

    - by Bertrand Le Roy
    The HTML canvas has a set of methods, createImageData and putImageData, that look like they will enable you to draw transparent shapes pixel by pixel. The data structures that you manipulate with these methods are pseudo-arrays of pixels, with four bytes per pixel: one byte for red, one for green, one for blue and one for alpha. This alpha byte makes one believe that you are going to be able to manage transparency, but that's a lie. Here is a little script that attempts to overlay a simple generated pattern on top of a uniform background: var wrong = document.getElementById("wrong").getContext("2d"); wrong.fillStyle = "#ffd42a"; wrong.fillRect(0, 0, 64, 64); var overlay = wrong.createImageData(32, 32), data = overlay.data; fill(data); wrong.putImageData(overlay, 16, 16); where the fill method is setting the pixels in the lower-left half of the overlay to opaque red, and the rest to transparent black. And here's how it renders: As you can see, the transparency byte was completely ignored. Or was it? In fact, what happens is more subtle. What happens is that the pixels from the image data, including their alpha byte, replaced the existing pixels of the canvas. So the alpha byte is not lost, it's just that it wasn't used by putImageData to combine the new pixels with the existing ones. This is in fact a clue to how to write a putImageData that works: we can first dump that image data into an intermediary canvas, and then compose that temporary canvas onto our main canvas. The method that we can use for this composition is drawImage, which works not only with image objects, but also with canvas objects. var right = document.getElementById("right").getContext("2d"); right.fillStyle = "#ffd42a"; right.fillRect(0, 0, 64, 64); var overlay = wrong.createImageData(32, 32), data = overlay.data; fill(data); var overlayCanvas = document.createElement("canvas"); overlayCanvas.width = overlayCanvas.height = 32; overlayCanvas.getContext("2d").putImageData(overlay, 0, 0); right.drawImage(overlayCanvas, 16, 16); And there it is, a version of putImageData that works like it should always have.
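
    The contrast drawn above between putImageData and drawImage is exactly the contrast between copying pixels and source-over compositing. For readers who want the algebra, here is the standard source-over blend for a single pixel in plain Python (my illustration, not code from the post):

```python
# Source-over compositing for one pixel, with channels in 0.0..1.0.
# putImageData simply replaces dst with src; drawImage blends them like this.
def source_over(src, dst):
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1 - sa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)

# Transparent black over the yellow background leaves the yellow untouched,
# which is what the temporary-canvas version finally achieves:
print(source_over((0, 0, 0, 0.0), (1.0, 0.83, 0.16, 1.0)))
```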

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn’t want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today’s post could fall into the category of, “Hey, as long as you have that fancy tool…”  Unfortunately, it also falls into the category of an overloaded worker taking someone else’s word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here’s my story… I’m not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I’m always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well. Then one day, I’m poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, “there must be something wrong with the demo version of the software!  This can’t be right!”  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth! I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn’t disastrous.  Our databases didn’t suddenly grind to a halt.  I didn’t get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify! Yes, before someone else points it out, I know that there are”free” disk management tools built-in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but you actually never verified?

    Read the article

  • How to create an array with unique sprites in cocos2d for iPhone?

    - by prakash s
    I write the code like this. This displays only one sprite (red colour bubble) with number of times and moving down, but actually I want to display different sprites (different colour bubble) every time and moving down. I also add no of .png images in resource folder of my project. Here I used only 3.png, but I need to display all *.png images (different colour bubbles) in my project but I don't know how to get this. Please help me Thank you. Here is the code: -(void)addTarget { CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)]; CGSize winSize = [[CCDirector sharedDirector] winSize]; int minY = target.contentSize.height/2; int maxY = winSize.height - target.contentSize.height/2; int rangeY = maxY - minY; int actualY = (arc4random() % rangeY) + minY; // Create the target slightly off-screen along the right edge, // and along a random position along the Y axis as calculated above target.position = ccp(winSize.width + (target.contentSize.width/2), actualY); [self addChild:target]; // Determine speed of the target int minDuration = 4.0; int maxDuration = 12.0; int rangeDuration = maxDuration - minDuration; int actualDuration = (arc4random() % rangeDuration) + minDuration; // Create the actions id actionMove = [CCMoveTo actionWithDuration:actualDuration position:ccp(-target.contentSize.width/2,actualY)]; id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)]; [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]]; // Add to targets array target.tag = 2; [_targets addObject:target]; } -(void)gameLogic:(ccTime)dt { [self addTarget]; } -(id) init { if( (self=[super initWithColor:ccc4(255,255,255,255)] )) { // Enable touch events self.isTouchEnabled = YES; // Initialize arrays _targets = [[NSMutableArray alloc] init]; _projectiles = [[NSMutableArray alloc] init]; // Get the dimensions of the window for calculation purposes CGSize winSize = [[CCDirector sharedDirector] winSize]; [self schedule:@selector(gameLogic:) interval:1.0]; [self schedule:@selector(update:)]; } return self; } - (void)update:(ccTime)dt { NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init]; for (CCSprite *projectile in _projectiles) { CGRect projectileRect = CGRectMake(projectile.position.x - (projectile.contentSize.width/2), projectile.position.y - (projectile.contentSize.height/2), projectile.contentSize.width, projectile.contentSize.height); NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init]; for (CCSprite *target in _targets) { CGRect targetRect = CGRectMake(target.position.x - (target.contentSize.width/2), target.position.y - (target.contentSize.height/2), target.contentSize.width, target.contentSize.height); if (CGRectIntersectsRect(projectileRect, targetRect)) { [targetsToDelete addObject:target]; } } for (CCSprite *target in targetsToDelete) { [_targets removeObject:target]; [self removeChild:target cleanup:YES]; _projectilesDestroyed++; if (_projectilesDestroyed > 30) { //GameOverScene *gameOverScene = [GameOverScene node]; // [gameOverScene.layer.label setString:@"You Win!"]; // [[CCDirector sharedDirector] replaceScene:gameOverScene]; } } if (targetsToDelete.count > 0) { [projectilesToDelete addObject:projectile]; } [targetsToDelete release]; } for (CCSprite *projectile in projectilesToDelete) { [_projectiles removeObject:projectile]; [self removeChild:projectile cleanup:YES]; } [projectilesToDelete release]; }
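
    The missing piece being asked about is simply keeping the candidate bubble images in a collection and picking one at random on every spawn, instead of hard-coding "3.png" in addTarget. The selection logic itself is tiny; sketched in Python with placeholder file names (the sprite still has to be created through CCSprite as above):

```python
# Illustrative only: choose a random bubble image per spawn instead of a
# fixed "3.png". File names here are placeholders.
import random

BUBBLE_IMAGES = ["1.png", "2.png", "3.png", "4.png"]

def next_bubble_image():
    return random.choice(BUBBLE_IMAGES)

for _ in range(5):
    print(next_bubble_image())   # e.g. 2.png, 4.png, 1.png, ...
```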

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify: (Both): The extreme emphasis on vectors and matrices to the extent that there are no true primitives. (Both): The difficulty of basic string manipulation. (Both): Lack of or awkwardness in support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays. (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code. (Both): They seem to not be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem to not be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible. (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C. (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages. (MATLAB): You can't put multiple top level functions in a single file, encouraging very long functions and cut-and-paste coding. (MATLAB): Integers apparently don't exist as a first class type. (R): The basic builtin data structures seem way too high level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower level data structures. (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell. (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better. In general, I also feel like MATLAB and R could be easily replaced by plain old libraries in more general-purpose langauges, if sufficiently comprehensive libraries existed. This is especially true in newer general purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips? Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, to have one script that incorporates the whole pipeline. This implies that a general purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. I find the friction of using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

    Read the article

  • 45 minutes to talk about C# [closed]

    - by Philip
    I have the opportunity to give a 45 minute talk on C# in the theory of programming languages class I'm taking. The college teaches Java almost exclusively, so that's what all the students are most familiar with. (There's a little C, assembly, Prolog and LISP as well.) I decide what to talk about. It seems to me the best approach is to focus on a few of the big, obvious differences between C# and Java. I don't intend it to be a recommendation to use C# -- there are reasons to use each, mostly because of their ecosystems. So I want to focus on C# as a language. I don't want to go too fast and end up listing a whole bunch of features without showing their usefulness. My current plan is this: Functions as first class objects. This is, in my opinion, one of the biggest differences between C# and Java. The professor briefly mentioned this notion and showed a LISP example, but many of the students have probably never used it. I can show real world examples where it's made my code more readable. Lambda expressions as concise syntax for anonymous functions. Obviously with examples to show how this is useful. The real hit-home examples will be at the end when it's combined with the rest. I don't see an advantage to first showing the old delegate syntax and then replacing it with lambdas -- most of us won't have ever seen delegates anyway so it would just be confusing. The yield keyword and how it's different from returning an array. I have the impression that a lot of C# developers aren't familiar with how to use this. It will likely be very foreign to Java developers. I have some examples from my own work where it was really useful, such as iterating over a tree traversal, or iterating over neighbors in a graph where the neighbors aren't stored in memory. In both cases, doing it in Java would likely mean returning a complete list -- with yield I can stop iterating if I find what I want early on, without using memory for superfluous lists or arrays. Extension methods as a way to write implementation on interfaces. We'll all be familiar with how interfaces don't allow method implementation, and how this leads to code duplication. I'll show a specific example of this and how the extension method can solve the problem. Demonstrate how the above can be combined by implementing some simple Linq methods and using them. Where, Select, First, maybe more depending on how much time is left. Ideas on which ones might 'hit home' the best? There are other things I could talk about such as generics, value types, properties and more. I haven't yet though of good ways to incorporate these. In the case of generics and value types, the advantages might not be obvious or as relevant. Properties are obviously useful, particularly since we're taught strict JavaBeans here, but I don't know if I could integrate it with the "path to Linq" discussion above without it feeling tacked on. So I'm looking for thoughts on how to talk about C#, and what to talk about. Even minor details. I'm sure there are more experienced C# developers than me here who have good insight about what's really important in the language, and what would miss the point.
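
    Of the topics listed, yield is probably the easiest to make concrete for a Java-trained audience, because the payoff described above - stopping a traversal early without building a throwaway list - can be shown in a few lines. Python's generators are close enough to C# iterators to carry the idea (my own example, not one from the planned talk):

```python
# Lazy in-order traversal of a binary tree: nodes are produced one at a time,
# so the search below stops as soon as a match is found - no full list built.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def in_order(node):
    if node is None:
        return
    yield from in_order(node.left)
    yield node.value
    yield from in_order(node.right)

tree = Node(2, Node(1), Node(4, Node(3), Node(5)))
first_big = next(v for v in in_order(tree) if v > 2)   # traversal halts here
print(first_big)   # 3
```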

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my ruby and rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the rails built-in testing support but I will get there. Interesting Things I Learned Tonight RubyMine IDE Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in visual studio and notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens. Haml Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml. Multiple CSS Classes To define a div with more than one css class haml lets you chain them together with a ‘.’, such as: .span-6.search_result contents of the div go here Indent Consistency I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render all of the indents within a view must be consistent. Sorting Arrays I guessed that ruby would be able to sort an array alphabetically by a property of the elements so my first attempt was: Application.all.sort {|app| app.name} which does not work. You have to supply a comparer (much like .NET). The correct sort is: Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase} MongoMapper Find by Id Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method: Application.find(‘4c19e8facfbfb01794000002’) Rails And CoffeeScript I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1. Install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now. Quicksearch The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jquery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code: $('#app_search').quicksearch('.search_result'); Summary I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
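
    The "carry the array's length around so indexing can be checked" idea can be made concrete even without dependent types. The sketch below (an illustration of the concept only, not of any real prover) shows the information a checker would need; a dependently typed language moves the same check from a runtime gate to compile time:

```python
# A vector that knows its own length and refuses out-of-range access up front.
# A dependently typed language would make the rejected call below a
# compile-time error; here the check is a single, explicit runtime gate.
class SizedVec:
    def __init__(self, *items):
        self.items = list(items)
        self.length = len(self.items)

    def get(self, index: int):
        if not 0 <= index < self.length:
            raise IndexError(f"index {index} not provably < {self.length}")
        return self.items[index]

v = SizedVec(10, 20)
print(v.get(1))    # fine: 1 < 2
# v.get(4)         # rejected: the "value of index 4" example from the text
```
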
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview We saw in a previous post how index<N> represents a point in N-dimensional space and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N> where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least- significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised with the below description of extent<N> There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1- 2- and 3- dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, –= and, finally, there are additional overloads for plus and minus (+,-) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements. Example code extent<2> e(3, 4); _ASSERT(e.rank == 2); _ASSERT(e.size() == 3 * 4); e += 3; e[1] += 6; e = e + index<2>(3,-4); _ASSERT(e == extent<2>(9, 9)); _ASSERT( e.contains(index<2>(8, 8))); _ASSERT(!e.contains(index<2>(8, 9))); grid<N> Our upcoming pre-release bits also have a similar type to extent, grid<N>. The way you create a grid is by passing it an extent, e.g. extent<3> e(4,2,6); grid<3> g(e); I am not going to dive deeper into grid, suffice for now to think of grid<N> simply as an alias for the extent<N> object, that you create when you encounter a function that expects a grid object instead of an extent object. Usage The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and use an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forth with interviews so as to weed out the group a little bit to find some people that can demonstrate some skill. I put together the following quiz to send to applicants, it focuses only on PHP, but that is because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A. if its a good idea to send this to applicants and B. if it can be improved upon. # 1. FizzBuzz # Write a small application that does the following: # Counts from 1 to 100 # For multiples of 3 output "Fizz" # For multiples of 5 output "Buzz" # For multiples of 3 and 5 output "FizzBuzz" # For numbers that are not multiples of 3 nor 5 output the number. <?php ?> # 2. Arrays # Create a multi-dimensional array that contains # keys for 'id', 'lot', 'car_model', 'color', 'price'. # Insert three sets of data into the array. <?php ?> # 3. Comparisons # Without executing the code, tell if the expressions # below will return true or false. <?php if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False? if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False? if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False? ?> # 4. Bug Checking # The code below is flawed. Fix it so that the code # runs properly without producing any Errors, Warnings # or Notices, and returns the proper value. <?php //Determine how many parts are needed to create a 3D pyramid. function find_3d_pyramid($rows) { //Loop through each row. for ($i = 0; $i < $rows; $i++) { $lastRow++; //Append the latest row to the running total. $total = $total + (pow($lastRow,3)); } //Return the total. return $total; } $i = 3; echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces."; ?> # 5. Quick Examples # Create a small example to complete the task # for each of the following problems. # Create an md5 hash of "Hello World"; # Replace all occurances of "_" with "-" in the string "Welcome_to_the_universe." # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM # Find the sum, average, and median of the following set of numbers. 1, 3, 5, 6, 7, 9, 10. # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array. <?php ?>

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, costs 614€, and is currently on sale with a special offer for 306€. It includes one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get, in the All Methods grid, a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really do miss the thread stack window from YourKit. How am I supposed to get a clue about which places to look at when much memory is allocated and CPU consumption is high? The Snapshot summary gives a rough overview, which is OK for a first impression. Next is Assemblies: this gives you a list of all loaded assemblies. Not terribly useful. The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft. By default, mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked nearly empty. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure I like this feature, because by default it hides too much. I want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference to them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing yours.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by just looking at his own code and instances. Showing them more will not help them, because the sheer amount of information will overwhelm them. And you need to have a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it a few steps further: items can take more than one slot, an advanced placement system (items try their best to find the best close fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work.

    What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3, etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets. You look around, you see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, they will fit! Now the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops and organizes the inventory to make space (remember, the inventory is real-time, no pausing). Wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so it's surely doable.) Take a look at this picture for example:

    Yes, if you auto-sort you will get your spaces, but it's bad because: 1. It's expensive: it doesn't need a whole sort operation to free those spaces - in the first picture, just slide the red item at the bottom to the very left, and you get the same spaces that you got from the auto-sort. 2. It's annoying to the player: "Who the F told you to re-order my stuff?"

    I'm not asking "how to write the code" for this, I'm just asking for some guidance: where to look, what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, because I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the related material. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out if the situation is 'solvable' - if I know how to determine whether a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable'. And I believe there must be some algorithm/data structure for this. Here's a pic with more than one solution for trying to fit a 1x3 item: the arrows show just one of the solutions, but if you look you will find more than one. This is ultimately what I want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way - it's like holding a car's steering wheel with your feet instead of your hands! XD Or just like trying to solve a problem that requires arrays, but you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)
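    One way to answer the "is this situation solvable?" question is plain backtracking over item placements, which is affordable for bag-sized grids. Below is a rough Python sketch of that idea; the function name, the (width, height) item representation and the no-rotation simplification are assumptions for illustration, not something from the post, and the same logic translates directly to C# for Unity.

    # Can a set of rectangular items be packed into a width x height grid?
    # Items are (w, h) tuples; rotation is ignored to keep the idea clear.
    def can_pack(width, height, items, occupied=frozenset()):
        if not items:
            return True                      # every item found a spot
        w, h = items[0]
        rest = items[1:]
        for x in range(width - w + 1):
            for y in range(height - h + 1):
                cells = {(x + dx, y + dy) for dx in range(w) for dy in range(h)}
                if cells & occupied:
                    continue                 # overlaps something already placed
                if can_pack(width, height, rest, occupied | cells):
                    return True              # this placement leads to a full packing
        return False                         # no placement of this item works

    print(can_pack(3, 3, [(2, 2), (1, 3), (2, 1)]))  # True: a tight arrangement exists
    print(can_pack(3, 3, [(2, 2), (2, 2)]))          # False: enough free area, but no arrangement fits

    If the search succeeds, the placements it found (easy to return instead of a bare True/False) are exactly the reorganization to apply; if it fails, the pickup simply doesn't fit, and no auto-sort is involved.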

    Read the article

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Most are about OOP in general.

    Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc., that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement.

    The problem I'm solving is that our permission checks are done via a message object (my API; they wanted to use arrays of constants) and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but approve 'allowable', which will actually approve the whole perm check. We have something like 11 validators, too many to easily track in such minute detail. So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable', now it would instead say '"allowable" is OK', at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okayed. The implementation is just four trivial classes, plus rewriting a 10-line function into a 15-line function.

    abstract class PermissionAtom {
        public function allow(); // maybe deny() as well
        public function wasAllowed();
    }
    class PermissionField extends PermissionAtom {
        public function getName();
        public function getValue();
    }
    class PermissionIdentifier extends PermissionAtom {
        public function getIdentifier();
    }
    class PermissionAction extends PermissionAtom {
        public function getType();
    }

    They say that this is 'not going to get us anything important', that it is 'too complex' and that it 'will be difficult for new developers to pick up.' I respectfully disagree, and there I end my context to begin the broader questions. So the questions are about my OOP; are there any guidelines I should know?

    Is this too complicated/too much OOP? (Not that I expect to get more than 'it depends, I'd have to see if...')
    When is OO abstraction too much? When is it too little?
    How can I determine when I am overthinking a problem vs fixing one?
    How can I determine when I am adding bad code to a bad project?
    How can I pitch these APIs? I feel the other devs would rather just say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.
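    To make the accountability idea concrete, here is a rough sketch in Python (the names are illustrative, not the proposed PHP API): the check passes only if every atom in it was explicitly allowed, so anything a validator does not recognise stays denied.

    class PermissionAtom:
        def __init__(self, name):
            self.name = name
            self.allowed = False

        def allow(self):
            self.allowed = True

        def was_allowed(self):
            return self.allowed

    class PermissionCheck:
        def __init__(self, atoms):
            self.atoms = atoms

        def passed(self):
            # the whole check passes only if every atom was explicitly okayed
            return all(atom.was_allowed() for atom in self.atoms)

    def naive_validator(check):
        # this validator only knows about 'allowable' and says nothing about the rest
        for atom in check.atoms:
            if atom.name == "allowable":
                atom.allow()

    check = PermissionCheck([PermissionAtom("allowable"),
                             PermissionAtom("rare but disallowed")])
    naive_validator(check)
    print(check.passed())   # False: 'rare but disallowed' was never okayed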

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Where is my VMware-ws FreeNAS CIFS(ZFS) bottle-neck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm generally happy with things so far; it's just that I was expecting a little better IO performance. I have no clue if my expectations are unrealistic. The NAS is there as general-purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement.

    Question: Where is my bottleneck, and is there anything I can do, config-wise, to improve my performance? I'm thinking the VM disk settings could be a factor, but I really have no idea where to go, since I'm experienced with neither FreeNAS nor VMware Workstation.

    Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop, wired to the network, I get about the same numbers. The tests I've done are with a 16 GB ISO and with about 200 MB of RARs, and I've tried avoiding the RAM cache by reading different files than the ones I'm writing (10 GB). It feels like having fewer CPU cores is a lot more efficient, since the resource manager in Windows reports lower CPU usage. With 4 cores in VMware, CPU usage was 50-80%; with 1 core it was 25-60%.

    EDIT: HD ActiveTime was quite high on the SSD, so I moved the page file, disabled hibernation and enabled the Windows disk cache on both the SSD and the RAID. This resulted in no real performance difference for one file, but if I transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime average also went down a lot (to ~20%) but now has higher bursts. Disk IO is at ~30-35 Mbytes/s average, with ~100 Mb bursts. Network is at 200-250 Mbits/s with ~45 active TCP connections.

    Hardware
    Asus F2A85-M Pro
    A10-5700
    16GB DDR3 1600
    OCZ Vertex 2 128GB SSD
    2x Generic 1TB 7200 RPM drives as RAID0 (in Win7)
    Intel Gigabit Desktop CT

    Software
    Host OS: Win7 (SSD)
    VMware Workstation 9 (SSD)
    FreeNAS 8.3 VM (20GB VDisk on SSD)
    CPU: I've tried 1, 2 and 4 cores.
    Virtualisation engine, Preferred mode: Automatic
    10,24Gb RAM
    50Gb SCSI VDisk on the RAID0; the VDisk is formatted as ZFS and exposed over CIFS through FreeNAS.
    NIC: Bridged, Replicate physical network state

    Below are two typical process print-outs while I'm transferring one file to the CIFS share.

    last pid:  2707;  load averages:  0.60,  0.43,  0.24   up 0+00:07:05  00:34:26
    32 processes:  2 running, 30 sleeping
    Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free
    Swap: 4096M Total, 4096M Free
      PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
     2640 root        1 102    0 50164K 10364K RUN     0:25 25.98% smbd
     1897 root        6  44    0   168M 74808K uwait   0:02  0.00% python

    last pid:  2746;  load averages:  0.93,  0.60,  0.33   up 0+00:08:53  00:36:14
    33 processes:  2 running, 31 sleeping
    Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free
    Swap: 4096M Total, 4096M Free
      PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
     2640 root        1  76    0 50164K 10364K RUN     0:52 16.99% smbd
     1897 root        6  44    0   168M 74816K uwait   0:02  0.00% python

    I'm sorry if my question isn't phrased right; I'm really bad at these kinds of things, and it is the first time I've posted here on SU. I'd also appreciate any other suggestions about things I could have missed.

    Read the article

  • Algorithm for dynamic combinations

    - by sOltan
    My code has a list called INPUTS that contains a dynamic number of lists; let's call them A, B, C, ..., N. These lists contain a dynamic number of Events. I would like to call a function with each combination of Events. To illustrate with an example:

    INPUTS: A(0,1,2), B(0,1), C(0,1,2,3)

    I need to call my function once for each combination, like this (the input count is dynamic; in this example it is three parameters, but it can be more or fewer):

    function(A[0],B[0],C[0])
    function(A[0],B[1],C[0])
    function(A[0],B[0],C[1])
    function(A[0],B[1],C[1])
    function(A[0],B[0],C[2])
    function(A[0],B[1],C[2])
    function(A[0],B[0],C[3])
    function(A[0],B[1],C[3])
    function(A[1],B[0],C[0])
    function(A[1],B[1],C[0])
    function(A[1],B[0],C[1])
    function(A[1],B[1],C[1])
    function(A[1],B[0],C[2])
    function(A[1],B[1],C[2])
    function(A[1],B[0],C[3])
    function(A[1],B[1],C[3])
    function(A[2],B[0],C[0])
    function(A[2],B[1],C[0])
    function(A[2],B[0],C[1])
    function(A[2],B[1],C[1])
    function(A[2],B[0],C[2])
    function(A[2],B[1],C[2])
    function(A[2],B[0],C[3])
    function(A[2],B[1],C[3])

    This is what I have thought of so far: my approach is to build a list of combinations. Each combination is itself a list of "indexes" into the input arrays A, B and C. For our example, my list iCOMBINATIONS contains the following iCOMBO lists:

    (0,0,0) (0,1,0) (0,0,1) (0,1,1) (0,0,2) (0,1,2) (0,0,3) (0,1,3)
    (1,0,0) (1,1,0) (1,0,1) (1,1,1) (1,0,2) (1,1,2) (1,0,3) (1,1,3)
    (2,0,0) (2,1,0) (2,0,1) (2,1,1) (2,0,2) (2,1,2) (2,0,3) (2,1,3)

    Then I would do this:

    foreach (iCOMBO in iCOMBINATIONS)
    {
        COMBO.Clear()
        foreach (d in 0 .. INPUTS.Count - 1)
        {
            COMBO.Add( INPUTS[d][ iCOMBO[d] ] )
        }
        function( COMBO )   // instead of passing the events separately
    }

    But I need to find a way to build the list iCOMBINATIONS for any given number of INPUTS and their events. Any ideas? Is there actually a better algorithm than this? Any pseudocode to help me with it would be great. C# (or VB). Thank you
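    For what it's worth, the iCOMBINATIONS list being built by hand here is a Cartesian product, and the usual way to generate it for any number of input lists is an "odometer" over an index array (increment the last index; on overflow, reset it and carry into the next one). Below is a short Python sketch of the idea (Python only for brevity; itertools.product plays the role of the odometer, and the same loop translates directly to C#).

    from itertools import product

    def call_for_all_combinations(inputs, func):
        # inputs is a list of lists (A, B, C, ...); func receives one element
        # taken from each of them, in order.
        for combo in product(*inputs):
            func(*combo)

    # Example with the A(0,1,2), B(0,1), C(0,1,2,3) shapes from the question.
    A, B, C = [0, 1, 2], [0, 1], [0, 1, 2, 3]
    call_for_all_combinations([A, B, C], lambda a, b, c: print(a, b, c))
    # 3 * 2 * 4 = 24 calls: the same 24 combinations as the hand-written list
    # above, though not necessarily in the same order.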

    Read the article
