Search Results

Search found 17528 results for 'row height'.

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people think the lookup component is just a component that looks up additional information, so they pay little attention to it and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to configure the lookup component's cache mode.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The full cache setting is the most commonly used cache mode, and for good reason: it has the most practical applications and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues (memory pressure in particular) when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there is going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case-insensitive, depending on the collation). There is a relatively easy workaround: apply the UPPER() or LOWER() function to both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Neither of these gotchas presents a reason to avoid full cache mode, but both should factor into deciding whether full cache mode should be used in a given situation.

    Full cache mode is ideally suited when one or all of the following conditions exist:

    - The reference data set is small to moderately sized
    - The pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache

    When using the partial cache setting, lookup values are still cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value is retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value. This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key columns). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Partial cache mode is ideally suited for the conditions below:

    - The number of distinct key values in the pipeline data set is relatively small
    - The lookup data set is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it is critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the below conditions are true:

    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicate key values in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS
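    The UPPER()/LOWER() workaround mentioned above is easy to picture with a small sketch. The table and column names below are hypothetical; the point is only that the case normalization happens on the reference side of the lookup, with the same UPPER() applied to the pipeline column (for example, in a Derived Column transformation):

        -- A sketch only: normalize case in the lookup's reference query so the
        -- case-sensitive .NET comparison used by full cache mode cannot cause
        -- mismatches. dbo.Customer and CustomerCode are hypothetical names.
        SELECT
            UPPER(CustomerCode) AS CustomerCode,  -- normalized join key
            CustomerID
        FROM dbo.Customer;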

    Read the article

  • How to do event checks for loops?

    - by yao jiang
    I am having some trouble getting the logic down for this. Currently, I have an app that animates the A* pathfinding algorithm. On start of the app, the UI shows the initial grid. The user can press "space" to randomly choose start/end coords, and the app will animate the route. Or, the user can choose the start/end by left-click/right-click. During the animation, the user can also left-click to generate blocks, or right-click to choose a new destination. Where I am stuck is how to handle the events while the app is animating. Right now, I am checking events in the main loop, and then when the app is animating, I do event checks again. While it works fine, I feel that I am probably doing it wrong. What is the proper way of setting up the main loop so that it handles the events while the app is animating? In the main loop, the app starts animating once the user chooses start/end. In my draw function, I am putting another event checker in there.

        def clear(rows):
            for r in range(rows):
                for c in range(rows):
                    if r % 3 == 1 and c % 3 == 1:
                        color = brown
                        grid[r][c] = 1
                        buildCoor.append(r)
                        buildCoor.append(c)
                    else:
                        color = white
                        grid[r][c] = 0
                    pick_image(screen, color, width*c, height*r)
            pygame.display.flip()
            os.system('cls')

        # draw out the grid
        def draw(start, end, grid, route_coord):
            # draw the end coords
            color = red
            pick_image(screen, color, width*end[1], height*end[0])
            pygame.display.flip()
            # then draw the rest of the route
            for i in range(len(route_coord)):
                # pausing because we want animation
                time.sleep(speed)
                # get the x/y coords
                x, y = route_coord[i]
                event_on = False
                if grid[x][y] == 2:
                    color = green
                elif grid[x][y] == 3:
                    color = blue
                    for event in pygame.event.get():
                        if event.type == pygame.MOUSEBUTTONDOWN:
                            if event.button == 3:
                                print "destination change detected, rerouting"
                                # get mouse position, px coords
                                pos = pygame.mouse.get_pos()
                                # get grid coord
                                c = pos[0] // width
                                r = pos[1] // height
                                grid[r][c] = 4
                                end = [r, c]
                            elif event.button == 1:
                                print "user generated event"
                                pos = pygame.mouse.get_pos()
                                # get grid coord
                                c = pos[0] // width
                                r = pos[1] // height
                                # mark it as a block for now
                                grid[r][c] = 1
                                event_on = True
                    if check_events([x, y]) or event_on:
                        # there is an event
                        # mark it as a block for now
                        grid[y][x] = 1
                        pick_image(screen, event_x, width*y, height*x)
                        pygame.display.flip()
                        # then find a new route
                        new_start = route_coord[i-1]
                        marked_grid, route_coord = find_route(new_start, end, grid)
                        draw(new_start, end, grid, route_coord)
                        return  # just end draw here so it won't throw the "index out of range" error
                elif grid[x][y] == 4:
                    color = red
                pick_image(screen, color, width*y, height*x)
                pygame.display.flip()
            # clear route coord list, otherwise it'll just add more unwanted coords
            route_coord_list[:] = []
            clear(rows)

        # main loop
        while not done:
            # check the events
            for event in pygame.event.get():
                # mouse events
                if event.type == pygame.MOUSEBUTTONDOWN:
                    # get mouse position, px coords
                    pos = pygame.mouse.get_pos()
                    # get grid coord
                    c = pos[0] // width
                    r = pos[1] // height
                    # find which button pressed, highlight grid accordingly
                    if event.button == 1:
                        # left click, start coords
                        if grid[r][c] == 2:
                            grid[r][c] = 0
                            color = white
                        elif grid[r][c] == 0 or grid[r][c] == 4:
                            grid[r][c] = 2
                            start = [r, c]
                            color = green
                        else:
                            grid[r][c] = 1
                            color = brown
                    elif event.button == 3:
                        # right click, end coords
                        if grid[r][c] == 4:
                            grid[r][c] = 0
                            color = white
                        elif grid[r][c] == 0 or grid[r][c] == 2:
                            grid[r][c] = 4
                            end = [r, c]
                            color = red
                        else:
                            grid[r][c] = 1
                            color = brown
                    pick_image(screen, color, width*c, height*r)
                # keyboard events
                elif event.type == pygame.KEYDOWN:
                    clear(rows)
                    # one way to quit program
                    if event.key == pygame.K_ESCAPE:
                        print "program will now exit."
                        done = True
                    # space key for random start/end
                    elif event.key == pygame.K_SPACE:
                        # first clear the ui
                        clear(rows)
                        # now choose random start/end coords
                        buildLoc = zip(buildCoor, buildCoor[1:])[::2]
                        #print buildLoc
                        (start_x, start_y, end_x, end_y) = pick_point()
                        while (start_x, start_y) in buildLoc or (end_x, end_y) in buildLoc:
                            (start_x, start_y, end_x, end_y) = pick_point()
                            clear(rows)
                        print "chosen random start/end coords: ", (start_x, start_y, end_x, end_y)
                        if (start_x, start_y) in buildLoc or (end_x, end_y) in buildLoc:
                            print "error"
                        # draw the route
                        marked_grid, route_coord = find_route([start_x, start_y], [end_x, end_y], grid)
                        draw([start_x, start_y], [end_x, end_y], marked_grid, route_coord)
                    # return key for user defined start/end
                    elif event.key == pygame.K_RETURN:
                        # first clear the ui
                        clear(rows)
                        # get the user defined start/end
                        print "user defined start/end are: ", (start[0], start[1], end[0], end[1])
                        grid[start[0]][start[1]] = 1
                        grid[end[0]][end[1]] = 2
                        # draw the route
                        marked_grid, route_coord = find_route(start, end, grid)
                        draw(start, end, marked_grid, route_coord)
                    # c to clear the screen
                    elif event.key == pygame.K_c:
                        print "clearing screen."
                        clear(rows)
                    # go fullscreen
                    elif event.key == pygame.K_f:
                        if not full_sc:
                            pygame.display.set_mode([1366, 768], pygame.FULLSCREEN)
                            full_sc = True
                            rows = 15
                            clear(rows)
                        else:
                            pygame.display.set_mode(size)
                            full_sc = False
                    # [/] keys to change speed of animation
                    elif event.key == pygame.K_LEFTBRACKET:
                        if speed >= 0.1:
                            print SPEED_UP
                            speed = speed_up(speed)
                            print speed
                        else:
                            print FASTEST
                            print speed
                    elif event.key == pygame.K_RIGHTBRACKET:
                        if speed < 1.0:
                            print SPEED_DOWN
                            speed = slow_down(speed)
                            print speed
                        else:
                            print SLOWEST
                            print speed
                # second method to quit program
                elif event.type == pygame.QUIT:
                    print "program will now exit."
                    done = True
            # limit to 20 fps
            clock.tick(20)
            # update the screen
            pygame.display.flip()
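    For what it's worth, a hedged sketch of the usual pygame pattern: keep exactly one event loop and make the animation non-blocking by advancing it one step per frame, instead of looping with time.sleep() inside draw(). The helpers handle_event() and draw_one_tile() are hypothetical stand-ins for logic already shown above:

        # A sketch only: one non-blocking loop that both polls events and
        # advances the animation a single step each frame.
        animating = False
        step = 0

        while not done:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    done = True
                else:
                    handle_event(event)  # hypothetical: may modify grid, start/end, or route

            if animating:
                if step < len(route_coord):
                    draw_one_tile(route_coord[step])  # hypothetical: draws a single route tile
                    step += 1
                else:
                    animating = False  # route finished

            pygame.display.flip()
            clock.tick(20)  # the frame rate now also controls the animation speed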

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced with one. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    OK, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows: a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce

    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $s | Add-Member -Type NoteProperty -Name Count -Value 1; # added so $s.Count++ has a property to increment
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        # Build the Reduce function
        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $sum = 0;
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }
            # Reduce to the average for this key
            $redux.Add($key, $sum / $count);
            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform on which to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.

    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading!

    Daniel
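    For readers who want to see the plumbing the article alludes to on its own, here is a minimal, hedged sketch (node names are placeholders, and this is not the PsMapRedux code itself) of how Invoke-Command runs a scriptblock as a remote job, the mechanism the module builds on:

        # A sketch only: "node1"/"node2" are placeholder computer names that
        # already have WinRM enabled (see winrm quickconfig above).
        $work = {
            Param([int[]] $chunk)
            # Each node computes a partial result over its chunk of the data.
            ($chunk | Measure-Object -Sum).Sum
        }

        $job = Invoke-Command -ComputerName "node1", "node2" `
                              -ScriptBlock $work `
                              -ArgumentList (,(1..100)) `
                              -AsJob

        # Wait for the remote jobs and collect the partial results.
        $partials = Receive-Job -Job $job -Wait

        # Combining the partials is a trivial "reduce" step.
        $partials | Measure-Object -Sum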

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a Perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the river's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used. This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            // Initialize Map_Data Array of Tile Objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data[i, j] = new Tile();
                    map_data[i, j].coordinate.x = (int)i;
                    map_data[i, j].coordinate.y = (int)j;
                    map_data[i, j].vertices[0] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[1] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[2] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[3] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices[0].y = 0f;
                t.vertices[1].y = 0f;
                t.vertices[2].y = 0f;
                t.vertices[3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap(size_x, size_z);

            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals[z * vsize_x + x] = Vector3.up;
                    uv[z * vsize_x + x] = new Vector2((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z - 1].vertices[3];
                    } else if (z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z - 1].vertices[2];
                    } else if (x == vsize_x - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z].vertices[1];
                    } else {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[0];
                        vertices[z * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[1];
                        vertices[(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[2];
                        vertices[(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles[triOffset + 0] = z * vsize_x + x + 0;
                    triangles[triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles[triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 3] = z * vsize_x + x + 0;
                    triangles[triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter>();
            MeshCollider mesh_collider = GetComponent<MeshCollider>();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents(mesh);
            BuildTexture(map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.
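    One hedged observation, offered as a lead to test rather than a verified fix: the tile data builds its vertices at negative z (the -j and -(j-1) terms), while BuildMesh fills its vertex rows in order of increasing z index. That kind of sign disagreement between the data structure and the mesh loop is exactly what shows up as a 180-degree flip. A minimal sketch of the change to try:

        // A sketch only: build tile vertices with +j so the data structure's z
        // direction matches the +z order in which BuildMesh fills its rows.
        map_data[i, j].vertices[0] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, j * GTileMap.TileMap.tileSize);
        map_data[i, j].vertices[1] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, j * GTileMap.TileMap.tileSize);
        map_data[i, j].vertices[2] = new Vector3(i * GTileMap.TileMap.tileSize, map_data[i, j].Height, (j + 1) * GTileMap.TileMap.tileSize);
        map_data[i, j].vertices[3] = new Vector3((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, (j + 1) * GTileMap.TileMap.tileSize);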

    Read the article

  • LINQ to DataRow: select multiple columns as distinct?

    - by Beta033
    Basically I'm trying to reproduce the following MSSQL query in LINQ:

        SELECT DISTINCT [TABLENAME], [COLUMNNAME] FROM [DATATABLE]

    The closest I've got is:

        Dim query = (From row As DataRow In ds.Tables("DATATABLE").Rows _
                     Select row("COLUMNNAME") ,row("TABLENAME").Distinct

    When I do the above I get the error "Range variable name can be inferred only from a simple or qualified name with no arguments." I was sort of expecting it to return a collection that I could then iterate through and perform actions for each entry. Maybe a DataRow collection? As a complete LINQ newb, I'm not sure what I'm missing. I've tried variations on

        Select New With { row("COLUMNNAME"), row("TABLENAME") }

    and get: "Anonymous type member name can be inferred only from a simple or qualified name with no arguments." Also, does anyone know of any good books/resources to get fluent?
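    For what it's worth, both error messages hint at the usual fix: in VB you have to give anonymous-type members explicit names when the initializer is not a simple name, and declare them as Key so Distinct compares by value. A hedged sketch, untested against the poster's schema:

        ' A minimal sketch: explicitly named Key members give the anonymous
        ' type value equality, so .Distinct() behaves like SELECT DISTINCT.
        Dim query = (From row As DataRow In ds.Tables("DATATABLE").Rows _
                     Select New With {Key .TableName = row("TABLENAME"), _
                                      Key .ColumnName = row("COLUMNNAME")}).Distinct()

        For Each item In query
            Console.WriteLine("{0} - {1}", item.TableName, item.ColumnName)
        Next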

    Read the article

  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
    I recently wrote an article on SQLBI about semi-additive measures in DAX. I included the formulas for common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (the same applies to FIRSTDATE and MIN; I just describe the former, for the latter just replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of the argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in the result. However, the difference is a semantic one. In fact, this expression:

        LASTDATE ( 'Date'[Date] )

    could also be rewritten as:

        FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )

    LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression:

        COUNTROWS ( LASTDATE ( 'Date'[Date] ) )

    which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical:

        LASTDATE ( 'Date'[Date] )
        LASTDATE ( VALUES ( 'Date'[Date] ) )

    The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using the LASTDATE function with a direct column reference in a row context will produce a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the LastDate and Values columns in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context:

        LASTDATE ( ALL ( 'Date'[Date] ) )

    The following query recaps the results produced by the different syntaxes described.

        EVALUATE
        CALCULATETABLE (
            ADDCOLUMNS (
                VALUES ( 'Date'[Date] ),
                "LastDate", LASTDATE ( 'Date'[Date] ),
                "Values", LASTDATE ( VALUES ( 'Date'[Date] ) ),
                "Filter", LASTDATE ( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
                "All", LASTDATE ( ALL ( 'Date'[Date] ) ),
                "Max", MAX ( 'Date'[Date] )
            ),
            'Date'[Calendar Year] = 2008
        )
        ORDER BY 'Date'[Date]

    The LastDate column repeats the current date, because the context transition happens within the ADDCOLUMNS. The Values column preserves the existing filter context from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even if we use the FILTER instead of the LASTDATE approach. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact, the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use; its only limitation is that it does not return a table when you need one (such as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main source of confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.
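    To make that last point concrete, here is a hedged sketch of the equivalence inside a measure; the Sales table and its Balance column are hypothetical names, not from the article:

        -- A sketch only: both measures return the balance on the last visible
        -- date. The LASTDATE form is shorter because a filter argument of
        -- CALCULATE needs a table, which LASTDATE already is.
        LastBalanceLastDate := CALCULATE (
            SUM ( Sales[Balance] ),
            LASTDATE ( 'Date'[Date] )
        )

        LastBalanceMax := CALCULATE (
            SUM ( Sales[Balance] ),
            FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )
        )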

    Read the article

  • Can't focus fancybox iframe input

    - by bswinnerton
    So I'm using fancybox to load up a login iframe, and I would like it, onComplete, to bring focus to the username field. If someone could take a look at this code and let me know what's wrong, that'd be great. Thanks.

        /* Function to resize the height of the fancybox window */
        (function($){
            $.fn.resize = function(width, height) {
                if (!width || (width == "inherit")) inner_width = parent.$("#fancybox-inner").width();
                if (!height || (height == "inherit")) inner_height = parent.$("#fancybox-inner").height();
                inner_width = width;
                outer_width = (inner_width + 14);
                inner_height = height;
                outer_height = (inner_height + 14);
                parent.$("#fancybox-inner").css({'width':inner_width, 'height':inner_height});
                parent.$("#fancybox-outer").css({'width':outer_width, 'height':outer_height});
            }
        })(jQuery);

        $(document).ready(function(){
            var pasturl = parent.location.href;

            $("a.iframe#register").fancybox({
                'transitionIn'      : 'fade',
                'transitionOut'     : 'fade',
                'speedIn'           : 600,
                'speedOut'          : 350,
                'width'             : 450,
                'height'            : 385,
                'scrolling'         : 'no',
                'autoScale'         : false,
                'autoDimensions'    : false,
                'overlayShow'       : true,
                'overlayOpacity'    : 0.7,
                'padding'           : 7,
                'hideOnContentClick': false,
                'titleShow'         : false,
                'onStart'           : function() { $.fn.resize(450,385); },
                'onComplete'        : function() { $("#realname").focus(); },
                'onClosed'          : function() {
                    $(".warningmsg").hide();
                    $(".errormsg").hide();
                    $(".successfulmsg").hide();
                }
            });

            $("a.iframe#login").fancybox({
                'transitionIn'      : 'fade',
                'transitionOut'     : 'fade',
                'speedIn'           : 600,
                'speedOut'          : 350,
                'width'             : 400,
                'height'            : 250,
                'scrolling'         : 'no',
                'autoScale'         : false,
                'overlayShow'       : true,
                'overlayOpacity'    : 0.7,
                'padding'           : 7,
                'hideOnContentClick': false,
                'titleShow'         : false,
                'onStart'           : function() { $.fn.resize(400,250); },
                'onComplete'        : function() { $("#login_username").focus(); },
                'onClosed'          : function() {
                    $(".warningmsg").hide();
                    $(".errormsg").hide();
                    $(".successfulmsg").hide();
                }
            });

            $("#register").click(function() {
                $("#login_form").hide();
                $(".registertext").hide();
                $.fn.resize(450,385);
                $("label").addClass("#register_form label");
            });

            $("#login").click(function() {
                $.fn.resize(400,250);
                $("label").addClass("#login_form label");
            });

            $("#register_form").bind("submit", function() {
                $(".warningmsg").hide();
                $(".errormsg").hide();
                $(".successfulmsg").hide();
                if ($("#realname").val().length < 1 || $("#password").val().length < 1 || $("#username").val().length < 1) {
                    $("#no_fields").addClass("warningmsg").show().resize(inherit,405);
                    return false;
                }
                if ($("#password").val() != $("#password2").val()) {
                    $("#no_pass_match").addClass("errormsg").show().resize();
                    return false;
                }
                $.fancybox.showActivity();
                $.post("../../admin/users/create_submit.php",
                    { realname:$('#realname').val(), email:$('#email').val(), username:$('#username').val(),
                      password:MD5($('#password').val()), rand:Math.random() },
                    function(data){
                        if(data == "OK"){
                            $(".registerbox").hide();
                            $.fancybox.hideActivity();
                            $.fn.resize(inherit,300);
                            $("#successful_login").addClass("successfulmsg").show();
                        } else if(data == "user_taken"){
                            $.fancybox.hideActivity();
                            $("#user_taken").addClass("errormsg").show().resize(inherit,405);
                            $("#username").val("");
                        } else {
                            $.fancybox.hideActivity();
                            document.write("Well, that was weird. Give me a shout at [email protected].");
                        }
                        return false;
                    });
                return false;
            });

            $("#login_form").bind("submit", function() {
                $(".warningmsg").hide();
                $(".errormsg").hide();
                $(".successfulmsg").hide();
                if ($("#login_username").val().length < 1 || $("#login_password").val().length < 1) {
                    $("#no_fields").addClass("warningmsg").show().resize(inherit,280);
                    return false;
                }
                $.fancybox.showActivity();
                $.post("../../admin/users/login_submit.php",
                    { username:$('#login_username').val(), password:MD5($('#login_password').val()), rand:Math.random() },
                    function(data){
                        if(data == "authenticated"){
                            $(".loginbox").hide();
                            $(".registertext").hide();
                            $.fancybox.hideActivity();
                            $("#successful_login").addClass("successfulmsg").show();
                            parent.document.location.href=pasturl;
                        } else if(data == "no_user"){
                            $.fancybox.hideActivity();
                            $("#no_user").addClass("errormsg").show().resize();
                            $("#login_username").val("");
                            $("#login_password").val("");
                        } else if(data == "wrong_password"){
                            $.fancybox.hideActivity();
                            $("#wrong_password").addClass("warningmsg").show().resize();
                            $("#login_password").val("");
                        } else {
                            $.fancybox.hideActivity();
                            document.write("Well, that was weird.");
                        }
                        return false;
                    });
                return false;
            });
        });

    And here is the HTML:

        <p><a class="iframe" id="login" href="/login/">Login</a></p>
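    A hedged guess at the underlying problem: if the onComplete handler runs in the parent page rather than inside the iframe's own document, $("#login_username") searches the parent DOM, where the field does not exist. In that case the handler has to reach into the iframe's document. A minimal sketch, assuming fancybox 1.x's default iframe id "fancybox-frame" and the field ids from the question:

        // A sketch only: focus a field inside fancybox's iframe from the parent
        // page; the small delay gives the iframe document time to finish loading.
        'onComplete' : function() {
            setTimeout(function() {
                $("#fancybox-frame").contents().find("#login_username").focus();
            }, 100);
        }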

    Read the article

  • Is there a dojo enhanced grid example with context menu

    - by user102023
    I am looking for an example of a dojo enhanced grid that contains a context menu on either a cell or row menu where the cell or row data is accessed. I have managed to create an enhanced grid with a row context menu. I can create a function that captures the event of clicking on the row menu item. However, I am not sure how to access the row data in the context of the menu item handler. I have not seen any example in the tests of the nightly build. Is there an example of this available online?
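    Until such an example turns up, here is a hedged sketch of one common pattern (not from the nightly-build tests): remember the row index when onRowContextMenu fires, then read the row's store item inside the menu item's handler. "grid" and the field name "name" are placeholders:

        // A sketch only, assuming dojo 1.x with dojox.grid.EnhancedGrid, its
        // menus plugin, and dijit.Menu/dijit.MenuItem already dojo.require'd.
        var contextRowIndex = -1;

        var rowMenu = new dijit.Menu();
        rowMenu.addChild(new dijit.MenuItem({
            label: "Show row data",
            onClick: function () {
                if (contextRowIndex < 0) { return; }
                var item = grid.getItem(contextRowIndex);       // the row's store item
                console.log(grid.store.getValue(item, "name")); // read a field from it
            }
        }));

        // Remember which row the context menu was opened on.
        dojo.connect(grid, "onRowContextMenu", function (e) {
            contextRowIndex = e.rowIndex;
        });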

    Read the article

  • Horizontal UITableView

    - by imran
    I want to implement a layout in my iPad application that has a UITableView that scrolls left and right rather than up and down. So rather than row 1, row 2, row 3 stacked vertically (scrolling vertically), it would be row 1, row 2, row 3 side by side (scrolling horizontally). I've seen that UITableView is designed to only do vertical scrolling, so doing a transform does not give the desired effect. Is there a standard way to do this, taking advantage of a datasource provider like UITableView provides? I basically want to do something similar to what the BBC News reader app ( http://itunes.apple.com/us/app/bbc-news/id364147881?mt=8 ) on the iPad does with the list of stories to select from. Thanks
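    The trick that usually gets cited for this, offered here as a hedged sketch (the poster notes a bare transform alone was not enough, and this adds the missing half): rotate the table view by -90 degrees, then counter-rotate each cell's content by +90 degrees, so the cells appear upright while the table's "vertical" scrolling becomes horizontal. Frames below are placeholders:

        // A sketch only: rotate the table, then counter-rotate each cell.
        tableView.transform = CGAffineTransformMakeRotation(-M_PI_2);
        tableView.frame = CGRectMake(0, 0, 768, 200); // reassign the frame after rotating

        // In tableView:cellForRowAtIndexPath:, counter-rotate the content:
        cell.contentView.transform = CGAffineTransformMakeRotation(M_PI_2);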

    Read the article

  • Move to next page if Table can not adjust in current page using iText

    - by Saurabh
    I am generating a PDF report using iText. The report has four parts:

    - Project Info1 (a PDF table of variable height)
    - Project Info2 (a PDF table of variable height)
    - Requestor Info (a PDF table of fixed height)
    - Location Info (a PDF table of fixed height)

    The first two parts are of variable height and can span several pages. The other two are of fixed size and come last. I want to move the last boxes (one or both) to the next page if there is no space available to fit them in the current page. If space for one box is available, then place that one on the current page and move the other box to the next page. Is there a way to decide this?
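    One hedged approach, sketched against the iText 5 API (method names differ in other iText versions): compare the table's rendered height with the space left on the page, and force a page break when it would not fit. buildRequestorTable() is a hypothetical helper:

        // A sketch only, assuming iText 5 (com.itextpdf.text.*).
        import com.itextpdf.text.Document;
        import com.itextpdf.text.DocumentException;
        import com.itextpdf.text.pdf.PdfPTable;
        import com.itextpdf.text.pdf.PdfWriter;

        class FixedBoxPlacer {
            // Adds a fixed-height table, breaking to a new page when it cannot fit.
            static void addFixedBox(Document document, PdfWriter writer, PdfPTable table)
                    throws DocumentException {
                // getTotalHeight() is only meaningful once the width is locked.
                table.setTotalWidth(document.right() - document.left());
                table.setLockedWidth(true);

                // Space left between the current write position and the bottom margin.
                float spaceLeft = writer.getVerticalPosition(true) - document.bottom();
                if (table.getTotalHeight() > spaceLeft) {
                    document.newPage(); // the box would not fit on the current page
                }
                document.add(table);
            }
        }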

    Read the article

  • Basic Boost Regex question

    - by shuttle87
    I'm trying to write some C++ code that tests if a string is in a particular format. In this program there is a height followed by some decimal numbers: for example, "height 123.45" or "height 12" would return true, but "SomeOtherString 123.45" would return false. My first attempt at this was to write the following:

        string action;
        cin >> action;
        boost::regex EXPR( "^height \\d*(\\.\\d{1,2})?$/" ); // height format regex
        bool height_format_matches = boost::regex_match( action, EXPR );
        if (height_format_matches == true) {
            // do some stuff
        }

    However, height_format_matches never seemed to be true. Any help is greatly appreciated!
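    Two things stand out, offered as a hedged diagnosis: the stray "/" after "$" is part of a /.../ literal style that boost::regex does not use, so the pattern can never match; and cin >> action reads only the first whitespace-delimited token ("height"), never the number. A sketch of a corrected version:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        int main() {
            // Read the whole line, not just the first token.
            std::string action;
            std::getline(std::cin, action);

            // No trailing '/', and \d+ requires at least one digit.
            boost::regex expr("^height \\d+(\\.\\d{1,2})?$");

            if (boost::regex_match(action, expr)) {
                std::cout << "matched\n";   // e.g. "height 123.45" or "height 12"
            } else {
                std::cout << "no match\n";  // e.g. "SomeOtherString 123.45"
            }
            return 0;
        }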

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests: it stores rows of data on a page. Column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need to search only a much smaller number of columns. This means major increases in search speed and savings in hard drive use. Additionally, column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. In this article, let us understand how the columnstore type of index differs.

    Column store indexes are run by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind, though, that once you add a column store index to a table, you cannot delete, insert or update the data: it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is as follows: in the case of row store indexes, multiple pages will contain multiple rows of the columns spanning across multiple pages; in the case of column store indexes, multiple pages will contain multiple single columns. This means only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data in a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will be in memory and will not need to be retrieved.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set which is large enough to show the performance impact of a columnstore index. The time taken to create the sample database may vary on different computers based on the resources.

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ( [SalesOrderDetailID] )
        GO
        -- Create Sample Data Table
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I will run the query which uses the regular index and note its IO usage. After that, we will create the columnstore index and measure the IO of the same query.

        -- Performance Test
        -- Comparing Regular Index with ColumnStore Index
        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        GO
        -- Select Table with regular Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO
        -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail]
        (UnitPrice, OrderQty, ProductID)
        GO
        -- Select Table with Columnstore Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed in the query are stored in the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to get MAX value of a version-number (varchar) column in T-SQL

    - by Ogre Psalm33
    I have a table defined like this:

        Column:   Version       Message
        Type:     varchar(20)   varchar(100)
        ------------------------------------
        Row 1:    2.2.6         Message 1
        Row 2:    2.2.7         Message 2
        Row 3:    2.2.12        Message 3
        Row 4:    2.3.9         Message 4
        Row 5:    2.3.15        Message 5

    I want to write a T-SQL query that will get the message for the MAX version number, where the "Version" column represents a software version number. I.e., 2.2.12 is greater than 2.2.7, and 2.3.15 is greater than 2.3.9, etc. Unfortunately, I can't think of an easy way to do that without using CHARINDEX or some other complicated split-like logic. Running this query:

        SELECT MAX(Version) FROM my_table

    will yield the erroneous result 2.3.9, when it should really be 2.3.15. Any bright ideas that don't get too complex?
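    A hedged sketch of one common answer, assuming versions always have three purely numeric parts as in the sample data: PARSENAME splits a dotted name into up to four parts (part 1 is the rightmost), so each part can be cast to int and ordered numerically:

        -- A sketch only: order by the numeric version parts, highest first.
        -- PARSENAME(Version, 3) is the leftmost part for a three-part version.
        SELECT TOP 1 Version, Message
        FROM my_table
        ORDER BY CAST(PARSENAME(Version, 3) AS int) DESC,
                 CAST(PARSENAME(Version, 2) AS int) DESC,
                 CAST(PARSENAME(Version, 1) AS int) DESC;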

    Read the article

  • Liskov Substitution Principle and the Oft Forgot Third Wheel

    - by Stacy Vicknair
    Liskov Substitution Principle (LSP) is a principle of object oriented programming that many might be familiar with from the SOLID principles mnemonic from Uncle Bob Martin. The principle highlights the relationship between a type and its subtypes, and, according to Wikipedia, is defined by Barbara Liskov and Jeannette Wing as the following principle:

        Let φ(x) be a property provable about objects x of type T. Then φ(y) should be provable for objects y of type S where S is a subtype of T.

    Rectangles gonna rectangulate

    The iconic example of this principle is illustrated with the relationship between a rectangle and a square. Let's say we have a class named Rectangle that has a property to set its width and a property to set its height.

        Public Class Rectangle
            Overridable Property Width As Integer
            Overridable Property Height As Integer
        End Class

    We have all heard at some point that inheritance models an "IS A" relationship, and by gosh we all know a square IS A rectangle. So let's make a Square class that inherits from Rectangle. However, squares do maintain the same length on every side, so let's override and add that behavior.

        Public Class Square
            Inherits Rectangle

            Private _sideLength As Integer

            Public Overrides Property Width As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property

            Public Overrides Property Height As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property
        End Class

    Now, say we had the following test:

        Public Sub SetHeight_DoesNotAffectWidth(rectangle As Rectangle)
            'arrange
            Dim expectedWidth = 4
            rectangle.Width = 4

            'act
            rectangle.Height = 7

            'assert
            Assert.AreEqual(expectedWidth, rectangle.Width)
        End Sub

    If we pass in a rectangle, this test passes just fine. What if we pass in a square? This is where we see the violation of Liskov's principle! A square might "IS A" to a rectangle, but we have differing expectations on how a rectangle should function than how a square should!

    Great expectations

    Here's where we pat ourselves on the back and take a victory lap around the office and tell everyone about how we understand LSP like a boss. And all is good... until we start trying to apply it to our work. If I can't even change functionality on a simple setter without breaking the expectations on a parent class, what can I do with subtyping? Did Liskov just tell me to never touch subtyping again? The short answer: NO, SHE DIDN'T. When I first learned LSP, and from those I've talked with as well, I overlooked a very important but not appropriately stressed quality of the principle: our expectations. Our inclination is to want a logical catch-all, where we can easily apply this principle, wipe our hands, drop the mic and exit stage left. That's not the case, because in every different programming scenario our expectations of the parent class or type will be different. We have to set reasonable expectations on the behaviors that we expect out of the parent, then make sure that those expectations are met by the child. Any expectations not explicitly expected of the parent aren't expected of the child either, and don't register as a violation of LSP that prevents implementation.

    You can see the flexibility mentioned in the Wikipedia article itself: A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal with the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we can modify the setter methods in the Square class so that they preserve the Square invariant (i.e., keep the dimensions equal), then these methods will weaken (violate) the postconditions for the Rectangle setters, which state that dimensions can be modified independently. Violations of LSP, like this one, may or may not be a problem in practice, depending on the postconditions or invariants that are actually expected by the code that uses classes violating LSP. Mutability is a key issue here. If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur.

    What this means is that the above situation with a rectangle and a square can be acceptable if we do not have the expectation for width to leave height unaffected, or vice-versa, in our application.

    Conclusion: the oft forgot third wheel

    Liskov Substitution Principle is meant to act as guidance and warn us against unexpected behaviors. Objects can be stateful, and as a result we can end up with unexpected situations if we don't code carefully. Specifically when subclassing, make sure that the subclass meets the expectations held to its parent. Don't let LSP make you think you cannot deviate from the behaviors of the parent, but understand that LSP is meant to highlight the importance of not only the parent and the child class, but also of the expectations WE set for the parent class and the necessity of meeting those expectations in order to help prevent sticky situations.

    Code examples, in both VB and C#

    Technorati Tags: LSP, Liskov Substitution Principle, Uncle Bob, Robert Martin, Barbara Liskov, Liskov
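    To make the quote's final point concrete, here is a hedged VB sketch (not from the original post's linked examples) of the immutable variant: with read-only properties there are no setter postconditions to violate, so Square can substitute for Rectangle safely:

        ' A minimal sketch: dimensions are fixed at construction, so no setter
        ' expectations exist for a subclass to break.
        Public Class ImmutableRectangle
            Private ReadOnly _width As Integer
            Private ReadOnly _height As Integer

            Public Sub New(width As Integer, height As Integer)
                _width = width
                _height = height
            End Sub

            Public ReadOnly Property Width As Integer
                Get
                    Return _width
                End Get
            End Property

            Public ReadOnly Property Height As Integer
                Get
                    Return _height
                End Get
            End Property
        End Class

        Public Class ImmutableSquare
            Inherits ImmutableRectangle

            Public Sub New(sideLength As Integer)
                MyBase.New(sideLength, sideLength) ' the square invariant holds by construction
            End Sub
        End Class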

    Read the article

  • ASP.Net - DataRepeater with multiple templates and multiple datasources in one

    - by NicoJuicy
    This is my problem. I'm using different SQL queries to fetch some people; some come from my database and some come from an external database. They are all sorted into the same "list". The only difference is that the people who come from our database will have a different layout, with fewer of them in a row (e.g., 1 in a row), while the people who come from the external database will be arranged 3 in a row. How can I implement this using a repeater? And how would the pagination work? Any "logical", working alternatives will be appreciated also, but I prefer to keep my current workflow to solve this problem. In short:

    - Multiple data sources
    - Multiple templates for the different data sources (1 in a row, 3 in a row)
    - How does pagination fit into this problem?
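    One hedged sketch of a common approach (class, control and property names below are hypothetical): merge both query results into one list, tag each person with its source, and toggle one of two layouts per item in the Repeater's ItemDataBound event; pagination over the merged list could then go through something like PagedDataSource:

        // A sketch only: the ItemTemplate is assumed to contain two Panels,
        // "InternalLayout" and "ExternalLayout", and Person.IsInternal flags
        // which database the record came from.
        protected void PeopleRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType != ListItemType.Item &&
                e.Item.ItemType != ListItemType.AlternatingItem)
                return;

            var person = (Person)e.Item.DataItem;
            var internalPanel = (Panel)e.Item.FindControl("InternalLayout");
            var externalPanel = (Panel)e.Item.FindControl("ExternalLayout");

            // Wide single-column layout for internal people,
            // three-per-row layout for external ones.
            internalPanel.Visible = person.IsInternal;
            externalPanel.Visible = !person.IsInternal;
        }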

    Read the article

  • Can NSDictionary be used with TableView on iPhone?

    - by bobo
    In a UITableViewController subclass, there are some methods that need to be implemented in order to load the data and handle the row selection event:

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1; // there is only one section needed for my table view
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [myList count]; // myList is an NSDictionary already populated in the viewDidLoad method
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:CellIdentifier] autorelease];
            }
            // indexPath.row returns an integer index,
            // but myList uses keys that are not integers,
            // so I don't know how I can retrieve the value and assign it to cell.textLabel.text
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // Handle the row selection event here,
            // but indexPath.row only returns the index,
            // not a key of the myList NSDictionary;
            // this prevents me from knowing which row is selected
        }

    How is NSDictionary supposed to work with a table view? What is the simplest way to get this done?
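    The usual answer, sketched here with hedged assumptions about what myList contains: derive a sorted array of the dictionary's keys once, then map indexPath.row through it in both methods so each row has a stable key:

        // A sketch only: keep a sorted snapshot of the keys so a row index maps
        // to a key. Assumes manual retain/release, as in the code above.
        NSArray *sortedKeys = [[[myList allKeys]
            sortedArrayUsingSelector:@selector(compare:)] retain];

        // In cellForRowAtIndexPath:
        id key = [sortedKeys objectAtIndex:indexPath.row];
        cell.textLabel.text = [[myList objectForKey:key] description];

        // In didSelectRowAtIndexPath:
        id selectedKey = [sortedKeys objectAtIndex:indexPath.row];
        NSLog(@"selected key: %@", selectedKey);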

    Read the article

  • How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

    - by epale
    Hi everyone, I have managed to use the reflection sample app from Apple to create a reflection from a UIImageView. The problem is that when I change the picture inside the UIImageView, the reflection of the previously displayed picture remains on the screen; the new reflection of the next picture then overlaps the previous reflection. How do I ensure that the previous reflection is removed when I change to the next picture? Thank you so much. I hope my question is not too basic. Here is the code I have used so far: //reflection self.view.autoresizesSubviews = YES; self.view.userInteractionEnabled = YES; // create the reflection view CGRect reflectionRect = currentView.frame; // the reflection is a fraction of the size of the view being reflected reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction; // and is offset to be at the bottom of the view being reflected reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height); reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect]; // determine the size of the reflection to create NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction; // create the reflection image, assign it to the UIImageView and add the image view to the containerView reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight]; reflectionView.alpha = kDefaultReflectionOpacity; [self.view addSubview:reflectionView]; //reflection The code below is then used to form the reflection: CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh) { CGImageRef theCGImage = NULL; // gradient is always black-white and the mask must be in the gray colorspace CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // create the bitmap context CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, kCGImageAlphaNone); // define the start and end grayscale values (with the alpha, even though // our bitmap context doesn't support alpha the gradient requires it) CGFloat colors[] = {0.0, 1.0, 1.0, 1.0}; // create the CGGradient and then release the gray color space CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2); CGColorSpaceRelease(colorSpace); // create the start and end points for the gradient vector (straight down) CGPoint gradientStartPoint = CGPointZero; CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh); // draw the gradient into the gray bitmap context CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint, gradientEndPoint, kCGGradientDrawsAfterEndLocation); CGGradientRelease(grayScaleGradient); // convert the context into a CGImageRef and release the context theCGImage = CGBitmapContextCreateImage(gradientBitmapContext); CGContextRelease(gradientBitmapContext); // return the imageref containing the gradient return theCGImage; } CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh) { CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // create the bitmap context CGContextRef bitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8, 0, colorSpace, // this will give us an optimal BGRA format for the device: (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst)); CGColorSpaceRelease(colorSpace); return bitmapContext; } - (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height { if (!height) return nil; // create a bitmap graphics context the size of the image CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height); // offset the context - // This is necessary because, by default, the layer created by a view for caching its content is flipped. // But when you actually access the layer content and have it rendered it is inverted. Since we're only creating // a context the size of our reflection view (a fraction of the size of the main view) we have to translate the // context the delta in size, and render it. // CGFloat translateVertical = fromImage.bounds.size.height - height; CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical); // render the layer into the bitmap context CALayer *layer = fromImage.layer; [layer renderInContext:mainViewContentContext]; // create CGImageRef of the main view bitmap content, and then release that bitmap context CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext); CGContextRelease(mainViewContentContext); // create a 2 bit CGImage containing a gradient that will be used for masking the // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient CGImageRef gradientMaskImage = CreateGradientImage(1, height); // create an image by masking the bitmap of the mainView content with the gradient view // then release the pre-masked content bitmap and the gradient bitmap CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage); CGImageRelease(mainViewContentBitmapContext); CGImageRelease(gradientMaskImage); // convert the finished reflection image to a UIImage UIImage *theImage = [UIImage imageWithCGImage:reflectionImage]; // image is retained by the property setting above, so we can release the original CGImageRelease(reflectionImage); return theImage; }
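
    The usual fix here is to stop stacking a fresh reflection view on every picture change: either remove the old subview first, or keep a single reflection view and replace only its image. A minimal Swift sketch of the latter follows; all names are assumptions, and the gradient-mask rendering from the question is stubbed out where indicated.

```swift
import UIKit

class GalleryViewController: UIViewController {
    let kDefaultReflectionFraction: CGFloat = 0.35
    var currentView: UIImageView!             // the picture being reflected
    private var reflectionView: UIImageView?  // one reusable reflection view

    func show(_ image: UIImage) {
        currentView.image = image
        // Remove the stale reflection before installing the new one,
        // instead of adding a second UIImageView on top of it.
        reflectionView?.removeFromSuperview()
        let reflection = UIImageView(frame: reflectionFrame())
        reflection.image = reflectedImage(of: currentView)
        view.addSubview(reflection)
        reflectionView = reflection
    }

    private func reflectionFrame() -> CGRect {
        var frame = currentView.frame
        frame.size.height *= kDefaultReflectionFraction
        frame.origin.y = currentView.frame.maxY  // sits just below the picture
        return frame
    }

    // Stand-in for the gradient-masked Core Graphics rendering in the question.
    private func reflectedImage(of imageView: UIImageView) -> UIImage? {
        return imageView.image
    }
}
```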

    Read the article

  • CSS Sprite vertical repeat problem

    - by ShiVik
    Hello all, I am trying to use CSS sprites in my webapp. The thing is, I have arranged my website background vertically in the sprite image, and now one portion of the sprite needs to be repeated vertically. I was trying the following code... #page-wrapper { margin: 0px auto; background-image: url(../images/background.png); height: 100%; width: 1000px; } #page-wrapper #content { background-position: 0px -80px; background-repeat: repeat-y; height: 1px; } I am confused about the height property of the #content rule. How should I define the height of the section which I want to repeat, and the height of the div (#content)? Regards Vikram

    Read the article

  • Convert table-based design to table-less design in the best way

    - by Brij
    What is the most optimized way to convert the following to a table-less design? The layout should be cross-browser compatible and SEO-friendly. <table cellpadding="0" cellspacing="0"> <tr> <td>Row 1 Column 1</td> <td>Row 1 Column 2</td> <td>Row 1 Column 3</td> </tr> <tr> <td colspan="3" align="center">Row 2</td> </tr> <tr> <td>Row 3</td> <td align="right" colspan="2"><img src="test.jpg" alt="test" /></td> </tr> </table>

    Read the article

  • LinearLayout of a WebView and a Button.

    - by Timmmm
    Hi, I recently struggled with an apparently simple Android layout: I wanted a WebView above a Button. It worked fine with the following parameters: WebView: Height: wrap_content Weight: unset (by the way, what is the default?) Button: Height: wrap_content Weight: unset However, if the web page became too big, it spilled out over the button. I tried various combinations of weights and heights, and all except one either completely hid the button or partially covered it. This is the one that works (copied from http://code.google.com/p/apps-for-android/source/browse/trunk/Samples/WebViewDemo/res/layout/main.xml): WebView: Height: 0 Weight: 1 Button: Height: wrap_content Weight: unset If you change any of those, e.g. give the button a weight or change the WebView height to wrap_content, then it doesn't work. My question is: Why? Can someone please explain what on Earth the Android layout system is thinking?

    Read the article

  • Refresh table view (iPhone)

    - by Florent
    Hi all! I've set up a table view and a system that records whether a row has already been selected: I set a checkmark accessory on each row that has been seen, and I write the row to a plist as an int value. It works, but only when I restart the app or reload the table view in my navigation controller. That is, when I select a row it pushes a view controller, but when I go back to the table view the checkmark has disappeared, so you cannot tell the row was already selected until the app restarts. Is there a way to refresh the table view, for example in viewWillAppear? Thanks to all!
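
    Reloading in viewWillAppear is indeed the usual answer: the table's data source methods re-run every time the controller comes back on screen, so checkmarks recorded in the plist show up immediately. A minimal Swift sketch (the controller name is illustrative):

```swift
import UIKit

class HistoryTableViewController: UITableViewController {
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Re-runs cellForRowAt with the latest persisted state, so rows
        // marked as "seen" while the detail controller was pushed get
        // their checkmark as soon as we navigate back.
        tableView.reloadData()
    }
}
```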

    Read the article

  • Recursive function to get all the child categories

    - by user253530
    Here is what I'm trying to do: I need a function that, when passed an ID (for a category of things) as an argument, will provide all the subcategories, the sub-subcategories, the sub-sub-subcategories, and so on. I was thinking of using a recursive function, since I don't know the number of subcategories, their sub-subcategories, etc. Here is what I've tried so far: function categoryChild($id) { $s = "SELECT * FROM PLD_CATEGORY WHERE PARENT_ID = $id"; $r = mysql_query($s); if(mysql_num_rows($r) > 0) { while($row = mysql_fetch_array($r)) echo $row['ID'].",".categoryChild($row['ID']); } else { $row = mysql_fetch_array($r); return $row['ID']; } } If I use return instead of echo, I won't get the same result. I need some help to fix this or to rewrite it from scratch.
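
    The core problem is that the function echoes inside the loop but only returns in the base case, so the recursive calls have nothing to compose. The fix pattern, returning an accumulated collection at every level, is sketched below in Swift, with an in-memory parent-to-children map standing in for the MySQL table (all names are illustrative):

```swift
// Returns every descendant ID of the given category, depth-first.
func categoryChildren(of id: Int, in childrenByParent: [Int: [Int]]) -> [Int] {
    var result: [Int] = []
    for child in childrenByParent[id] ?? [] {
        result.append(child)  // the direct child...
        result.append(contentsOf: categoryChildren(of: child,
                                                   in: childrenByParent))  // ...then its subtree
    }
    return result  // empty array is the natural base case, no special branch needed
}

// Category 1 has children 2 and 3; category 2 has child 4.
let childrenByParent = [1: [2, 3], 2: [4]]
print(categoryChildren(of: 1, in: childrenByParent))  // [2, 4, 3]
```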

    Read the article

  • Why does [NSOutlineView clickedRow] always return -1?

    - by jxpx777
    I have a fairly pedestrian, non-editable NSOutlineView setup. In the bindings for the outline view, I have set the binding to my file's owner (MyDocument, FWIW) with a selector of outlineViewWasDoubleClicked. The method exists and is called, but when I call -clickedRow it consistently returns -1 rather than the number of the row that I double-clicked to trigger the method. My _outlineView is an IBOutlet, and I've verified that it is hooked up correctly by using -selectedRow in the method rather than -clickedRow. (I would rather use -clickedRow, though, because it seems unintuitive for the user to have one row selected, double-click another row to do something with it, and have the method act on the row they had selected.) My best guess right now is that the -clickedRow value is getting cleared out before my method fires, but I don't know where, or what might be gobbling it up. Thanks in advance for any help.
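
    clickedRow is only meaningful while the table is dispatching the click's action message, so one setup that reliably works is to wire the doubleAction in code and read clickedRow inside that action method. A minimal Swift sketch of that arrangement (the asker's setup uses bindings and Objective-C; the names here are assumptions):

```swift
import AppKit

final class OutlineController: NSObject {
    @IBOutlet weak var outlineView: NSOutlineView!

    override func awakeFromNib() {
        super.awakeFromNib()
        // Wire the double-click action directly instead of through a binding.
        outlineView.target = self
        outlineView.doubleAction = #selector(outlineViewWasDoubleClicked(_:))
    }

    @objc func outlineViewWasDoubleClicked(_ sender: NSOutlineView) {
        let row = sender.clickedRow     // valid while the action is being dispatched
        guard row >= 0 else { return }  // -1 means the double-click hit empty space
        print("Double-clicked row \(row)")
    }
}
```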

    Read the article
