Search Results

Search found 88103 results on 3525 pages for 'bam data control multiple'.

  • Most elegant way to break CSV columns into separate data structures using Python?

    - by Nick L
    I'm trying to pick up Python. As part of the learning process I'm porting a project I wrote in Java to Python. I'm at a section now where I have a list of CSV headers of the form: headers = [a, b, c, d, e, .....] and separate lists of groups that these headers should be broken up into, e.g.: headers_for_list_a = [b, c, e, ...] headers_for_list_b = [a, d, k, ...] . . . I want to take the CSV data and turn it into dicts based on these groups, e.g.: list_a = [ {b:val_1b, c:val_1c, e:val_1e, ... }, {b:val_2b, c:val_2c, e:val_2e, ... }, {b:val_3b, c:val_3c, e:val_3e, ... }, . . . ] where, for example, val_1b is the first row of the 'b' column, val_3c is the third row of the 'c' column, etc. My first "Java instinct" is to do something like: for row in data: for col_num, val in enumerate(row): col_name = headers[col_num] if col_name in group_a: dict_a[col_name] = val elif col_name in group_b: dict_b[col_name] = val ... list_a.append(dict_a) list_b.append(dict_b) ... However, this method seems inefficient/unwieldy and doesn't possess the elegance that Python programmers are constantly talking about. Is there a more "Zen-like" way I should try, keeping with the philosophy of Python?
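
    One way to lean on the standard library instead of hand-rolled loops is csv.DictReader plus a dict comprehension per group. A minimal sketch of that approach, assuming the CSV file has a header row and that group_a/group_b are sets of column names (the names and file name here are illustrative, not taken from the original post):

        import csv

        group_a = {"b", "c", "e"}
        group_b = {"a", "d", "k"}

        list_a, list_b = [], []
        with open("data.csv", newline="") as f:
            for row in csv.DictReader(f):
                # each row is already a dict keyed by header name
                list_a.append({k: v for k, v in row.items() if k in group_a})
                list_b.append({k: v for k, v in row.items() if k in group_b})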

    Read the article

  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory error" during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have nXn entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000x30,000 entries. This will probably be bigger once I run it on my full intended corpus too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamp my code to use a primitive "int[]" instead of a "TreeMap", eliminate redundant structures, etc... Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple line of code solution, but is most likely going to require a very new approach. I am looking for what that approach is, any ideas? Thanks, B.
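
    For scale, a dense 30,000 x 30,000 array of 4-byte ints needs roughly 3.4 GB, so it can never fit in the 2 GB heap that -Xmx2g allows; since a word co-occurrence matrix is mostly zeros, a sparse map keyed by (row, column) that stores only non-zero cells is the usual way out. A rough sketch of the idea, in Python rather than the poster's Java, purely to show the arithmetic and the shape of a sparse alternative:

        # Dense cost: 30,000 * 30,000 cells * 4 bytes each
        cells = 30_000 * 30_000            # 900,000,000 entries
        dense_bytes = cells * 4            # ~3.6e9 bytes, about 3.4 GiB

        # Sparse alternative: store only the pairs that actually co-occur
        from collections import defaultdict
        cooccurrence = defaultdict(int)

        def bump(word_i, word_j):
            cooccurrence[(word_i, word_j)] += 1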

    Read the article

  • A data structure based on the R-Tree: creating new child nodes when a node is full, but what if I ha

    - by Tom
    I realize my title is not very clear, but I am having trouble thinking of a better one. If anyone wants to correct it, please do. I'm developing a data structure for my 2-dimensional game with an infinite universe. The data structure is based on a simple (!) node/leaf system, like the R-Tree. This is the basic concept: you set how many children you want a node (a container) to have at maximum. If you want to add a leaf, but the node the leaf should be in is full, then it will create a new set of nodes within this node and move all current leaves to their new (more exact) node. This way, very populated areas will have a lot more subdivisions than a very big but rarely visited area. This works for normal objects. The only problem arises when I have more than maxChildsPerNode objects with the exact same X,Y location: because the node is full, it will create more exact subnodes, but the old leaves will all be put in the exact same node again because they have the exact same position -- resulting in an infinite loop of creating more nodes and more nodes. So, what should I do when I want to add more leaves than maxChildsPerNode with the exact same position to my tree? P.S. If I failed to explain my problem, please tell me, so I can try to improve the explanation.
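
    A common fix is to refuse to split when splitting cannot help: if every leaf in the full node shares the same position (or a maximum depth has been reached), let that node keep an overflow list (a "bucket") instead of subdividing. A minimal sketch of the guard, with invented class and field names rather than the poster's actual code:

        class Node:
            MAX_CHILDREN = 4
            MAX_DEPTH = 8

            def __init__(self, depth=0):
                self.leaves = []
                self.depth = depth

            def should_split(self):
                if len(self.leaves) <= self.MAX_CHILDREN:
                    return False
                if self.depth >= self.MAX_DEPTH:
                    return False                  # act as a bucket from here on
                positions = {(leaf.x, leaf.y) for leaf in self.leaves}
                return len(positions) > 1         # identical positions can never be separated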

    Read the article

  • stdio data from write not making it into a file

    - by user1551209
    I'm having a problem with using stdio commands for manipulating data in a file. In short, when I write data into a file, write returns an int indicating that it was successful, but when I read it back out I only get the old data. Here's a stripped down version of the code: fd = open(filename,O_RDWR|O_APPEND); struct dE *cDE = malloc(sizeof(struct dE)); //Read present data printf("\nreading values at %d\n",off); printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("ReadStatus <%d>\n",read(fd,cDE,deSize)); printf("current Key/Data <%d/%s>\n",cDE->key,cDE->data); printf("\nwriting new values\n"); //Change the values locally cDE->key = //something new cDE->data = //something new //Write them back printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("WriteStatus <%d>\n",write(fd,cDE,deSize)); //Re-read to make sure that it got written back printf("\nre-reading values at %d\n",off); printf("SeekStatus <%d>\n",lseek(fd,off,SEEK_SET)); printf("ReadStatus <%d>\n",read(fd,cDE,deSize)); printf("current Key/Data <%d/%s>\n",cDE->key,cDE->data); Furthermore, here's the dE struct in case you're wondering: struct dE { int key; char data[DataSize]; }; This prints: reading values at 1072 SeekStatus <1072> ReadStatus <32> current Key/Data <27/old> writing new values SeekStatus <1072> WriteStatus <32> re-reading values at 1072 SeekStatus <1072> ReadStatus <32> current Key/Data <27/old>
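
    One likely culprit in code like this is the O_APPEND flag: with O_APPEND set, every write() is forced to the end of the file regardless of where lseek() moved the offset, so the record at off never gets overwritten and the re-read still shows the old bytes. Dropping O_APPEND from open() (or using pwrite()) is the usual fix. A small sketch of the intended seek/write/read sequence in Python, whose os module wraps the same POSIX calls (file name and record size are illustrative):

        import os

        off = 1072
        new_record = b"new data".ljust(32, b"\0")    # pretend 32-byte record

        fd = os.open("records.dat", os.O_RDWR)       # note: no O_APPEND
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, new_record)                     # lands at offset 1072
        os.lseek(fd, off, os.SEEK_SET)
        print(os.read(fd, 32))                       # reads the new bytes back
        os.close(fd)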

    Read the article

  • How can I read and parse chunks of data into a Perl hash of arrays?

    - by neversaint
    I have data that looks like this: #info #info2 1:SRX004541 Submitter: UT-MGS, UT-MGS Study: Glossina morsitans transcript sequencing project(SRP000741) Sample: Glossina morsitans(SRS002835) Instrument: Illumina Genome Analyzer Total: 1 run, 8.3M spots, 299.9M bases Run #1: SRR016086, 8330172 spots, 299886192 bases 2:SRX004540 Submitter: UT-MGS Study: Anopheles stephensi transcript sequencing project(SRP000747) Sample: Anopheles stephensi(SRS002864) Instrument: Solexa 1G Genome Analyzer Total: 1 run, 8.4M spots, 401M bases Run #1: SRR017875, 8354743 spots, 401027664 bases 3:SRX002521 Submitter: UT-MGS Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403) Sample: Human DLD-1 tissue culture cell line(SRS001843) Instrument: Solexa 1G Genome Analyzer Total: 6 runs, 27.1M spots, 977M bases Run #1: SRR013356, 4801519 spots, 172854684 bases Run #2: SRR013357, 3603355 spots, 129720780 bases Run #3: SRR013358, 3459692 spots, 124548912 bases Run #4: SRR013360, 5219342 spots, 187896312 bases Run #5: SRR013361, 5140152 spots, 185045472 bases Run #6: SRR013370, 4916054 spots, 176977944 bases What I want to do is to create a hash of array with first line of each chunk as keys and SR## part of lines with "^Run" as its array member: $VAR = { 'SRX004541' => ['SRR016086'], # etc } But why my construct doesn't work. And it must be a better way to do it. use Data::Dumper; my %bighash; my $head = ""; my @temp = (); while ( <> ) { chomp; next if (/^\#/); if ( /^\d{1,2}:(\w+)/ ) { print "$1\n"; $head = $1; } elsif (/^Run \#\d+: (\w+),.*/){ print "\t$1\n"; push @temp, $1; } elsif (/^$/) { push @{$bighash{$head}}, [@temp]; @temp =(); } } print Dumper \%bighash ;
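
    The same grouping can be built by remembering the current experiment ID and appending run IDs as they appear, which also avoids flushing only on blank lines (the original loop silently drops the final record if the input does not end with one, and push @{...}, [@temp] nests an extra array reference). A sketch of the idea in Python rather than Perl, purely to show the shape of the result:

        import re
        import sys
        from collections import defaultdict

        runs_by_experiment = defaultdict(list)
        current = None
        for line in sys.stdin:
            if line.startswith("#"):
                continue
            m = re.match(r"\d{1,2}:(\w+)", line)
            if m:
                current = m.group(1)                             # e.g. SRX004541
                continue
            m = re.match(r"Run #\d+: (\w+),", line)
            if m and current:
                runs_by_experiment[current].append(m.group(1))   # e.g. SRR016086

        # runs_by_experiment == {'SRX004541': ['SRR016086'], ...}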

    Read the article

  • What are your suggestions for best practices for regular data updates in a website database?

    - by bboyle1234
    My shared-hosting asp.net website must automatically run data update routines at regular times of day. Once it has finished running certain update routines, it can run update routines that are dependent on the previous updates. I have done this type of work before, using quite complicated setups. Some features of the framework I created are: A cron job from another server makes a request which starts a data update routine on the main server Each updater is loaded from web.config Each updater overrides a "canRunUpdate" method that determines whether its dependencies have finished updating Each updater overrides a "hasFinishedUpdate" method Each updater overrides a "runUpdate" method Updaters start and run in parallel threads The initial request from the cron job server started each updater in its own thread and then ended. As a result, the threads containing the updaters would be terminated before the updaters were finished. Therefore I had to give the updaters the ability to save partial results and continue the update job next time they are started up. As a result, the cron server had to call the updater many times to ensure the job is done. Sometimes the cron server would continue making update requests long after all the updates were completed. Sometimes the cron server would finish calling the update requests and leave some updates uncompleted. It's not the best system. I'm looking for inspiration. Any ideas please? Thank you :)
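
    One way to tame this is to make each cron hit run a single, bounded scheduler pass: walk the updaters, run any whose dependencies are already recorded as finished, and persist the completion state, so repeated calls are harmless and the job converges without the cron side having to guess. A toy sketch of such a pass (names and structure are illustrative, not the original framework):

        def run_pending(updaters, finished):
            """updaters: objects with .name, .depends_on and an idempotent .run();
            finished: set of updater names already completed for this cycle."""
            progressed = True
            while progressed:
                progressed = False
                for u in updaters:
                    if u.name in finished:
                        continue
                    if all(dep in finished for dep in u.depends_on):
                        u.run()                   # must be resumable / idempotent
                        finished.add(u.name)
                        progressed = True
            return finished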

    Read the article

  • Passing XML data and user from coldfusion page to .NET page

    - by Mark Rullo
    I'd appreciate some input on this situation; I can't figure out the best way to do this. I have some data that's being prepared for me in a ColdFusion app and in an IFrame within the CF app we want to display some graphs (not strictly an image, it's an entire page) being generated on the .NET side of things. I'd like to pass XML data from the CF side to .NET as well as the user. On the .NET side I'm putting the data in a session so the user can sift through it without the need to have it re-queried and re-passed from CF. What I've tried: Generating XML with CF, putting it in a hidden form field, auto-submitting (with JS) the form to the .NET side. The issue I'm having with this approach is the encoding being done on the form post. The data has entries like <entry data="hello &amp; goodbye">. It's an issue because it's being URL encoded, POSTed, and when I get it on the .NET side I get <entry data="hello & goodbye"> which isn't properly formed XML. What I'd like to avoid: An intermediary DB approach (dropping the data in a DB on CF, picking it up with .NET). I'd like to only display what is passed to the page. I have security concerns with the data, it's very sensitive. Passing the data to a webservice, returning a GUID, forwarding the user with a URL parameter to access the passed in data. I think that'd be risky if someone happened upon a link to that data. I can't take that risk. I was thinking of passing the data with JSON, but I'm very unfamiliar with it. Thoughts? Thanks for your time, folks.
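
    One way to sidestep the decoding entirely is to wrap the XML in an encoding that survives the form post unchanged, such as Base64, and decode it on the receiving side before parsing. The round trip is easy to see in Python (standing in for the CF/.NET pair, purely as an illustration of the idea):

        import base64

        xml = '<entry data="hello &amp; goodbye"/>'
        wire = base64.b64encode(xml.encode("utf-8")).decode("ascii")   # safe in a form field
        restored = base64.b64decode(wire).decode("utf-8")
        assert restored == xml     # entities arrive intact, still well-formed XML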

    Read the article

  • How do I display core data on second view controller?

    - by jon
    I am working on my first core data iPhone application. I am using a navigation controller, and the root view controller displays 4 rows. Clicking the first row takes me to a second table view controller. However, when I click the back button, repeat the row tap, click the back button again, and tap the row a third time, I get an error. I have been researching this for a week with no success. I can reproduce the error easily: Create a new Navigation-based Application, use Core Data for storage, call it MyTest which creates MyTestAppDelegate and RootViewController. Add new UIViewController subclass, with UITableViewController and xib, call it ListViewController. Copy code from RootViewController.h and .m to ListViewController.h and .m., changing the file names appropriately. To simplify the code, I removed the trailing “_” from all variables. In RootViewController, I added #import ListViewController.h, set up an array to display 4 rows and navigate to ListViewController when clicking the first row. In ListViewController.m, I added #import MyTestAppDelegate.h” and the following code: - (void)viewDidLoad { [super viewDidLoad]; if (managedObjectContext == nil) { managedObjectContext = [(MyTestAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext]; } .. } The sequence that causes the error is tap row, return, tap row, return, tap row - error. managedObjectContext is synthesized for the third time. I appreciate your patience and your help, as this makes no sense to me. ADDENDUM: I may have a partial solution. http://www.iphonedevsdk.com/forum/iphone-sdk-development/41688-accessing-app-delegates-managed-object-context.html If I do not release the managedObjectContext in the .m file, the error goes away. Is that ok or will that cause me issues? - (void)dealloc { [fetchedResultsController release]; // [managedObjectContext release]; [super dealloc]; } ADDENDUM 2: See solution below. Sorry for the formatting issues - this was my first post.

    Read the article

  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (this is because it needs to perform some mathematical simulation on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite some hashes and other additional data structures on the data. The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies in the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory, and uses the hashing and other data structures. This means I have to do the following: take the user's changes from the dialog; apply them to the big data set; perform the checks on the big data set; undo all the changes if the checks fail. I don't like this solution since other threads are also continuously using the data set, and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is also not possible. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog) but this means that the checking code cannot use hashing and other additional data structures, because they only work on the big data set, making the checks much slower. What is a good practice to check a user's changes on complex data before applying them to the application's data set?
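
    A middle ground is to run the checks against an overlay view: the shared data set stays untouched while the checker sees the proposed edits layered on top of it, so there is no apply/undo cycle and no need to halt other threads. A minimal sketch of the idea (illustrative Python, not the application's actual structures):

        class Overlay:
            """Read-only view of base with pending edits applied on top."""
            def __init__(self, base, edits):
                self.base, self.edits = base, edits

            def get(self, key):
                return self.edits.get(key, self.base.get(key))

        def checks_pass(view):
            # consistency rules read through view.get(...) only
            return view.get("total") == view.get("a") + view.get("b")

        base = {"a": 1, "b": 2, "total": 3}
        edits = {"a": 5, "total": 7}
        if checks_pass(Overlay(base, edits)):
            base.update(edits)       # apply only after validation succeeds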

    Read the article

  • Fastest way to read/store lots of multidimensional data? (Java)

    - by RemiX
    I have three questions about three nested loops: for (int x=0; x<400; x++) { for (int y=0; y<300; y++) { for (int z=0; z<400; z++) { // compute and store value } } } And I need to store all computed values. My standard approach would be to use a 3D-array: values[x][y][z] = 1; // test value but this turns out to be slow: it takes 192 ms to complete this loop, where a single int-assignment int value = 1; // test value takes only 66 ms. 1) Why is an array so relatively slow? 2) And why does it get even slower when I put this in the inner loop: values[z][y][x] = 1; // (notice x and z switched) This takes more than 4 seconds! 3) Most importantly: Can I use a data structure that is as quick as the assignment of a single integer, but can store as much data as the 3D-array?
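
    Part of the gap is usually cache locality: a Java int[400][300][400] is an array of arrays, and values[z][y][x] with x in the inner loop jumps between different sub-arrays on every write, while values[x][y][z] walks one contiguous int[400] at a time. A single flat block with a hand-computed index keeps writes contiguous no matter how the loops are ordered; the indexing arithmetic, sketched in Python only because that is the language used for the other examples on this page:

        from array import array

        X, Y, Z = 400, 300, 400
        values = array('i', bytes(4 * X * Y * Z))    # one contiguous block of 32-bit ints

        def flat(x, y, z):
            # consecutive z values for a fixed (x, y) sit next to each other
            return (x * Y + y) * Z + z

        values[flat(10, 20, 30)] = 1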

    Read the article

  • Excel data into PowerPoint slides

    - by nqw1
    I have already found some helpful sites but I'm still unable to do what I want. My Excel file contains a few columns and multiple rows. All the data from one row would be in one slide, but data from different cells in that one row should go to specific elements in the PP slide. First, is it possible to export data from an Excel cell into a specific text box in PP? For example, I would like to have all data from the first column of each row go to Text box 1. Let's say I have 100 rows, so I would have 100 slides and each slide would have Text box 1 with the correct data. The text box of slide 66 would have data from the first column of row 66. Then all data from the second column of each row would go to Text box 2 and so on. I tried to do some macros with little success. I also tried to use Word outlines and export them into PP (New slide - Slides from Outline) but there seems to be a bug since I got 250 pages of gibberish. I had only two paragraphs and both had one word. The first paragraph used Heading 1 style and the second paragraph used Normal style. The sites I have found use VBA and/or some other programming language to create slides from Excel sheets. I have tried to add those VB codes into my macros but none of them has worked so far. Probably I just don't know how to use them correctly :) Here are some helpful sites: VBA: Create PowerPoint Slide for Each Row in Excel Workbook Creating a Presentation Report Based on Data Question in Stackoverflow I use Office 2011 on Mac. Any help would be appreciated!
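
    The question is about Office 2011 VBA, but the same "one slide per row, one text box per column" loop is perhaps easiest to see with the python-pptx and openpyxl libraries; treat this as a sketch of the idea rather than a drop-in answer (the file names and the blank-layout index are assumptions about the default template):

        from openpyxl import load_workbook
        from pptx import Presentation
        from pptx.util import Inches

        wb = load_workbook("data.xlsx")
        ws = wb.active

        prs = Presentation()
        blank_layout = prs.slide_layouts[6]          # blank layout in the default template
        for row in ws.iter_rows(values_only=True):   # one slide per Excel row
            slide = prs.slides.add_slide(blank_layout)
            for col_idx, value in enumerate(row):    # one text box per column
                box = slide.shapes.add_textbox(Inches(1), Inches(0.5 + col_idx),
                                               Inches(8), Inches(0.8))
                box.text_frame.text = "" if value is None else str(value)
        prs.save("slides.pptx")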

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However DBCC CheckDB fails. Also the SQL Server backup task fails, Copy Database fails, Export Data Wizard fails. However it seems all the data can be read from the tables (i.e. using bcp, etc.) Another observation I have made is that the Transaction Log is nearly double the size of the Database. (Does that mean all the changes aren't being written to the MDF?) What would be the best plan of attack to get the database to a state where backups are working and the data is safe? Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server? Export the data from the database using bcp. Create the database (use the Generate Scripts function on the corrupt db to create the schema on the new db) on another SQL Server and use bcp again to import the data. Some other option that is the right course of action in this situation? The IT manager says the data is safe because if the server fails, the data can be restored from the mdf/ldf. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal raid array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions on how SQL Server operates. I'm the application developer and have been called to help (as it seems IT know less about SQL Server than I do). Many Thanks, Amit

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to- recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that: uses a database (MySQL, SQLite, etc.) for configuration and data storage exposes an API for adding/removing hosts and services Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
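
    One lightweight variant of that last idea is to have each instance sample its own counters and push them, keyed by instance ID, into a shared store as the job runs, so nothing is lost at termination and lookups by instance ID stay cheap. The sketch below uses the psutil library and the EC2 instance metadata endpoint; the database path and table layout are invented for illustration:

        import sqlite3
        import time
        import urllib.request

        import psutil

        instance_id = urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ).read().decode()

        db = sqlite3.connect("/shared/metrics.db")    # hypothetical shared location
        db.execute("""CREATE TABLE IF NOT EXISTS samples
                      (instance_id TEXT, ts REAL, cpu REAL, mem REAL)""")

        while True:
            db.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                       (instance_id, time.time(),
                        psutil.cpu_percent(interval=None),
                        psutil.virtual_memory().percent))
            db.commit()
            time.sleep(60)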

    Read the article

  • Binding data to subgrid

    - by bhargav
    i have a jqgrid with a subgrid...the databinding is done in javascript like this <script language="javascript" type="text/javascript"> var x = screen.width; $(document).ready(function () { $("#projgrid").jqGrid({ mtype: 'POST', datatype: function (pdata) { getData(pdata); }, colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'], colModel: [ { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true }, { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' }, { name: 'Project Name', index: 'project_title', width: 60, align: 'left' }, { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' }, { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' }, { name: 'Status', index: 'Status', align: 'left', width: 15 }, { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 }, { name: 'Delete', index: 'Delete', align: 'left', width: 10}], pager: '#proj_pager', rowList: [10, 20, 50], sortname: 'project_id', sortorder: 'asc', rowNum: 10, loadtext: "Loading....", subGrid: true, shrinkToFit: true, emptyrecords: "No records to view", width: x - 100, height: "100%", rownumbers: true, caption: 'Projects', subGridRowExpanded: function (subgrid_id, row_id) { var subgrid_table_id, pager_id; subgrid_table_id = subgrid_id + "_t"; pager_id = "p_" + subgrid_table_id; $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>"); jQuery("#" + subgrid_table_id).jqGrid({ mtype: 'POST', postData: { entityIndex: function () { return row_id } }, datatype: function (pdata) { getactionData(pdata); }, height: "100%", colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'], colModel: [ { name: 'Event ID', index: 'Event ID' }, { name: 'Priority', index: 'IssueCode' }, { name: 'Deadline', index: 'IssueTitle' }, { name: 'From Date', index: 'From Date' }, { name: 'Title', index: 'Title' }, { name: 'Status', index: 'Status' }, { name: 'Hours', index: 'Hours' }, { name: 'Contact From', index: 'Contact From' }, { name: 'Contact To', index: 'Contact To' } ], caption: "Action Details", rowNum: 10, pager: '#actionpager', rowList: [10, 20, 30, 50], sortname: 'Event ID', sortorder: "desc", loadtext: "Loading....", shrinkToFit: true, emptyrecords: "No records to view", rownumbers: true, ondblClickRow: function (rowid) { } }); jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false }); } }); jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false }); }); function getactionData(pdata) { var project_id = pdata.entityIndex(); var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value; var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked; var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked; var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked; var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value; var nrows = pdata.rows; var npage = pdata.page; var sortindex = pdata.sidx; var sortdir = pdata.sord; var path = "project_brow.aspx/GetActionDetails" $.ajax({ type: "POST", url: path, data: "{'project_id': '" + project_id + 
"','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}", contentType: "application/json; charset=utf-8", success: function (data, textStatus) { if (textStatus == "success") obj = jQuery.parseJSON(data.d) ReceivedData(obj); }, error: function (data, textStatus) { alert('An error has occured retrieving data!'); } }); } function ReceivedData(data) { var thegrid = jQuery("#actiongrid")[0]; thegrid.addJSONData(data); } function getData(pData) { var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value; var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value; var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value; var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value; var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value; var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value; var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value; var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value; var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked; var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked; var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value; var nrows = pData.rows; var npage = pData.page; var sortindex = pData.sidx; var sortdir = pData.sord; PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed); } function AjaxSucceeded(data) { var obj = jQuery.parseJSON(data) if (obj != null) { if (obj.records!="") { ReceivedClientData(obj); } else { alert('No Data Available to Display') } } } function AjaxFailed(data) { alert('An error has occured retrieving data!'); } function ReceivedClientData(data) { var thegrid = jQuery("#projgrid")[0]; thegrid.addJSONData(data); } </script> as u can see projgrid is my parent grid and action grid is my subgrid to be shown onclicking the '+' symbol Projgrid is binded and being displayed but when it comes to subgrid im able to get the data but the problem comes at the time of binding data to subgrid which is done in function named ReceivedData where you can see like this function ReceivedData(data) { var thegrid = jQuery("#actiongrid")[0]; thegrid.addJSONData(data); } "data" is what i wanted exactly but it cannot be binded to actiongrid which is the subgrid Thanx in advance for help

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
     Microsoft SQL Server 2008 is a new version released in the first half of 2008 introducing new properties and capabilities to SQL Server product family. All these new and enhanced capabilities can be defined as the classic words like secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. SQL2008 is scalable and is more manageable when compared to previous releases. Now we will have a look at the features that are making MS SQL Server 2008 more secure, more reliable, more scalable, etc. in details.Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information. Table-Valued ParametersInserts into structures with 1-N cardinality problematicOne order -> N order line items"N" is variable and can be largeDon't want to force a new order for every 20 line itemsOne database round-trip / line item slows things downNo ARRAY data type in SQL ServerXML composition/decomposition used as an alternativeTable-valued parameters solve this problemTable-Valued ParametersSQL Server has table variablesDECLARE @t TABLE (id int);SQL Server 2008 adds strongly typed table variablesCREATE TYPE mytab AS TABLE (id int);DECLARE @t mytab;Parameters must use strongly typed table variables Table Variables are Input OnlyDeclare and initialize TABLE variable  DECLARE @t mytab;  INSERT @t VALUES (1), (2), (3);  EXEC myproc @t;Procedure must declare variable READONLY  CREATE PROCEDURE usetable (    @t mytab READONLY ...)  AS    INSERT INTO lineitems SELECT * FROM @t;    UPDATE @t SET... -- no!T-SQL Syntax EnhancementsSingle statement declare and initialize  DECLARE @iint = 4;Compound Assignment Operators  SET @i += 1;Row constructors  DECLARE @t TABLE (id int, name varchar(20));  INSERT INTO @t VALUES    (1, 'Fred'), (2, 'Jim'), (3, 'Sue');Grouping SetsGrouping Sets allow multiple GROUP BY clauses in a single SQL statementMultiple, arbitrary, sets of subtotalsSingle read pass for performanceNested subtotals provide ever better performanceGrouping Sets are an ANSI-standardCOMPUTE BY is deprecatedGROUPING SETS, ROLLUP, CUBESQL Server 2008 - ANSI-syntax ROLLUP and CUBEPre-2008 non-ANSI syntax is deprecatedWITH ROLLUP produces n+1 different groupings of datawhere n is the number of columns in GROUP BYWITH CUBE produces 2^n different groupingswhere n is the number of columns in GROUP BYGROUPING SETS provide a "halfway measure"Just the number of different groupings you needGrouping Sets are visible in query planGROUPING_ID and GROUPINGGrouping Sets can produce non-homogeneous setsGrouping set includes NULL values for group membersNeed to distinguish by grouping and NULL valuesGROUPING (column expression) returns 0 or 1Is this a group based on column expr. 
or NULL value?GROUPING_ID (a,b,c) is a bitmaskGROUPING_ID bits are set based on column expressions a, b, and cMERGE StatementMultiple set operations in a single SQL statementUses multiple sets as inputMERGE target USING source ON ...Operations can be INSERT, UPDATE, DELETEOperations based onWHEN MATCHEDWHEN NOT MATCHED [BY TARGET] WHEN NOT MATCHED [BY SOURCE]More on MERGEMERGE statement can reference a $action columnUsed when MERGE used with OUTPUT clauseMultiple WHEN clauses possible For MATCHED and NOT MATCHED BY SOURCEOnly one WHEN clause for NOT MATCHED BY TARGETMERGE can be used with any table sourceA MERGE statement causes triggers to be fired onceRows affected includes total rows affected by all clausesMERGE PerformanceMERGE statement is transactionalNo explicit transaction requiredOne Pass Through TablesAt most a full outer joinMatching rows = when matchedLeft-outer join rows = when not matched by targetRight-outer join rows = when not matched by sourceMERGE and DeterminismUPDATE using a JOIN is non-deterministicIf more than one row in source matches ON clause, either/any row can be used for the UPDATEMERGE is deterministicIf more than one row in source matches ON clause, its an errorKeeping Track of DependenciesNew dependency views replace sp_dependsViews are kept in sync as changes occursys.dm_sql_referenced_entitiesLists all named entities that an object referencesExample: which objects does this stored procedure use?sys.dm_sql_referencing_entities 

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15 character strings, which get converted to Java Date objects in their code (Openfire is written in Java).  I did some digging around, and, so I don't forget and in case someone else will find this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation. The Java string representation is the number of milliseconds since 1/1/1970.  SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, the portion after the decimal point representing the hours/minutes/seconds/milliseconds... as a fractional part of a day.  Try this and you will see this is true:     SELECT CAST(0 AS DATETIME) and you will see it returns the date 1/1/1900. The difference in days between SQL Server's 0 date of 1/1/1900 and the Java representation's 0 date of 1/1/1970 is found easily using the following SQL:   SELECT DATEDIFF(D, '1900-01-01', '1970-01-01') which returns 25567.  There are 25567 days between these dates. So to convert from the Java string to SQL Server's date time, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add the 25567 to change this to the number of days since 1/1/1900.  To convert to days, you need to divide the number by 1000 ms/s, then by  60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day.  Or simply divide by 1000*60*60*24, or 86400000.   So, to summarize, we need to cast this string as a float, divide by 86400000 milliseconds/day, then add 25567 days, and cast the resulting value to a DateTime.  Here is an example:   DECLARE @tmp as VARCHAR(15)   SET @tmp = '1268231722123'   SELECT @tmp as JavaTime, CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime   To convert from SQL datetime back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings, they end up in scientific notation using the CONVERT function or CAST function.  But I found a couple ways around that problem. You can convert a date to the number of  seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an Int.  If you don't need to worry about the milliseconds, simply cast this integer as a string, and then concatenate '000' at the end, essentially multiplying this number by 1000, and making it milliseconds since 1/1/1970.  If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, and then pad zeros on the left to make sure this is three digits, and concatenate these three digits to the number of seconds string above.  And finally, I discovered by casting to DECIMAL(15,0) then to VARCHAR(15), I avoid the scientific notation issue.  So here are all my examples, pick the one you like best... 
First, here is the simple approach if you don't care about the milliseconds:   DECLARE @tmp as VARCHAR(15)   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15)) + '000'   SELECT @tmp as JavaTime, @dt as SQLTime If you want to keep the milliseconds:   DECLARE @tmp as VARCHAR(15)   DECLARE @dt as DATETIME   DECLARE @ms as int   SET @dt = '2010-03-10 14:35:22.123'   SET @ms as DATEPART(ms, @dt)   SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15))           + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)   SELECT @tmp as JavaTime, @dt as SQLTime Or, in one fell swoop:   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SELECT @dt as SQLTime     , CAST(DATEDIFF(s, '1970-01-01 00:00:00' , @dt) AS VARCHAR(15))           + RIGHT('000' + CAST( DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime   And finally, a way to simply reverse the math used converting from Java date to SQL date. Note the parenthesis - watch out for operator precedence, you want to subtract, then multiply:   DECLARE @dt as DATETIME   SET @dt = '2010-03-10 14:35:22.123'   SELECT @dt as SQLTime     , CAST(CAST((CAST(@dt as Float) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime Interestingly, I found that converting to SQL Date time can lose some accuracy, when I converted the time above to Java time then converted  that back to DateTime, the number of milliseconds is 120, not 123.  As I am not interested in the milliseconds, this is ok for me.  But you may want to look into using DateTime2 in SQL Server 2008 for more accuracy.
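
    The arithmetic above (milliseconds since 1970-01-01, 86,400,000 ms per day, a 25,567-day gap between the 1900 and 1970 epochs) is easy to sanity-check outside SQL Server; a quick Python cross-check, using the same example value from the article:

        from datetime import datetime, timezone

        java_ms = 1268231722123
        dt = datetime.fromtimestamp(java_ms / 1000, tz=timezone.utc)
        print(dt)                                   # 2010-03-10 14:35:22.123000+00:00

        # back to the Java representation
        assert round(dt.timestamp() * 1000) == java_ms

        # the day offset between the SQL Server (1900) and Java (1970) epochs
        assert (datetime(1970, 1, 1) - datetime(1900, 1, 1)).days == 25567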

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor, after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But, in my application there are many servers and different other applications services that address its main database, and some even change data, and I feel that there is a chance that some servers that during the connection process we may miss some. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes that all traffic to the database will go thru it. I decided to check out the features of SafePeak latest version (2.1) and seek for an answer there. A: Indeed I found SafePeak has a feature they call “Logon Trigger” and is designed for that purpose. It is located in the user interface, under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of enabled server IPs and Login names that SafePeak will ignore them. Click to Enlarge After activation of the “logon trigger” Safepeak server is notified by the SQL Server itself on each new opened connection. Safepeak monitors those connections and decides if there is something to do with them or not. On a typical installation SafePeak likes all application and users connections to go via SafePeak – this way it knows about data and schema updates immediately (real time). With activation of the safepeak “logon trigger”  a special CLR trigger is deployed on the SQL server and notifies Safepeak on any connection that has not arrived via SafePeak. In such cases Safepeak can act to clear and lock the cache or to ignore it. This feature enables to make sure SafePeak will be aware of all connections so SafePeak cache will maintain exactly correct all times. So even if a user, like a DBA will connect to the SQL Server not via SafePeak, SafePeak will know about it and take actions. The notification does not impact the work of that connection, the user or application still continue to do whatever they planned to do. Note: I found that activation of logon trigger in SafePeak requires that SafePeak SQL login will have the next permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) And the SQL Server instance is CLR enabled; Seeing SafePeak in action, I can say SafePeak brings fantastic resource for those who seek to get performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you to download and try SafePeak. The solution of SafePeak is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. your users – pretty much your whole company – just logs into e-mail and expects it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • Adding the New HTML Editor Extender to a Web Forms Application using NuGet

    - by Stephen Walther
    The July 2011 release of the Ajax Control Toolkit includes a new, lightweight, HTML5 compatible HTML Editor extender. In this blog entry, I explain how you can take advantage of NuGet to quickly add the new HTML Editor control extender to a new or existing ASP.NET Web Forms application. Installing the Latest Version of the Ajax Control Toolkit with NuGet NuGet is a package manager. It enables you to quickly install new software directly from within Visual Studio 2010. You can use NuGet to install additional software when building any type of .NET application including ASP.NET Web Forms and ASP.NET MVC applications. If you have not already installed NuGet then you can install NuGet by navigating to the following address and clicking the giant install button: http://nuget.org/ After you install NuGet, you can add the Ajax Control Toolkit to a new or existing ASP.NET Web Forms application by selecting the Visual Studio menu option Tools, Library Package Manager, Package Manager Console: Selecting this menu option opens the Package Manager Console. You can enter the command Install-Package AjaxControlToolkit in the console to install the Ajax Control Toolkit: After you install the Ajax Control Toolkit with NuGet, your application will include an assembly reference to the AjaxControlToolkit.dll and SanitizerProviders.dll assemblies: Furthermore, your Web.config file will be updated to contain a new tag prefix for the Ajax Control Toolkit controls: <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> The configuration file installed by NuGet adds the prefix ajaxToolkit for all of the Ajax Control Toolkit controls. You can type ajaxToolkit: in source view to get auto-complete in Source view. You can, of course, change this prefix to anything you want. Using the HTML Editor Extender After you install the Ajax Control Toolkit, you can use the HTML Editor Extender with the standard ASP.NET TextBox control to enable users to enter rich formatting such as bold, underline, italic, different fonts, and different background and foreground colors. For example, the following page can be used for entering comments. The page contains a standard ASP.NET TextBox, Button, and Label control. When you click the button, any text entered into the TextBox is displayed in the Label control. It is a pretty boring page: Let’s make this page fancier by extending the standard ASP.NET TextBox with the HTML Editor extender control: Notice that the ASP.NET TextBox now has a toolbar which includes buttons for performing various kinds of formatting. For example, you can change the size and font used for the text. You also can change the foreground and background color – and make many other formatting changes. You can customize the toolbar buttons which the HTML Editor extender displays. 
To learn how to customize the toolbar, see the HTML Editor Extender sample page here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx Here’s the source code for the ASP.NET page: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication1.Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Add Comments</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="TSM1" runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="8" Runat="server" /> <ajaxToolkit:HtmlEditorExtender ID="hee" TargetControlID="txtComments" Runat="server" /> <br /><br /> <asp:Button ID="btnSubmit" Text="Add Comment" Runat="server" onclick="btnSubmit_Click" /> <hr /> <asp:Label ID="lblComment" Runat="server" /> </div> </form> </body> </html> Notice that the page above contains 5 controls. The page contains a standard ASP.NET TextBox, Button, and Label control. However, the page also contains an Ajax Control Toolkit ToolkitScriptManager control and HtmlEditorExtender control. The HTML Editor extender control extends the standard ASP.NET TextBox control. The HTML Editor TargetID attribute points at the TextBox control. Here’s the code-behind for the page above:   using System; namespace WebApplication1 { public partial class Default : System.Web.UI.Page { protected void btnSubmit_Click(object sender, EventArgs e) { lblComment.Text = txtComments.Text; } } }   Preventing XSS/JavaScript Injection Attacks If you use an HTML Editor -- any HTML Editor -- in a public facing web page then you are opening your website up to Cross-Site Scripting (XSS) attacks. An evil hacker could submit HTML using the HTML Editor which contains JavaScript that steals private information such as other user’s passwords. Imagine, for example, that you create a web page which enables your customers to post comments about your website. Furthermore, imagine that you decide to redisplay the comments so every user can see them. In that case, a malicious user could submit JavaScript which displays a dialog asking for a user name and password. When an unsuspecting customer enters their secret password, the script could transfer the password to the hacker’s website. So how do you accept HTML content without opening your website up to JavaScript injection attacks? The Ajax Control Toolkit HTML Editor supports the Anti-XSS library. You can use the Anti-XSS library to sanitize any HTML content. The Anti-XSS library, for example, strips away all JavaScript automatically. You can download the Anti-XSS library from NuGet. Open the Package Manager Console and execute the command Install-Package AntiXSS: Adding the Anti-XSS library to your application adds two assemblies to your application named AntiXssLibrary.dll and HtmlSanitizationLibrary.dll. 
After you install the Anti-XSS library, you can configure the HTML Editor extender to use the Anti-XSS library your application’s web.config file: <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> Summary In this blog entry, I described how you can quickly get started using the new HTML Editor extender – included with the July 2011 release of the Ajax Control Toolkit – by installing the Ajax Control Toolkit with NuGet. If you want to learn more about the HTML Editor then please take a look at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/HTMLEditorExtender/HTMLEditorExtender.aspx

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a Perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map takes the map's data structure and rotates it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the river's height to flat, and noticed that the tiles that were actually flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I suspect it is caused by a misunderstanding on my part of how vertices work. Here is a snippet of the code in question.

    This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];

            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data [i, j] = new Tile ();
                    map_data [i, j].coordinate.x = (int)i;
                    map_data [i, j].coordinate.y = (int)j;

                    map_data [i, j].vertices [0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [1] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [3] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices [0].y = 0f;
                t.vertices [1].y = 0f;
                t.vertices [2].y = 0f;
                t.vertices [3].y = 0f;
            }
        }

    And below is the code that generates the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);

            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;

            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];

            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals [z * vsize_x + x] = Vector3.up;
                    uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3];
                    } else if (z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2];
                    } else if (x == vsize_x - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1];
                    } else {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0];
                        vertices [z * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [1];
                        vertices [(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2];
                        vertices [(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;

                    triangles [triOffset + 0] = z * vsize_x + x + 0;
                    triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1;

                    triangles [triOffset + 3] = z * vsize_x + x + 0;
                    triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate it with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();

            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;

            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code; I'm still in the very early prototyping stage.
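    One thing that may be worth double-checking (purely a guess, not something the post confirms as the cause): the tile vertices are built with -j and -(j - 1) on the z axis, so world z runs in the opposite direction to the data index j, while BuildMesh copies those vertices into a mesh array whose rows advance in the +z index order. If that sign mismatch is the culprit, a convention where increasing j maps to increasing world z keeps the data structure and the rendered mesh aligned. Below is a minimal, hypothetical sketch of such a helper; tileSize, Height, and the vertex slot meanings are taken from the original snippet, and the method name is invented for illustration only.

        // Hypothetical helper (not a confirmed fix): builds a tile's four corners so that
        // increasing j maps to increasing world z, matching the row order BuildMesh uses
        // when it copies vertices[0..3] into the mesh's vertex array.
        Vector3[] BuildTileCorners (int i, int j, float height, float tileSize)
        {
            return new Vector3[] {
                new Vector3 ( i      * tileSize, height,  j      * tileSize), // vertices[0], near-left
                new Vector3 ((i + 1) * tileSize, height,  j      * tileSize), // vertices[1], near-right
                new Vector3 ( i      * tileSize, height, (j + 1) * tileSize), // vertices[2], far-left
                new Vector3 ((i + 1) * tileSize, height, (j + 1) * tileSize)  // vertices[3], far-right
            };
        }

    Whether this is the actual cause of the 180-degree flip would need to be verified against the full project; it is only offered as a direction to investigate.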

    Read the article

  • Xcode strange warning - Multiple build commands for output file

    - by Futur
    Hi all, I am getting warnings like this:

        [WARN]Warning: Multiple build commands for output file /Developer/B/Be/build/Release-iphonesimulator/BB.app/no.png
        [WARN]Warning: Multiple build commands for output file /Developer/B/Be/build/Release-iphonesimulator/BB.app/d.png
        [WARN]Warning: Multiple build commands for output file /Developer/B/Be/build/Release-iphonesimulator/BB.app/n.png

    I have checked in Xcode and I don't see any duplicates of these files at all. According to the Apple mailing list thread (http://lists.apple.com/archives/xcode-users/2006/Dec/msg00276.html), this warning points to duplicate references, but there are no duplicates in my project. Please help.

    Read the article

  • Hidden/Shown AsyncFileUpload Control Doesn't Fire Server-Side UploadedComplete Event

    - by Bob Mc
    I recently came across the AsyncFileUpload control in the latest (3.0.40412) release of the ASP.NET AJAX Control Toolkit. There appears to be an issue when using it inside a hidden container that is later revealed, such as a <div> tag with visible=false. Example:

    Page code -

        <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="act" %>
        . . .
        <act:ToolkitScriptManager runat="server" ID="ScriptManager1" />
        <asp:UpdatePanel runat="server" ID="upnlFileUpload">
            <ContentTemplate>
                <asp:Button runat="server" ID="btnShowUpload" Text="Show Upload" />
                <div runat="server" id="divUpload" visible="false">
                    <act:AsyncFileUpload runat="server" id="ctlFileUpload" />
                </div>
            </ContentTemplate>
        </asp:UpdatePanel>

    Server-side code -

        Protected Sub ctlFileUpload_UploadedComplete(ByVal sender As Object, ByVal e As AjaxControlToolkit.AsyncFileUploadEventArgs) Handles ctlFileUpload.UploadedComplete

        End Sub

        Protected Sub btnShowUpload_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnShowUpload.Click
            divUpload.Visible = True
        End Sub

    I have a breakpoint on the UploadedComplete event, but it never fires. However, if you take the AsyncFileUpload control out of the <div>, so that it is visible on the initial page render, the control works as expected. So, is this a bug within the AsyncFileUpload control, or am I not grasping a fundamental concept (which happens regularly)?
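    One workaround that is often suggested for this kind of symptom (an assumption on my part, not something confirmed for this exact toolkit build) is to keep the control in the rendered page and hide it with CSS instead of Visible=false, since a server-side-invisible control emits no markup and its client-side upload plumbing never gets wired up. A minimal sketch of that variation:

        <%-- Hypothetical variation: the div stays in the rendered output, hidden via CSS --%>
        <div runat="server" id="divUpload" style="display:none">
            <act:AsyncFileUpload runat="server" id="ctlFileUpload" />
        </div>

        ' Reveal it on the server by changing the style rather than Visible
        Protected Sub btnShowUpload_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnShowUpload.Click
            divUpload.Style("display") = "block"
        End Sub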

    Read the article

  • C# Winform bring control to front

    - by Nathan
    Are there any other methods of bringing a control to the front besides control.BringToFront()? I have a series of labels on a user control, and when I try to bring one of them to the front it does not work. I have even looped through all the controls and sent every one of them to the back except the one I am interested in, and it doesn't change a thing.
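    For what it's worth, one alternative worth trying (a sketch that assumes all the labels share the same parent container, which the question doesn't state) is to set the child index explicitly on the parent's control collection; in WinForms, child index 0 is the top of the z-order, which is what BringToFront does internally:

        // Hypothetical example: explicitly move one label to the front of its parent's z-order.
        // Assumes lblTarget shares a parent with the overlapping labels.
        private void BringLabelToFront(System.Windows.Forms.Label lblTarget)
        {
            // Child index 0 is the top of the z-order within the parent.
            lblTarget.Parent.Controls.SetChildIndex(lblTarget, 0);
            lblTarget.Parent.Invalidate(true); // repaint so the new ordering is visible
        }

    If that also has no effect, it may indicate the controls do not actually overlap within the same parent, in which case z-order changes would not be expected to change anything.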

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >