Search Results

Search found 1402 results on 57 pages for 'dataset'.

  • Create an XML file from a DataSet using info from an XML schema

    - by Voulnet
    Hello there, I have been thinking about the optimal way to create an XML file using data from a DataSet AND according to the rules of an XML schema. I've been searching around for a bit, and I failed to find a way in which I only take the data from the DataSet and put it inside XML tags, with the tags being defined by an already-existing schema. So it might go like this: 1- Create a DataSet and fill its rows with data. 2- Create an XML file according to the XML schema's rules. 3- Fill said XML file with data from the DataSet, such that the data is taken from the DataSet while the structure of the XML file is taken from the XML schema.
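
    A minimal sketch of one way to do this in C#: DataSet.ReadXmlSchema builds the table structure from the XSD, the rows are copied across by column name, and WriteXml emits tags in the schema's layout. The file names, the "Customer" table name, and the sourceDataSet variable are illustrative assumptions, not from the question.

      // using System.Data;
      var target = new DataSet();
      target.ReadXmlSchema("customer.xsd");                // structure comes from the schema
      DataTable targetTable = target.Tables["Customer"];   // assumed table name
      foreach (DataRow sourceRow in sourceDataSet.Tables["Customer"].Rows)
      {
          DataRow newRow = targetTable.NewRow();
          foreach (DataColumn col in targetTable.Columns)  // copy only schema-defined columns
              newRow[col.ColumnName] = sourceRow[col.ColumnName];
          targetTable.Rows.Add(newRow);
      }
      target.WriteXml("customer.xml");                     // data from the DataSet, layout from the schema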

    Read the article

  • Handling update errors in multiple records in the TClientDataset's ReconcileError method

    - by Fabio Gomes
    I'm trying to use the ReconcileError event to allow the user to correct the data after an update error which occurred in a specific record among others. Example: I have a dataset with one field and 3 records, this field has a unique constraint on the database, then I change one value to conflict when it reaches the database, then I call ApplyUpdates on the Dataset. This will generate an error (violation of unique constraint) in the provider and abort the ApplyUpdates process, returning raAbort in the Action var of the ReconcileError method. In the ReconcileError method I tried to use: Action := HandleReconcileError(aDataSet, UpdateKind, E); ** EDIT ** After debugging and dumping the DataSet records which were returned from the server, I noticed that there are 2 records in this Dataset; the first is the Old record and the second has all the changes I made to the first record. I'm a bit confused, will I always get this DataSet with 2 records? I thought that it should have only one record with the Old/New values. Thanks.

    Read the article

  • Dynamic type for List<T>?

    - by Brett
    Hi All, I've got a method that returns a List<string> from a DataSet table public static List<string> GetListFromDataTable(DataSet dataSet, string tableName, string rowName) { int count = dataSet.Tables[tableName].Rows.Count; List<string> values = new List<string>(); // Loop through the table and row and add them into the array for (int i = 0; i < count; i++) { values.Add(dataSet.Tables[tableName].Rows[i][rowName].ToString()); } return values; } Is there a way I can dynamically set the datatype for the list and have this one method cater for all datatypes, so I can specify upon calling this method that it should be a List<int> or List<string> or List<AnythingILike>? Also, what would the return type be when declaring the method? Thanks in advance, Brett
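
    A sketch of the generic version: the element type becomes a type parameter, and Convert.ChangeType handles the common IConvertible types (int, string, decimal, and so on). For an arbitrary class like AnythingILike you would need a direct cast, (T)row[rowName], instead; the table and column names in the usage line are invented.

      // using System; using System.Collections.Generic; using System.Data;
      public static List<T> GetListFromDataTable<T>(DataSet dataSet, string tableName, string rowName)
      {
          var values = new List<T>();
          foreach (DataRow row in dataSet.Tables[tableName].Rows)
              values.Add((T)Convert.ChangeType(row[rowName], typeof(T)));  // works for IConvertible types
          return values;
      }

      // Usage (names invented): var ids = GetListFromDataTable<int>(ds, "Orders", "OrderId");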

    Read the article

  • Crystal Report just has one line?

    - by Henry
    OpenConnect(); OleDbDataAdapter olda = new OleDbDataAdapter("Select * from RECORD where LIC_PLATE='GE 320'", con); DataSet dataset = new DataSet(); olda.Fill(dataset); cr1.SetDataSource(dataset.Tables[0]); crystalReportViewer1.ReportSource = cr1; crystalReportViewer1.Refresh(); CloseConnect(); I get only one line in my report. How can I solve this problem? I checked, and there are many records that have LIC_PLATE = 'GE 320'.
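
    One way to narrow this down, a debugging sketch rather than a fix: confirm the DataSet itself holds all the matching rows before it reaches the report. If the count below is greater than 1, the data is fine and the usual culprit is report layout, e.g. fields placed in the Report Header (printed once) instead of the Details section (printed per row). The parameterized query is my own addition.

      OleDbDataAdapter olda = new OleDbDataAdapter(
          "Select * from RECORD where LIC_PLATE = ?", con);   // OleDb uses positional parameters
      olda.SelectCommand.Parameters.AddWithValue("@plate", "GE 320");
      DataSet dataset = new DataSet();
      olda.Fill(dataset);
      Console.WriteLine(dataset.Tables[0].Rows.Count);        // > 1 here points at the report layout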

    Read the article

  • allow only distinct values in ComboBox

    - by Ravinder Gangadher
    In my project, I am trying to populate a ComboBox from a DataSet. I succeeded in populating it, but the values inside the ComboBox are not distinct (because it just shows the values present in the DataSet). I can't bind the ComboBox to the DataSet because I add a "Select" entry before populating the values. Here is my code. ComboBox --> cmb DataSet --> ds DataSet Column Name --> value(string) cmb.Items.Clear(); cmb.Items.Add("Select"); for (int intCount = 0; intCount < ds.Tables[0].Rows.Count; intCount++) { cmb.Items.Add(ds.Tables[0].Rows[intCount][value].ToString()); } cmb.SelectedIndex = 0; My question is how to allow only distinct values (or restrict duplicate values) inside the ComboBox.
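
    A sketch of one way, assuming .NET 3.5+ for HashSet<string>: track which values have already been added and skip repeats. Variable names follow the question.

      cmb.Items.Clear();
      cmb.Items.Add("Select");
      var seen = new HashSet<string>();            // values already added
      for (int intCount = 0; intCount < ds.Tables[0].Rows.Count; intCount++)
      {
          string v = ds.Tables[0].Rows[intCount][value].ToString();
          if (seen.Add(v))                         // Add returns false for duplicates
              cmb.Items.Add(v);
      }
      cmb.SelectedIndex = 0;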

    Read the article

  • ASP.net: Efficient ways to convert DataSets to GenericCollection (Of ObjectType)

    - by jlrolin
    I currently have a function that gets some data from the database and puts it into a dataset. The return type on my function is GenericCollection (Of CustomerDetails) If I do this: Dim dataset As DataSet = Read(strSQL.ToString) 'Gets Data from DB What's the most efficient way to map the dataset results to a collection of objects? More importantly, since I'm using GenericCollection, is there a way to do this in which I can call a function from the ObjectType class (CustomerDetails) that would have a means of converting that specific object? Or is there a way in which I can use a function that would handle all types? Is there a way to do something like: Return returnedResults.TransformDataSet(dataset) In which returnedResults is an object collection Of CustomerDetails, or would it simply be easier to have TransformDataSet return an object collection Of CustomerDetails by itself? Thanks for any help.
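
    A sketch of the "handle all types" route, written in C# for brevity rather than the question's VB: a single generic helper takes the row-to-object mapping as a delegate, so each type supplies only its own projection. The CustomerDetails property name in the usage line is an assumption.

      // using System; using System.Collections.ObjectModel; using System.Data;
      public static Collection<T> TransformDataSet<T>(DataSet ds, Func<DataRow, T> map)
      {
          var results = new Collection<T>();
          foreach (DataRow row in ds.Tables[0].Rows)   // assumes the first table holds the results
              results.Add(map(row));
          return results;
      }

      // Usage (property name assumed):
      // var customers = TransformDataSet(dataset, row => new CustomerDetails { Name = (string)row["Name"] });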

    Read the article

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem but the inbuilt functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first is a file showing amino acid residue, frequency and count for an in-house database of protein structures, i.e. A 0.25 1 S 0.25 1 T 0.25 1 P 0.25 1 The second file consists of quadruplets of amino acids and the number of times they occur, i.e. ASTP 1 Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows: f(x|n, p) = n!/(x1!*x2!*...*xk!)*((p1^x1)*(p2^x2)*...*(pk^xk)) where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution. # functions for multinomial distribution def expected_quadruplets(x, y): expected = x*y return expected # calculates the probabilities of occurrence raised to the number of occurrences def prod_prob(p1, a, p2, b, p3, c, p4, d): prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d)) return prob_prod # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient def factorial(n): if n <= 1: return 1 return n*factorial(n-1) def multinomial_coefficient(a, b, c, d): n = 24.0 multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d))) return multi_coeff The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date, my data is represented as nested lists. amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']] quadruplets = [['ASTP', '1']] I initially intended calling these functions within a nested for loop but this resulted in runtime errors or overflow errors. I know that I can reset the recursion limit but I would rather do this more elegantly. I had the following: for i in quadruplets: quad = i[0].split(' ') for j in amino_acids: for k in quadruplets: for v in k: if j[0] == v: multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2])) I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sub list in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-)

    Read the article

  • how to update a selected record in a dataset and update another datatable in another Adoconnection?

    - by ml
    I have 2 adoconnections and 2 datatables in each connection (Local Table1_master Table1_Detail) (Network Table1_master Table1_Detail). I show them in a DBgrid and now I would like to update the (Local Table1_master Table1_Detail) from the tables in (Network Table1_master Table1_Detail). How can I update the selected records? I have tried many ways but normally it inserts more records and doesn't update the record. I use a .MDB database.

    Read the article

  • How can I optimize the import of this dataset in mysql?

    - by GeoffreyF67
    I've got the following table schema: CREATE TABLE `alexa` ( `id` int(10) unsigned NOT NULL, `rank` int(10) unsigned NOT NULL, `domain` varchar(63) NOT NULL, `domainStatus` varchar(6) DEFAULT NULL, PRIMARY KEY (`rank`), KEY `domain` (`domain`), KEY `id` (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 It takes several minutes to import the data. To me that seems rather slow as we're only talking about a million rows of data. What can I do to optimize the insert of this data? (already using disable keys) G-Man
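
    Since the question notes keys are already disabled, and DISABLE KEYS on MyISAM only defers the non-unique indexes (the PRIMARY KEY on rank is still maintained row by row), the usual remaining wins are LOAD DATA INFILE instead of individual INSERTs and feeding the file pre-sorted by the primary key. A sketch; the file path and field terminator are assumptions:

      ALTER TABLE `alexa` DISABLE KEYS;            -- defers only the non-unique indexes
      LOAD DATA INFILE '/tmp/alexa.tsv'
          INTO TABLE `alexa`
          FIELDS TERMINATED BY '\t'
          (`id`, `rank`, `domain`, `domainStatus`);
      ALTER TABLE `alexa` ENABLE KEYS;             -- rebuilds them in one pass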

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web-app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose that we have 2 tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is: count total hours worked by each user; list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is: 1) select Worker.name, sum(hoursWorked) from Worker, WorkLog where Worker.id = WorkLog.workerId group by Worker.name; //results of this query should be transformed to Multimap<Worker, Long> 2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog where Worker.id = WorkLog.workerId; //results of this query should be transformed to Multimap<Worker, Period> //if it was JDBC then it would be vitally important //to set resultSet.setFetchSize (someSmallNumber), ~100 So, I have two questions: how to implement each of my approaches with JPA (or at least with Hibernate); how would you handle this problem (with JPA or Hibernate, of course)?

    Read the article

  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can most certainly be used on any kind of dataset that contains a mixture of categorical (A,B,C,..), continuous (1.2,3,881..) and identifier (XXX1,XXX2...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of data I mean which variable is in which place and its name/type. And this is where I need some enlightenment. I am baffled how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are: Create a configuration file in XML or similar and with a prespecified format. Pass the information during initialization of the module. Use the first row of the data as headers and try to guess types (ouch) Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks p.

    Read the article

  • How does CouchDB perform for a regularly updated dataset?

    - by Ritesh M Nayak
    I am planning on using CouchDB on a project. But as the querying mechanism involves writing views (which are a lot like indexes on regular RDBMSs) I was wondering: if the document database keeps getting updated a lot (a write-heavy database), would CouchDB perform well compared to a regular RDBMS? Or do we have to compact/re-index the system occasionally to make it perform faster?

    Read the article

  • How to add variables which come from a dataset to a Collection in a for loop in C#?

    - by leventkalay1986
    I have a collection of RSS items protected Collection<Rss.Items> list = new Collection<Rss.Items>(); The class RSS.Items includes properties such as Link, Text, Description, etc. But when I try to read the XML and set these properties: for (int i = 0; i < dt.Rows.Count; i++) { row = dt.Rows[i]; list[i].Link.Equals(row[0].ToString()); list[i].Description.Equals( row[1].ToString()); list[i].Title.Equals( row[2].ToString()); list[i].Date.Equals( Convert.ToDateTime(row[3])); } I get a null reference exception on the line list[i].Link.Equals(row[0].ToString()); What am I doing wrong?
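
    Two things stand out: Equals() compares values (returning a bool) rather than assigning them, and list[i] indexes into a collection that has no element at position i yet. A sketch of the usual pattern, assuming Rss.Items has a public parameterless constructor and settable properties with the names used in the question; if Link is a Uri rather than a string, the value would need wrapping in new Uri(...):

      for (int i = 0; i < dt.Rows.Count; i++)
      {
          DataRow row = dt.Rows[i];
          Rss.Items item = new Rss.Items();       // create the element first
          item.Link = row[0].ToString();          // assign with =, not Equals()
          item.Description = row[1].ToString();
          item.Title = row[2].ToString();
          item.Date = Convert.ToDateTime(row[3]);
          list.Add(item);                         // then append it to the collection
      }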

    Read the article

  • To use an API or store a large dataset in a Rails app?

    - by Dave
    Hi all- I am working on a site that has the potential to need a LOT of space. Basically we hope to have every video game ever created stored in a database along with an image of the cover. There are some APIs out there that might be able to help, like GiantBomb's (www.giantbomb.com). We are trying to decide whether to store the data locally and if so where to find that comprehensive a list, or make calls to the API on demand. The problems with the latter are likely latency and downtime. Assuming we want to store it locally, here are the questions: 1) Where can we find this kind of data (yes, I looked on Google, and no, I couldn't find anything :)) 2) What is the most efficient way to encode and store the images? Thanks!

    Read the article

  • Does using a DataSet column's Expression work the same in the background as manual calculation?

    - by Harikrishna
    I have one DataTable which is not data-bound; its records come from parsing a file into the DataTable dynamically each time. There are three columns in the DataTable: Marks1, Marks2 and FinalMarks, and their type is decimal. To add the Marks1 and Marks2 values of each record and store the result in the FinalMarks column, what I do is: datatableResult.Columns["FinalMarks"].Expression="Marks1+Marks2"; It works properly. It can also be done another way: foreach (DataRow r in datatableResult.Rows) { r["FinalMarks"]=Convert.ToDecimal(r["Marks1"])+Convert.ToDecimal(r["Marks2"]); } Is the first approach the same as the second behind the scenes? EDIT: I want to know whether the first approach works internally the same way as the second.

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for the Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can reach multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm.... All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value.  This allows me to effectively shrink my older datasets by a factor of 1/3600!!! Very cool. I can now allow my datasets to go forever, and really never have to worry about them filling up my OS drives. That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32M on disk (you need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time; once you stop viewing a particular metric, you will see that shrink over time, just relax).  When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset. Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "seconds", and then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases, a user of our services might want to run multiple queries (with different parameters) on the same dataset, so for me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another. My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id as letting the users invent their own dataset_ids might result in ID collision and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset ID would be test). I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
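
    For what it's worth, the dedicated-upload-URL idea in the last paragraph is the standard RESTful pattern and doesn't violate anything: POST to a collection resource, let the server mint the identifier, and return it in a Location header with a 201 status. A sketch of the exchange, with the host and ID invented:

      POST /datasets HTTP/1.1
      Host: api.example.org
      Content-Type: text/csv

      ...dataset contents...

      HTTP/1.1 201 Created
      Location: /datasets/f3a91c

    PUT to /dataset/{id} can then remain available for clients that genuinely want to choose (and knowingly overwrite) their own IDs.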

    Read the article

  • ADO.NET concurrency violation

    - by Bicubic
    My first time using ADO.NET. Trying to make a database of Users. First I populate my DataSet: adapter.AcceptChangesDuringFill = true; adapter.AcceptChangesDuringUpdate = true; adapter.Fill(dataset); To create a user: User user = new User(); user.datarow = dataset.Users.NewUsersRow(); user.Name = username; user.PasswordHash = GetHash(password); user.Rights = UserRights.None; users.Add(user); dataset.Users.AddUsersRow(user.datarow); adapter.Update(dataset); When a user property is modified: adapter.Update(dataset); Creation by itself is fine. If I take an existing user and make multiple changes, fine. Multiple creations in a row, fine. Creation followed by a property change, I get this: "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records." Any ideas?
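
    A common cause of this exact sequence (insert succeeds, the next update fails): the database assigns the real identity key on INSERT, but the DataSet row keeps its locally generated value, so the following UPDATE's WHERE clause matches 0 rows. A sketch of one fix, assuming SQL Server, an identity column named Id, and table/column names guessed from the question; the insert selects the row back so the adapter refreshes the key:

      // using System.Data; using System.Data.SqlClient;
      adapter.InsertCommand = new SqlCommand(
          "INSERT INTO Users (Name, PasswordHash, Rights) VALUES (@Name, @PasswordHash, @Rights); " +
          "SELECT Id, Name, PasswordHash, Rights FROM Users WHERE Id = SCOPE_IDENTITY();",
          connection);
      adapter.InsertCommand.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
      adapter.InsertCommand.Parameters.Add("@PasswordHash", SqlDbType.NVarChar, 100, "PasswordHash");
      adapter.InsertCommand.Parameters.Add("@Rights", SqlDbType.Int, 0, "Rights");
      // Map the selected-back row onto the DataSet row, refreshing the temporary Id:
      adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;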

    Read the article

  • Storing a Number from a SQL database in an integer

    - by ar31an
    I am using a database with a table RESUME and a PageIndex column in it, whose type is Number in the database, but when I want to store this PageIndex value in an integer I get the exception Specified cast is not valid. Here is the code: string sql; string conString = "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=D:\\Deliverable4.accdb"; protected OleDbConnection rMSConnection; protected OleDbDataAdapter rMSDataAdapter; protected DataSet dataSet; protected DataTable dataTable; protected DataRow dataRow; On button click: sql = "select PageIndex from RESUME"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql, rMSConnection); dataSet = new DataSet("pInDex"); rMSDataAdapter.Fill(dataSet, "RESUME"); dataTable = dataSet.Tables["RESUME"]; int pIndex = (int)dataTable.Rows[0][0]; rMSConnection.Close(); if (pIndex == 0) { Response.Redirect("Create Resume-1.aspx"); } else if (pIndex == 1) { Response.Redirect("Create Resume-2.aspx"); } else if (pIndex == 2) { Response.Redirect("Create Resume-3.aspx"); } } I get the error on this line: int pIndex = (int)dataTable.Rows[0][0];
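
    The likely cause: an Access Number column defaults to Double, and C# unboxing requires the exact runtime type, so casting the boxed Double straight to int throws. A one-line sketch of the fix:

      // Convert instead of unboxing; handles Double, Decimal, Int16, etc.
      int pIndex = Convert.ToInt32(dataTable.Rows[0][0]);

    Alternatively, changing the column's Field Size to Long Integer in Access would make the direct (int) cast valid.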

    Read the article

  • An existing connection was forcibly closed by the remote host

    - by George
    I have a fat VB.NET Winform client that is using an old asmx-style web service. Very often, when I perform a query that takes a while, I get the subject error. The error seems to occur in < 1 min, which is far less than the web service timeout value that I have set or the timeout value on the ADO Command object that is performing the query within the web server. It seems to occur whenever I am performing a large query that expects to return a lot of rows or when I am sending up a large amount of data to the web service. For example, it just occurred when I was passing a large dataset to the web server: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Smit.Pipeline.Bo.localhost.WsSR.SaveOptions(String emailId, DataSet dsNeighborhood, DataSet dsOption, DataSet dsTaskApplications, DataSet dsCcUsers, DataSet dsDistinctUsers, DataSet dsReferencedApplications) in C:\My\Code\Pipeline2\Smit.Pipeline.Bo\Web References\localhost\Reference.vb:line 944 at Smit.Pipeline.Bo.Options.Save(TaskApplications updatedTaskApplications) in I've been looking at tons of postings on this error and it is surprising how varied the circumstances that cause it are. I've tried messing with Wireshark, but I am clueless how to use it. This application only has about 20 users at any one time and I am able to reproduce this error in the middle of the night when probably no one is using the app, so I don't think that the number of requests to the web server or to the database is high. It was probably just one request when I got the error just now. It seems to have everything to do with the amount of data being passed in either direction. This error is really chronic and killing me. Please help.
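
    One hedged place to start, given that large uploads in particular trigger it: ASP.NET's request size limit defaults to 4096 KB, and exceeding it can surface on the client as exactly this kind of connection reset rather than a clean timeout. A web.config sketch for the asmx host; the values are illustrative, not prescriptive:

      <system.web>
        <!-- maxRequestLength is in KB (default 4096); executionTimeout is in seconds -->
        <httpRuntime maxRequestLength="102400" executionTimeout="300" />
      </system.web>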

    Read the article

  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt if my chosen solution was correct for the problem, so this time I will explain the problem and ask if a) I am on the right track and b) if there is a way around my current brick wall. I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this: class Musician(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) dob = models.DateField() class Album(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) class Instrument(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) Where I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data in the main or related tables. Likewise, the users can then select what piece of data is used to rank results that are presented to them. The results are then viewed initially as a 2-dimensional table with a single row per Musician with selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here is fine and I have a working implementation. However, it is important that I have the ability for a given user to upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I had tried to do this was to add the models: class UserDataSets(models.Model): user = models.ForeignKey(User) name = models.CharField(max_length=100) description = models.CharField(max_length=64) results = models.ManyToManyField(Musician, through='UserData') class UserData(models.Model): artist = models.ForeignKey(Musician) dataset = models.ForeignKey(UserDataSets) score = models.IntegerField() class Meta: unique_together = (("artist", "dataset"),) I have a simple upload mechanism enabling users to upload a dataset file that consists of a 1-to-1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this: artist = Musician.objects.get(pk=1) dataset = UserDataSets.objects.get(pk=5) print artist.userdata_set.get(dataset=dataset.pk) However, this approach fell over when I came to implement the filtering and ordering of a query set of musicians based on the data contained in a single user dataset. For example, I could easily order the query set based on all of the data in the UserData table like this: artists = Musician.objects.all().order_by('userdata__score') But that does not help me order by the results of a given single user dataset. Likewise I need to be able to filter the query set based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP Server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP Server I'm building is essentially an XML Data based API around a database, implementing the common business rules; the requests are therefore specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client with multiple different XML packets unless the client explicitly requests it. I don't have any session management, but rather an API Key. I know if I had sessions, I could keep-alive a dataset temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (for each chunk of data), and in the meantime, if that data changes, the "pages" might get messed up, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to save the load?

    Read the article

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: Store custom data with the data attribute on any DOM element and retrieve it. Previously, under HTML4, we used to use classes to store custom data, something to the effect of <input class="account void limit-5000 over-4999" /> and then have to parse the data out of the class. In a book published in 2007, ppk on JavaScript, Peter-Paul Koch explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if the class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or, worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes is a bad design and why it was important to define custom data attributes in HTML5. Syntax: You define a data attribute by simply prefixing the name of any data item you want to store on an HTML element with "data-". For example, to store our customer's account data with a hidden input element: <input type="hidden" data-account="void" data-limit=5000 data-over=4999 /> How to access the data: account - element.dataset.account limit - element.dataset.limit You can also access it by using the more traditional get/setAttribute method or, if using jQuery, $('#element').attr('data-account','void') Browser support: All except for IE. There is an IE hack around this at http://gist.github.com/362081. Special Note: Be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it, so: data-myAccount="A1234" will not be found when you read it with: element.dataset.myAccount Use only lowercase when reading so this will work: element.dataset.myaccount

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report in a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements needed to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID. Create a second report which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience: you must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report.

    Read the article
