Search Results

Search found 64066 results on 2563 pages for 'data types'.

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed

    Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405       0         2010-06-01 11:00:03.000
        104         405       0         2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405       0         2010-06-01 11:35:00.000
        109         405       0         2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0), including: when each stop started and ended and how many minutes it lasted, how many times the vehicle stopped, and how much time it spent stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time),
               DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp3
        select SUM(minute) as [totaltime] from #temp3

        drop table #temp3

    This query returns:

        longtitude  latitude  (No column name)         (No column name)         Timespan
        104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
        109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

        number: 2
        totaltime: 15

    You can see it works fine, but I really don't like the #temp3 table. Is there any way to write this query without using a temp table? Thank you.
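
    One way to drop the temp table is to compute the per-stop aggregates once in a derived table and take the count and total from that. A minimal sketch, assuming SQL Server, with table and column names as in the question; note that, like the original, it treats repeated stops at the same coordinates as a single stop:

        select COUNT(*) as [number],
               SUM(t.Timespan) as [totaltime]
        from (
            select DATEDIFF(minute, MIN(time), MAX(time)) as Timespan
            from table_1
            where velocity = 0
            group by longtitude, latitude
        ) as t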

  • Surrogate key for date dimension?

    - by Navin
    There are two schools of thought:

    1. Use a surrogate key, preferably in the format YYYYMMDD, as this will always be sequential.
    2. Eliminate the date dimension surrogate key and use the actual date instead.

    My questions to experts on dimensional modeling are:

    1. Which design would you prefer, and why?
    2. How should we handle unknown values in each case? Can we simply place NULL in the fact table for unknown dates, since a foreign key can be NULL (if not, why)?
    3. If we need to partition the fact table on the date column, how would we achieve that in case 1?

    I am inclined towards using the actual date and using NULL to represent unknown dates in the fact table, as date-related validation on the fact can then be done without needing to look into the dimension table.
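
    A common alternative to NULL foreign keys in dimensional designs is a dedicated "unknown" member row in the date dimension, which keeps the fact column NOT NULL and keeps inner joins from silently dropping rows. A sketch, assuming a YYYYMMDD-style integer key; table and column names are illustrative:

        -- One placeholder member for dates that are not yet known
        INSERT INTO DimDate (DateKey, FullDate, DateDescription)
        VALUES (19000101, NULL, 'Unknown');

        -- Facts with an unknown date point at the placeholder instead of NULL
        INSERT INTO FactSales (DateKey, Amount)
        VALUES (19000101, 100.00);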

  • GZipStream in a WebHttpResponse producing no data

    - by Pierre 303
    I want to compress my HTTP responses for clients that support it. Here is the code used to send a standard response:

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;

        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        using (StreamWriter writer = new StreamWriter(response.Body))
        {
            writer.WriteLine(responseBody);
            writer.Flush();
            response.Send();
        }

    The code above works fine. Now I added gzip support below. When I test it with a browser that supports gzip, or with a custom method, it returns an empty string. I'm sure I'm missing something simple, but I just can't find it...

        IHttpClientContext context = (IHttpClientContext)sender;
        IHttpRequest request = e.Request;

        string acceptEncoding = request.Headers["Accept-Encoding"];
        string responseBody = "This is some random text";

        IHttpResponse response = request.CreateResponse(context);
        if (acceptEncoding != null && acceptEncoding.Contains("gzip"))
        {
            byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
            response.AddHeader("Content-Encoding", "gzip");
            using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress))
            {
                writer.Write(bytes, 0, bytes.Length);
                writer.Flush();
                response.Send();
            }
        }
        else
        {
            using (StreamWriter writer = new StreamWriter(response.Body))
            {
                writer.WriteLine(responseBody);
                writer.Flush();
                response.Send();
            }
        }
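
    The likely cause is that GZipStream buffers its output and only writes the remaining compressed data and the gzip footer when it is closed, so calling response.Send() inside the using block sends the body before the stream has finished. A sketch of the fix under that assumption: close the GZipStream first (the leaveOpen flag on the three-argument constructor keeps response.Body usable afterwards):

        byte[] bytes = ASCIIEncoding.ASCII.GetBytes(responseBody);
        response.AddHeader("Content-Encoding", "gzip");
        using (GZipStream writer = new GZipStream(response.Body, CompressionMode.Compress, true))
        {
            writer.Write(bytes, 0, bytes.Length);
        } // Dispose flushes the compressed data and writes the gzip footer
        response.Send();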

  • how to filter data from an xml file to display only selected elements as nodes in a treeview

    - by michale
    I have an xml file named "books.xml", provided in the link http://msdn.microsoft.com/en-us/library/windows/desktop/ms762271(v=vs.85).aspx. My requirement is to display only the <title> elements from the xml as nodes in the tree view. But with the following code it displays all the values as nodes: "catalog" as the root node, "book" as the parent node of each entry, and then author, title, genre, etc. as nodes. I want only the root node catalog and the titles as nodes, not even book. Can anybody guide me on what modification I need to make to the existing logic to display only titles as nodes?

        OpenFileDialog dlg = new OpenFileDialog();
        dlg.Title = "Open XML document";
        dlg.Filter = "XML Files (*.xml)|*.xml";
        dlg.FileName = Application.StartupPath + "\\..\\..\\Sample.xml";
        if (dlg.ShowDialog() == DialogResult.OK)
        {
            try
            {
                //Just a good practice -- change the cursor to a
                //wait cursor while the nodes populate
                this.Cursor = Cursors.WaitCursor;
                //First, we'll load the Xml document
                XmlDocument xDoc = new XmlDocument();
                xDoc.Load(dlg.FileName);
                //Now, clear out the treeview,
                //and add the first (root) node
                treeView1.Nodes.Clear();
                treeView1.Nodes.Add(new TreeNode(xDoc.DocumentElement.Name));
                TreeNode tNode = new TreeNode();
                tNode = (TreeNode)treeView1.Nodes[0];
                //We make a call to addTreeNode,
                //where we'll add all of our nodes
                addTreeNode(xDoc.DocumentElement, tNode);
                //Expand the treeview to show all nodes
                treeView1.ExpandAll();
            }
            catch (XmlException xExc) //Exception is thrown if there is an error in the Xml
            {
                MessageBox.Show(xExc.Message);
            }
            catch (Exception ex) //General exception
            {
                MessageBox.Show(ex.Message);
            }
            finally
            {
                this.Cursor = Cursors.Default; //Change the cursor back
            }
        }

        //This function is called recursively until all nodes are loaded
        private void addTreeNode(XmlNode xmlNode, TreeNode treeNode)
        {
            XmlNode xNode;
            TreeNode tNode;
            XmlNodeList xNodeList;
            if (xmlNode.HasChildNodes) //The current node has children
            {
                xNodeList = xmlNode.ChildNodes;
                for (int x = 0; x <= xNodeList.Count - 1; x++) //Loop through the child nodes
                {
                    xNode = xmlNode.ChildNodes[x];
                    treeNode.Nodes.Add(new TreeNode(xNode.Name));
                    tNode = treeNode.Nodes[x];
                    addTreeNode(xNode, tNode);
                }
            }
            else //No children, so add the outer xml (trimming off whitespace)
                treeNode.Text = xmlNode.OuterXml.Trim();
        }
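
    Since only the <title> elements are wanted, the recursive walk can be replaced by a single XPath query. A minimal sketch, assuming the books.xml structure from the linked MSDN page (root <catalog>, child <book> elements, each with a <title>):

        XmlDocument xDoc = new XmlDocument();
        xDoc.Load(dlg.FileName);

        treeView1.Nodes.Clear();
        TreeNode root = treeView1.Nodes.Add(xDoc.DocumentElement.Name); // "catalog"

        // Select every <title> element that sits under a <book> element
        foreach (XmlNode title in xDoc.SelectNodes("/catalog/book/title"))
        {
            root.Nodes.Add(new TreeNode(title.InnerText));
        }
        treeView1.ExpandAll();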

  • how to handle result set data

    - by ashwani66476
    Hello all. I am getting lakhs of records in my ResultSet. My concerns are: how does ResultSet handle these records internally, and how can a programmer handle those records in batches so that memory problems do not occur? Waiting for your answers. Many thanks.
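
    Most JDBC drivers do not hold the whole result in memory; they fetch rows from the server in chunks, and setFetchSize hints how many rows to buffer per round trip. A minimal sketch (the connection URL and table name are placeholders; behavior is driver-specific, e.g. MySQL's driver needs a fetch size of Integer.MIN_VALUE to stream):

        import java.sql.*;

        public class BatchRead {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection("jdbc:placeholder", "user", "pass");
                     Statement stmt = con.createStatement()) {
                    stmt.setFetchSize(1000); // hint: buffer roughly 1000 rows per round trip
                    try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                        while (rs.next()) {
                            // Process one row at a time; never materialize the full set
                            String value = rs.getString(1);
                        }
                    }
                }
            }
        }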

  • Silverlight Data Binding for Collection in Stack Panel

    - by Blake Blackwell
    I'm new to Silverlight, so I don't have a complete grasp of all the controls at my disposal. What I would like to do is use data binding and a view model to maintain a collection of items. Here is some mock code for what I'd like to do:

    Model:

        public class MyItem
        {
            public string DisplayText { get; set; }
            public bool Enabled { get; set; }
        }

    ViewModel:

        public class MyViewModel : INotifyPropertyChanged
        {
            private ObservableCollection<MyItem> _myItems = new ObservableCollection<MyItem>();
            public ObservableCollection<MyItem> MyItems
            {
                get { return _myItems; }
                set
                {
                    _myItems = value;
                    NotifyPropertyChanged(this, "MyItems");
                }
            }
        }

    View:

        <Grid x:Name="LayoutRoot" Background="White">
            <StackPanel ItemsSource="{Binding MyItems}">
                <StackPanel Orientation="Horizontal">
                    <CheckBox IsChecked="{Binding Enabled, Mode=TwoWay}"></CheckBox>
                    <TextBlock Text="{Binding DisplayText, Mode=TwoWay}" />
                </StackPanel>
            </StackPanel>
        </Grid>

    So my end goal would be that every time I add another MyItem to the MyItems collection, it would create a new StackPanel with a checkbox and textblock. I don't have to use a stack panel; I just thought I'd use that for this sample.
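
    As written, the view won't work: StackPanel has no ItemsSource property. The Silverlight control for generating one visual entry per bound item is an ItemsControl (or ListBox) with an ItemTemplate. A sketch under that assumption:

        <Grid x:Name="LayoutRoot" Background="White">
            <ItemsControl ItemsSource="{Binding MyItems}">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <StackPanel Orientation="Horizontal">
                            <CheckBox IsChecked="{Binding Enabled, Mode=TwoWay}" />
                            <TextBlock Text="{Binding DisplayText, Mode=TwoWay}" />
                        </StackPanel>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </Grid>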

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:

    1. create an EC2 machine image that has my application and associated data files installed
    2. boot up a large number (e.g. 100) of instances of this image
    3. split my input files up into 100 batches and send one batch to be processed on each instance

    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?
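
    One approach that avoids oversizing the root filesystem is to snapshot an EBS volume holding the 20GB file and fan out one volume per instance from that snapshot; each clone volume can then be attached to exactly one worker. A sketch using the modern boto3 API (the region, zone, and volume ID are hypothetical placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Snapshot the volume that already contains the 20GB data file
        snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                   Description="20GB shared data file")
        ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

        # Create one clone volume per worker instance
        for _ in range(100):
            ec2.create_volume(SnapshotId=snap["SnapshotId"],
                              AvailabilityZone="us-east-1a")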

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application to make it offline-accessible. I've done lots of development in server-side environments with databases, and we all know that database schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Anyone have any solutions?
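
    The Web SQL Database API versions each client-side database and exposes changeVersion for exactly this migration problem. A minimal sketch, assuming the (now-deprecated) openDatabase API; the table and column are illustrative:

        var db = openDatabase("myapp", "", "My app data", 2 * 1024 * 1024);

        // Upgrade clients still on schema 1.0 to 2.0
        if (db.version === "1.0") {
            db.changeVersion("1.0", "2.0", function (tx) {
                // Apply the schema delta inside the migration transaction
                tx.executeSql("ALTER TABLE notes ADD COLUMN updated_at TEXT");
            });
        }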

  • where should i encode this html data in an asp.net mvc site

    - by ooo
    Here is my view code:

        <%= Model.HtmlData %>

    Here is my controller code:

        public ActionResult GetPage()
        {
            ContentPageViewModel vm = new ContentPageViewModel();
            vm.HtmlData = _htmlPageRepository.Get("key");
            return View(vm);
        }

    My repository class basically queries a database table that has the fields id, pageName, and htmlContent. The .Get() method takes a pageName (or key) and returns the htmlContent value. Right now I have just started this (I haven't persisted anything to the db yet), so I am not doing any explicit encoding in my code. What is the best practice for where I need to do encoding (in the model, the controller, the view)?
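
    The usual guidance is to encode at the point of output, i.e. in the view, so the stored data stays raw and each rendering context applies its own escaping. A sketch, assuming the WebForms view engine from the question; Html.Encode escapes markup, while the raw form should be reserved for HTML you trust or have sanitized:

        <%-- Encoded output: safe for user-entered text --%>
        <%= Html.Encode(Model.HtmlData) %>

        <%-- Raw output: only for trusted or sanitized HTML --%>
        <%= Model.HtmlData %>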

  • How to display specific data from a file

    - by user1067332
    My program is supposed to ask the user for a first name, last name, and phone number until the user stops. To display a record, it asks for a first name and searches the text file for all entries with that first name, displaying the last name and phone number of each match.

        import java.util.*;
        import java.io.*;
        import java.util.Scanner;

        public class WritePhoneList
        {
            public static void main(String[] args) throws IOException
            {
                BufferedWriter output = new BufferedWriter(new FileWriter(new File(
                    "PhoneFile.txt"), true));
                String name, lname, age;
                int pos, choice;
                try
                {
                    do
                    {
                        Scanner input = new Scanner(System.in);
                        System.out.print("Enter First name, last name, and phone number ");
                        name = input.nextLine();
                        output.write(name);
                        output.newLine();
                        System.out.print("Would you like to add another? yes(1)/no(2)");
                        choice = input.nextInt();
                    } while (choice == 1);
                    output.close();
                }
                catch (Exception e)
                {
                    System.out.println("Message: " + e);
                }
            }
        }

    Here is the display code. When I search for a name, it finds a match but displays the last name and phone number of the same entry three times. I want it to display all of the possible matches for the first name.

        import java.util.*;
        import java.io.*;
        import java.util.Scanner;

        public class DisplaySelectedNumbers
        {
            public static void main(String[] args) throws IOException
            {
                String name;
                String strLine;
                try
                {
                    FileInputStream fstream = new FileInputStream("PhoneFile.txt");
                    // Get the object of DataInputStream
                    DataInputStream in = new DataInputStream(fstream);
                    BufferedReader br = new BufferedReader(new InputStreamReader(in));
                    Scanner input = new Scanner(System.in);
                    System.out.print("Enter a first name");
                    name = input.nextLine();
                    strLine = br.readLine();
                    String[] line = strLine.split(" ");
                    String part1 = line[0];
                    String part2 = line[1];
                    String part3 = line[2];
                    //Read File Line By Line
                    while ((strLine = br.readLine()) != null)
                    {
                        if (name.equals(part1))
                        {
                            // Print the content on the console
                            System.out.print("\n" + part2 + " " + part3);
                        }
                    }
                }
                catch (Exception e)
                { //Catch exception if any
                    System.out.println("Error: " + e.getMessage());
                }
            }
        }
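
    The bug is that the line is read and split only once, before the loop, so every iteration tests and prints the same parts. A sketch of the corrected loop, splitting each line as it is read (same file format as above):

        String strLine;
        // Read the file line by line, splitting each line into its own parts
        while ((strLine = br.readLine()) != null)
        {
            String[] line = strLine.split(" ");
            if (line.length >= 3 && name.equals(line[0]))
            {
                System.out.println(line[1] + " " + line[2]);
            }
        }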

  • Best way to store 3 pieces of data as a single entry

    - by Matt
    For a game server, I want to record details when a player makes a kill, store this, and then at intervals update an SQL database. The part I'm interested in right now is the best method of storing the kill information. What I'd like to pass to the SQL server on update would be {PlayerName, Kills, Deaths}, where the kills and deaths are sums for the period between updates. So I'm assuming I'd build a list along the lines of:

        {bob, 1, 0} {frank, 0, 1} {tom, 1, 0} {frank, 0, 1}

    then on update, consolidate the list to {frank, 14, 3}, etc. Can someone offer some advice please?
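
    Rather than appending raw entries and consolidating at flush time, the sums can be maintained as kills happen in a dictionary keyed by player name. A minimal sketch in C# (the question doesn't state a language, and all type and member names are illustrative):

        using System.Collections.Generic;

        class PlayerStats
        {
            public int Kills;
            public int Deaths;
        }

        class KillTracker
        {
            private readonly Dictionary<string, PlayerStats> _stats =
                new Dictionary<string, PlayerStats>();

            public void RecordKill(string killer, string victim)
            {
                GetStats(killer).Kills++;
                GetStats(victim).Deaths++;
            }

            private PlayerStats GetStats(string player)
            {
                PlayerStats s;
                if (!_stats.TryGetValue(player, out s))
                    _stats[player] = s = new PlayerStats();
                return s;
            }

            // On each update interval, flush the sums to SQL and start fresh
            public Dictionary<string, PlayerStats> Flush()
            {
                var snapshot = new Dictionary<string, PlayerStats>(_stats);
                _stats.Clear();
                return snapshot;
            }
        }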

  • Should a Trim method generally go in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database and may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted, so that it is done only once and the data is then in a consistent and reliable state. Unfortunately this is not the case here, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result; however, the trim will operate on all values assigned to my business objects' properties, not just the ones that come from the inconsistent database. I guess the somewhat philosophical question that may determine the answer is: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer stricter in the type of data that it will accept? I think this may be the more fundamental question.

    Update: Below are a few links that I thought I should share about the topic of data cleansing.

    - Information service patterns, Part 3: Data cleansing pattern
    - LINQ to SQL - Format a string before saving?
    - How to trim values using Linq to Sql?
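
    If the trimming does end up in the domain layer, the conventional place is the property setter, so every assignment is normalized regardless of its source. A minimal sketch (the business object and property are hypothetical):

        public class Customer
        {
            private string _phoneNumber;

            public string PhoneNumber
            {
                get { return _phoneNumber; }
                // Coerce on the way in: DB, UI, and imports all get the same rule
                set { _phoneNumber = value == null ? null : value.Trim(); }
            }
        }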

  • Java MapReduce read data

    - by Tatiana
    Hi, I have the following map-reduce code with which I am trying to read records from my database. Here is the code:

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.fs.*;
        import org.apache.hadoop.io.*;
        import org.apache.hadoop.mapred.*;
        import org.apache.hadoop.mapred.lib.db.DBConfiguration;
        import org.apache.hadoop.mapred.lib.db.DBInputFormat;
        import org.apache.hadoop.mapred.lib.db.DBWritable;
        import org.apache.hadoop.util.*;
        import org.apache.hadoop.conf.*;

        public class Connection extends Configured implements Tool
        {
            public int run(String[] args) throws IOException
            {
                JobConf conf = new JobConf(getConf(), Connection.class);
                conf.setInputFormat(DBInputFormat.class);
                DBConfiguration.configureDB(conf, "com.sun.java.util.jar.pack.Driver",
                    "jdbc:postgresql://localhost:5432/polyclinic", "postgres", "12345");
                String[] fields = { "name" };
                DBInputFormat.setInput(conf, MyRecord.class, "doctors", null, null, fields);
                conf.setMapOutputKeyClass(LongWritable.class);
                conf.setMapOutputValueClass(MyRecord.class);
                conf.setOutputKeyClass(LongWritable.class);
                conf.setOutputValueClass(TextOutputFormat.class);
                TextOutputFormat.setOutputPath(conf, new Path(args[0]));
                JobClient.runJob(conf);
                return 0;
            }

            public static void main(String[] args) throws Exception
            {
                int exitCode = ToolRunner.run(new Connection(), args);
                System.exit(exitCode);
            }
        }

    The Mapper class:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reporter;

        public class MyMapper extends MapReduceBase
            implements Mapper<LongWritable, MyRecord, Text, IntWritable>
        {
            public void map(LongWritable key, MyRecord val,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException
            {
                output.collect(new Text(val.name), new IntWritable(1));
            }
        }

    The Record class:

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.io.Writable;
        import org.apache.hadoop.mapred.lib.db.DBWritable;

        class MyRecord implements Writable, DBWritable
        {
            String name;

            public void readFields(DataInput in) throws IOException
            {
                this.name = Text.readString(in);
            }

            public void readFields(ResultSet resultSet) throws SQLException
            {
                this.name = resultSet.getString(1);
            }

            public void write(DataOutput out) throws IOException
            {
            }

            public void write(PreparedStatement stmt) throws SQLException
            {
            }
        }

    After this I got the error:

        WARN mapred.JobClient: No job jar file set. User classes may not be found.
        See JobConf(Class) or JobConf#setJar(String).

    Can you give me any suggestion how to solve this problem?
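
    That warning means Hadoop has no jar to ship to the task nodes, which typically happens when the job is launched from loose .class files instead of via hadoop jar. A sketch of the usual remedy, assuming the classes are packaged into a jar: point the JobConf at the jar containing the job classes. Separately, the driver class for a jdbc:postgresql URL is normally org.postgresql.Driver, not com.sun.java.util.jar.pack.Driver:

        JobConf conf = new JobConf(getConf(), Connection.class);
        // Ship the jar containing this class to the cluster so that
        // MyMapper and MyRecord can be found by the task JVMs.
        conf.setJarByClass(Connection.class);

        DBConfiguration.configureDB(conf, "org.postgresql.Driver",
            "jdbc:postgresql://localhost:5432/polyclinic", "postgres", "12345");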

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

    - [Key]
    - [Value1]
    - ...
    - [ValueN]
    - [StartDate]
    - [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

    - [ExpiryDate] ASC
    - [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore, for a given point in time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point in time. In essence: in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?
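
    For reference, the point-in-time snapshot is usually written as a validity-window predicate: for each key, keep the row whose [StartDate]/[ExpiryDate] window covers the chosen instant. A narrow index leading on [Key] and [StartDate] (the existing primary key columns, reversed) covers that filter, so the scan touches only the window columns rather than the full rows. A sketch in SQL Server syntax, with a hypothetical table name:

        DECLARE @AsOf datetime = '2010-06-01';

        SELECT [Key], [Value1] /* , ... */
        FROM   dbo.MyDimension
        WHERE  [StartDate] <= @AsOf
          AND  [ExpiryDate] >  @AsOf;  -- current rows carry '12/31/9999'

        -- Index matching that access path
        CREATE INDEX IX_MyDimension_PointInTime
            ON dbo.MyDimension ([Key], [StartDate], [ExpiryDate]);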

  • Unable to pass variable data

    - by JM4
    I am trying to use the following code to pass information to another site, but it keeps giving me an error.

    Invalid code:

        $a = $b->doIt("James", $_SESSION['JNum'], "Frank");

    Valid sample code:

        $a = $b->doIt("James", "123456", "Frank");

    In the first example, the page returns "number field is required". The second piece of sample code returns valid results. Due to the nature of this project, though, I need to pass the ID numbers as they are stored in SESSION variables. What am I missing?
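
    A frequent cause is that the session value is unset, or carries stray whitespace, at the time of the call. A short diagnostic sketch under that assumption (doIt and JNum as in the question; session_start() must run before $_SESSION is available):

        <?php
        session_start();

        // See what is actually in the session before passing it on
        var_dump($_SESSION['JNum']);

        // Trim and cast to a string in case the value was stored with whitespace
        $jnum = isset($_SESSION['JNum']) ? trim((string) $_SESSION['JNum']) : null;
        $a = $b->doIt("James", $jnum, "Frank");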

  • Hierarchical data table for JSF2 and RichFaces

    - by Iravanchi
    I'm using RichFaces in my JSF2 application, and I need a way to have something like a tree-column or tree-table. As far as I know, there's no support for such a thing in RichFaces. Something is mentioned for RichFaces 4.0, but the priority of this in their plan isn't promising at all, and I don't think it's going to be included in 4.0. I know there's a tree table available in IceFaces, but I'd rather not add another library, to avoid conflicts, the learning curve, and so on. So I'm looking for the simplest way to achieve the same result with minimum effort, maybe using sub-tables in RichFaces? Any ideas would be appreciated.

  • How to model my database when using entity framework 4?

    - by Junior Ewing
    Trying to wrap my head around the best approach to modelling a database when we are using Entity Framework 4 as the ORM layer. We are going to use ASP.NET MVC 2 for the application. Is it worth trying to model using the class diagram modeller that comes with Visual Studio 2010, where you graphically configure your models into the EDMX file and then generate out the database structure? I have run into a bunch of non-trivial issues, and for complex many-to-many mappings or multi-primary-key entities the answer is not that obvious, even after poking around for a while with the tools. I figure it's easy at this point to give up and start modelling the DB using real, working DB modelling tools, and then try to generate the EDMX from the database, rather than trying to do the model-first approach.

  • Hide labels in pie charts (MS Chart for .Net)

    - by grenade
    I can't seem to find the property that controls the visibility of labels in pie charts. I need to turn the labels off, as the information is available in the legend. Does anyone know what property I can use in code-behind? I tried setting the series labels to nothing:

        Chart1.Series[i].Label = string.Empty;

    but the labels seem to show up anyway.
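
    In the Microsoft Chart Controls, pie-slice labels are governed by the series' PieLabelStyle custom property rather than the Label property. A short sketch under that assumption:

        // Suppress the labels drawn on or around the pie slices;
        // the legend continues to identify each slice.
        Chart1.Series[i]["PieLabelStyle"] = "Disabled";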

  • Automatically create table on MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like:

        SELECT * FROM data_2010_1

    What I have been doing until now is, every time the script executes, it queries for the table; if the table exists, it does the work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.

    Update: Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions:

    1. If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten?
    2. What are the potential problems with the home-grown method I originally thought up?
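
    MySQL does have a cron equivalent: the Event Scheduler, available since MySQL 5.1. A sketch that builds the coming month's table from a template at midnight on the first; the template table name is illustrative, and the scheduler must be enabled. Since table names can't be parameterized directly, the DDL is built as a string inside a stored procedure:

        SET GLOBAL event_scheduler = ON;

        DELIMITER //
        CREATE PROCEDURE create_month_table()
        BEGIN
            -- Build CREATE TABLE ... LIKE with the current year and month
            SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS data_',
                              YEAR(CURDATE()), '_', MONTH(CURDATE()),
                              ' LIKE data_template');
            PREPARE stmt FROM @ddl;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END //

        CREATE EVENT monthly_table_event
        ON SCHEDULE EVERY 1 MONTH STARTS '2010-07-01 00:00:00'
        DO CALL create_month_table() //
        DELIMITER ;

    That said, for the updated questions, a single table partitioned by RANGE on the date column is usually a cleaner answer than one table per month.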
