Search Results

Search found 59256 results on 2371 pages for 'data compression'.


  • Row selection based on subtable data in MySQL

    - by Felthragar
    I've been struggling with this selection for a while now. I've searched around Stack Overflow and tried a bunch of things, but nothing addresses my particular issue; maybe I'm just missing something obvious. I have two tables: Measurements and MeasurementFlags. "Measurements" contains measurements from a device, and each measurement can have properties/attributes attached to it (commonly known as "flags") to signify that the measurement in question is special in some way or another (for instance, one flag may mark a test or calibration measurement). Note: one record per flag! So a record from the "Measurements" table can theoretically have an unlimited number of MeasurementFlags attached to it, or it can have none. Now, I need to select records from "Measurements" that have an attached "MeasurementFlag" valued "X" but do NOT have a flag valued "Y" attached. We're talking about a fairly large database with hundreds of millions of rows, which is why I'm trying to keep all of this logic within one query. Splitting it up would create too many queries; however, if it's not possible in one query, I guess I don't have a choice. Thanks in advance.
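
    One common shape for this kind of "has flag X but not flag Y" selection is an inner join for the required flag plus an anti-join for the excluded one, kept in a single statement. This is only a sketch: the column names (m.id, f.measurement_id, f.value) are assumptions, so adjust them to the real schema:

        <?php
        $sql = "SELECT m.*
                  FROM Measurements m
                  JOIN MeasurementFlags fx
                    ON fx.measurement_id = m.id AND fx.value = 'X'
                  LEFT JOIN MeasurementFlags fy
                    ON fy.measurement_id = m.id AND fy.value = 'Y'
                 WHERE fy.measurement_id IS NULL";

        $result = mysql_query($sql) or die(mysql_error());
        while ($row = mysql_fetch_assoc($result)) {
            // each $row is a measurement that has an 'X' flag and no 'Y' flag
        }

    At this table size, a composite index on MeasurementFlags covering (measurement_id, value) - or (value, measurement_id), depending on selectivity - will matter at least as much as the exact join form.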

  • Parsing XML data with Namespaces in PHP

    - by osbmedia
    I'm trying to work with an XML feed that uses namespaces, and I'm not able to get past the colon in the tags. Here's what the XML feed looks like:

        <r25:events pubdate="2010-05-19T13:58:08-04:00">
          <r25:event xl:href="event.xml?event_id=328" id="BRJDMzI4" crc="00000022" status="est">
            <r25:event_id>328</r25:event_id>
            <r25:event_name>Testing 09/2005-08/2006</r25:event_name>
            <r25:alien_uid/>
            <r25:event_priority>0</r25:event_priority>
            <r25:event_type_id xl:href="evtype.xml?type_id=105">105</r25:event_type_id>
            <r25:event_type_name>CABINET</r25:event_type_name>
            <r25:node_type>C</r25:node_type>
            <r25:node_type_name>cabinet</r25:node_type_name>
            <r25:state>1</r25:state>
            <r25:state_name>Tentative</r25:state_name>
            <r25:event_locator>2005-AAAAMQ</r25:event_locator>
            <r25:event_title/>
            <r25:favorite>F</r25:favorite>
            <r25:organization_id/>
            <r25:organization_name/>
            <r25:parent_id/>
            <r25:cabinet_id xl:href="event.xml?event_id=328">328</r25:cabinet_id>
            <r25:cabinet_name>cabinet 09/2005-08/2006</r25:cabinet_name>
            <r25:start_date>2005-09-01</r25:start_date>
            <r25:end_date>2006-08-31</r25:end_date>
            <r25:registration_url/>
            <r25:last_mod_dt>2008-02-27T14:22:43-05:00</r25:last_mod_dt>
            <r25:last_mod_user>abc00296004</r25:last_mod_user>
          </r25:event>
        </r25:events>

    And here is the code I'm using - I'm trying to throw the values into a bunch of arrays so I can format the output however I want:

        <?php
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://somedomain.com/blah.xml");
        curl_setopt($ch, CURLOPT_HTTPHEADER, Array("Content-Type: text/xml"));
        curl_setopt($ch, CURLOPT_USERPWD, "username:password");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);

        $xml = new SimpleXmlElement($output);
        foreach ($xml->events->event as $entry) {
            $dc = $entry->children('http://www.collegenet.com/r25');
            echo $entry->event_name . "<br />";
            echo $entry->event_id . "<br /><br />";
        }
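
    SimpleXML only exposes namespace-qualified elements through children() called with the namespace URI at each level; the sketch below assumes the r25 prefix maps to http://www.collegenet.com/r25 (the URI already used in the code above) and that r25:events is the root element of the document:

        <?php
        // Sketch, not a drop-in fix: $output is the feed body fetched with cURL above.
        $ns = 'http://www.collegenet.com/r25';
        $xml = new SimpleXMLElement($output);

        // $xml itself is the <r25:events> root, so its r25:event children are
        // reached via children($ns) rather than via ->events->event.
        foreach ($xml->children($ns)->event as $event) {
            $fields = $event->children($ns);        // the r25:* child elements
            echo $fields->event_name . "<br />";
            echo $fields->event_id . "<br /><br />";
        }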

  • Unable to pass variable data

    - by JM4
    I am trying to use the following code to pass information to another site, but it keeps giving me an error.

    Invalid code:

        $a = $b->doIt("James", $_SESSION['JNum'], "Frank");

    Valid sample code:

        $a = $b->doIt("James", "123456", "Frank");

    In the first example, the page returns "number field is required". The second piece of sample code returns valid results. Due to the nature of this project, though, I need to pass the ID number from where it is stored in a SESSION variable. What am I missing?
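
    A quick way to narrow this down is to inspect exactly what the session holds at the point of the call; the sketch below is diagnostic only ($b and doIt() are the objects from the question, and the trim/cast is just one guess at why a stored value might fail a "number" check that a literal passes):

        <?php
        session_start();  // must run before $_SESSION is readable on this page

        // A missing key, an array, or stray whitespace would all explain the
        // "number field is required" response.
        var_dump($_SESSION['JNum']);

        $jnum = isset($_SESSION['JNum']) ? trim((string) $_SESSION['JNum']) : '';
        $a = $b->doIt("James", $jnum, "Frank");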

  • problem during data modification

    - by nectar
    Here is my code:

        if ($pin == '105') {
            $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`)
                        VALUES ('$child1', '1', '$newid', '$myroot', '0');";
            mysql_query($sqltree);
            update_level($newid);
        }

        function update_level()
        {
            // for 1st level
            $newid = $_SESSION['newid'];

            // get the senior's level1 and increase it by 1
            $sqlgetlevel = "SELECT superId,level1 FROM tbltree WHERE userID='$newid'";
            echo "<br>test:" . $sqlgetlevel;
            $result = mysql_query($sqlgetlevel, $link) or die(mysql_error()); // line 340
            $row = mysql_fetch_array($result, MYSQL_ASSOC);
            $level1 = $row["level1"];
            $level1 = $level1 + 1;

            // update the increased level
            $sqlupdate = "UPDATE tbltree SET level1='$level1' WHERE userId='$newid';";
            mysql_query($sqlupdate, $link) or die(mysql_error());

            // change superId for new level
            $superid = $row["superId"];
        }

    And the error:

        test:SELECT superId,level1 FROM tbltree WHERE userID='29277640'
        Warning: mysql_query() expects parameter 2 to be resource, null given in C:\xampp\htdocs\303\levelupdate.php on line 340
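
    The warning points at the second argument of mysql_query(): $link is created outside update_level(), so inside the function it is null. Two common fixes, sketched on the assumption that $link is the resource returned by mysql_connect() elsewhere in the script:

        <?php
        // Option 1: pass the connection into the function explicitly.
        function update_level($newid, $link)
        {
            $sql = "SELECT superId, level1 FROM tbltree WHERE userId='$newid'";
            $result = mysql_query($sql, $link) or die(mysql_error());
            return mysql_fetch_array($result, MYSQL_ASSOC);
        }

        // Option 2: pull the connection from the global scope inside the function.
        function update_level_global($newid)
        {
            global $link;
            $sql = "SELECT superId, level1 FROM tbltree WHERE userId='$newid'";
            $result = mysql_query($sql, $link) or die(mysql_error());
            return mysql_fetch_array($result, MYSQL_ASSOC);
        }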

  • Find order of data inside an array

    - by user271619
    I have a simple array of stuff:

        $array = array("apples", "oranges", "strawberries");

    I am trying to find the order of the items inside the array (sometimes the order changes, and so do the items). I'm expecting to get something like this: "apples" = 0, "oranges" = 1, "strawberries" = 2. The end result has to do with database sorting - something like this, inside a foreach loop:

        UPDATE tbl SET sortorder = $neworder WHERE fruit = '$fruitname'

    The $neworder variable would be populated with the item's position in the array, while $fruitname comes from the item itself.
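
    Since the keys of a plain PHP array are already the positions 0, 1, 2, ..., a foreach that reads both key and value gives the order directly; a sketch using the table and column names from the question (the mysql_query() call is commented out because the connection setup isn't shown):

        <?php
        $array = array("apples", "oranges", "strawberries");

        foreach ($array as $neworder => $fruitname) {
            $sql = "UPDATE tbl SET sortorder = $neworder WHERE fruit = '$fruitname'";
            // mysql_query($sql);  // run against the real connection
            echo $sql . "\n";
        }

        // For a single item, array_search() returns its position (or false if missing).
        $position = array_search("oranges", $array);  // 1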

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application to make it offline-accessible. I've done lots of development in server-side environments with databases, and we all know that schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Does anyone have any solutions?

  • How to model my database when using entity framework 4?

    - by Junior Ewing
    I'm trying to wrap my head around the best approach to modelling a database when we are using Entity Framework 4 as the ORM layer. We are going to use ASP.NET MVC 2 for the application. Is it worth trying to model using the class diagram modeller that comes with Visual Studio 2010, where you graphically configure your models in the EDMX file and then generate the database structure from it? I have run into a bunch of non-trivial issues, and for complex many-to-many mappings or multi-primary-key entities the answer is not obvious even after poking around the tools for a while. I figure it's easy at this point to give up, model the DB using real, working DB modelling tools, and then generate the EDMX from the database, rather than trying the model-first approach.

  • How can I create XSD files from M documents?

    - by Andrew Matthews
    Does anyone know of a nice way to:

    1. produce XSD documents from an Oslo model, and
    2. consume conformant XML documents using that model and add them directly into the DB created from the model?

    I can't see any obvious way from the current documentation, but I'm a newcomer, so I may have missed something. Thanks.

  • IIS 7 Can't read data from ASP.Net application services tables

    - by vikp
    Hi, I'm working on deploying a web application written in C# that uses the ASP.NET application services databases. The application runs fine on the development machine. A Windows Server 2003 machine has been built to test the application. The database has been scripted across using the MS SQL Server GUI, and the ASP.NET application services tables were created using a utility. The connection strings are stored in web.config and connectionStrings.config. The application connects to the database successfully, but then it times out after 10 seconds.

  • LINQ Data Context Not Showing Methods

    - by CccTrash
    For some reason my DataContext is not showing the normal methods like SubmitChanges() etc. in IntelliSense. It also won't compile if I type db.SubmitChanges(); - any idea what I'm doing wrong? Normally I don't have this issue; I have several other projects that work fine... Image of what I'm talking about:

  • Hierarchical data table for JSF2 and RichFaces

    - by Iravanchi
    I'm using RichFaces in my JSF2 application, and I need something like a tree column or tree table. As far as I know, there's no support for such a thing in RichFaces. Something is mentioned for RichFaces 4.0, but its priority in their plan isn't promising at all, and I don't think it's going to make it into 4.0. I know there's a tree table available in IceFaces, but I'd rather not add another library, to avoid conflicts, the learning curve, and so on. So I'm looking for the simplest way to achieve the same result with minimum effort - maybe using sub-tables in RichFaces? Any ideas would be appreciated.

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table with the following columns:

    * [Key]
    * [Value1]
    * ...
    * [ValueN]
    * [StartDate]
    * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system, so our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign the new record an [ExpiryDate] of some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] equal to the [StartDate] of the new value - a simple update based on a join. So if we always wanted the most recent records for a given [Key], we know we could create a clustered index on:

    * [ExpiryDate] ASC
    * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering on [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want a point-in-time snapshot of all [Key]s at a given time? The entirety of the keyspace isn't all being updated at the same time, so for a given point in time the window between [StartDate] and [ExpiryDate] is variable, and ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records whose [StartDate] is greater than your defined point in time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads needed to retrieve the values for all keys as of a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], but this certainly isn't ideal. Alternatively, is there a different type of slowly changing dimension that solves this problem in a more performant manner?

  • Tweak Data in Doctrine Migration (say, by running some arbitrary queries)

    - by timdev
    I've got a migration that creates a couple of columns that act as foreign keys. In particular, I'm adding a creator_id and owner_id column to a model. These foreign keys indicate who created, and who currently owns, a particular kind of domain object. Users are managed by sfDoctrineGuardPlugin. What I'd like to do, for the purpose of the migration, is look up the (active) user with the lowest id, and default creator_id/owner_id to that. I don't see any particularly obvious/proper way to run arbitrary operations on the database during a migration. Does anyone know how?
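
    One approach that fits this setup is the migration's post-hook: a sketch assuming Doctrine 1.x (as bundled with symfony), where a migration class can define postUp() and raw SQL can be run through the current connection. The class name, the my_domain_object table, and the creator_id/owner_id columns are illustrative stand-ins for the real ones:

        <?php
        class AddCreatorAndOwnerDefaults extends Doctrine_Migration_Base
        {
            public function up()
            {
                // Schema change handled here (or in an earlier migration).
                $this->addColumn('my_domain_object', 'creator_id', 'integer');
                $this->addColumn('my_domain_object', 'owner_id', 'integer');
            }

            public function postUp()
            {
                // Runs after up(), so the new columns exist when the data is backfilled.
                $conn = Doctrine_Manager::getInstance()->getCurrentConnection();

                // Lowest-id active guard user (default sf_guard_user table).
                $userId = $conn->fetchOne(
                    'SELECT id FROM sf_guard_user WHERE is_active = 1 ORDER BY id ASC LIMIT 1'
                );

                if ($userId) {
                    $conn->execute(
                        'UPDATE my_domain_object SET creator_id = ?, owner_id = ?',
                        array($userId, $userId)
                    );
                }
            }
        }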

  • How to display specific data from a file

    - by user1067332
    My program is supposed to ask the user for a first name, last name, and phone number until the user stops. Then, to display the data, it asks for a first name and searches the text file for all entries with that first name, displaying the last name and phone number of each match.

        import java.util.*;
        import java.io.*;
        import java.util.Scanner;

        public class WritePhoneList {
            public static void main(String[] args) throws IOException {
                BufferedWriter output = new BufferedWriter(new FileWriter(new File("PhoneFile.txt"), true));
                String name, lname, age;
                int pos, choice;
                try {
                    do {
                        Scanner input = new Scanner(System.in);
                        System.out.print("Enter First name, last name, and phone number ");
                        name = input.nextLine();
                        output.write(name);
                        output.newLine();
                        System.out.print("Would you like to add another? yes(1)/no(2)");
                        choice = input.nextInt();
                    } while (choice == 1);
                    output.close();
                } catch (Exception e) {
                    System.out.println("Message: " + e);
                }
            }
        }

    Here is the display code. When I search for a name, it finds a match but displays the last name and phone number of the same entry three times; I want it to display all of the possible matches for that first name.

        import java.util.*;
        import java.io.*;
        import java.util.Scanner;

        public class DisplaySelectedNumbers {
            public static void main(String[] args) throws IOException {
                String name;
                String strLine;
                try {
                    FileInputStream fstream = new FileInputStream("PhoneFile.txt");
                    // Get the object of DataInputStream
                    DataInputStream in = new DataInputStream(fstream);
                    BufferedReader br = new BufferedReader(new InputStreamReader(in));
                    Scanner input = new Scanner(System.in);
                    System.out.print("Enter a first name");
                    name = input.nextLine();
                    strLine = br.readLine();
                    String[] line = strLine.split(" ");
                    String part1 = line[0];
                    String part2 = line[1];
                    String part3 = line[2];
                    // Read File Line By Line
                    while ((strLine = br.readLine()) != null) {
                        if (name.equals(part1)) {
                            // Print the content on the console
                            System.out.print("\n" + part2 + " " + part3);
                        }
                    }
                } catch (Exception e) { // Catch exception if any
                    System.out.println("Error: " + e.getMessage());
                }
            }
        }

  • How to direct people to fill out a form

    - by Solmead
    What is the best way to get people to fill out a form correctly? For instance, I originally had a "Name" field on a form, and I want one person per form. People filled it out like this: "Mark & Becky Newsman". So I broke it into two fields, "First Name" and "Last Name", and people are still filling it out wrong, like "First Name" = "Mark & Becky", "Last Name" = "Newsman". Are there any recommendations on how to better get people to understand that only one person's name should go there? I currently have the fields on two lines; would it work better to put both on one line? If this is on the wrong site, go ahead and move it to the correct site.

  • How to convert a table column to another data type

    - by holden
    I have a varchar column in my Postgres database which I meant to be integers... and now I want to change the type. Unfortunately this doesn't seem to work using my Rails migration:

        change_column :table1, :columnB, :integer

    So I tried doing this:

        execute 'ALTER TABLE "table1" ALTER COLUMN "columnB" TYPE integer USING CAST(columnB AS INTEGER)'

    but the cast doesn't work in this instance because some of the values in the column are null... any ideas?

  • MVC Entity Model not showing my table

    - by Jessica
    I have a database with multiple tables and some basic relationships. Here is an example of the problem I am having. My database:

        Org:         ID, Name, etc.
        Detail1:     ID, D1name
        Org_Detail1: Org_ID, Detail1_ID
        Detail2:     ID, D2Name
        Org_Detail2: Org_ID, Detial1_ID, BooleanField

    My problem is that the Org_Detail1 table is not showing up in the entity model, but the Org_Detail2 table does. I thought it may be because the Org_Detail1 table only contains two ID fields that are both primary keys, while the Org_Detail2 table contains two primary-key ID fields as well as a boolean field. If I add a dummy field to Org_Detail1 and update the model, it still won't show up and won't allow me to add a new entity relating to the Org_Detail1 table. The table won't even show up in the list, but it is listed under the tables. Is there any solution to get this table to appear in my model?

  • Java MapReduce read data

    - by Tatiana
    Hi, I have the following map-reduce code, with which I am trying to read records from my database:

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.fs.*;
        import org.apache.hadoop.io.*;
        import org.apache.hadoop.mapred.*;
        import org.apache.hadoop.mapred.lib.db.DBConfiguration;
        import org.apache.hadoop.mapred.lib.db.DBInputFormat;
        import org.apache.hadoop.mapred.lib.db.DBWritable;
        import org.apache.hadoop.util.*;
        import org.apache.hadoop.conf.*;

        public class Connection extends Configured implements Tool {
            public int run(String[] args) throws IOException {
                JobConf conf = new JobConf(getConf(), Connection.class);
                conf.setInputFormat(DBInputFormat.class);
                DBConfiguration.configureDB(conf, "com.sun.java.util.jar.pack.Driver",
                        "jdbc:postgresql://localhost:5432/polyclinic", "postgres", "12345");
                String[] fields = { "name" };
                DBInputFormat.setInput(conf, MyRecord.class, "doctors", null, null, fields);
                conf.setMapOutputKeyClass(LongWritable.class);
                conf.setMapOutputValueClass(MyRecord.class);
                conf.setOutputKeyClass(LongWritable.class);
                conf.setOutputValueClass(TextOutputFormat.class);
                TextOutputFormat.setOutputPath(conf, new Path(args[0]));
                JobClient.runJob(conf);
                return 0;
            }

            public static void main(String[] args) throws Exception {
                int exitCode = ToolRunner.run(new Connection(), args);
                System.exit(exitCode);
            }
        }

    The Mapper class:

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reporter;

        public class MyMapper extends MapReduceBase implements Mapper<LongWritable, MyRecord, Text, IntWritable> {
            public void map(LongWritable key, MyRecord val, OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
                output.collect(new Text(val.name), new IntWritable(1));
            }
        }

    The record class:

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.io.Writable;
        import org.apache.hadoop.mapred.lib.db.DBWritable;

        class MyRecord implements Writable, DBWritable {
            String name;

            public void readFields(DataInput in) throws IOException {
                this.name = Text.readString(in);
            }

            public void readFields(ResultSet resultSet) throws SQLException {
                this.name = resultSet.getString(1);
            }

            public void write(DataOutput out) throws IOException {
            }

            public void write(PreparedStatement stmt) throws SQLException {
            }
        }

    After this I got the error:

        WARN mapred.JobClient: No job jar file set. User classes may not be found.
        See JobConf(Class) or JobConf#setJar(String).

    Can you give me any suggestions on how to solve this problem?

  • DMX Analysis Services question

    - by user282382
    Hi, I have two mining models, both time series. One is [Company_Inputs] and the other is [Booking_Projections]. What I want to do is use EXTEND_MODEL_CASES to join in the results of [Company_Inputs] as the extended cases. So basically something like:

        Select Flattened
            PredictTimeSeries([Bookings], 1, 6, EXTEND_MODEL_CASES)
        FROM [Booking_Projections]
        Natural Prediction Join
            (Select Flattened PredictTimeSeries([Metric1], 1, 6) From [Company_Inputs]) AS T

    This code of course doesn't work, but the idea is to use the predictions made from [Company_Inputs] as cases for predicting future values of [Booking_Projections]. If anyone has an idea of how I can accomplish this, I would appreciate it very much.

  • algorithm for checking addresses for matches?

    - by user151841
    I'm working on a survey program where people will be given promotional considerations the first time they fill out a survey. In a lot of scenarios, the only way we can stop people from cheating the system and getting a promotion they don't deserve is to check street address strings against each other. I was looking at using Levenshtein distance to give me a number that measures similarity, and considering pairs below a certain threshold duplicates. However, if someone were looking to game the system, they could easily write "S 5th St" instead of "South Fifth Street", and Levenshtein would consider those strings very different. So then I was thinking of converting all strings to a 'standard address form', i.e. 'South' becomes 's', 'Fifth' becomes '5th', etc. Then I started thinking this is hopeless, and too much effort to get working robustly. Is it? I'm working with PHP/MySQL, so I have the limitations inherent in that system.
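
    Normalizing both strings into a rough canonical form before computing the distance is usually less hopeless than it looks; the sketch below is illustrative only - the abbreviation map is nowhere near complete, and real address normalization has many more cases:

        <?php
        // Map long forms onto the short forms used by the canonical representation.
        function normalize_address($address)
        {
            $map = array(
                'south' => 's', 'north' => 'n', 'east' => 'e', 'west' => 'w',
                'street' => 'st', 'avenue' => 'ave', 'road' => 'rd',
                'first' => '1st', 'second' => '2nd', 'third' => '3rd',
                'fourth' => '4th', 'fifth' => '5th',
            );

            $address = strtolower(trim($address));
            $address = preg_replace('/[^a-z0-9 ]/', '', $address);  // drop punctuation
            $words = preg_split('/\s+/', $address);

            foreach ($words as $i => $word) {
                if (isset($map[$word])) {
                    $words[$i] = $map[$word];
                }
            }
            return implode(' ', $words);
        }

        $a = normalize_address('South Fifth Street');   // "s 5th st"
        $b = normalize_address('S 5th St');             // "s 5th st"
        $distance = levenshtein($a, $b);                // 0, so they compare as duplicates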

  • Dynamic Hierarchical Javascript Object Loop

    - by user1684586
    var treeData = {
        "name" : "A",
        "children" : [
            { "name" : "B", "children" : [
                { "name" : "C", "children" : [] }
            ]}
        ]
    };

    The array should be empty before the call and populated afterwards, depending on the number of nodes needed, which is defined by a dynamic value that is passed in. I would like to build the hierarchy dynamically, with each node created as a layer/level in the hierarchy having its own array of nodes, so that the result forms a tree structure. This hierarchy structure is described in the code above; it has three levels simply to demonstrate the layout of the hierarchy of values. There should be a root node, and an undefined number of nodes and levels making up the hierarchy. Nothing should be fixed besides the root node. I do not need to read the hierarchy, I need to construct it. The array should start as {"name" : "A", "children" : []}, and every new node/level would be created as {"name" : "A", "children" : [HERE-{"name" : "A", "children" : []}]}, in the child array, going deeper and deeper. Basically, the array should have no values before the call, except maybe the root node. After the function call, the array should comprise the required nodes, whose number may vary with every call. Every child array will contain one or more node values. There should be a minimum of two node levels, including the root. It should initially be a blank canvas, that is, no predefined array values.
