Search Results

Search found 6441 results on 258 pages for 'schema compare'.

Page 225 of 258

  • How do I add an object to a binary tree based on the value of a member variable?

    - by Max
    How can I get a specific value from an object? For example, given:

        ListOfPpl newListOfPpl = new ListOfPpl(id, name, age);
        Object item = newListOfPpl;

    how can I read the name back out of item? Even if this seems easy, can anyone help me?

    Edited: I am building a binary tree whose nodes hold ListOfPpl instances, and I need to keep it sorted lexicographically. Here's my code for insertion into the tree. Any clue what is wrong?

        public void insert(Object item) {
            Node current = root;
            Node follow = null;
            if (!isEmpty()) {
                root = new Node(item, null, null);
                return;
            }
            boolean left = false, right = false;
            while (current != null) {
                follow = current;
                left = false;
                right = false;
                // I need to compare and sort it
                if (item.compareTo(current.getFighter()) < 0) {
                    current = current.getLeft();
                    left = true;
                } else {
                    current = current.getRight();
                    right = true;
                }
            }
            if (left)
                follow.setLeft(new Node(item, null, null));
            else
                follow.setRight(new Node(item, null, null));
        }
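    Two likely problems stand out: the emptiness check is inverted (root is overwritten whenever the tree is *not* empty), and compareTo is not available on Object, so the payload needs a concrete comparable type. A minimal corrected sketch, assuming a Person class implementing Comparable<Person> (by name) and Node accessors named getItem/getLeft/getRight — both assumptions, adjust to the real classes:

        public void insert(Person item) {
            if (isEmpty()) {                 // was inverted: only seed root when empty
                root = new Node(item, null, null);
                return;
            }
            Node current = root, follow = null;
            boolean left = false;
            while (current != null) {
                follow = current;
                // compareTo works because Person implements Comparable<Person>,
                // comparing names for lexicographic order
                left = item.compareTo(follow.getItem()) < 0;
                current = left ? follow.getLeft() : follow.getRight();
            }
            if (left) follow.setLeft(new Node(item, null, null));
            else      follow.setRight(new Node(item, null, null));
        }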

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that fetches a user's records — those where the user is the src or a dst. The src is stored directly in usermuralentry; the dst list is in usermuralentry_user. So an entry can have one src and many dst users. I have these tables:

        mysql> desc usermuralentry;
        +-------------+------------+------+-----+---------+----------------+
        | Field       | Type       | Null | Key | Default | Extra          |
        +-------------+------------+------+-----+---------+----------------+
        | id          | int(11)    | NO   | PRI | NULL    | auto_increment |
        | user_src_id | int(11)    | NO   | MUL | NULL    |                |
        | private     | tinyint(1) | NO   |     | NULL    |                |
        | content     | longtext   | NO   |     | NULL    |                |
        | date        | datetime   | NO   |     | NULL    |                |
        | last_update | datetime   | NO   |     | NULL    |                |
        +-------------+------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT * FROM usermuralentry AS a, usermuralentry_user AS b
            -> WHERE a.user_src_id IN (1, 2)
            ->    OR (a.id = b.usermuralentry_id AND b.userinfo_id IN (1, 2));

        row 1: table b   select_type: SIMPLE   type: ALL   key: NULL   rows: 147188
               possible_keys: usermuralentry_id, usermuralentry_user_bcd7114e,
                              usermuralentry_user_6b192ca7

        row 2: table a   select_type: SIMPLE   type: ALL   key: NULL   rows: 1371289
               possible_keys: PRIMARY
               Extra: Range checked for each record (index map: 0x1)

    But it is taking A LOT of time... Any tips to optimize? Could the table schema be better for my application?
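    The plan shows type ALL on both tables: the OR mixes a filter on a with the join condition to b, so MySQL cannot commit to an index for either. A commonly suggested rewrite — a sketch, assuming only the entry columns a.* are actually needed — splits the query so each branch is index-friendly:

        SELECT a.* FROM usermuralentry AS a
        WHERE a.user_src_id IN (1, 2)
        UNION
        SELECT a.* FROM usermuralentry AS a
        JOIN usermuralentry_user AS b ON b.usermuralentry_id = a.id
        WHERE b.userinfo_id IN (1, 2);

    The first branch can use the index on user_src_id, the second the indexes on b, and UNION removes duplicates where both conditions hold.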

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this:

        create table Parent (
            parent_id varchar2(20) not null primary key
        );

        create table Child (
            child_id number not null primary key,
            parent_id varchar2(20) not null,
            constraint fk_parent_id foreign key (parent_id)
                references Parent (parent_id)
        );

    This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records:

        Parent:
        parent_id
        ---------
        ABC123

        Child:
        child_id  parent_id
        --------  ---------
        1         ABC123
        2         ABC123

    And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then re-enabling the constraint. Is there a better, single-step way to update this content?
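    One commonly suggested alternative — a sketch, not tested against this schema — is to recreate the constraint as deferrable, so both updates happen in one transaction and the check is postponed to commit:

        -- one-time schema change: recreate the FK as deferrable
        alter table Child drop constraint fk_parent_id;
        alter table Child add constraint fk_parent_id
            foreign key (parent_id) references Parent (parent_id)
            deferrable initially deferred;

        -- then, per rename, one atomic transaction:
        update Parent set parent_id = 'XYZ789' where parent_id = 'ABC123';
        update Child  set parent_id = 'XYZ789' where parent_id = 'ABC123';
        commit;  -- the constraint is checked here

    Unlike disabling the constraint, this never leaves a window in which other sessions can insert orphaned rows.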

    Read the article

  • Convert C# (with typed events) to VB.NET

    - by Steven
    I have an ASPX page (with VB codebehind). I would like to extend the GridView class to show the header/footer when no rows are returned. I found a C# example online (link) (source); however, I cannot convert it to VB because it uses typed events (which are not legal in VB), and several free C#-to-VB.NET converters have failed on it. Please convert the example to VB.NET or suggest an alternate way of extending the GridView class.

    Notes / difficulties:

    If you get an error with DataView objects, specify the type as System.Data.DataView; the type comparison could be the following:

        If data.[GetType]() Is GetType(System.Data.DataView) Then

    Since the event MustAddARow cannot have a type in VB (and RaiseEvent doesn't have a return value), how can I compare it to Nothing in the function OnMustAddARow()?

    EDIT: The following is a sample with (hopefully) relevant code to help answer the question.

        namespace AlwaysShowHeaderFooter
        {
            public delegate IEnumerable MustAddARowHandler(IEnumerable data);

            public class GridViewAlwaysShow : GridView
            {
                //////////////////////////////////////
                // Various member functions omitted //
                //////////////////////////////////////

                protected IEnumerable OnMustAddARow(IEnumerable data)
                {
                    if (MustAddARow == null)
                    {
                        throw new NullReferenceException("The datasource has no rows. You must handle the \"MustAddARow\" Event.");
                    }
                    return MustAddARow(data);
                }

                public event MustAddARowHandler MustAddARow;
            }
        }
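    A hedged sketch of one possible translation (untested): for every event, VB generates a hidden delegate field named <EventName>Event, which can be tested against Nothing and invoked directly, sidestepping RaiseEvent's lack of a return value:

        Imports System.Collections
        Imports System.Web.UI.WebControls

        Public Delegate Function MustAddARowHandler(ByVal data As IEnumerable) As IEnumerable

        Public Class GridViewAlwaysShow
            Inherits GridView

            Public Event MustAddARow As MustAddARowHandler

            Protected Function OnMustAddARow(ByVal data As IEnumerable) As IEnumerable
                ' VB exposes the underlying delegate as MustAddARowEvent
                If MustAddARowEvent Is Nothing Then
                    Throw New NullReferenceException( _
                        "The datasource has no rows. You must handle the MustAddARow event.")
                End If
                Return MustAddARowEvent(data)
            End Function
        End Class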

    Read the article

  • MySQL Join/Comparison on a DATETIME column (<5.6.4 and > 5.6.4)

    - by Simon
    Suppose I have two tables like so:

        Events:   ID (PK int autoinc), Time (datetime), Caption (varchar)
        Position: ID (PK int autoinc), Time (datetime), Easting (float), Northing (float)

    Is it safe to, for example, list all the events and their position if I am using the Time field as my join criterion? I.e.:

        SELECT E.*, P.* FROM Events E JOIN Position P ON E.Time = P.Time

    Or even just comparing a datetime value directly (bearing in mind the parameterized value may contain a fractional-seconds part, which MySQL has always accepted):

        SELECT E.* FROM Events E WHERE E.Time = @Time

    I understand MySQL (before version 5.6.4) stores datetime fields WITHOUT milliseconds, so I would assume this query works. However, as of version 5.6.4, MySQL can store fractional seconds in datetime fields. Assuming datetime values are inserted using functions such as NOW(), the milliseconds are truncated (< 5.6.4), which I would assume allows the above query to work; with 5.6.4 and later this could potentially NOT work. I am, and only ever will be, interested in second accuracy. If anyone could answer the following, it would be greatly appreciated:

    1. In general, how does MySQL compare datetime fields against one another (consider the above query)?
    2. Is the above query fine, and does it make use of indexes on the Time fields? (MySQL < 5.6.4)
    3. Is there any way to exclude milliseconds — i.e. when inserting and in conditional joins/selects etc.? (MySQL 5.6.4+)
    4. Will the join query above work? (MySQL 5.6.4+)

    EDIT: I know I can cast the datetimes — thanks to those who answered — but I'm trying to tackle the root of the problem (the fact that the storage type/definition has changed), and I do NOT want to use functions in my queries. That negates all my work optimizing queries and applying indexes, not to mention requiring me to rewrite all my queries.

    EDIT 2: Can anyone suggest a reason NOT to join on a DATETIME field using second accuracy?
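    For what it's worth, a schema-level sketch (hedged, untested here): in 5.6.4+ a plain DATETIME column still means DATETIME(0) — zero fractional digits — and fractional input is rounded away on insert, so declaring the precision explicitly documents and preserves the old whole-second comparison semantics without putting functions in queries:

        -- DATETIME(0) keeps whole seconds only; an equality join between two
        -- DATETIME(0) columns compares whole seconds and can use the Time index
        CREATE TABLE Events (
            ID      int NOT NULL AUTO_INCREMENT PRIMARY KEY,
            Time    DATETIME(0) NOT NULL,
            Caption varchar(255),
            KEY idx_events_time (Time)
        );

    One caveat: a parameter carrying fractional seconds is rounded on insert but compared exactly in a WHERE clause, so parameters should be truncated to whole seconds client-side before binding.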

    Read the article

  • Allowing XForms controls for optional XML elements

    - by Cam
    In designing an XForms interface to an XML database (using eXist and XSLTForms), I'd like to include an input control for an optional element. The XML data records already exist; some contain the optional element and others don't. To update a record, I'm using the existing XML record as the model instance. The problem is that the form control is not displayed when the optional element is not present — which is logical, but presents a problem when a user wants to add data to the optional element. To be more explicit, here's an example data record, data.xml:

        <a>
          <b>content</b>
        </a>

    with RNC schema:

        start = element a {
          element b { text },
          element notes { text }?
        }

    XForms model:

        <xf:model>
          <xf:instance xmlns="" src="data.xml"/>
          <xf:submission id="save" method="post" action="update.xq"/>
        </xf:model>

    And control:

        <xf:input ref="/a/notes">
          <xf:label>Notes (optional): </xf:label>
        </xf:input>

    The problem is that the Notes input control is simply not displayed. An obvious solution is to add a trigger button that lets the user insert the element if needed, but it is preferable to just have the input control appear, and be empty. My question is: is there some subtle combination of lesser-known attributes/binds/multiple instances/XPath expressions that will cause the control to always be displayed? Thanks
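    One workaround sometimes suggested — a sketch, untested with XSLTForms — is to keep an empty notes element in a template instance and insert it into the data instance when the form loads, so the control always has a node to bind to (the element can be stripped again on submit if left empty):

        <!-- assumes the usual xmlns:ev="http://www.w3.org/2001/xml-events" -->
        <xf:model>
          <xf:instance xmlns="" src="data.xml"/>
          <xf:instance id="template" xmlns="">
            <notes/>
          </xf:instance>
          <xf:action ev:event="xforms-ready">
            <!-- add the optional element only when it is missing -->
            <xf:insert context="/a" if="not(notes)"
                       origin="instance('template')"/>
          </xf:action>
          <xf:submission id="save" method="post" action="update.xq"/>
        </xf:model>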

    Read the article

  • Is this a good way to generically deserialize objects?

    - by Damien Wildfire
    I have a stream onto which serialized objects representing messages are dumped periodically. The objects are one of a very limited number of types, and other than the actual sequence of bytes that arrives, I have no way of knowing what type of message it is. I would like to simply try to deserialize it as an object of a particular type and, if an exception is thrown, try again with the next type. I have an interface that looks like this:

        public interface IMessageHandler<T> where T : class, IMessage
        {
            T Handle(string message);
        }

        // elsewhere:
        // (These are all xsd.exe-generated classes from an XML schema.)
        public class AppleMessage : IMessage { ... }
        public class BananaMessage : IMessage { ... }
        public class CoconutMessage : IMessage { ... }

    Then I wrote a GenericHandler<T> that looks like this:

        public class GenericHandler<T> : IMessageHandler<T>
            where T : class, IMessage
        {
            T IMessageHandler<T>.Handle(string message)
            {
                T result = default(T);
                try
                {
                    // This utility method tries to deserialize the message with an
                    // XmlSerializer as if it were an object of type T.
                    result = Utils.SerializationHelper.Deserialize<T>(message);
                }
                catch (InvalidCastException e)
                {
                    result = default(T);
                }
                return result;
            }
        }

    Two questions:

    1. Using my GenericHandler<T> (or something similar to it), I'd like to populate a collection with handlers that each handle a different type, then invoke each handler's Handle method on a particular message to see if it can be deserialized. If I get a null result, move on to the next handler; otherwise, the message has been deserialized. Can this be done?
    2. Is there a better way to deserialize data of unknown (but restricted) type?
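    Rather than probing each type and catching exceptions, dispatching on the XML root element name is usually cheaper and clearer. A sketch — XmlReader and XmlSerializer are standard BCL; the root-name-to-type map is an assumption based on xsd.exe's default root element names:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using System.Xml.Serialization;

        public static class MessageReader
        {
            static readonly Dictionary<string, Type> rootToType = new Dictionary<string, Type>
            {
                { "AppleMessage",   typeof(AppleMessage)   },
                { "BananaMessage",  typeof(BananaMessage)  },
                { "CoconutMessage", typeof(CoconutMessage) },
            };

            public static IMessage Deserialize(string message)
            {
                using (var reader = XmlReader.Create(new StringReader(message)))
                {
                    reader.MoveToContent();           // position on the root element
                    Type t;
                    if (!rootToType.TryGetValue(reader.LocalName, out t))
                        return null;                  // unknown message type
                    return (IMessage)new XmlSerializer(t).Deserialize(reader);
                }
            }
        }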

    Read the article

  • Attribute value nil

    - by mridula
    Can someone tell me why this is happening? I have created a social networking website using Ruby on Rails — this is my first time programming with RoR. I have a model named Friendship which contains an attribute "blocked" to indicate whether the user has blocked another user. When I run the following in IRB:

        friendship = u.friendships.where(:friend_id => 22).first

    IRB gives me:

        Friendship Load (0.6ms)  SELECT `friendships`.* FROM `friendships` WHERE `friendships`.`user_id` = 17 AND `friendships`.`friend_id` = 22 LIMIT 1
        => #<Friendship id: 33, user_id: 17, friend_id: 22, created_at: "2012-04-07 10:29:49", updated_at: "2012-04-07 10:29:49", blocked: 1>

    As you can see, the blocked attribute has value 1. But when I run the following:

        1.9.2-p290 :030 > friendship.blocked
        => nil

    it says the value of blocked is nil and not 1. Why is this happening? This could be a very silly mistake, but I am new to RoR, so kindly help me! I initially didn't include the accessor method for blocked; I tried adding one, and it still gives the same result. Following is the Friendship model:

        class Friendship < ActiveRecord::Base
          belongs_to :friend, :class_name => "User"
          validates_uniqueness_of :friend_id, :scope => :user_id
          attr_accessor :blocked
          attr_accessible :blocked
        end

    Here is the schema of the table:

        1.9.2-p290 :009 > friendship.class
        => Friendship(id: integer, user_id: integer, friend_id: integer, created_at: datetime, updated_at: datetime, blocked: integer)
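    The attr_accessor line is almost certainly the culprit: it defines plain Ruby reader/writer methods that shadow the database-backed accessor ActiveRecord already generates for the blocked column, and the instance variable behind them defaults to nil. A sketch of the fix:

        class Friendship < ActiveRecord::Base
          belongs_to :friend, :class_name => "User"
          validates_uniqueness_of :friend_id, :scope => :user_id

          # no attr_accessor here -- ActiveRecord provides #blocked from the
          # column itself; attr_accessible only controls mass assignment
          attr_accessible :blocked
        end

    After removing the attr_accessor and reloading, friendship.blocked should return 1.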

    Read the article

  • SQL code to display counts() of value retrieved from another column

    - by Doctor Trout
    I have three tables (these are the relevant columns):

        Table1: bookingid, person, role
        Table2: bookingid, projectid
        Table3: projectid, project, numberofrole1, numberofrole2

    Table1.role can take two values: "role1" or "role2". What I want to do is show which projects don't have the correct number of each role in Table1; the required number for each role is in Table3. For example, if Table1 contains these three rows:

        bookingid  person   role
        7          Tim      role1
        7          Bob      role1
        7          Charles  role2

    and Table2:

        bookingid  projectid
        7          1

    and Table3:

        projectid  project  numberofrole1  numberofrole2
        1          Test1    2              2

    I would like the results to show that there is not the correct number of role2 bookings for project Test1. To be honest, something like this is a bit beyond my ability, so I'm open to suggestions on the best way to do it. I'm using SQLite and PHP (it's only a small project). I suppose I could do something with the PHP once I've got my results, but I wondered if there was a better way to do it within SQLite. I started with something like this:

        SELECT project, COUNT(numberofrole1) AS "Role"
        FROM Table1
        JOIN Table2 USING (bookingid)
        JOIN Table3 USING (projectid)
        WHERE role = 'role1'
        GROUP BY project

    But I can't work out how to compare the value returned as "Role" with the value in numberofrole1. Any help is gratefully received.
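    One way to sketch it as a single SQLite query: count both roles per project with conditional aggregation, then keep only the projects where either count disagrees with the target:

        SELECT t3.project,
               SUM(CASE WHEN t1.role = 'role1' THEN 1 ELSE 0 END) AS actual_role1,
               SUM(CASE WHEN t1.role = 'role2' THEN 1 ELSE 0 END) AS actual_role2,
               t3.numberofrole1,
               t3.numberofrole2
        FROM Table3 t3
        JOIN Table2 t2 ON t2.projectid = t3.projectid
        JOIN Table1 t1 ON t1.bookingid = t2.bookingid
        GROUP BY t3.projectid
        HAVING actual_role1 <> t3.numberofrole1
            OR actual_role2 <> t3.numberofrole2;

    SQLite lets HAVING reference the result aliases, so no subquery is needed; only the mismatched projects come back.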

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language; however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone, and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (the list is not in any particular order):

    - Know its syntax so you can write a simple program in it
    - Compare its underlying concepts with concepts of other languages
    - Know best practices
    - Know what libraries are available
    - Know in what situations to use it
    - Understand the flow of a more complex program
    - At least know most of what you do not know

    I would probably look for a good book and pick an open source project for both of these languages to start with. My questions:

    - Is it best to spend 5 months learning language #1 and then 5 months learning language #2, or should you mix the two? By mixing I mean you work on them in parallel.
    - Should you pick two languages that are similar or different? Are there any advantages/disadvantages of, let's say, learning Lisp in tandem with Ruby? Is it a good idea to pick two languages with similar syntax, or would it be too confusing?
    - Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • MATLAB, time match filter

    - by Paul
    OK, I am still getting the hang of MATLAB. I have two files in different formats. One is an Excel file, data1.xls, size 86400 x 62. It looks like:

        Date/Time          par1  par2  par3  par4  par5    par6  par6  par7  par8  par9
        08/02/09 00:06:45  0     3     27    9.9   -133.2  0     0     0     1     0

    The other file, data2.csv, size 144 x 27 (if nothing is missing), looks like:

        date       time  P01  P02  P03  P04  P05  P06  P07  P08  P09  P10  P11
        8/16/2009  0:00  51   45   46   54   53   52   524  5    399  89   78

    Now I am using

        Data10minAvg = mean(reshape(Data, 300, 144, 62));

    to get the 10-minute average of the first Excel file. Next I need to match up the file I am making above with the .csv file. The problem is that many timestamps are missing in the .csv file. How do I make data2.csv into a matrix of size 144 x 27, replacing the missing timestamps with rows of zeros? That would make it much easier to compare the data1.xls file with the new data2.csv.
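    A minimal sketch of the gap-filling step, with hypothetical variable names — it assumes the csv timestamps have already been parsed into a vector t of MATLAB datenums and the numeric columns into a matrix vals:

        % map each record to its 10-minute slot (1..144) within the day
        nSlots = 144;
        slot   = round(rem(t, 1) * nSlots) + 1;   % rem(t,1) = fraction of the day

        % build a full 144-row matrix, zeros where records are missing
        filled = zeros(nSlots, size(vals, 2));
        filled(slot, :) = vals;                   % rows for present slots filled in

    After this, filled has one row per 10-minute slot and lines up row-for-row with Data10minAvg.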

    Read the article

  • Two references to the same domain/entity model

    - by Sbossb
    Problem

    I want to save the attributes of a model that have changed when a user edits them. Here's what I want to do:

    1. Retrieve the edited view model
    2. Get the domain model and map back the updated values
    3. Call the update method on the repository
    4. Get the "old" domain model and compare the values of the fields
    5. Store the changed values (as JSON) in a table

    However, I am having trouble with step 4. It seems that Entity Framework doesn't want to hit the database again to get the model with the old values; it just returns the same entity I have.

    Attempted solutions

    I have tried using the Find() and SingleOrDefault() methods, but they just return the model I currently have.

    Example code

        private string ArchiveChanges(T updatedEntity)
        {
            // Here is the problem!
            // oldEntity is the same as updatedEntity
            T oldEntity = DbSet.SingleOrDefault(x => x.ID == updatedEntity.ID);

            Dictionary<string, object> changed = new Dictionary<string, object>();

            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var property = typeof(T).GetProperty(propertyInfo.Name);

                // Get the old value and the new value from the models
                var newValue = property.GetValue(updatedEntity, null);
                var oldValue = property.GetValue(oldEntity, null);

                // Check to see if the values are equal
                if (!object.Equals(newValue, oldValue))
                {
                    // Values have changed ... log it
                    changed.Add(propertyInfo.Name, newValue);
                }
            }

            var ser = new System.Web.Script.Serialization.JavaScriptSerializer();
            return ser.Serialize(changed);
        }

        public override void Update(T entityToUpdate)
        {
            // Do something with this
            string json = ArchiveChanges(entityToUpdate);

            entityToUpdate.AuditInfo.Updated = DateTime.Now;
            entityToUpdate.AuditInfo.UpdatedBy = Thread.CurrentPrincipal.Identity.Name;
            base.Update(entityToUpdate);
        }
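    This is first-level-cache behavior: SingleOrDefault sees the entity already tracked by the context and hands it back without refetching. Two hedged options (both from the EF 4.1+ DbContext API; adjust to the repository's context field):

        // 1) Read the row without tracking, so EF must materialize a fresh copy
        T oldEntity = Context.Set<T>()
                             .AsNoTracking()
                             .SingleOrDefault(x => x.ID == updatedEntity.ID);

        // 2) Or ask EF for the current database values of the tracked entity;
        //    dbValues["PropertyName"] holds the stored value for each property
        var dbValues = Context.Entry(updatedEntity).GetDatabaseValues();

    Option 2 avoids materializing a second entity at all, which fits the property-by-property comparison loop above.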

    Read the article

  • How to synchronize two (or n) replication processes for SQL Server databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map an entity across both read-only databases — let's say database A contains orders and database B contains order lines. The problem is that replication to one database can lag behind replication to the other, so at the moment of mapping the read-only databases can hold inconsistent data.

    For example: we store two orders with lines at 19:00 and 19:03. The mapping process starts at 19:05, but by that moment database A's replication has processed all changes up to 19:03 while database B's replication has only processed changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed :)

    In my particular case both databases have a temporal model, so it is possible to fetch data for any time slice; the problem is identifying the time of the latest replication.

    Question: How do I synchronize the replication processes for several databases to avoid the situation described above? Or, in other words, how do I compare the last replication time in each database?

    UPD: The only way I see to synchronize is to continuously write timestamps into service tables in each database and check these timestamps on the replicated servers. Is that an acceptable solution?
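    The watermark idea from the update can be sketched roughly like this (T-SQL; table and job names are hypothetical): each publisher writes a heartbeat that replicates along with the data, and the mapper reads only up to the older of the two watermarks:

        -- on each master, replicated like any other article
        CREATE TABLE dbo.ReplWatermark (id int PRIMARY KEY, last_ts datetime NOT NULL);

        -- an agent job on each master, every few seconds
        UPDATE dbo.ReplWatermark SET last_ts = GETUTCDATE() WHERE id = 1;

        -- on the read side, before mapping: query ReplWatermark on both
        -- replicas and use the MIN of the two values as the common time
        -- slice for the temporal queries against A and B

    Since the watermark row travels through the same replication stream as the data, seeing last_ts = T on a replica guarantees everything committed before T has arrived.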

    Read the article

  • Are there any free Xml Diff/Merge tools available?

    - by Russell
    I have several config files in my .NET applications whose application settings elements etc. I would like to merge. I was about to begin doing it manually, as I usually do, but thought there must be an XML diff GUI tool available somewhere — one able to go down to the element level to compare and display the differences. However, Google gave no substantive free-tool results and no hints of anything of value. Is such a tool available, genuinely useful, and free? Thanks in advance. :)

    Edit: Here is a bit of clarification of the functionality that would turn my error-prone, tedious manual job into a one-minute task (with potential for automation): in KDiff3 you can diff/merge entire directories, with a hierarchical diff that is very accurate, user-friendly and clear. I am interested in finding a similar solution that works on an XML element hierarchy instead of a directory hierarchy. If there is no such open source software, I am considering starting a project on CodePlex to provide this functionality.

    Read the article

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find-as-I-type) and open them for compare/analysis. I don't want it to scan the content — just a filename index for quick lookup. I do this when trying to determine whether a class exists in a given tree (we maintain directory trees for each release, which hold a lot of files), and sometimes I want to quickly check files to see how something was implemented. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once — which is why running find every time is way too slow, and doing 'find . > foo.txt' and then searching that file is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories in Eclipse every time, and Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)

    Read the article

  • update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table:

        With rs
            .AddNew ' create a new record
            ' add values to each field in the record
            .Fields("datapath") = dpath
            .Fields("analysistime") = atime
            .Fields("reporttime") = rtime
            .Fields("lastcalib") = lcalib
            .Fields("analystname") = aname
            .Fields("reportname") = rname
            .Fields("batchstate") = "bstate"
            .Fields("instrument") = "NA"
            .Update ' stores the new record
        End With

    When the schema of the table is this, updating works:

        create table batchinfo(datapath text, analysistime text, reporttime text,
            lastcalib text, analystname text, reportname text, batchstate text,
            instrument text);

    But when I have auto_increment in there, it does not work:

        CREATE TABLE batchinfo (
            rowid int(11) NOT NULL AUTO_INCREMENT,
            datapath text,
            analysistime text,
            reporttime text,
            lastcalib text,
            analystname text,
            reportname text,
            batchstate text,
            instrument text,
            PRIMARY KEY (rowid)
        ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1

    Has anyone experienced a problem like this, where updating does not work when there is an auto_increment field involved? Connection string:

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=pas;" & _
                       "Option=3"
        End Sub

    Also, here's the rs.Open:

        rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
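    One way to sidestep the keyset-cursor/auto_increment interaction entirely — a sketch, untested against this setup — is to skip rs.AddNew and issue an explicit parameterized INSERT, letting MySQL assign rowid on its own:

        Dim cmd As New ADODB.Command
        With cmd
            .ActiveConnection = oConn
            .CommandText = "INSERT INTO batchinfo (datapath, analysistime, reporttime, " & _
                           "lastcalib, analystname, reportname, batchstate, instrument) " & _
                           "VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
            ' size must be > 0 for adLongVarChar parameters
            .Parameters.Append .CreateParameter(, adLongVarChar, adParamInput, Len(dpath), dpath)
            ' ... append the remaining seven parameters the same way ...
            .Execute
        End With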

    Read the article

  • can't create partial objects with accepts_nested_attributes_for

    - by Isaac Cambron
    I'm trying to build a form that allows users to update some records. They can't update every field, though, so I'm going to do some explicit processing (in the controller for now) to reconcile the form with the model.

    Family model:

        class Family < ActiveRecord::Base
          has_many :people, dependent: :destroy
          accepts_nested_attributes_for :people, allow_destroy: true,
                                        reject_if: ->(p){ p[:name].blank? }
        end

    In the controller:

        def check
          edited_family = Family.new(params[:family])
          # compare to the one we have in the db
          # update each person as needed/allowed
          # save it
        end

    Form:

        = form_for current_family, url: check_rsvp_path, method: :post do |f|
          = f.fields_for :people do |person_fields|
            - if person_fields.object.user_editable
              = person_fields.text_field :name, class: "person-label"
            - else
              %p.person-label= person_fields.object.name

    The problem is, I guess, that Family.new(params[:family]) tries to pull the people out of the database, and I get this:

        ActiveRecord::RecordNotFound in RsvpsController#check
        Couldn't find Person with ID=7 for Family with ID=

    That's, I guess, because I'm not adding a field for the family id to the nested form. I suppose I could add one, but I don't actually need to load anything from the database for this, so I'd rather not. I could also hack around it by digging through the params hash myself for the data I need, but that doesn't feel slick. It seems nicest to just create an object out of the params hash and then work with it. Is there a better way? How can I just create the nested objects?
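    The error comes from handing nested attributes that carry existing ids to a brand-new Family, which then tries (and fails) to find those children among its own nonexistent people. One hedged alternative (Rails 3.1 or later): apply the params to the persisted record in memory and inspect the dirty state before deciding what to save:

        def check
          family = current_family                     # the persisted record
          family.assign_attributes(params[:family])   # applies nested attrs, no save

          family.people.each do |person|
            # person.changes => {"name" => ["Old name", "New name"], ...}
            # decide here whether each change is allowed before saving
          end

          family.save if family.changed? || family.people.any?(&:changed?)
        end

    Because the children are loaded through the persisted parent, the nested ids resolve, and ActiveModel::Dirty gives the old/new pairs for free.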

    Read the article

  • creating tables in ruby-on-rails 3 through migrations?

    - by fayer
    I'm trying to understand the process of creating tables in Ruby on Rails 3. I have read about migrations, so I gather I am supposed to create tables by editing the files in:

        db/migrate/20100611214419_create_posts.rb
        db/migrate/20100611214419_create_categories.rb

    which were generated by:

        rails generate model Post name:string description:text
        rails generate model Category name:string description:text

    Does this mean I have to use the "rails generate model" command every time I want to create a table? And if I have created a migration file but want to add columns, do I create another migration for the additions, or do I edit the existing migration file directly? The guide told me to add a new one, but here is the part I don't understand: why add a new one? Then the new state depends on two migration files. In symfony I just edit a schema.yml file directly; there are no migration files with versioning and so on. I'm new to RoR and want to get the whole picture of creating tables. Thanks.
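    For context, the usual answer: migrations are an append-only history of schema changes, so adding a column means generating a new migration rather than editing one that has already run (an edited migration won't re-run on databases that already executed it). A sketch with a hypothetical column name, Rails 3.0 style:

        $ rails generate migration AddAuthorToPosts

        # db/migrate/20100612000000_add_author_to_posts.rb
        class AddAuthorToPosts < ActiveRecord::Migration
          def self.up
            add_column :posts, :author, :string
          end

          def self.down
            remove_column :posts, :author
          end
        end

        $ rake db:migrate   # applies only the migrations not yet run

    The versioning is the point: any copy of the database, at any schema version, can be brought forward (or rolled back) by replaying the file history.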

    Read the article

  • What is the n in O(n) when comparing sorting algorithms?

    - by Mumfi
    The question is rather simple, but I just can't find a good enough answer. I've taken a look at the most upvoted question regarding Big-O notation, namely this: "Plain English explanation of Big O". It says there that:

        For example, sorting algorithms are typically compared based on comparison
        operations (comparing two nodes to determine their relative ordering).

    Now let's consider the simple bubble sort algorithm:

        for (int i = arr.length - 1; i > 0; i--) {
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j+1]) {
                    switchPlaces(...);
                }
            }
        }

    I know that the worst case is O(n^2) and the best case is O(n), but what is n exactly? If we attempt to sort an already-sorted array (best case), we would end up doing nothing, so why is it still O(n)? We are still looping through two for-loops, so if anything it should be O(n^2). n can't be the number of comparison operations, because we still compare all the elements, right? This confuses me, and I'd appreciate it if someone could help me.
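    A point worth adding (hedged, since the question's snippet doesn't show it): n is the number of elements in the array, and the O(n) best case refers to the common bubble sort variant that stops early when a pass makes no swaps — the version above always performs about n^2/2 comparisons regardless of input. A sketch of the early-exit variant:

        // bubble sort with early exit: one O(n) pass suffices on sorted input
        for (int i = arr.length - 1; i > 0; i--) {
            boolean swapped = false;
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int tmp = arr[j];          // swap in place
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;               // no swaps => already sorted
        }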

    Read the article

  • calculating change (over a period) for a dated field

    - by morpheous
    I have two tables with the following schema:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            product_id integer NOT NULL,
            sales_amt double precision NOT NULL
        );

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    I want to write two types of queries that will allow me to calculate:

    1. period-on-period change (e.g. week-on-week change)
    2. change in period-on-period change (e.g. change in week-on-week change)

    I would prefer to write this in ANSI SQL, since I don't want to be tied to any particular db. [Edit] In light of some of the comments: if I have to be tied to a single database (in terms of SQL dialect), it will have to be PostgreSQL.

    The queries I want to write are of the form (pseudo-SQL, of course):

        -- Query type 1 (period-on-period change)
        -- a)
        select product_id,
               ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
        from sales_data sd1, sales_data sd2, date_dimension dd
        where {SOME CRITERIA};

        -- b)
        select product_id,
               ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as month_on_month_change
        from sales_data sd1, sales_data sd2, date_dimension dd
        where {SOME CRITERIA};

        -- Query type 2 (change in period-on-period change)
        -- a)
        select product_id,
               ((a2.week_on_week_change - a1.week_on_week_change) / a1.week_on_week_change)
                   as change_in_week_on_week_change
        from (select product_id,
                     ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
              from sales_data sd1, sales_data sd2, date_dimension dd
              where {SOME CRITERIA}) as a1,
             (select product_id,
                     ((sd2.sales_amt - sd1.sales_amt) / sd1.sales_amt) as week_on_week_change
              from sales_data sd1, sales_data sd2, date_dimension dd
              where {SOME CRITERIA}) as a2
        where {SOME OTHER CRITERIA};
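    Since PostgreSQL is on the table, window functions make the self-joins unnecessary. A sketch — it assumes datestamp joins to sales_time and that weeks are identified by year_part plus week_part:

        -- week-on-week change per product, via lag()
        SELECT product_id, year_part, week_part,
               (week_sales - lag(week_sales) OVER w) / lag(week_sales) OVER w
                   AS week_on_week_change
        FROM (
            SELECT sd.product_id, dd.year_part, dd.week_part,
                   sum(sd.sales_amt) AS week_sales
            FROM sales_data sd
            JOIN date_dimension dd ON dd.datestamp = sd.sales_time
            GROUP BY sd.product_id, dd.year_part, dd.week_part
        ) weekly
        WINDOW w AS (PARTITION BY product_id ORDER BY year_part, week_part);

    Wrapping this query in one more lag() over the same window gives the change in week-on-week change. lag() is ANSI SQL:2003 window syntax, so it stays reasonably portable (PostgreSQL 8.4+).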

    Read the article

  • MySQL Join Question

    - by rbaker86
    Hi, I'm struggling to write a particular MySQL join query. I have a table containing product data; each product can belong to multiple categories, and this m:m relationship is satisfied using a link table. For this particular query I wish to retrieve all products belonging to a given category, but with each product record I also want to return the other categories that product belongs to. Ideally I would like to achieve this with a join on the categories table, rather than performing an additional query for each product record, which would be quite inefficient. My simplified schema is roughly as follows:

        products table:         product_id, name, title, description, is_active,
                                date_added, publish_date, etc.
        categories table:       category_id, name, title, description, etc.
        product_category table: product_id, category_id

    I have written the following query, which retrieves all the products belonging to the specified category_id. However, I'm really struggling to work out how to also retrieve the other categories each product belongs to.

        SELECT p.product_id, p.name, p.title, p.description
        FROM prod_products AS p
        LEFT JOIN prod_product_category AS pc ON pc.product_id = p.product_id
        WHERE pc.category_id = $category_id
          AND UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP()
          AND p.is_active = 1
        ORDER BY p.name ASC

    I'd be happy just retrieving the category ids related to each returned product row, as I will have all category data stored in an object and my application code can take care of the rest. Many thanks, Richard
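    One common shape for this — a sketch that returns the related category ids as a comma-separated list via MySQL's GROUP_CONCAT, which the application can split:

        SELECT p.product_id, p.name, p.title, p.description,
               GROUP_CONCAT(pc2.category_id) AS category_ids
        FROM prod_products AS p
        JOIN prod_product_category AS pc
             ON pc.product_id = p.product_id
            AND pc.category_id = $category_id      -- the filter category
        LEFT JOIN prod_product_category AS pc2
             ON pc2.product_id = p.product_id      -- all of the product's categories
        WHERE UNIX_TIMESTAMP(p.publish_date) < UNIX_TIMESTAMP()
          AND p.is_active = 1
        GROUP BY p.product_id
        ORDER BY p.name ASC;

    The first join restricts to the given category while the second gathers every membership, so one row per product comes back with its full category list.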

    Read the article

  • NHibernate Performance Optimization | Suggestions invited!!!

    - by user336749
    Hi, I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Here is a small summary of my application architecture.

    I have a Windows service listening to a messaging bus. On receiving a message, the service creates an object — one of whose properties is the received XML snippet — and saves it to the DB (using NH). There is a WPF UI with a read-only connection to the DB; on refresh, it displays the objects on screen. During a refresh, the UI retrieves the XML and deserializes it, derives the object's properties from it, and binds them to the screen.

    For example, assume an XML document is received by the service. It deserializes the XML, creates a Book object, and saves it to the DB, where one property/column, SCHEMA, contains the XML snippet. The UI, when refreshed, searches all Book objects by ID and creates the Book objects out of the saved XML (yes, the XML is the constructor param).

    Now my issue is that the refresh takes more than 2 minutes to display about 50 Book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10 ms vs. 1990 ms). I guess it's due to the fairly large size of the XML snippets and their deserialization.

    My question is: how can I improve the performance? I dispose of sessions after every refresh and am not lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by downstream systems, or maybe only one of them. Can I implement some sort of caching mechanism in this case? Thanks in advance for any suggestions. Regards, Mike
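    Since the DB time is negligible and deserialization dominates, caching the deserialized objects across refreshes is the natural move. A rough C# sketch — it assumes a Version (or timestamp) column that changes whenever a downstream system updates a row, which is an assumption about the schema:

        // memoize the expensive XML -> Book step, keyed by (Id, Version)
        private static readonly Dictionary<Tuple<int, int>, Book> cache =
            new Dictionary<Tuple<int, int>, Book>();

        public Book Materialize(int id, int version, string schemaXml)
        {
            var key = Tuple.Create(id, version);
            Book book;
            if (!cache.TryGetValue(key, out book))
            {
                book = new Book(schemaXml);   // the costly deserialization
                cache[key] = book;
            }
            return book;
        }

    With this in place, a refresh only pays the deserialization cost for rows whose version actually changed; unchanged books are reused from the previous refresh.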

    Read the article

  • What are your best practices for ensuring the correctness of the reports from SQL?

    - by snezmqd4
    Part of my work involves creating reports and data from SQL Server to be used as information for decision making. The majority of the data is aggregated — inventory, sales and cost totals from departments, and other dimensions. When I am creating the reports, and more specifically when I am developing the SELECTs that extract the aggregated data from the OLTP database, I worry about getting a JOIN or a GROUP BY wrong, for example, and returning incorrect results.

    I try to use some "best practices" to prevent myself from generating wrong numbers:

    - When creating an aggregated data set, always explode the data set without the aggregation and look for any obvious error.
    - Export the exploded data set to Excel and compare the SUM(), AVG(), etc. from SQL Server and Excel.
    - Involve the people who will use the information and ask for validation (ask people to help identify mistakes in the numbers).
    - Never deploy these things in the afternoon — when possible, take another look at the T-SQL the next morning with a refreshed mind.

    I have had many bugs corrected using this simple procedure. Even with it, I always worry about the numbers. What are your best practices for ensuring the correctness of the reports?
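    One more mechanical check that fits this list — a sketch with hypothetical table names — is to make the reconciliation itself a query, so the aggregate and its detail are compared by the server rather than by eye:

        -- the report total and the raw detail total should agree;
        -- a nonzero drift flags a bad JOIN or GROUP BY
        SELECT (SELECT SUM(line_amount) FROM sales_detail)  AS detail_total,
               (SELECT SUM(dept_total)  FROM sales_report)  AS report_total,
               (SELECT SUM(line_amount) FROM sales_detail)
             - (SELECT SUM(dept_total)  FROM sales_report)  AS drift;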

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forté, it should be quite easy to compare two files and determine whether they are identical or not (code below untested):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA;
            open my $file2, '<', $fileB;
            while ( my $lineA = <$file1> ) {
                next if $lineA eq <$file2>;
                return 0;    # lines differ
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line by line until a difference is found; if no difference is found, the files must be identical. But this approach is limited and clumsy. What if the line counts differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
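    For the record, File::Compare ships with the Perl core (since 5.004, so well within 5.6.1), which reduces this to a one-liner — a sketch:

        use File::Compare;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            # compare() returns 0 when the files are byte-for-byte identical,
            # 1 when they differ, and -1 on error
            return compare( $fileA, $fileB ) == 0 ? 1 : 0;
        }

    Because it compares byte-for-byte, it also handles files of different lengths without any line counting.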

    Read the article

  • Generate Spring bean definition from a Java object

    - by joeslice
    Suppose I have a bean defined in Spring:

        <bean id="neatBean" class="com..." abstract="true">...</bean>

    We have many clients, each of which has a slightly different configuration for its neatBean. The old way we did it was to have a new file for each client (e.g., clientX_NeatFeature.xml) containing a bunch of beans for that client (these are hand-edited and part of the code base):

        <bean id="clientXNeatBean" parent="neatBean">
            <property name="whatever" value="something"/>
        </bean>

    Now I want a UI where we can edit and redefine a client's neatBean on the fly. My question is: given a neatBean, and a UI that can "override" properties of this bean, what would be a straightforward way to serialize this to an XML file as we do [manually] today? For example, if the user set property whatever to be "17" for client Y, I'd want to generate:

        <bean id="clientYNeatBean" parent="neatBean">
            <property name="whatever" value="17"/>
        </bean>

    Note that moving this configuration to a different format (e.g., database, other-schema'd XML) is an option, but not really an answer to the question at hand.
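    Since the target format is just small, regular XML, plain DOM generation may be all that's needed. A sketch in Java — the overrides map and method name are hypothetical, and a full config file would also need the surrounding <beans> root element with Spring's namespace declarations:

        import java.io.File;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public static void writeChildBean(String beanId, String parentId,
                Map<String, String> overrides, File out) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element bean = doc.createElement("bean");
            bean.setAttribute("id", beanId);
            bean.setAttribute("parent", parentId);
            for (Map.Entry<String, String> e : overrides.entrySet()) {
                Element property = doc.createElement("property");
                property.setAttribute("name", e.getKey());
                property.setAttribute("value", e.getValue());
                bean.appendChild(property);
            }
            doc.appendChild(bean);

            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(out));
        }

    Calling writeChildBean("clientYNeatBean", "neatBean", overrides, file) with overrides = {whatever=17} would emit exactly the hand-written fragment shown above.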

    Read the article
