Search Results

Search found 24784 results on 992 pages for 'table udf'.

Page 269/992 | < Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >

  • Reverse Expression.Like criterion

    - by Joel Potter
    How should I go about writing a reversed LIKE statement using NHibernate criteria?

        WHERE 'somestring' LIKE [Property] + '%'

    Sub-question: can you access the abstract root alias in a SQLCriterion expression? This is somewhat achievable using the SQLCriterion expression

        Expression.Sql("? like {alias}.[Property] + '.%'", value, NHibernateUtil.String);

    However, in the case of class inheritance, {alias} is replaced with the incorrect alias for the column. Example (these classes are stored in separate tables):

        public abstract class Parent
        {
            public virtual string Property { get; set; }
        }

        public class Child : Parent { }

    The above query executed with Child as the root type will replace {alias} with the alias of the Child table rather than the Parent table, which results in an invalid-column exception. I need to execute a LIKE statement as above where the property exists on the parent table rather than on the root type's table.

    Read the article

  • Make a usable Join relationship with LINQ on top of a database CSV design error

    - by jdk
    I'm looking for a way to fix and/or abstract away a comma-separated values (CSV) list in a database field, in order to reconstruct a usable relationship so that I can properly join the two tables below and query them using LINQ and its Join method. The following sample shows the Person table, whose CsvArticleIds field holds a CSV value representing a one-to-many association with Article records.

        TABLE [dbo].[Person]
        Id   Name    CsvArticleIds
        --   ------  -------------
        1    Joe     "15,22"
        5    Ed      "22"
        10   Arnie   "8,15,22"

    (Of course a link table should have been created; nonetheless the relationship with articles is trapped inside that list of CSV values.)

        TABLE [dbo].[Article]
        Id   Title
        --   -----------------------------------------
        8    Beginning C#
        15   A Historic look at Programming in the 90s
        22   Gardening in January

    Additional info:

    - The fix can be at any level: C#/.NET or SQL Server.
    - Something easy, because I will be repeating the solution for many other CSV values in other tables. Elegant is nice too.
    - Not looking for efficiency, because this is part of a one-time data migration task and can take as long as it wants to run.

    Read the article

  • Saving auto increment in MySQL

    - by oshafran
    Hello, I am trying to sync two tables: I have an active table with an auto_increment ID, and an archive table holding the same values. I would like the IDs to be unique across both tables as well - that is, I want to preserve the auto-increment value so that a UNION of both tables still gives unique IDs. How can I do that? And is there a way to preserve the auto-increment counter when MySQL is shut down?
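
    One possible arrangement, sketched below, assuming rows are moved (not copied) into the archive: let only the active table generate IDs and carry the generated value over, so a UNION of the two tables can never produce a duplicate. Table and column names are illustrative.

        CREATE TABLE active_items (
            id      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            payload VARCHAR(255)
        ) ENGINE=InnoDB;

        CREATE TABLE archive_items (
            id      INT NOT NULL PRIMARY KEY,   -- no AUTO_INCREMENT here
            payload VARCHAR(255)
        ) ENGINE=InnoDB;

        -- archiving keeps the original id, so the two id sets stay disjoint
        INSERT INTO archive_items (id, payload)
            SELECT id, payload FROM active_items WHERE id = 42;
        DELETE FROM active_items WHERE id = 42;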

    Read the article

  • Joining a one-to-many association with a many-to-many association in Rails 3

    - by Maz
    Hi all, I have a many-to-many association between a User class and a Table class. Additionally, I have a one-to-many association between the User and the Table (one User ultimately owns the table). I am trying to access all of the tables which the user may access (essentially joining both associations). Additionally, it would be nice to do this with named_scope (now scope). Here's what I have so far:

        class User < ActiveRecord::Base
          acts_as_authentic
          attr_accessible :email, :password, :password_confirmation

          has_many :feedbacks
          has_many :tables
          has_many :user_table_permissions
          has_many :editableTables, :class_name => "Table", :through => :user_table_permissions

          def allTables
            editableTables.merge(tables)
          end
        end

    Thanks.

    Read the article

  • How to make Entity Key Mapping in Entity Framework like SQL's foreign key?

    - by programmerist
    I am trying to map entity keys in my Entity Framework app, but how can I do it? I tried something like this:

        var test = (from k in Kartlar
                    where k.Rehber.....

    In the code above, I cannot see Rehber (or it is not working). If the mapping were correct, I could write k.Rehber.ID and so on. What I cannot write is:

        from k in Kartlar
        where k.Rehber.ID == 123     // assuming the navigation property is Rehber and the primary key of the Rehber table is ID
           && k.Kampanya.ID == 345   // assuming the navigation property is Kampanya and the primary key of the Kampanya table is ID
           && k.Birim.ID == 567      // assuming the navigation property is Birim and the primary key of the Birim table is ID
        select k

    See also: http://i42.tinypic.com/2nqyyc6.png

    I have a table with three foreign key fields, like this:

        My table: Kartlar
        ID         (PKey)
        RehberID   (FKey)
        KampanyaID (FKey)
        BrimID     (FKey)
        Name
        Detail

    How can I write the equivalent of this query with LINQ to Entities?

        select * from Kartlar where RehberID = 123 and KampanyaID = 345 and BrimID = 567

    But please be careful: I cannot see RehberID, KampanyaID, or BrimID on the entity, because they are foreign keys. I should use the entity key instead - but how?

    Read the article

  • PHP - Nested Looping Trouble

    - by Jeremy A
    I have an HTML table that I need to populate with the values grabbed from a select statement. The table columns are populated from an array (0,1,2,3). Each of the results from the query will contain a row 'GATE' with a value of 0-3, but there will not be any predictability to those results. One query could pull 4 rows with 'GATE' values of 0,1,2,3; the next query could pull two rows with values of 1 & 2, or 1 & 3. I need to be able to populate this HTML table with the corresponding values, so HTML column 0 would hold the TTL_NET_SALES of the DB row which also has a GATE value of 0.

        <?php
        $gate = array(0,1,2,3);
        $gate_n = count($gate);

        /* Database =
           my_table.ID
           my_table.TT_NET_SALES
           my_table.GATE
           my_table.LOCKED
        */

        $locked = "SELECT * FROM my_table WHERE locked = true";
        $locked_n = count($locked);

        /* EXAMPLE RETURN
           Row 1:
             my_table['ID'] = 1
             my_table['TTL_NET_SALES'] = 1000
             my_table['GATE'] = 1;
           Row 2:
             my_table['ID'] = 2
             my_table['TTL_NET_SALES'] = 1500
             my_table['GATE'] = 3;
        */

        print "<table border='1'>";
        print "<tr><td>0</td><td>1</td><td>2</td><td>3</td>";
        print "<tr>";
        for ($i=0; $i<$locked_n; $i++) {
            for ($g=0; $g<$gate_n; $g++) {
                if (!is_null($locked['TTL_NET_SALES'][$i]) && $locked['GATE'][$i] == $gate[$g]) {
                    print "<td>$".$locked['TTL_NET_SALES'][$i]."</td>";
                } else {
                    print "<td>-</td>";
                }
            }
        }
        print "</tr>";
        print "</table>";

        /* What I want to see:
        <table border='1'>
          <tr>
            <td>0</td>
            <td>1</td>
            <td>2</td>
            <td>3</td>
          </tr>
          <tr>
            <td>-</td>
            <td>1000</td>
            <td>-</td>
            <td>1500</td>
          </tr>
        </table>
        */
        ?>

    Read the article

  • HTML clickable layout area: best practice

    - by Andrew Florko
    I am bad at HTML layout, but I have to produce it :) I want to make a big button on a page, implemented as a div with child tags (maybe a bad idea). I can handle the click event on the boundary div with JavaScript, but that requires JavaScript to be enabled. I can wrap the boundary div in an anchor tag, but that doesn't work in IE. Please suggest the best way to implement this.

        <a href="...">
          <table>
            <td>
              ...
            </td>
            <td>
              ...
              <table>
                ...
              </table>
            </td>
          </table>
        </a>

    Read the article

  • SQLAlchemy: a better way for update with declarative?

    - by hadrien
    I am a SQLAlchemy noob. Let's say I have a user table in declarative mode:

        class User(Base):
            __tablename__ = 'user'
            id = Column(u'id', Integer(), primary_key=True)
            name = Column(u'name', String(50))

    When I know a user's id without the object being loaded into the session, I update that user like this:

        ex = update(User.__table__).where(User.id==123).values(name=u"Bob Marley")
        Session.execute(ex)

    I dislike using User.__table__ - should I stop worrying about that? Is there a better way to do this? Thanks!

    Read the article

  • jQuery animation if load() returns something different...

    - by Dan LaManna
        setInterval(function() {
            var prevTopArticle = $("#toparticles table:first").html();
            $("#toparticles").load("myurloffeed.com/topfeed", function() {
                alternateBG();
                var newTopArticle = $("#toparticles table:first").html();
                if (prevTopArticle != newTopArticle) {
                    $("#toparticles table:first").effect("highlight", {color:"#faffc4"}, 2000);
                }
            });
        }, 8000);

    This sets the current first table item to a variable, loads the #toparticles div with the tables from the URL, and if they differ it should perform the highlight effect. However, it performs the highlight effect every time, and I'm completely unsure why it isn't working.

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that, among other things, does some inserts into different tables inside a loop. See the example below:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        -- ... some stuff goes here
        INSERT INTO T2 VALUES (@MyID, 'something else')
        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I noticed one incident where ID1 does not match ID2, although normally for each record inserted in T1 there is a record inserted in T2, and the identity column in both is incremented consistently. The "wrong" records looked like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about what SQL Server does underneath, I have put together the following "theory"; maybe someone can correct me on this. I think that because the loop is faster than physically writing to the table, and/or maybe some other thing delayed the writing process, the records were buffered, and when it came time to write them, they were written in no particular order. Is that even possible? If not, how do you explain the scenario above? If yes, then I have another question. What if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertions into the two tables happen in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.
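
    For what it is worth, a minimal T-SQL sketch of keeping the pair of inserts together in one transaction (table and column names are illustrative). SCOPE_IDENTITY() is scoped to the current session and scope, so the captured value stays correct even under concurrency; what concurrency can break is only the expectation that ID1 and ID2 remain equal.

        BEGIN TRANSACTION;

            INSERT INTO T1 (Col1) VALUES ('something');

            DECLARE @MyID int;
            SET @MyID = SCOPE_IDENTITY();   -- identity generated by this session/scope only

            INSERT INTO T2 (ID1, Col1) VALUES (@MyID, 'something else');

        COMMIT TRANSACTION;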

    Read the article

  • Azure batch operations delete several blobs and tables

    - by reft
    I have a function that deletes every table entity and blob that belongs to the affected user.

        CloudTable uploadTable = CloudStorageServices.GetCloudUploadsTable();
        TableQuery<UploadEntity> uploadQuery = uploadTable.CreateQuery<UploadEntity>();

        List<UploadEntity> uploadEntity =
            (from e in uploadTable.ExecuteQuery(uploadQuery)
             where e.PartitionKey == "uploads" && e.UserName == User.Identity.Name
             select e).ToList();

        foreach (UploadEntity uploadTableItem in uploadEntity)
        {
            // Delete table entity
            TableOperation retrieveOperationUploads = TableOperation.Retrieve<UploadEntity>("uploads", uploadTableItem.RowKey);
            TableResult retrievedResultUploads = uploadTable.Execute(retrieveOperationUploads);
            UploadEntity deleteEntityUploads = (UploadEntity)retrievedResultUploads.Result;
            TableOperation deleteOperationUploads = TableOperation.Delete(deleteEntityUploads);
            uploadTable.Execute(deleteOperationUploads);

            // Delete blob
            CloudBlobContainer blobContainer = CloudStorageServices.GetCloudBlobsContainer();
            CloudBlockBlob blob = blobContainer.GetBlockBlobReference(uploadTableItem.BlobName);
            blob.Delete();
        }

    Each entity has its own blob, so if the list contains 3 upload entities, the 3 entities and the 3 blobs will be deleted. I heard you can use table batch operations to reduce cost and load. I tried it, but failed miserably. Anyone interested in helping me? :) I'm guessing table batch operations are for tables only, so it's a no-go for blobs, right? How would you add TableBatchOperations to this code? Do you see any other improvements that could be made? Thanks!

    Read the article

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following EXPLAIN output:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  Using where; Using temporary; Using filesort
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       Using where; Using index
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), and then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Null  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL          BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL          BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL          BTREE

    But the query is still using the temporary table. The DB engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another DB engine.

    Read the article

  • User settings mechanism for Yii

    - by Peterim
    Hi guys! I need some help with the user settings mechanism for my Yii-based application. I've created the following DB structure to store user settings:

        table user with the following fields:
            id | username | email | etc.

        table settingslist (to store a list of all possible settings with descriptions) with the following fields:
            id | code | name | description

        table settings (to store all user settings) with the following fields:
            id | userid | settingslistcode | value

    Now I'm stuck on the form which allows a user to change his settings. Before this I had only dealt with regular models (i.e. for posts, comments, etc.) where every model maps to one row in the database (Post model - id | title | body) with a certain number of attributes (the fields of the table). But now I need to store user settings in 10-15 rows, and I don't know how to apply Yii's model mechanism to this so that I can retrieve those settings in a single form (so the user can change his preferences). Any suggestions are greatly appreciated. Thank you!

    Read the article

  • A good approach to DB planning for a reporting service

    - by Itay Moav
    The scenario: a big system (~200 tables), 60,000 users, and complex reports that require multiple queries per report - and even those are complex queries with inner queries all over the place, plus some processing in PHP. I have seen an approach which I am not sure about: having one centralized, de-normalized table that registers any reportable activity in the system. This table holds mostly foreign keys, so it should be fairly compact and fast. For example (my system is a virtual learning management system): a user enrolls in a course, and the table stores the user id, date, course id, organization id, and activity type (enrollment). Of course I also store this data in a normalized DB, which the actual application uses. Pros I see: easy, maintainable queries and code to process the data, and fast retrieval. Cons: there is a danger of the de-normalized table getting out of sync with the real DB. Is this approach worth considering, or (preferably from experience) is it total $#%#%t?
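
    For illustration only, a rough sketch of the kind of de-normalized activity table described above; every name and the exact column set are assumptions:

        CREATE TABLE activity_log (
            id              BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id         INT NOT NULL,
            course_id       INT NULL,
            organization_id INT NULL,
            activity_type   TINYINT NOT NULL,        -- e.g. 1 = enrollment
            created_at      DATETIME NOT NULL,
            KEY idx_user_date (user_id, created_at),
            KEY idx_course_date (course_id, created_at)
        );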

    Read the article

  • LinqToSQL: Not possible to update PrimaryKey?

    - by Zuhaib
    I have a simple table (let's call it Table1) that has an NVARCHAR field as its primary key. Table1 has no associations with any other tables. When I update Table1's PK column using LINQ to SQL it fails; if I update another column it succeeds. I could delete the row and insert a new one into Table1, but I don't want to. There is a transaction table which has Table1's PK column as one of its columns. When the PK of Table1 is changed I want no effect on the transaction table, but when a row is deleted from Table1, I want the corresponding transaction rows to be deleted. The cascading is done via a trigger. As there is no association between these two tables, if I update the PK column of Table1 using plain SQL it works, and there is no effect on the transaction table, as expected. When I delete the row, the trigger deletes the rows from the transaction table. For this reason I can't delete and then add a new row in Table1. So what can be done to successfully update the primary key of Table1?

    Read the article

  • Oracle/C#: How do I use bind variables with select statements to return multiple records?

    - by twiga
    I have a question regarding Oracle bind variables and select statements. What I would like to achieve is to do a select on a number of different values for the primary key. I would like to pass these values via an array using bind variables:

        select * from tb_customers where cust_id = :1

        int[] cust_id = { 11, 23, 31, 44, 51 };

    I then bind a DataReader to get the values into a table. The problem is that the resulting table only contains a single record (for cust_id = 51). Thus it seems that each statement is executed independently (as it should be), but I would like the results to be available as a whole (a single table). A workaround is to create a temporary table, insert all the values of cust_id, and then do a join against tb_customers. The problem with this approach is that I would need temporary tables for every different type of primary key, as I would like to use this against a number of tables (some even have composite primary keys). Is there anything I am missing?
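
    One SQL-side shape that is sometimes used for this on Oracle, sketched here under the assumption that the client driver can bind a collection of the declared type (the type name is made up); it avoids both per-value execution and per-key-type temporary tables:

        -- a schema-level collection type to hold the key values
        CREATE TYPE num_list AS TABLE OF NUMBER;
        /
        -- bind :ids to an instance of num_list and unnest it with TABLE()
        SELECT c.*
        FROM tb_customers c
        WHERE c.cust_id IN (SELECT column_value FROM TABLE(:ids));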

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost performance besides resorting to doing all the joins manually?

    Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update - here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

    Read the article

  • JQuery tablesorter - Second click on column header doesn't resort

    - by Jonathan
    I'm using tablesorter on a table I added to a view in Django's admin (although I'm not sure this is relevant). I'm extending the HTML header:

        {% block extrahead %}
        <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.js"></script>
        <script type="text/javascript" src="http://mysite.com/media/tablesorter/jquery.tablesorter.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                $("#myTable").tablesorter();
            });
        </script>
        {% endblock %}

    When I click on a column header, it sorts the table by this column in descending order - that's OK. When I click the same column header a second time, it does not re-sort in ascending order. What's wrong with it? The table's HTML looks like:

        <table id="myTable" border="1">
          <thead>
            <tr>
              <th>column_name_1</th>
              <th>column_name_2</th>
              <th>column_name_3</th>
            </tr>
          </thead>
          <tbody>
            {% for item in extra.items %}
            <tr>
              <td>{{ item.0|safe }} </td>
              <td>{{ item.1|safe }} </td>
              <td>{{ item.2|safe }} </td>
            </tr>
            {% endfor %}
          </tbody>
        </table>

    Read the article

  • Foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key references parent(id),
            data varchar(50)
        )

    Pure surrogate key:

        create table parent (
            id integer primary key,
            data varchar(50)
        )

        create table child (
            id integer primary key,
            parent_id integer unique references parent(id),
            data varchar(50)
        )

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • "Special case" records for foreign key constraints

    - by keithjgrant
    Let's say I have a MySQL table called foo with a foreign key option_id constrained to the option table. When I create a foo record, the user may or may not have selected an option, and 'no option' is a viable selection. What is the best way to differentiate between NULL (i.e. the user hasn't made a selection yet) and 'no option' (i.e. the user selected 'no option')? Right now, my plan is to insert a special record into the option table. Let's say that winds up with an id of 227 (this table already has a number of records at this point, so '1' isn't available). I have no need to access this record at the database level; it would act as nothing more than a placeholder that the foreign key in the foo table can reference. So do I just hard-code '227' into my codebase when I'm creating foo records where the user has selected 'no option'? The hard-coded id seems sloppy and leaves room for error as the code is maintained down the road, but I'm not really sure of another approach.
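
    One commonly used alternative to hard-coding the id, sketched below: give the sentinel row a stable lookup code and have application code resolve it by that code. The code column and the 'name' column are assumptions about the option table's shape.

        -- add a lookup code so application code never needs the literal 227
        ALTER TABLE `option` ADD COLUMN code VARCHAR(32) NULL UNIQUE;

        INSERT INTO `option` (code, name) VALUES ('NO_OPTION', 'No option');

        -- creating a foo record where the user explicitly chose 'no option';
        -- a NULL option_id still means "no selection made yet"
        INSERT INTO foo (option_id)
        SELECT id FROM `option` WHERE code = 'NO_OPTION';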

    Read the article

  • How to approach this SQL query

    - by Kim
    I have data related as follows:

    - A table of Houses
    - A table of Boxes (with an FK back into Houses)
    - A table of Things_in_boxes (with an FK back to Boxes)
    - A table of Owners (with an FK back into Houses)

    In a nutshell, a House has many Boxes, and each Box has many Things in it. In addition, each House has many Owners. If I know two Owners (say Peter and Paul), how can I list all the Things that are in the Boxes that are in the Houses owned by these guys? Also, I'd like to master this SQL stuff. Can anyone recommend a good book/resource? (I'm using MySQL.) Thanks!
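
    A rough sketch of the join, assuming conventional column names (houses.id, boxes.house_id, things_in_boxes.box_id, owners.house_id, owners.name) that the question does not actually spell out:

        SELECT DISTINCT t.*
        FROM things_in_boxes AS t
        JOIN boxes  AS b ON b.id = t.box_id
        JOIN houses AS h ON h.id = b.house_id
        JOIN owners AS o ON o.house_id = h.id
        WHERE o.name IN ('Peter', 'Paul');   -- houses owned by either owner; use
                                             -- GROUP BY ... HAVING COUNT(DISTINCT o.name) = 2 to require both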

    Read the article

  • Comparing an id to the ids of a different table's rows in MySQL

    - by jett
    So I am trying to retrieve all interests from someone and be able to list them. This works with the following query:

        SELECT *,
            (SELECT GROUP_CONCAT(interest_id SEPARATOR ",")
             FROM people_interests
             WHERE person_id = people.id) AS interests
        FROM people
        WHERE id IN (
            SELECT person_id
            FROM people_interests
            WHERE interest_id = '.$site->db->clean($_POST['showinterest_id']).'
        )
        ORDER BY lastname, firstname

    In this one, which I am having trouble with, I want to select only those people who happen to have their id in the table named volleyballplayers. That table just has id, person_id, team_id, and date fields.

        SELECT *,
            (SELECT GROUP_CONCAT(interest_id SEPARATOR ",")
             FROM people_interests
             WHERE person_id = people.id) AS interests
        FROM people
        WHERE id IN (
            SELECT person_id
            FROM people_interests
            WHERE volleyballplayers.person_id = person_id
        )
        ORDER BY lastname, firstname

    I just want to make sure that only the people who are in the volleyballplayers table show up, but I am getting an error saying Unknown column 'volleyballplayers.person_id' in 'where clause', although I am quite sure of the name of the table and I know the column is named person_id.
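
    The unknown-column error comes from the fact that volleyballplayers never appears in any FROM clause of that statement. A hedged sketch of one way to reference it directly, using EXISTS:

        SELECT p.*,
               (SELECT GROUP_CONCAT(interest_id SEPARATOR ",")
                FROM people_interests
                WHERE person_id = p.id) AS interests
        FROM people AS p
        WHERE EXISTS (
            SELECT 1
            FROM volleyballplayers AS v
            WHERE v.person_id = p.id
        )
        ORDER BY p.lastname, p.firstname;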

    Read the article

  • MySQL won't use index for query?

    - by Jack Sleight
    I have this table:

        CREATE TABLE `point` (
            `id` INT(11) NOT NULL AUTO_INCREMENT,
            `siteid` INT(11) NOT NULL,
            `lft` INT(11) DEFAULT NULL,
            `rgt` INT(11) DEFAULT NULL,
            `level` SMALLINT(6) DEFAULT NULL,
            PRIMARY KEY (`id`),
            KEY `point_siteid_site_id` (`siteid`),
            CONSTRAINT `point_siteid_site_id` FOREIGN KEY (`siteid`) REFERENCES `site` (`id`) ON DELETE CASCADE
        ) ENGINE=INNODB AUTO_INCREMENT=35 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    And this query:

        SELECT * FROM `point` WHERE siteid = 1;

    Which results in this EXPLAIN information:

        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys        | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | point | ALL  | point_siteid_site_id | NULL | NULL    | NULL |    6 | Using where |
        +----+-------------+-------+------+----------------------+------+---------+------+------+-------------+

    The question is, why isn't the query using the point_siteid_site_id index?
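
    A side note: the EXPLAIN above reports only 6 rows, and on a table that small the optimizer will usually prefer a full scan over an index lookup. A quick, hedged way to confirm the index is usable at all:

        EXPLAIN SELECT * FROM `point` FORCE INDEX (point_siteid_site_id) WHERE siteid = 1;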

    Read the article

  • MySQL - Operand should contain 1 column(s): What's wrong with the query...?

    - by SpikETidE
    Hi everybody. I am trying to query a database to find the following: if a customer searches for a hotel in a city between dates A and B, find and return the hotels in which rooms are free between the two dates. There will be more than one room of each room type (i.e. 5 rooms of type A, 10 rooms of type B, etc.), and we have to query the DB to find only those hotels in which there is at least one room free of at least one type. This is my table structure:

        Structure for table 'reservations':
            reservation_id, hotel_id, room_id, customer_id, payment_id,
            no_of_rooms, check_in_date, check_out_date, reservation_date

        Structure for table 'hotels':
            hotel_id, hotel_name, hotel_description, hotel_address, hotel_location,
            hotel_country, hotel_city, hotel_type, hotel_stars, hotel_image, hotel_deleted

        Structure for table 'rooms':
            room_id, hotel_id, room_name, max_persons, total_rooms, room_price,
            room_image, agent_commision, room_facilities, service_tax, vat,
            city_tax, room_description, room_deleted

    And this is my query:

        $city_search = '15';
        $check_in_date = '29-03-2010';
        $check_out_date = '31-03-2010';

        $dateFormat_check_in  = "DATE_FORMAT('$reservations.check_in_date','%d-%m-%Y')";
        $dateFormat_check_out = "DATE_FORMAT('$reservations.check_out_date','%d-%m-%Y')";
        $dateCheck = "$dateFormat_check_in >= '$check_in_date' AND $dateFormat_check_out <= '$check_out_date'";

        $query = "SELECT $rooms.room_id, $rooms.room_name, $rooms.max_persons, $rooms.room_price,
                         $hotels.hotel_id, $hotels.hotel_name, $hotels.hotel_stars, $hotels.hotel_type
                  FROM $hotels, $rooms, $reservations
                  WHERE $hotels.hotel_city = '$city_search'
                    AND $hotels.hotel_id = $rooms.hotel_id
                    AND $hotels.hotel_deleted = '0'
                    AND $rooms.room_deleted = '0'
                    AND $rooms.total_rooms - (SELECT SUM($reservations.no_of_rooms) as tot, $rooms.room_id as id
                                              FROM $reservations, $rooms
                                              WHERE $dateCheck
                                              GROUP BY $reservations.room_id) > '0'";

    The number of rooms already reserved for each room type in each hotel is stored in the reservations table. The query returns the error: Operand should contain 1 column(s). I tried running the sub-query alone but I don't get any result, and I have lost quite some amount of hair trying to debug this query since yesterday. What's wrong with it? Or is there a better way to do what I described above? Thanks for your time.
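
    For reference, the "Operand should contain 1 column(s)" error is typically raised because the subquery returns two columns (the SUM and room_id) where a single scalar value is expected in the comparison. A hedged sketch of the shape the query probably needs, written as plain SQL with named placeholders standing in for the PHP variables:

        SELECT r.room_id, r.room_name, r.max_persons, r.room_price,
               h.hotel_id, h.hotel_name, h.hotel_stars, h.hotel_type
        FROM hotels AS h
        JOIN rooms  AS r ON r.hotel_id = h.hotel_id
        WHERE h.hotel_city    = :city_search
          AND h.hotel_deleted = '0'
          AND r.room_deleted  = '0'
          AND r.total_rooms - (
                SELECT COALESCE(SUM(res.no_of_rooms), 0)    -- single column only
                FROM reservations AS res
                WHERE res.room_id = r.room_id               -- correlate to the outer room
                  AND res.check_in_date  < :check_out_date  -- reservations overlapping the stay
                  AND res.check_out_date > :check_in_date
              ) > 0;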

    Read the article

  • Google Visualization Annotated Time Line, removing data points.

    - by Vitaly Babiy
    I am trying to build a graph that will change resolution depending on how far you are zoomed in. When completely zoomed out it looks good [screenshot omitted]. When I zoom in, I get higher-resolution data and the graph reflects it [screenshot omitted]. The problem is that when I zoom back out, the higher-resolution data does not get cleared out of the graph [screenshot omitted]. The tables below the graphs display what is in the DataTable. This is what the drawing code looks like:

        var g_graph = new google.visualization.AnnotatedTimeLine(document.getElementById('graph_div_json'));
        var table = new google.visualization.Table(document.getElementById('table_div_json'));

        function handleQueryResponse(response) {
            log("Drawing graph");
            var data = response.getDataTable();
            g_graph.draw(data, {allowRedraw: true, thickness: 2, fill: 50, scaleType: 'maximized'});
            table.draw(data, {allowRedraw: true});
        }

    I am trying to find a way for it to display only the data that is in the DataTable. I have tried removing the allowRedraw flag, but then it breaks the zooming operation. Any help would be greatly appreciated. Thanks.

    Read the article
