Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 304 of 398

  • Was: Not unique table :: Now: #1054 - Unknown column - can't understand why!?

    - by Andy Barlow
    Hi! I'm trying to join some tables together in MySQL, but I get an error saying: #1066 - Not unique table/alias: 'calendar_jobs'. I really want it to select everything from cal_events, the 2 user bits, and just the destination column from the jobs table, but become "null" if there aren't any jobs. A right join seemed to fit the bill but doesn't work! Can anyone help!? UPDATE: Thanks for the help on the previous query, I'm now up to this: SELECT calendar_events.* , calendar_users.doctorOrNurse, calendar_users.passportName, calendar_jobs.destination FROM `calendar_events` , `calendar_users` RIGHT JOIN calendar_jobs ON calendar_events.jobID = calendar_jobs.jobID WHERE `start` >= 0 AND calendar_users.userID = calendar_events.userID; But I am now getting an error saying: #1054 - Unknown column 'calendar_events.jobID' in 'on clause'. What is it this time!? Thanks again!
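
    The #1054 error comes from MySQL's join precedence: when a comma join is mixed with an explicit JOIN, the explicit JOIN binds more tightly, so calendar_events is not yet visible to the ON clause. A minimal sketch of a rewrite using only explicit joins, assuming the column names shown in the question (and that calendar_events really has a jobID column):

        SELECT e.*, u.doctorOrNurse, u.passportName, j.destination
        FROM calendar_events AS e
        JOIN calendar_users  AS u ON u.userID = e.userID      -- replaces the comma join
        LEFT JOIN calendar_jobs AS j ON j.jobID = e.jobID     -- destination becomes NULL when there is no job
        WHERE e.`start` >= 0;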

    Read the article

  • PHP - MySQL - Select runs indefinitely

    - by John
    I have three tables: listings (id, pid, beds, baths, etc., db), locations (id, pid, zip, lat, lon, etc., db), and images (id, pid, height, width, raw, etc., db). id, pid & db are indexed; db just references the MLS provider a particular item came from. In images the raw column holds raw image data. There are about 15k rows in listings/locations and about 120k rows in images, so there are multiple rows that have the same pid. When I do "select pid from listings" or "select pid from locations" the query completes successfully in about 100 ms. When I do "select pid from images" it just hangs in SQLyog and never completes... I was thinking that since the raw column contains a lot of information it might be trying to select that too, but my query doesn't try to select that, so I can't imagine why it's taking so long... Any idea why this is happening?
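
    Since only pid is selected and pid is indexed, the server can normally answer this from the index without touching the raw blobs, so the hang is more likely the query plan, a lock, or the client buffering 120k rows. A few diagnostic statements, sketched generically rather than tailored to this schema:

        EXPLAIN SELECT pid FROM images;   -- does the plan use the pid index as a covering index?
        SHOW FULL PROCESSLIST;            -- is the query executing, sending data, or waiting on a lock?
        SELECT COUNT(pid) FROM images;    -- cheap timing check for one full pass over the index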

    Read the article

  • 'Cross-referencing' DataTables

    - by Lee
    I have a DataGridView that is being filled with data from a table. Inside this table is a column called 'group' that holds the ID of an individual group in another table. What I would like to do, when the DataGridView is filled, is display the name of the group instead of the ID contained in 'group'. Is there some type of VB.NET 'magic' that can do this, or do I need to cross-reference the data myself? Here is a breakdown of what the 2 tables look like: table1: id, group (this holds the value of column id in table 2), weight, last_update; table2: id, description (this is what I would like to be displayed in the DGV). BTW - I am using Visual Studio Express.
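
    One common route is to fill the grid from a join instead of the raw table, so the group name arrives with the rest of the row. A sketch using the table and column names from the question; the square brackets quote the reserved word group for SQL Server/Access - in MySQL use backticks instead:

        SELECT t1.id,
               t2.description AS group_name,
               t1.weight,
               t1.last_update
        FROM table1 AS t1
        INNER JOIN table2 AS t2 ON t2.id = t1.[group];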

    Read the article

  • Rename INDEX Column

    - by Lee
    Hey all. I have a database with around 40 tables and need to rename the ID column in every table. E.g. the user table has a bunch of fields like user_id | user_username | user_password | etc... I want to rename the ID columns to just id, i.e. id | user_username | user_password | etc... But I keep getting MySQL errors on the ALTER TABLE command, e.g. ALTER TABLE database RENAME COLUMN user_id to id; plus many different variations. What's the best way to do this? Hope you can advise.
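
    In the MySQL versions in use at the time (anything before 8.0), ALTER TABLE has no RENAME COLUMN clause; a rename goes through CHANGE, which also restates the full column definition, and the statement names the table rather than the database. A sketch assuming the user table's key is an auto-increment integer primary key (adjust the type to match the existing definition):

        ALTER TABLE `user`
          CHANGE `user_id` `id` INT UNSIGNED NOT NULL AUTO_INCREMENT;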

    Read the article

  • Retrieving specific tuples using MySQL

    - by Narayanan
    Hi, I have some problems retrieving specific tuples. I am actually a student trying to build a room management system. I have two tables: Room(roomID, hotelname, rate) and Reservation(resID, arriveDate, departDate, roomID). I am not sure how to retrieve the rooms that are available between 2 specific dates. This was the query that I used: SELECT Room.roomID, hotelname, rate FROM Room LEFT JOIN Reservation ON ( Room.roomID = Reservation.resID AND arriveDate >= '2010-02-16' AND departDate <= '2010-02-20' ) GROUP BY roomID, hotelname, rate HAVING count(*) = 0; but it returns an empty set. Can anyone be kind enough to tell me what mistake I am making?
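
    Two things stand out in the posted query: the join compares roomID with resID, and HAVING COUNT(*) = 0 can never be true because a LEFT JOIN always produces at least one row per room (test the reservation side for NULL instead). The date test also needs to look for overlapping bookings rather than bookings that lie entirely inside the range. A sketch using the column names from the question:

        SELECT r.roomID, r.hotelname, r.rate
        FROM Room AS r
        LEFT JOIN Reservation AS b
               ON b.roomID = r.roomID
              AND b.arriveDate < '2010-02-20'   -- booking starts before the requested stay ends
              AND b.departDate > '2010-02-16'   -- and ends after the requested stay starts
        WHERE b.resID IS NULL;                  -- keep only rooms with no overlapping booking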

    Read the article

  • Is it possible to run a SQL-only file from a "rake db:create"?

    - by Somebody still uses you MS-DOS
    I'm trying to install software called Teambox on my Dreamhost shared account. I have no experience with Rails. I just want to install the software on the shared hosting. On this shared hosting, all dependencies are OK, but I have to create the database from their panel. I can't create it from the command line (ssh). So, when I run "rake db:create" there's an error, because the db already exists (because I created it in the panel). I've already contacted support. They can't change this policy. How do I populate my tables "by hand" in this case? Which files should I look at inside Teambox's folder... Thanks!

    Read the article

  • MS Sync framework - Identity crisis resolution by partitioning the primary key.

    - by user326136
    Hello, we are implementing an offline feature for an existing application. We have implemented the sync with SQL Server internal change tracking over WCF using MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default.aspx). All of our tables have integer primary keys, and we cannot move to GUIDs. So, as you are thinking, we will have identity crises between applications. We decided to go the way Merge replication does (http://msdn.microsoft.com/en-us/library/aa179416(SQL.80).aspx) and partition the primary key range. Below is the example scenario - Server: Table A, ID range 0 to 100; Client 1: Table A, ID range 101 to 200; Client 2: Table A, ID range 201 to 300. How do I implement this? I know we can use DBCC CHECKIDENT (yourtable, reseed, value) and CHECK (([ID]<=(100))), but this does not solve the issue.... Merge replication provides an option of "Not for replication" (http://msdn.microsoft.com/en-us/library/aa237102(SQL.80).aspx) to achieve inserts from clients and still maintain the set range.. can I use that somehow here? Please help...
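
    A T-SQL sketch that combines the two pieces (the constraint name and exact ranges are placeholders): on each client, reseed the identity to the bottom of its range and add a CHECK constraint marked NOT FOR REPLICATION, so locally generated IDs stay inside the range while rows applied from other ranges by the replication/sync agent are not rejected. Whether NOT FOR REPLICATION is honoured for Sync Framework-applied changes depends on how those changes are applied, so treat this as a starting point rather than a guarantee:

        -- On Client 1 (range 101-200); repeat with different numbers on each machine
        DBCC CHECKIDENT ('TableA', RESEED, 100);   -- next generated ID is 101 (or 100 if the table is empty)
        ALTER TABLE TableA
            ADD CONSTRAINT CK_TableA_IdRange
            CHECK NOT FOR REPLICATION (ID BETWEEN 101 AND 200);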

    Read the article

  • Sum of distinct rows after a 1-many table join

    - by Lock
    I have 2 tables that I am joining. Table 1 has a 1-many relationship with table 2. That is, table 2 can return multiple rows for a single row of table 1. Because of this, the records of table 1 are duplicated for as many rows as are in table 2, which is expected. Now, I have a sum on one of the columns from table 1, but because of the multiple rows that get returned by the join, the sum is obviously multiplied. Is there a way to get this number back to its original value? I tried dividing by the count of rows from table 2, but this didn't quite give me the expected result. Are there any analytical functions that could do this? I almost want something like "if this row has not yet been counted in the sum, add it to the sum".
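
    The usual fix is to aggregate the many side before joining (or to compute the sum in its own subquery), so each table 1 row contributes to the sum exactly once. A sketch with hypothetical names (parent/child standing in for table 1/table 2, and amount for the column being summed):

        -- Collapse the child table to one row per parent before joining,
        -- so SUM(parent.amount) is no longer multiplied by the number of children.
        SELECT SUM(p.amount)      AS total,
               SUM(c.child_count) AS total_children
        FROM parent AS p
        JOIN (SELECT parent_id, COUNT(*) AS child_count
              FROM child
              GROUP BY parent_id) AS c ON c.parent_id = p.id;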

    Read the article

  • Monitoring Log Shipped Databases

    - by Registered User
    I need a consistent way to monitor databases that are read-only log shipped copies of production databases. In the past I have relied on the following methods: (1) set the job that restores logs to the database to kick off another job as its last step; (2) set the job that restores logs to the database to insert a record in a control table as its last step; (3) query the msdb database to check the status of the job that restores logs to the database; (4) query a control table inside the database itself that gets a value immediately before transaction logs are backed up; (5) query MAX values from tables inside the database to see if it has recent changes. Although the above methods work, they can't be implemented for every log shipped database that I query, for various reasons. What is the best method for monitoring the "data as of" date for a log shipped database?
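
    When none of the custom hooks are available, msdb on the secondary still records every restore, so the most recent log restore per database gives a serviceable "data as of" timestamp. A sketch against the standard msdb tables; note that restore_date is when the restore ran, so join to msdb.dbo.backupset on backup_set_id if you need the point in time the log actually covers:

        SELECT rh.destination_database_name,
               MAX(rh.restore_date) AS last_log_restored
        FROM msdb.dbo.restorehistory AS rh
        WHERE rh.restore_type = 'L'              -- 'L' = transaction log restore
        GROUP BY rh.destination_database_name;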

    Read the article

  • What is the best way to keep database data encrypted with user passwords?

    - by Dan Sosedoff
    Let's say an application has really specific data which belongs to a user, and nobody is supposed to see it except the owner. I use a MySQL database with the DataMapper ORM. The application is written in Ruby on Sinatra. Application behavior: the user signs up for an account, creates a username and password, and logs into his dashboard. Some fields in specific tables must be protected. Basically, I'm looking for automatic encryption of model properties. Something like this: class Transaction include DataMapper::Resource property :id, Serial property :value, String, :length => 1024, :encrypted => true ... etc ... belongs_to :user end I assume that encryption/decryption on the fly will cause performance problems, but that's OK. At least if that works - I'm fine. Any ideas how to do this?

    Read the article

  • Select rows where column LIKE dictionary word

    - by Gerve
    I have 2 tables: dictionary - contains roughly 36,000 words: CREATE TABLE IF NOT EXISTS `dictionary` ( `word` varchar(255) NOT NULL, PRIMARY KEY (`word`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; and datas - contains roughly 100,000 rows: CREATE TABLE IF NOT EXISTS `datas` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `hash` varchar(32) NOT NULL, `data` varchar(255) NOT NULL, `length` int(11) NOT NULL, `time` int(11) NOT NULL, PRIMARY KEY (`ID`), UNIQUE KEY `hash` (`hash`), KEY `data` (`data`), KEY `length` (`length`), KEY `time` (`time`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=105316 ; I would like to somehow select all the rows from datas where the column data contains one or more dictionary words. I understand this is a big ask - it would need to match all of these rows together in every combination possible - so it needs the best optimization. I have tried the query below, but it just hangs for ages: SELECT `datas`.*, `dictionary`.`word` FROM `datas`, `dictionary` WHERE `datas`.`data` LIKE CONCAT('%', `dictionary`.`word`, '%') AND LENGTH(`dictionary`.`word`) > 3 ORDER BY `length` ASC LIMIT 15. I have also tried something similar to the above with a LEFT JOIN and an ON clause that specified the LIKE condition.
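
    A LIKE pattern with a leading % cannot use the index on data, so the query is effectively a full cross product of 100,000 rows by 36,000 words. If a match only needs to be on whole words, the index-friendly route is to split data into one word per row once (in application code or a stored routine) and join on equality, which can use dictionary's primary key. A sketch that assumes a hypothetical helper table datas_words(ID, word) built that way:

        SELECT DISTINCT d.*, dw.word
        FROM datas_words AS dw
        JOIN dictionary  AS di ON di.word = dw.word AND LENGTH(di.word) > 3
        JOIN datas       AS d  ON d.ID = dw.ID
        ORDER BY d.`length` ASC
        LIMIT 15;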

    Read the article

  • Is there a best practice for maintaining history in a database?

    - by Pete
    I don't do database work that often, so this is totally unfamiliar territory for me. I have a table with a bunch of records that users can update. However, I now want to keep a history of their changes in case they want to roll back. Rollback in this case is not the db rollback but more like reverting changes two weeks later when they realize that they made a mistake. The distinction being that I can't have a transaction do the job. Is the current practice to use a separate table, or just a flag in the current table? It's a small database, 5 tables each with < 6 columns, < 1000 rows total.
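
    One common pattern (sketched here with hypothetical table and column names, MySQL syntax assumed) is a parallel history table that a trigger fills with the old values on every update; the live table stays small and every prior version remains available for a later revert:

        CREATE TABLE item_history (
            history_id  INT AUTO_INCREMENT PRIMARY KEY,
            item_id     INT NOT NULL,
            name        VARCHAR(100),        -- copies of the live table's columns
            price       DECIMAL(10,2),
            changed_at  DATETIME NOT NULL,
            changed_by  VARCHAR(50)
        );

        CREATE TRIGGER item_before_update
        BEFORE UPDATE ON item
        FOR EACH ROW
            INSERT INTO item_history (item_id, name, price, changed_at, changed_by)
            VALUES (OLD.item_id, OLD.name, OLD.price, NOW(), CURRENT_USER());

    At this size (under 1,000 rows total) a full-row copy per change costs almost nothing, which is usually simpler than flagging superseded rows inside the live table.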

    Read the article

  • Help setting setClientInfo with JPA

    - by enrique
    I am trying to set, through JPA, the new JDBC method that allows the application to be identified with a name: setClientInfo(). I could do it using pure JDBC with the lines Properties jdbcProperties = new Properties(); jdbcProperties.put("user", "system"); jdbcProperties.put("password", "sw01"); jdbcProperties.put("v$session.program", "Clients"); Connection connection = DriverManager.getConnection(url, jdbcProperties); However, I have new requirements and I need to make it work with JPA (EclipseLink). I have been googling and could not find anything substantial on how to set this property in JPA. I guess it is done by annotating something, but I do not know what. I was trying to set it in the persistence unit by putting the following in persistence.xml: <properties> <property name="toplink.ddl-generation" value="create-tables"/> <property name="v$session.program" value="Clients"/> </properties> but no luck... Does somebody know how to do it, or have any idea? Thanks in advance.

    Read the article

  • In SQL, find the combination of rows whose sum adds up to a specific amount (or an amount in another table)

    - by SamH
    Table_1: D_ID integer, Deposit_amt integer. Table_2: Total_ID, Total_amt integer. Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to the Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 summed to 100, the rows with D_ID 2, 3, 42 summed to 100, etc. Help appreciated. Let me know if I need to clarify. I am asking this question because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible sets of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
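
    This is the subset-sum problem, so no SQL scales to arbitrary combination sizes, but combinations of a fixed size can be enumerated with self-joins. A sketch for pairs, using the column names from the question; add another CROSS JOIN (with a further D_ID ordering condition) for triples, and beyond a handful of items the row counts explode and application code is a better fit:

        SELECT t.Total_ID, a.D_ID AS d1, b.D_ID AS d2
        FROM Table_2 AS t
        CROSS JOIN Table_1 AS a
        CROSS JOIN Table_1 AS b
        WHERE b.D_ID > a.D_ID                              -- report each pair only once
          AND a.Deposit_amt + b.Deposit_amt = t.Total_amt;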

    Read the article

  • Is there any free, open-source PHP CMS/framework for the described case?

    - by Ole Jak
    I want the CMS/framework to create tables like "Users" and "Cameras" for me, and declare classes and simple default methods for them (like paged SQL results and so on). I mean I say to it: I want Users to have ID, SpecialNumber and Name fields, and I want to get from it a class for table generation (to call once) and a class containing methods such as search by ID, SpecialNumber and Name, create user, delete user, and so on. Is there any framework/CMS like this for working with CODE, not UIs and so on... so to say, a PHP generator or something... The framework should be as free as possible. So, is there any free, open-source PHP CMS/framework for the described case?

    Read the article

  • Copying a database into a new database including structure and data

    - by Jason
    In phpMyAdmin, under Operations, I can "Copy database to:" and select "Structure and data", "CREATE DATABASE before copying", and "Add AUTO_INCREMENT value". I need to be able to do that without using phpMyAdmin. I know how to create the database and user. I have a source database that's a shell that I can work from, so all I really need is the "copy all the table structure and data" part. (I know, the harder part.) system() & exec() are not options for me, which rules out mysqldump. (I think.) How can I loop through each table and recreate its structure and data? Is it just looping through the results of SHOW TABLES, then for each table looping through DESCRIBE tablename? Then, is there an easy way to get the data copied?
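
    If both databases are on the same MySQL server and the connecting user can see both, the copy can be done per table entirely in SQL, which sidesteps mysqldump: list the tables, then for each one use CREATE TABLE ... LIKE (which carries over indexes and the AUTO_INCREMENT definition) followed by INSERT ... SELECT. A sketch with hypothetical database and table names, driven from a PHP loop over the SHOW TABLES result:

        SHOW TABLES FROM source_db;     -- iterate over these names in PHP

        CREATE TABLE target_db.some_table LIKE source_db.some_table;
        INSERT INTO target_db.some_table SELECT * FROM source_db.some_table;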

    Read the article

  • Django Forms save_m2m

    - by John
    Hi, I have a model which has 2 many-to-many fields in it. One is a standard m2m field which does not use a through table, whereas the other is a bit more complicated and has a through table. I am using Django's forms.ModelForm to display and save the forms. The code I have to save the form is: if form.is_valid(): f = form.save(commit=False) f.modified_by = request.user f.save() form.save_m2m() When I try to save the form I get the following error: Cannot set values on a ManyToManyField which specifies an intermediary model. I know this is happening when I do form.save_m2m(), because of the through table. What I'd like to do is tell Django to ignore the m2m field with the through table but still save the m2m field without the through table. I can then go on to manually save the data for the through table field. Thanks

    Read the article

  • Stored Procedure - forcing execution order

    - by meepmeep
    I have a stored procedure that itself calls a list of other stored procedures in order: CREATE PROCEDURE [dbo].[prSuperProc] AS BEGIN EXEC [dbo].[prProc1] EXEC [dbo].[prProc2] EXEC [dbo].[prProc3] --etc END However, I sometimes get strange results in my tables, generated by prProc2, which is dependent on the results generated by prProc1. If I manually execute prProc1, prProc2, prProc3 in order then everything is fine. It appears that when I run the top-level procedure, prProc2 is being executed before prProc1 has completed and committed its results to the db. It doesn't always go wrong, but it seems to go wrong when prProc1 has a long execution time (in this case ~10s). How do I alter prSuperProc such that each procedure only executes once the preceding procedure has completed and committed? Transactions?
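
    Within one batch the EXEC calls do run one after another, so the more likely culprits are transaction handling inside prProc1 (work left uncommitted or committed later) or an error in prProc1 that lets execution fall through to prProc2. One hedged way to make the dependency explicit is to wrap the chain in a single transaction with TRY/CATCH, so a failure anywhere rolls everything back and the later procedures never run against half-finished data:

        ALTER PROCEDURE dbo.prSuperProc
        AS
        BEGIN
            SET XACT_ABORT ON;                 -- abort the whole transaction on most errors
            BEGIN TRY
                BEGIN TRANSACTION;
                EXEC dbo.prProc1;
                EXEC dbo.prProc2;
                EXEC dbo.prProc3;
                COMMIT TRANSACTION;
            END TRY
            BEGIN CATCH
                IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
                THROW;                         -- SQL Server 2012+; use RAISERROR on older versions
            END CATCH
        END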

    Read the article

  • How big can a SQL Server row be before it's a problem?

    - by John Leidegren
    Occasionally I run into the limitation in SQL Server 2000 that a row size cannot exceed 8 KB. SQL Server 2000 isn't really state of the art, but it's still in production code, and because some tables are denormalized that's a problem. However, this seems to be a non-issue with SQL Server 2005. At least, it won't complain that row sizes are bigger than 8 KB - but what happens instead, and why was this a problem in SQL Server 2000? Do I need to care about my rows growing? Should I try to avoid large rows? Are varchar(max) and varbinary(max) a solution, or expensive in terms of size in the database and/or CPU time? Why do I care at all about specifying the length of a particular column, when it seems like it's just a matter of time before someone's going to hit that upper limit?

    Read the article

  • Complicated Order By Clause?

    - by Todd
    Hi. I need to do what to me is an advanced sort. I have these two tables:

        Table: Fruit
        fruitid | received | basketid
        1       | 20100310 | 2
        2       | 20091205 | 3
        3       | 20100220 | 1
        4       | 20091129 | 2

        Table: Basket
        id | name
        1  | Big Discounts
        2  | Premium Fruit
        3  | Standard Produce

    I'm not even sure I can plainly state how I want to sort (which is probably a big part of the reason I can't seem to write code to do it, lol). I do a join query and need to sort so everything is organized by basketid: the basketid that has the oldest fruit.received date comes first, followed by the other rows with the same basketid by date ascending, then the basketid with the next earliest fruit.received date followed by the other rows with the same basketid, and so on. So the output would look like this:

        Fruitid | Received | Basket
        4       | 20091129 | Premium Fruit
        1       | 20100310 | Premium Fruit
        2       | 20091205 | Standard Produce
        3       | 20100220 | Big Discounts

    Any ideas how to accomplish this in a single query?
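
    The sort key for every row is the earliest received date within its basket, with the row's own received date as the tiebreaker inside the basket. Joining to a per-basket MIN makes that explicit; a sketch using the table and column names from the question:

        SELECT f.fruitid, f.received, b.name AS basket
        FROM Fruit AS f
        JOIN Basket AS b ON b.id = f.basketid
        JOIN (SELECT basketid, MIN(received) AS first_received
              FROM Fruit
              GROUP BY basketid) AS o ON o.basketid = f.basketid
        ORDER BY o.first_received, f.basketid, f.received;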

    Read the article

  • Why does SQLite say it can't read SQL from a file?

    - by Gavin
    Hi all. I have a bunch of SQL in a file, which creates the tables and so forth. The problem is that the .read command simply returns "can't open xxx" when I try to execute it. I've set the permissions to everybody read/write, but that didn't help. I can cat the file from the command line and see it fine. This is under Mac OS 10.6.3. Anybody have any idea here? Thanks!
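
    The .read command resolves the filename relative to the directory where sqlite3 was started, and some older shell builds are also picky about quoting, so a read/write permission bit is rarely the issue. A sketch of the usual workaround inside the sqlite3 shell (the path is hypothetical); an unquoted absolute path typically succeeds where a relative or quoted one reports "can't open":

        .read /Users/me/project/schema.sql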

    Read the article

  • Doctrine: Mixing YAML markup and db manager (navicat) editing?

    - by ropstah
    I think the answer to this question should be: No. However, I hope to be corrected. I'd like to edit our database using a mixture of YAML markup + Doctrine createTables() and Navicat editing. Can I maintain the inheritance which is marked up? Example (4 steps; at step 4, Doctrine is in no way able to re-create the inheritance schema... or is it?): Step 1: Create YAML with inheritance:

        ---
        Entity:
          columns:
            username: string(20)
            password: string(16)
            created_at: timestamp
            updated_at: timestamp

        User:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 1

        Group:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 2

    Step 2: Create tables using Doctrine (and drop/create the db if necessary). Created SQL:

        CREATE TABLE entity (id BIGINT AUTO_INCREMENT, username VARCHAR(20), password VARCHAR(16), created_at DATETIME, updated_at DATETIME, type VARCHAR(255), PRIMARY KEY(id)) ENGINE = INNODB

    Step 3: Edit the table using Navicat. Step 4: Refresh the YAML file because of 'external' edits...

    Read the article

  • How to pass an int value to an AJAX-enabled WCF service method?

    - by Pandiya Chendur
    I am calling an AJAX-enabled WCF service method from my aspx page... <script type="text/javascript"> function GetEmployee() { Service.GetEmployeeData('1','5',onGetDataSuccess); } function onGetDataSuccess(result) { Iteratejsondata(result) } </script> and my method doesn't seem to get the value: [OperationContract] public string GetEmployeeData(int currentPage,int pageSize) { DataSet dt = GetEmployeeViewData(currentPage,pageSize); return GetJSONString(dt.Tables[0]); } When I used a breakpoint to see what is happening in my method, currentPage has the value 0x00000001 and pageSize has 0x00000005. Any suggestion as to what I am doing wrong?

    Read the article

  • Designing entire webpages as SVG files

    - by user1311390
    Disclaimer: I realize that, given the absurdity of the title, this sounds like a troll. However, it's a genuine question. My background is OpenGL / x86 assembly. I've recently started learning web programming. I really like SVG + CSS, and was wondering - why do people not design entire webpages in SVG? Context: SVG provides beautiful primitives (quadratic and cubic Bezier curves, lines, and fills, all as vector graphics), text, and affine transformations. Questions: Are there examples of people designing entire websites as a giant SVG file? If not, what are the limitations? Are there performance hits when using SVG primitives as opposed to divs/tables?

    Read the article

  • Storing Data as XML BLOB

    - by NBrowne
    Hi, at the moment the team I am working with is looking into the possibility of storing data which is entered by users on a series of input wizard screens as an XML blob in the database. The main reason for this is that I would like to write the input wizard as a component which can be brought into a number of systems without having to bring a large table structure with it. To clarify: if the wizard has 100 input fields (for example), then with the normal relational db structure there would be a 1-to-1 relationship, so there would be 100 columns in the database. So to get this working in another system I would have to bring the tables, stored procedures etc. into the new system. I have a number of reservations about this, but I would like people's opinions. Thanks
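
    For weighing the trade-off, it may help to see what querying such a blob looks like when the database has a native XML type. A sketch assuming SQL Server and a hypothetical WizardResponse table: a single xml column replaces the 100 field columns, and individual answers can still be pulled out with XQuery when needed:

        CREATE TABLE WizardResponse (
            ResponseID INT IDENTITY PRIMARY KEY,
            UserID     INT NOT NULL,
            Answers    XML NOT NULL
        );

        SELECT ResponseID,
               Answers.value('(/wizard/field[@name="email"]/text())[1]', 'varchar(255)') AS email
        FROM WizardResponse;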

    Read the article
