Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.


  • Lost Powerpoint document somewhere between Explorer and C drive

    - by Sarah Frank
    I opened (without saving) a PowerPoint presentation attached to an online email message, modified the document, and clicked Save (not Save As); now the presentation is nowhere to be found. How do I find this document? I have run a thorough search of the C drive to no avail. It's not even in the Temporary Internet Files. Computer system: Windows XP Professional version 5.1.2600, Explorer version 6.0.2900.

    Read the article

  • DJBDNS DNSCache configuration, svscan won't start

    - by SecurityGate
    I've been wracking my brain the last few days trying to set up DJBDNS on my server, without much luck. I have been following the guide provided by the creator of DJBDNS: http://cr.yp.to/djbdns/run-server.html Here is a run-through of where I am. Both services are up:

        [root@Happycat tinydns]$ svstat /service/tinydns/
        /service/tinydns/: up (pid 18224) 74454 seconds
        [root@Happycat tinydns]$ svstat /service/dnscache/
        /service/dnscache/: up (pid 2733) 2184 seconds

    My /etc/resolv.conf file:

        nameserver 127.0.0.1

    My $PATH:

        [root@Happycat ~]$ echo $PATH
        /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/sbin:/usr/sbin:/var/qmail/bin/:/usr/nexkit/bin:/root/bin

    My tinydns/root/data records:

        ..:69.160.56.65:a:259200
        .ns1.benwilk.com:69.160.56.65:a:259200
        .ns2.benwilk.com:69.160.56.65:a:259200
        .56.160.69.in-addr.arpa:69.160.56.65:a:259200
        .56.160.69.in-addr.arpa:69.160.56.65:b:259200
        =benwilk.com:69.160.56.65:86400
        =openbarrel.net:69.160.56.65:86400
        +www.openbarrel.net:69.160.56.65:86400
        +www.benwilk.com:69.160.56.65:86400

    Tinydns can recognize the records set:

        [root@Happycat root]$ tinydns-get a benwilk.com
        1 benwilk.com:
        78 bytes, 1+1+1+1 records, response, authoritative, noerror
        query: 1 benwilk.com
        answer: benwilk.com 86400 A 69.160.56.65
        authority: . 259200 NS a.ns
        additional: a.ns 259200 A 69.160.56.65

    But then it comes to a grinding halt:

        svscan /service/tinydns/
        supervise: fatal: unable to start env/run: file does not exist
        supervise: fatal: unable to acquire log/supervise/lock: temporary failure
        supervise: fatal: unable to start supervise/run: file does not exist
        supervise: fatal: unable to start root/run: file does not exist
        (the same env/run, supervise/run, root/run and lock messages repeat over and over)

    I'm assuming I have to set something with dnscache, and to be honest, it gets a bit confusing. I'm not sure whether to set its IP address to 127.0.0.1 or one of the other IP addresses on the system. What am I missing here?

    Read the article

  • Can I use JPA/EJB3 on a table that was created at runtime?

    - by tieTYT
    This is weird and probably not possible but I'll ask anyway. I'm making this app that reads in a meta file and creates some tables then populates them with data. I was wondering if I could somehow use JPA to populate those tables. Obviously, there's no way I could have an entity with annotations on it since the table didn't exist at compile time. But perhaps JPA or the entity manager has a way to load data into a nameless table? If possible, I'd expect a method like entityManager.update("myTableName", hashMapOfColumnNamesAndColumnDataValues);

    Read the article

  • What causes Windows 8 Consumer Preview to lock-up / freeze / hang in Oracle VM VirtualBox?

    - by Zack Peterson
    I've installed Windows 8 Consumer Preview to a virtual machine using Oracle VM VirtualBox (4.1.14). It works well except for occasional temporary lock-up / freeze / hang interruptions. It will freeze for about a minute and then resume like normal for several more minutes before freezing again.

    Host: Windows 7 Professional 64-bit, 16 GB RAM, Intel Core i7 (quad core, hyper-threading, virtualization) CPU

    Guest: Windows 8 Consumer Preview 64-bit, 2 GB RAM, 2 CPUs

    How should I configure VirtualBox to run Windows 8 well?

    Read the article

  • PHP & MySQL deleting multiple rows script problem.

    - by oReiLLy
    I'm trying to delete rows from two different tables at once when a user clicks the delete button, but for some reason I can't get the table rows to delete. Can someone help me figure out what is wrong with my script? Thanks. Here are the MySQL tables:

        CREATE TABLE cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file VARCHAR(255) NOT NULL,
            case VARCHAR(255) NOT NULL,
            name VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE users_cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            cases_id INT UNSIGNED NOT NULL,
            user_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );

    Here is the PHP & MySQL script:

        if(isset($_POST['delete_case'])) {
            $cases_ids = array();
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli,"SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'");
            if (!$dbc) {
                print mysqli_error($mysqli);
            } else {
                while($row = mysqli_fetch_array($dbc)){
                    $cases_ids[] = $row["cases_id"];
                }
            }
            foreach($_POST['delete_id'] as $di) {
                if(in_array($di, $cases_ids)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli,"DELETE FROM users_cases WHERE cases_id = '$delete_id'");
                    $dbc2 = mysqli_query($mysqli,"DELETE FROM cases WHERE id = '$delete_id'");
                }
            }
        }

    Here is the XHTML (this <li> block is repeated three times in total):

        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>

    Read the article

  • Android and SQLite using the SQLiteOpenHelper

    - by tunneling
    I have a SQLite database, and several tables within that database. I am developing a DBAdapter for each table within the database (reference Reto Meier's Professional Android 2 Application Development, Listing 7.1). I am using the adb shell to interface with the database from the command line and can see that the database is being populated as I expect. Occasionally, I want to drop a table so that I can ensure it's being built properly, from scratch. The problem is that SQLiteOpenHelper only checks to see if the database exists. Is there a typical solution for writing a helper that also checks whether the table(s) exist? Basically, once I drop a table, the helper checks that the database exists and assumes all is well. Also, the CREATE_DATABASE string used in the reference above only creates the one table. Should I consider using one DBAdapter as the adapter for ALL of my tables? That doesn't seem as clean to me. Reference Material
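
    One common workaround (not from the referenced book; just a sketch with a hypothetical table name) is to make table creation idempotent, so re-creating a dropped table is harmless and can be run on every open:

        -- SQLite: safe to execute every time the helper opens the database
        CREATE TABLE IF NOT EXISTS my_table (
            _id INTEGER PRIMARY KEY AUTOINCREMENT,
            title TEXT NOT NULL
        );

        -- and for a clean rebuild from the adb shell (or an upgrade path):
        DROP TABLE IF EXISTS my_table;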

    Read the article

  • Hibernate - moving annotations from property (method) level to field level

    - by kan
    How do I generate Hibernate domain classes from tables with annotations at the field level? I used the Hibernate Tools project and generated domain classes from the tables in the database. The generated classes have annotations on the getter methods rather than at the field level. Kindly advise a way to generate domain classes that have the fields annotated. Is there any refactoring facility available in Eclipse/IDEA etc. to move the annotations from method level to field level? I appreciate your help and time.

    Read the article

  • PostgreSQL - disabling constraints

    - by azp74
    I have a table with approx 5 million rows which has a fk constraint referencing the primary key of another table (also approx 5 million rows). I need to delete about 75000 rows from both tables. I know that if I try doing this with the fk constraint enabled it's going to take an unacceptable amount of time. Coming from an Oracle background my first thought was to disable the constraint, do the delete & then reenable the constraint. PostGres appears to let me disable constraint triggers if I am a super user (I'm not, but I am logging in as the user that owns/created the objects) but that doesn't seem to be quite what I want. The other option is to drop the constraint and then reinstate it. I'm worried that rebuilding the constraint is going to take ages given the size of my tables. Any thoughts?
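
    For reference, a minimal sketch of the drop-and-recreate approach (table and constraint names are placeholders); note that PostgreSQL allows DDL inside a transaction, and the ADD CONSTRAINT step will scan the child table once to validate the existing rows:

        BEGIN;
        ALTER TABLE child_table DROP CONSTRAINT child_table_parent_id_fkey;
        -- delete the ~75000 rows from both tables here
        ALTER TABLE child_table
            ADD CONSTRAINT child_table_parent_id_fkey
            FOREIGN KEY (parent_id) REFERENCES parent_table (id);
        COMMIT;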

    Read the article

  • SQL Server 2008 - Conditional Range

    - by user208662
    Hello, I have a database that has two tables, defined as:

        Movie
        -----
        ID (int)
        Title (nvchar)

        MovieReview
        -----------
        ID (int)
        MovieID (int)
        StoryRating (decimal)
        HumorRating (decimal)
        ActingRating (decimal)

    I have a stored procedure that allows the user to query movies based on other users' reviews. Currently, I have a temporary table that is populated with the following query:

        SELECT m.*,
            (SELECT COUNT(ID) FROM MovieReivew r WHERE r.MovieID=m.ID) as 'TotalReviews',
            (SELECT AVG((r.StoryRating + r.HumorRating + r.ActingRating) / 3) FROM MovieReview r WHERE r.MovieID=m.ID) as 'AverageRating'
        FROM Movie m

    In a later query in my procedure, I basically want to say:

        SELECT * FROM MyTempTable t
        WHERE t.AverageRating >= @lowestRating AND t.AverageRating <= @highestRating

    My problem is, sometimes AverageRating is zero. Because of this, I'm not sure what to do. How do I handle this scenario in SQL?
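
    One hedged guess: for a movie with no reviews the AVG subquery yields NULL rather than zero, and NULL fails both range comparisons. A sketch of two ways to make the intent explicit (using the names above):

        -- treat movies with no reviews as rated 0
        SELECT *
        FROM MyTempTable t
        WHERE ISNULL(t.AverageRating, 0) >= @lowestRating
          AND ISNULL(t.AverageRating, 0) <= @highestRating;

        -- or simply exclude movies that have no reviews at all
        SELECT *
        FROM MyTempTable t
        WHERE t.TotalReviews > 0
          AND t.AverageRating BETWEEN @lowestRating AND @highestRating;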

    Read the article

  • Separating weakly linked database schemas

    - by jldugger
    I've been tasked with revisiting a database schema we designed and use internally for various ticketing and reporting systems. Currently there exist about 40 tables in one Oracle database schema supporting perhaps six webapps. However, there's one unifying relationship amongst them all: a rooms table describing the rooms. Room name, purpose and other data are thrown into a shared table for each app. My initial idea was to pull each of these applications into a separate database, and perform joins between a given database and the room database. But I've discovered this solution prevents foreign key constraints in SQL Server 2005. It seems silly to duplicate one table for each app and keep those multiple copies synchronized. Should I just leave everything in one large DB, or is there something else I can do to separate the tables without losing FK constraints?

    Read the article

  • Materialized Query Table in SQL Server 2005

    - by Azho KG
    In DB2 there is support for Materialized Query Tables (MQTs). Basically, you write a query and create an MQT. The difference from a view is that the query is pre-executed and the resulting data is stored in the MQT, and there are options for when to refresh/synchronize the MQT with the base tables. I want the same functionality in SQL Server. Is there a way to achieve the same result? I have tables with millions of rows, and I want to show a summary (like total number of members, total expense, etc.) in a dashboard. I don't want to compute the counts every time a user opens the dashboard; instead I want to store them in a table and have that table refreshed each night. Any hints, answers, suggestions and ideas are welcome. Thanks.
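
    The closest built-in analogue in SQL Server is an indexed view, which is maintained on every write rather than refreshed nightly (a scheduled job that rebuilds a plain summary table is the alternative that matches the nightly requirement). A minimal sketch, with assumed table and column names:

        CREATE VIEW dbo.vMemberSummary
        WITH SCHEMABINDING                      -- required for indexed views
        AS
        SELECT MemberType,
               COUNT_BIG(*) AS TotalMembers,    -- COUNT_BIG(*) is mandatory with GROUP BY
               SUM(Expense) AS TotalExpense     -- Expense assumed NOT NULL, another indexed-view rule
        FROM dbo.Member
        GROUP BY MemberType;
        GO
        -- materializes the view's result set
        CREATE UNIQUE CLUSTERED INDEX IX_vMemberSummary
            ON dbo.vMemberSummary (MemberType);
        -- on non-Enterprise editions, query it with WITH (NOEXPAND) to use the index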

    Read the article

  • PHP: Writing non-english characters to XML - encoding problem

    - by Dean
    Hello, I wrote a small PHP script to edit the site news XML file. I used DOM to manipulate the XML (loading, writing, editing). It works fine when writing English characters, but when non-English characters are written, PHP throws an error when trying to load the file. If I manually type non-English characters into the file it's loaded perfectly fine, but if PHP writes the non-English characters the encoding goes wrong, although I specified the utf-8 encoding. Any help is appreciated.

    Errors:

        Warning: DOMDocument::load() [domdocument.load]: Entity 'times' not defined in filepath
        Warning: DOMDocument::load() [domdocument.load]: Input is not proper UTF-8, indicate encoding ! Bytes: 0x91 0x26 0x74 0x69 in filepath

    Here are the functions responsible for loading and saving the file (self-explanatory):

        function get_tags_from_xml(){ // Load news entries from XML file for display
            $errors = Array();
            if(!$xml_file = load_news_file()){ // Load file
                // String indicates error presence
                $errors = "file not found";
                return $errors;
            }
            $taglist = $xml_file->getElementsByTagName("text");
            return $taglist;
        }

        function set_news_lang(){ // Sets the news language
            global $news_lang;
            if($_POST["news-lang"]){
                $news_lang = htmlentities($_POST["news-lang"]);
            }
            elseif($_GET["news-lang"]){
                $news_lang = htmlentities($_GET["news-lang"]);
            }
            else{
                $news_lang = "he";
            }
        }

        function load_news_file(){ // Load XML news file for proccessing, depending on language
            global $news_lang;
            $doc = new DOMDocument('1.0','utf-8'); // Create new XML document
            $doc->load("news_{$news_lang}.xml"); // Load news file by language
            $doc->formatOutput = true; // Nicely format the file
            return $doc;
        }

        function save_news_file($doc){ // Save XML news file, depending on language
            global $news_lang;
            $doc->saveXML($doc->documentElement);
            $doc->save("news_{$news_lang}.xml");
        }

    Here is the code for writing to XML (add news):

        <?php ob_start()?>
        <?php include("include/xml_functions.php")?>
        <?php include("../include/functions.php")?>
        <?php get_lang();?>
        <?php
        //TODO: ADD USER AUTHENTICATION!
        if(isset($_POST["news"]) && isset($_POST["news-lang"])){
            set_news_lang();
            $news = htmlentities($_POST["news"]);
            $xml_doc = load_news_file();
            $news_list = $xml_doc->getElementsByTagName("text"); // Get all existing news from file
            $doc_root_element = $xml_doc->getElementsByTagName("news")->item(0); // Get the root element of the new XML document
            $new_news_entry = $xml_doc->createElement("text",$news); // Create the submited news entry
            $doc_root_element->appendChild($new_news_entry); // Append submited news entry
            $xml_doc->appendChild($doc_root_element);
            save_news_file($xml_doc);
            header("Location: /cpanel/index.php?lang={$lang}&news-lang={$news_lang}");
        }
        else{
            header("Location: /cpanel/index.php?lang={$lang}&news-lang={$news_lang}");
        }
        ?>
        <?php ob_end_flush()?>

    Read the article

  • Minimizing SQL queries using join with one-to-many relationship

    - by Brian
    So let me preface this by saying that I'm not an SQL wizard by any means. What I want to do is simple as a concept, but has presented me with a small challenge when trying to minimize the amount of database queries I'm performing. Let's say I have a table of departments. Within each department is a list of employees. What is the most efficient way of listing all the departments and which employees are in each department? So for example if I have a department table with:

        id  name
        1   sales
        2   marketing

    And a people table with:

        id  department_id  name
        1   1              Tom
        2   1              Bill
        3   2              Jessica
        4   1              Rachel
        5   2              John

    What is the best way to list all departments and all employees for each department, like so:

        Sales
          Tom
          Bill
          Rachel
        Marketing
          Jessica
          John

    Pretend both tables are actually massive. (I want to avoid getting a list of departments, and then looping through the result and doing an individual query for each department.) Think similarly of selecting the statuses/comments in a Facebook-like system, when statuses and comments are stored in separate tables.
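
    A sketch of the usual single-query approach (table and column names taken from the samples above): one join, ordered by department, lets the application walk the result set once and start a new heading whenever the department changes.

        SELECT d.id   AS department_id,
               d.name AS department_name,
               p.name AS employee_name
        FROM department AS d
        LEFT JOIN people AS p ON p.department_id = d.id
        ORDER BY d.name, p.name;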

    Read the article

  • MySQL - Finding time overlaps

    - by Jude
    Hi, I have 2 tables in the database with the following attributes:

        Booking
        =======
        booking_id
        booking_start
        booking_end

        resource_booked
        ===============
        booking_id
        resource_id

    The second table is an associative entity between "Booking" and "Resource" (i.e., 1 booking can contain many resources). The attributes booking_start and booking_end are timestamps with date and time in them. How might I be able to find out, for each resource_id in resource_booked, whether the date/time overlaps or clashes with other bookings of the same resource_id? I was doodling the answer on paper, pictorially, to see if it might help me visualize how I could solve this, and I got this:

    1. Join the 2 tables (Booking, Booked_resource) into one table with the 4 attributes needed.
    2. Follow the answer suggested here: http://stackoverflow.com/questions/689458/find-overlapping-date-time-rows-within-one-table

    I did step 1 but step 2 is leaving me baffled! I would really appreciate any help on this! Thanks!
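
    For step 2, a sketch of the overlap test applied per resource (two bookings of the same resource clash when each one starts before the other ends); names follow the schema above:

        SELECT rb1.resource_id,
               b1.booking_id AS booking_a,
               b2.booking_id AS booking_b
        FROM resource_booked rb1
        JOIN Booking b1 ON b1.booking_id = rb1.booking_id
        JOIN resource_booked rb2 ON rb2.resource_id = rb1.resource_id
                                AND rb2.booking_id > rb1.booking_id   -- avoid self-pairs and duplicates
        JOIN Booking b2 ON b2.booking_id = rb2.booking_id
        WHERE b1.booking_start < b2.booking_end
          AND b2.booking_start < b1.booking_end;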

    Read the article

  • How To Make NHibernate Automatically change an "Updated" column

    - by IanT8
    I am applying NHibernate to an existing project, where tables and columns are already defined and fixed. The database is MS SQL Server 2008; NHibernate 2.1.2. Many tables have a column of type timestamp named "ReplicationID", and also a column of type datetime named "UpdatedDT". I understand I might be able to use an element of the mapping file to describe the "ReplicationID" column, which will allow NHibernate to manage that column. Is it possible to make NHibernate automatically update the UpdatedDT column when the row is updated? I suppose I could try mapping the UpdatedDT property to be of type timestamp, but that has other side effects.
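
    If a mapping-level solution proves awkward, one database-side fallback (not an NHibernate feature; table and key names here are placeholders) is an AFTER UPDATE trigger that stamps the column itself:

        CREATE TRIGGER trg_SomeTable_SetUpdatedDT
        ON dbo.SomeTable
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- stamp only the rows touched by the update
            UPDATE t
            SET UpdatedDT = GETDATE()
            FROM dbo.SomeTable AS t
            JOIN inserted AS i ON i.ID = t.ID;
        END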

    Read the article

  • Using views as a data interface between modules in a database

    - by Stefan
    Hello, I am working on the database layout of a straightforward small database in MySQL. We want to modularize this system in order to have more flexibility for the different implementations we are going to make. Now, the idea is to have one module in the database (simply a group of tables with constraints between them) pass its data to the next module via views. In this way, changes in one module would not affect the other ones, as we can make sure in the view that the right data is present there at any time, although the underlying structure of tables might be different. The structure of the app handling the database would likewise be modularized. Is this something that is sometimes done? On the technical side, as I understand it views can't have primary keys - how would I then address such a view? What other issues should be considered?
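
    A minimal sketch of the idea in MySQL (module, table and column names invented for illustration): the consuming module only ever queries the view, so the owning module can reshape its tables as long as the view keeps exposing the same columns, including a stable key column to address rows by.

        -- inside module A: internal tables can be reshaped freely
        CREATE VIEW module_a_customers AS
        SELECT c.id          AS customer_id,   -- exposed key column
               c.full_name   AS customer_name,
               r.region_code AS region
        FROM a_customer AS c
        JOIN a_region   AS r ON r.id = c.region_id;

        -- module B depends only on the view's contract
        SELECT customer_id, customer_name
        FROM module_a_customers
        WHERE region = 'EU';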

    Read the article

  • How to migrate large amounts of data from old database to new

    - by adam0101
    I need to move a huge amount of data from a couple tables in an old database to a couple different tables in a new database. The databases are SQL Server 2005 and are on the same box and sql server instance. I was told that if I try to do it all in one shot that the transaction log would fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.
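
    The log cannot be disabled per table, but one common way to keep it manageable is to copy in batches so each implicit transaction stays small. A rough SQL Server 2005 sketch with placeholder database, table and column names:

        -- assumes TargetTable is keyed on Id; re-runnable until nothing is left to copy
        WHILE 1 = 1
        BEGIN
            INSERT INTO NewDb.dbo.TargetTable (Id, Col1, Col2)
            SELECT TOP (10000) s.Id, s.Col1, s.Col2
            FROM OldDb.dbo.SourceTable AS s
            WHERE NOT EXISTS (SELECT 1 FROM NewDb.dbo.TargetTable AS t WHERE t.Id = s.Id);

            IF @@ROWCOUNT = 0 BREAK;   -- done
        END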

    Read the article

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs... rather than make a huge select statement, I create temp table, insert the ids into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction, to ensure the temp table is deleted when I'm finished. My question is... what happens when I commit such a transaction vs. let it roll it back? Performance-wise... does the DB engine have to do more work to roll back the transaction vs committing it? Is there even a difference since the only modifications are done to a temp table? Related question here, but doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
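
    For reference, a sketch of the pattern described (SQL Server #temp-table syntax assumed, names invented); the only writes touch the temp table, which lives in tempdb and disappears either way:

        BEGIN TRAN;

        CREATE TABLE #ids (id INT PRIMARY KEY);
        INSERT INTO #ids (id) VALUES (101);   -- hundreds of ids in practice
        INSERT INTO #ids (id) VALUES (102);
        INSERT INTO #ids (id) VALUES (103);

        SELECT t.*
        FROM dbo.SomeTable AS t
        JOIN #ids AS i ON i.id = t.id;

        DROP TABLE #ids;
        COMMIT;   -- or ROLLBACK; no permanent tables were modified in either case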

    Read the article

  • Is using Natural Join or Implicit column names not a good practice when writing SQL in a programming

    - by Jian Lin
    When we use NATURAL JOIN, we are joining the tables on the columns that have the same names in both tables. But what if we write it in PHP and then the DBA adds some more fields to both tables - then the natural join can break? The same goes for INSERT: if we do

        insert into gifts values (NULL, "chocolate", "choco.jpg", now());

    then it will break the code, as well as contaminate the table, when the DBA adds some fields to the table (for example as column 2 or 3). So it is always best to spell out the column names when SQL statements are written inside a programming language and stored in a file in a big project.
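
    A sketch of the defensive versions of both statements (the gifts column names and the second table are guesses for illustration only): new columns with defaults then leave existing statements untouched.

        -- explicit column list instead of positional VALUES
        INSERT INTO gifts (gift_name, image_file, created_at)
        VALUES ('chocolate', 'choco.jpg', NOW());

        -- explicit ON clause instead of NATURAL JOIN, so a newly added column with a
        -- matching name in both tables cannot silently change the join condition
        SELECT g.gift_name, s.quantity
        FROM gifts AS g
        JOIN gift_stock AS s ON s.gift_id = g.id;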

    Read the article

  • sql select statement with a group by

    - by user85116
    I have data in 2 tables, and I want to create a report.

        Table A:
        tableAID (primary key)
        name

        Table B:
        tableBID (primary key)
        grade
        tableAID (foreign key, references Table A)

    There is much more to both tables, but those are the relevant columns. The query I want to run, conceptually, is this:

        select TableA.name, avg(TableB.grade)
        where TableB.tableAID = TableA.tableAID

    The problem of course is that I'm using an aggregate function (avg), and I can rewrite it like this:

        select avg(grade), tableAID
        from TableB
        group by tableAID

    but then I only get the ID of TableA, whereas I really need that name column which appears in TableA, not just the ID. Is it possible to write a query to do this in one statement, or would I first need to execute the second query I listed, get the list of id's, then query each record in TableA for the name column? It seems to me I'm missing something obvious here, but I'm (quite obviously) not an SQL guru...
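
    A sketch of the one-statement version using the columns described: join first, then group, and include the name alongside the key in the GROUP BY so the query stays portable.

        SELECT a.tableAID,
               a.name,
               AVG(b.grade) AS avg_grade
        FROM TableA AS a
        JOIN TableB AS b ON b.tableAID = a.tableAID
        GROUP BY a.tableAID, a.name;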

    Read the article

  • MySQL - display rows of names and addresses grouped by name, where name occurs more than once

    - by Stoob
    I have two tables, "name" and "address". I would like to list the last_name and joined address.street_address of all last_name values in table "name" that occur more than once in table "name". The two tables are joined on the column "name_id". The desired output would appear like so:

        213 | smith | 123 bluebird        |
        14  | smith | 456 first ave       |
        718 | smith | 12 san antonia st.  |
        244 | jones | 78 third ave # 45   |
        98  | jones | 18177 toronto place |

    Note that if the last_name "abernathy" appears only once in table "name", then "abernathy" should not be included in the result. This is what I came up with so far:

        SELECT name.name_id, name.last_name, address.street_address, count(*)
        FROM `name`
        JOIN `address` ON name.name_id = address.name_id
        GROUP BY `last_name`
        HAVING count(*) > 1

    However, this produces only one row per last name. I'd like all the last names listed. I know I am missing something simple. Any help is appreciated, thanks!
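
    One sketch of an approach (using the tables and columns above): do the duplicate check in a derived table, then join it back so every matching row survives instead of being collapsed by the GROUP BY.

        SELECT n.name_id, n.last_name, a.street_address
        FROM `name` AS n
        JOIN `address` AS a ON a.name_id = n.name_id
        JOIN (
            SELECT last_name
            FROM `name`
            GROUP BY last_name
            HAVING COUNT(*) > 1
        ) AS dup ON dup.last_name = n.last_name
        ORDER BY n.last_name, n.name_id;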

    Read the article

  • Please suggest some alternatives to Drupal

    - by abovesun
    Drupal proposes a completely different approach to web development (compared with RoR-like frameworks), and it is extremely good from a development-speed perspective. For example, it is quite easy to clone 90% of Stack Overflow's functionality using Drupal. But it has several big drawbacks: it is f''cking slow (100-400 requests per page); the db structure is very complicated, needing at least 2 tables for each content (entity) type, and CCK fields very easily generate tons of new db tables; it is anti-object-oriented, rather aspect-oriented; the "view" layer implementation is bad, with no straightforward layouts; and so on. After all these items I can say I like Drupal, but I would like something similar, more elegant and more object-oriented. Probably something like http://drupy.net/ - a Drupal emulation on top of Django. P.S. I wrote this question not to start another holy war; just write if you know an alternative that uses a similarly declarative approach.

    Read the article

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning to build a database project. One of the tables has a lot of attributes. My question is: what is better, to divide the class into 2 separate tables or to put all of the attributes into one table? Below is an example:

        create table User (id, name, surname, ..., show_name, show_photos, ...)

    or

        create table User (id, name, surname, ...)
        create table UserPrivacy (usr_id, show_name, show_photos, ...)

    The performance I suppose is similar, since I can use an index.
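
    If the split wins out, a one-to-one layout usually shares the primary key, e.g. this sketch (engine-agnostic, names adapted from the question; adjust types such as BOOLEAN to whatever the target database supports):

        CREATE TABLE user_account (
            id      INT PRIMARY KEY,
            name    VARCHAR(100),
            surname VARCHAR(100)
        );

        CREATE TABLE user_privacy (
            user_id     INT PRIMARY KEY,      -- same key: at most one privacy row per user
            show_name   BOOLEAN NOT NULL,
            show_photos BOOLEAN NOT NULL,
            FOREIGN KEY (user_id) REFERENCES user_account (id)
        );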

    Read the article

  • Foreign Key Relationships and "belongs to many"

    - by jan
    I have the following model:

        S belongs to T
        T has many S
        A, B, C, D, E (etc) have 1 T each, so the T should belong to each of A, B, C, D, E (etc)

    At first I set up my foreign keys so that in A, fk_a_t would be the foreign key on A.t to T(id), in B it'd be fk_b_t, etc. Everything looks fine in my UML (using MySQLWorkBench), but generating the yii models results in it thinking that T has many A, B, C, D (etc), which to me is the reverse. It sounds to me like either I need to have A_T, B_T, C_T (etc) tables, but this would be a pain as there are a lot of tables that have this relationship. I've also googled that the better way to do this would be some sort of behavior, such that A, B, C, D (etc) can behave as a T, but I'm not clear on exactly how to do this (I will continue to google more on this). What do you think is the better solution? UML: Here's the DDL (auto generated). Just pretend that there is more than 3 tables referencing T.

        -- -----------------------------------------------------
        -- Table `mydb`.`T`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`T` (
            `id` INT NOT NULL AUTO_INCREMENT,
            PRIMARY KEY (`id`)
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`S`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`S` (
            `id` INT NOT NULL AUTO_INCREMENT,
            `thing` VARCHAR(45) NULL,
            `t` INT NOT NULL,
            PRIMARY KEY (`id`),
            INDEX `fk_S_T` (`id` ASC),
            CONSTRAINT `fk_S_T`
                FOREIGN KEY (`id`)
                REFERENCES `mydb`.`T` (`id`)
                ON DELETE NO ACTION
                ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`A`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`A` (
            `id` INT NOT NULL AUTO_INCREMENT,
            `T` INT NOT NULL,
            `stuff` VARCHAR(45) NULL,
            `bar` VARCHAR(45) NULL,
            `foo` VARCHAR(45) NULL,
            PRIMARY KEY (`id`),
            INDEX `fk_A_T` (`T` ASC),
            CONSTRAINT `fk_A_T`
                FOREIGN KEY (`T`)
                REFERENCES `mydb`.`T` (`id`)
                ON DELETE NO ACTION
                ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`B`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`B` (
            `id` INT NOT NULL AUTO_INCREMENT,
            `T` INT NOT NULL,
            `stuff2` VARCHAR(45) NULL,
            `foobar` VARCHAR(45) NULL,
            `other` VARCHAR(45) NULL,
            PRIMARY KEY (`id`),
            INDEX `fk_A_T` (`T` ASC),
            CONSTRAINT `fk_A_T`
                FOREIGN KEY (`T`)
                REFERENCES `mydb`.`T` (`id`)
                ON DELETE NO ACTION
                ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `mydb`.`C`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `mydb`.`C` (
            `id` INT NOT NULL AUTO_INCREMENT,
            `T` INT NOT NULL,
            `stuff3` VARCHAR(45) NULL,
            `foobar2` VARCHAR(45) NULL,
            `other4` VARCHAR(45) NULL,
            PRIMARY KEY (`id`),
            INDEX `fk_A_T` (`T` ASC),
            CONSTRAINT `fk_A_T`
                FOREIGN KEY (`T`)
                REFERENCES `mydb`.`T` (`id`)
                ON DELETE NO ACTION
                ON UPDATE NO ACTION
        ) ENGINE = InnoDB;

    Read the article
