Search Results

Search found 31356 results on 1255 pages for 'database backups'.

Page 322 of 1255

  • How to select the product that has the maximum price in each category?

    - by kimleng
    Below is my table:

      ProductId  ProductName  Category  Price
      1          Tiger        Beer      $12.00
      2          ABC          Beer      $13.99
      3          Anchor       Beer      $9.00
      4          Apolo        Wine      $10.88
      5          Randonal     Wine      $18.90
      6          Wisky        Wine      $30.19
      7          Coca         Beverage  $2.00
      8          Sting        Beverage  $5.00
      9          Spy          Beverage  $4.00
      10         Angkor       Beer      $12.88

    I assume I have only three categories in this table (though there could be many more), and I want to show the maximum product price for each category.
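
    A minimal sketch of one common approach, assuming the table is named Product, the columns are as shown above, and Price is stored as a numeric type (the dollar signs above suggest text, which would break MAX): find each category's maximum price, then join back to recover the matching rows. Categories with tied prices return one row per tied product.

      SELECT p.ProductId, p.ProductName, p.Category, p.Price
      FROM Product p
      JOIN (
          SELECT Category, MAX(Price) AS MaxPrice
          FROM Product
          GROUP BY Category
      ) m ON m.Category = p.Category AND m.MaxPrice = p.Price;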

    Read the article

  • Can I raise a system error in SQL Server from a stored procedure?

    - by Shantanu Gupta
    I am writing a stored procedure that uses a TRY...CATCH block. I have a unique column in a table, and when I try to insert a duplicate value it throws an exception with error number 2627. I want to do something like this:

      if (exists(select * from tblABC where col1 = 'value') = true)
          raiseError(2627)  -- raise the system error that would have been thrown if I had used an INSERT to insert the duplicate value

    And which method is better: using the INSERT and catching the error, or checking for the duplicate value before insertion with a SELECT?
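
    For reference, a hedged sketch of the insert-and-catch pattern (assuming SQL Server 2012 or later and a hypothetical unique constraint on tblABC.col1): system errors such as 2627 cannot be raised directly with RAISERROR, but a bare THROW inside the CATCH block re-raises the original error, and RAISERROR can surface a friendlier custom message instead.

      BEGIN TRY
          INSERT INTO tblABC (col1) VALUES ('value');
      END TRY
      BEGIN CATCH
          IF ERROR_NUMBER() = 2627
              RAISERROR('Duplicate value for col1.', 16, 1);  -- custom message for the duplicate case
          ELSE
              THROW;  -- re-raise the original error unchanged
      END CATCH

    On the second question: attempting the INSERT and handling the error avoids the race condition that a SELECT-then-INSERT check has when two sessions try to insert the same value at nearly the same time.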

    Read the article

  • e-commerce product data/metadata schemas

    - by Shreko
    I'm trying to figure out how a product data/metadata schema is designed. For example, how does an e-commerce site enter a product spec? Does it copy and paste from the manufacturer's spec sheet, enter it into its own fields, or something else? Here is an example, looking at the Nikon D3000 DSLR:

      Manufacturer: http://nikon.ca/en/Product.aspx?m=17300&disp=Specs
      futureshop.ca: www.futureshop.ca/en-CA/product/nikon-nikon-d3000-10-2mp-dslr-camera-with-18-55mm-lens-kit-d3000/10128435.aspx?path=865c2348a1542e848982c9dbd9253483en02
      memoryexpress.com: www.memoryexpress.com/Products/PID-MX25539%28ME%29.aspx

    They are all slightly different in ordering and in parent/child fields. What storage is used for this type of information: an RDBMS or XML?

    Read the article

  • Best way to model Customer <--> Address

    - by Jen
    Every Customer has a physical address and an optional mailing address. What is your preferred way to model this?

    Option 1. Customer has a foreign key to Address:
      Customer (id, phys_address_id, mail_address_id)
      Address (id, street, city, etc.)

    Option 2. Customer has a one-to-many relationship to Address, which contains a field to describe the address type:
      Customer (id)
      Address (id, customer_id, address_type, street, city, etc.)

    Option 3. Address information is de-normalized and stored in Customer:
      Customer (id, phys_street, phys_city, etc., mail_street, mail_city, etc.)

    One of my overriding goals is to simplify the object-relational mappings, so I'm leaning towards the first approach. What are your thoughts?
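
    A minimal sketch of Option 1 in SQL (hypothetical column types; the mailing address is left nullable so it stays optional):

      CREATE TABLE Address (
          id      INTEGER PRIMARY KEY,
          street  VARCHAR(100) NOT NULL,
          city    VARCHAR(50)  NOT NULL
          -- state, postal code, etc.
      );

      CREATE TABLE Customer (
          id               INTEGER PRIMARY KEY,
          phys_address_id  INTEGER NOT NULL REFERENCES Address(id),
          mail_address_id  INTEGER NULL     REFERENCES Address(id)
      );

    With this shape, the ORM maps each Customer to at most two Address objects through ordinary many-to-one associations.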

    Read the article

  • make db connection persistent throughout zend framework

    - by kamikaze_pilot
    I'm using Zend Framework. Currently, every time I need to use the DB, I go ahead and connect to it:

      function connect() {
          $connParams = array("host" => $host, "port" => $port, "username" => $username,
                              "password" => $password, "dbname" => $dbname);
          $db = new Zend_Db_Adapter_Pdo_Mysql($connParams);
          return $db;
      }

    So I just call the connect() function every time I need to use the DB. My question is: suppose I want to reuse $db everywhere in my site, connect only once in the very initial stage of the site load, and then close the connection right before the page is sent to the user. What would be the best practice to accomplish this? Which file in Zend should I save $db in, what method should I use to save it (a global variable?), and which file should I close the connection in?

    Read the article

  • How can several different datatypes be saved in one table?

    - by poseidon
    This is my situation: I am building an ad-like application in Django and MySQL. I am using a flexible-ad approach where we have:

    A table with ad categories (several categories such as home, furniture, cars, etc.):
      id_category
      name

    A table with details for the ad categories (home: area, square meters; car: seats, color):
      id_detail
      id_category (the category the detail describes)
      name
      type (boolean, char, int, long, etc.)

    The ad table (I am selling a house; I am selling a car):
      id_ad
      id_category
      text
      date

    A table where I plan to consolidate the details of the ads (home: area A, 500 square meters; car: 5 seats, red):
      id_detail_ad
      id_ad
      id_detail
      value

    Is this possible? Can I have one table of details for all the ads, even if the details include numbers, text, booleans, etc.? Or would I have to save them all as text and then interpret them in code accordingly? Please share your opinions. Thank you.
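
    For what it's worth, a minimal sketch of one way to keep native datatypes in the consolidated table (MySQL syntax; the referenced table and column names are placeholders for the tables described above): give the value one nullable column per type, and fill only the column matching the detail's declared type. The simpler alternative is a single text column that the application casts according to that type.

      CREATE TABLE ad_detail_value (
          id_detail_ad INT AUTO_INCREMENT PRIMARY KEY,
          id_ad        INT NOT NULL,
          id_detail    INT NOT NULL,
          value_text   TEXT       NULL,  -- used when the detail's type is char/text
          value_int    BIGINT     NULL,  -- used when the detail's type is int/long
          value_bool   TINYINT(1) NULL,  -- used when the detail's type is boolean
          FOREIGN KEY (id_ad)     REFERENCES ad (id_ad),
          FOREIGN KEY (id_detail) REFERENCES ad_category_detail (id_detail)
      );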

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns simply describe information about one of the columns; meaning, only one column has the content and the rest describe the location of that content (it's for a book). Right now, only 6,000 of the 10,000 rows have the content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

      UPDATE table(10,000)
      SET content_column = (SELECT content
                            FROM table(6,000-10,000)
                            WHERE table(10,000).id = table(6,000-10,000).id)

    It kind of works: it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
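
    A hedged sketch of the usual fix (placeholder table names big_table and small_table, assuming both share an id column): the correlated subquery returns NULL for the 6,000 rows that have no match in the second table, so restrict the UPDATE to rows that actually have a match.

      UPDATE big_table
      SET content_column = (SELECT s.content
                            FROM small_table s
                            WHERE s.id = big_table.id)
      WHERE EXISTS (SELECT 1
                    FROM small_table s
                    WHERE s.id = big_table.id);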

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL temp table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand:

      INSERT INTO #table (col1, col2, ... col200)
      VALUES (@col1, @col2, ... @col200)

    and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...

    Read the article

  • How to handle expired items?

    - by Mark
    My site allows users to post things on the site with an expiry date. Once the item has expired, it will no longer be displayed in the listings. Posts can also be closed, canceled, or completed. I think it would be nicest just to be able to check for one attribute or status ("is active") rather than having to check for [is not expired, is not completed, is not closed, is not canceled]. Handling the rest of those is easy because I can just have one "status" field which is essentially an enum, but AFAIK it's impossible to set the status to "expired" at the exact moment that time occurs. How do people typically handle this?
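
    One common approach, sketched here with assumed names (a posts table with status and expires_at columns, and 'open' standing in for "not closed, canceled, or completed"): keep the enum for the manually set states and derive "active" at query time, instead of trying to flip a flag at the exact moment of expiry.

      CREATE VIEW active_posts AS
      SELECT *
      FROM posts
      WHERE status = 'open'                   -- not closed, canceled, or completed
        AND expires_at > CURRENT_TIMESTAMP;   -- not yet expired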

    Read the article

  • Hosting an Access DB

    - by Mitciv
    Hey, so I'm inexperienced in hosting DBs and I've always had the luxury of someone else getting the DB set up. I was going to help a friend out with getting a webpage set up; I've got experience in ASP.NET MVC, so I'm going with that. They want to set up a search page that queries a DB and displays the results. My question is about getting the DB set up and hosted. They currently just have the Access DB on a local computer. There is basically only one table that would need to be queried for the search. What is the best approach to making this table/DB accessible? They would like to keep the main copy of the DB on the local machine, so copying the entire DB over to the hosted site would be time-consuming. Could the lone table needed be copied to the host by itself? Should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome. Again, I'm a total noob when it comes to hosting databases. Thanks

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [My]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but I have found the standard

      UPDATE table SET left_value = 4, right_value = 5 WHERE id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

      BEGIN;
      ...
      UPDATE ...
      UPDATE table SET left_value = 4, right_value = 5 WHERE id = 12340;
      UPDATE ...
      ...
      COMMIT;

    but again, very slow. Odd, because when I populate it with a few hundred thousand rows (with inserts), it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would take an open-source alternative to SQLite that is portable as well.)
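
    A hedged sketch of the usual remedies (my_table and its id column are placeholders, and the index only helps if id is not already the table's INTEGER PRIMARY KEY): make sure the WHERE column is indexed so each UPDATE is not a full table scan, batch the updates in one transaction, and optionally relax synchronous writes for the duration of the bulk update.

      CREATE INDEX IF NOT EXISTS idx_my_table_id ON my_table (id);
      PRAGMA synchronous = OFF;   -- optional: fewer fsyncs during the bulk update (less crash-safe)
      BEGIN;
      UPDATE my_table SET left_value = 4, right_value = 5 WHERE id = 12340;
      -- ... many more UPDATEs ...
      COMMIT;
      PRAGMA synchronous = FULL;  -- restore the default afterwards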

    Read the article

  • Can't create a MySQL query that generates 4 rows for each row in the table it references.

    - by UkraineTrain
    I need to create a MySQL query that generates 4 rows for each row in the table it references. I need some of the information in those rows to repeat and some to be different. In the table, each row stands for one day. I need to break the day up into 6-hour increments, hence the four rows for each entry. I need one column which, for each day, will have the values '12AM', '6AM', '12PM', and '6PM', and another column that will have the corresponding numeric values calculated for those entries. Thanks a lot in advance; I will really appreciate any help on this.
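
    A minimal sketch of the row-multiplying part (days, day_date, and value are hypothetical names, and dividing by 4 is only a placeholder calculation): CROSS JOIN the day table against a derived table of the four time slots, so each day row repeats once per slot.

      SELECT d.day_date,
             s.slot_label,
             d.value / 4 AS slot_value   -- placeholder for the real per-slot calculation
      FROM days d
      CROSS JOIN (SELECT 0 AS slot_hour, '12AM' AS slot_label
                  UNION ALL SELECT 6,  '6AM'
                  UNION ALL SELECT 12, '12PM'
                  UNION ALL SELECT 18, '6PM') s
      ORDER BY d.day_date, s.slot_hour;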

    Read the article

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have < Previous .. Next > links to walk through it. Are there SELECT statements that can make this simple? I have tried using the ROWID in conjunction with LIMIT, which seems OK if I am not ordering the data:

      -- This seems to work if not ordering.
      SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
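
    Two hedged sketches of ordered paging (the Name column is an assumption; any indexed ordering column works): LIMIT/OFFSET is the simplest, while keyset paging (remembering the last value of the previous page) stays fast even on deep pages.

      -- Offset paging: the third page of 10,000 rows (slows down as the offset grows)
      SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 20000;

      -- Keyset paging: the page after the last Name seen on the previous page
      SELECT * FROM Products WHERE Name > :last_name ORDER BY Name LIMIT 10000;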

    Read the article

  • 2 SELECTs or 1 JOIN query?

    - by xRobot
    I have 2 tables:

      book (id, title, age)              ---- 100 million rows
      author (id, book_id, name, born)   ---- 10 million rows

    Now, suppose I have the id of a book. I need to print this page:

      Title: mybook
      Authors: Tom, Graham, Luis, Clarke, George

    So... what is the best way to do this?

    1) A simple join:

      SELECT book.title, author.name
      FROM book, author
      WHERE (author.book_id = book.id) AND (book.id = 342)

    2) To avoid the join, I could make 2 simple queries:

      SELECT title FROM book WHERE id = 342
      SELECT name FROM author WHERE book_id = 342

    What is the most efficient way?
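
    For context, a hedged note: with an index on author.book_id (sketched below with an assumed name), either plan touches only a handful of rows for a single book, so the single join is usually the simpler and efficient choice; the two-query version mainly saves repeating the title on every author row.

      CREATE INDEX idx_author_book_id ON author (book_id);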

    Read the article

  • minimal cover for functional dependencies

    - by user2975836
    I have the following problem:

      AB -> CD
      H  -> B
      G  -> DA
      CD -> EF
      A  -> HJ
      J  -> G

    I understand the first step (break down the right-hand sides) and get the following result:

      AB -> C
      AB -> D
      H  -> B
      G  -> D
      G  -> A
      CD -> E
      CD -> F
      A  -> H
      A  -> J
      J  -> G

    I understand that A -> H and H -> B, therefore I can remove the B from AB -> C and AB -> D, to get:

      A  -> C
      A  -> D
      H  -> B
      G  -> D
      G  -> A
      CD -> E
      CD -> F
      A  -> H
      A  -> J
      J  -> G

    The step that follows is what I can't compute (reducing the left-hand side). Any help will be greatly appreciated.

    Read the article

  • SQL clone record with a unique index

    - by Milhous
    Is there a clean way of cloning a record in SQL that has an index (auto-increment)? I want to clone all the fields except the index. I currently have to enumerate every field and use that in an INSERT ... SELECT, and I would rather not explicitly list all of the fields, as they may change over time.
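
    A hedged sketch of one way to avoid hand-maintaining the column list (SQL Server 2017+ syntax for STRING_AGG; my_table, its identity column id, and the source id 42 are placeholders): build the list from INFORMATION_SCHEMA at run time, so the clone keeps working as columns are added or removed.

      DECLARE @cols nvarchar(max), @sql nvarchar(max);

      SELECT @cols = STRING_AGG(QUOTENAME(COLUMN_NAME), ', ')
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_NAME = 'my_table'
        AND COLUMN_NAME <> 'id';                -- skip the auto-increment column

      SET @sql = N'INSERT INTO my_table (' + @cols + N') '
               + N'SELECT ' + @cols + N' FROM my_table WHERE id = @src_id;';

      EXEC sp_executesql @sql, N'@src_id int', @src_id = 42;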

    Read the article

  • Solr; What does this mean?

    - by Camran
    At the end of the README.txt file, located in the example directory under Solr, I find this note:

      NOTE: This Solr example server references SolrCell jars outside of the server directory
      with statements in the solrconfig.xml. If you make a copy of this example server and wish
      to use the ExtractingRequestHandler (SolrCell), you will need to copy the required jars
      into solr/lib or update the paths to the jars in your solrconfig.xml

    What does this mean? Do I have to make some adjustment before uploading Solr to my server? Also, if you know: how does solr-nightly differ from regular Solr? The tutorial states "solr-nightly.zip", but on their download section I can't find it.

    Read the article

  • Better alternative to autonumber primary keys

    - by Comrad_Durandal
    I am looking for a better primary key than the AutoNumber data type, mainly because it's limited to a long integer, when I really just need a field containing a number or text string that will never repeat, no matter how many records are added to or deleted from the table. The problem is that I am not sure how to implement something like turning the current date and time into a hexadecimal string and using that as a unique primary key field. Am I just being too paranoid about running out of space?
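
    For scale, a hedged note: a long-integer AutoNumber allows roughly two billion positive values, which most tables never exhaust. If a key that can never repeat is still wanted, a GUID/UUID column is the usual alternative to hand-rolled date-time strings; a minimal sketch in SQL Server syntax (table and column names are placeholders):

      CREATE TABLE my_table (
          id   uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
          name nvarchar(100)    NOT NULL
      );

    In Access itself, an AutoNumber field can be given the Replication ID field size to generate GUIDs without custom code.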

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

      ObjectID  bigint
      Tag       nvarchar(50)
      Weight    float
      Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the ObjectIDs in order of the sum of the weights (so objects having both tags will be on top):

      select objectid, row_number() over (order by sum(weight) desc) as rowid
      from tags
      where tag in ('big', 'large') and type = 0
      group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
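
    A hedged sketch of an index shaped for this query (SQL Server syntax; the index name is arbitrary). The existing index on (objectid, tag, type) cannot seek on tag because tag is not its leading column; leading with the filtered columns and including weight lets the whole query be answered from the index.

      CREATE NONCLUSTERED INDEX IX_tags_tag_type
      ON tags (tag, type)
      INCLUDE (objectid, weight);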

    Read the article

  • DB Design - Linking to a parent without circular reference issues

    - by zSysop
    Hi all, I'm having trouble coming up with a solution for the following issue. Let's say I have a db that looks something like the following:

    Issue table:
      Id | Details | CreateDate | ClosedDate

    Issue Notes table:
      Id | ObjectId | Notes | NoteDate

    Issue Assignment table:
      Id | ObjectId | AssignedToId | AssignedDate

    I'd like to allow the linking of an issue to another issue. I thought about adding a column to the Issue table called ParentIssueId, which would let me link issues, but I foresee circular references occurring within the Issue table if I go through with this implementation. Is there a better way to go about doing this, and if so, how? Thanks
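
    One hedged sketch (hypothetical names): keep issue-to-issue links in a separate table so an issue can have several, forbid direct self-links with a CHECK, and detect longer cycles before insert with a recursive query or application logic, since a plain constraint cannot express them.

      CREATE TABLE IssueLink (
          ParentIssueId int NOT NULL REFERENCES Issue (Id),
          ChildIssueId  int NOT NULL REFERENCES Issue (Id),
          PRIMARY KEY (ParentIssueId, ChildIssueId),
          CHECK (ParentIssueId <> ChildIssueId)   -- blocks only a direct self-reference
      );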

    Read the article
