Search Results

Search found 36863 results on 1475 pages for 'database structure'.


  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two scenarios (simplified, and assuming a MySQL db):

    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use them to compute an average on the product display page: products(productID, productName, voteCount, voteSum). Pros: I only need to access one table, and thus execute one query, to display both product data and ratings. Cons: Write operations hit a table whose original purpose is only to furnish product data.

    Scenario 2: Create an additional table to store ratings: products(productID, productName); ratings(productID, voteCount, voteSum). Pros: Ratings are isolated in a separate table, leaving the products table to furnish data on available products. Cons: I have to execute two separate queries on each product page request (one for data and another for ratings).

    In terms of performance, which approach is better: letting users execute an occasional write query against a table that handles hundreds of read requests, or executing two queries on every product page but isolating the write query in a separate table? I'm a novice at database development and often find myself struggling with simple questions such as these. Many thanks,
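
    For illustration, a minimal sketch of Scenario 2 (MySQL assumed; table and column names taken from the question). Note that a LEFT JOIN collapses the "two queries" con into a single query:

        CREATE TABLE products (
            productID   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            productName VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE ratings (
            productID INT UNSIGNED NOT NULL PRIMARY KEY,
            voteCount INT UNSIGNED NOT NULL DEFAULT 0,
            voteSum   INT UNSIGNED NOT NULL DEFAULT 0,
            FOREIGN KEY (productID) REFERENCES products (productID)
        ) ENGINE=InnoDB;

        -- One query serves the product page; NULLIF avoids division by zero
        -- for products with no votes yet.
        SELECT p.productID, p.productName,
               r.voteSum / NULLIF(r.voteCount, 0) AS avgRating
        FROM   products p
        LEFT JOIN ratings r ON r.productID = p.productID
        WHERE  p.productID = 42;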


  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information, such as index stats), and the biggest table has 1.1 million records taking up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table, I don't understand why the database wouldn't be around 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%, and the log mode is set to simple. At this point my main concern is that I seem to have 19GB of unaccounted-for used space. Is there something else I should look at?

    Normally I wouldn't care and would make this a passive research project, except that this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week.

    sp_spaceused reports the following:

        database_name          database_size   unallocated space
        Navigator-Production   19184.56 MB     3.02 MB

        reserved      data          index_size   unused
        19640872 KB   19512112 KB   108184 KB    20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here); they all report the same information shown above or below. The script I am using is from SQLTeam. Here is the header info:

        * BigTables.sql
        * Bill Graziano (SQLTeam.com)
        * graz@<email removed>
        * v1.11

    The top few tables show this (table, rows, reserved, data, index, unused, etc.):

        Activity          1143639   131 MB   89 MB   41768 KB   1648 KB    46%   1%
        EventAttendance    883261    90 MB   58 MB   32264 KB    328 KB    54%   0%
        Person             113437    31 MB   15 MB   15752 KB    912 KB   103%   3%
        HouseholdMember    113443    12 MB    6 MB    5224 KB    432 KB    82%   4%
        PostalAddress       48870     8 MB    6 MB    2200 KB    280 KB    36%   3%

    The rest of the tables are the same size or smaller, and there are no more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything, and I ran the DBCC shrink command as well as updating the usage before and after, over and over. One interesting thing I found: when I restarted the server, confirmed no one was using it, and went to run the shrink (no maintenance procs are running; this is a very new application, under a week old), every now and then it would report something about data having changed. Googling yielded too few useful answers, with the obvious ones not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows, and even if those were the biggest rows, I'd imagine that wouldn't amount to more than 100MB. When I go to shrink via the GUI, it reports 0% available to shrink, indicating that it's already as small as it thinks it can go.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

        Activity   1143639   134488 KB   91072 KB   41768 KB   1648 KB

    The fill factor was 90, and all primary keys are ints. Here is the command I used to update usage: DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

        Image   111975   2407773   19262184

    It appears the Image table believes it's the 19GB portion, but I don't understand what this means. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated might be the case. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: After further research, I think I'm just going to have to deal with it. The images have been resized to roughly 2-5KB each; on a normal file system they wouldn't consume much space, but on SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
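
    A hedged diagnostic sketch for SQL Server 2005 (only system catalog views are assumed): break each table's reserved space down by allocation type. LOB_DATA is the bucket that image/text columns land in, which is exactly the space that per-table scripts counting only in-row pages tend to miss:

        SELECT o.name AS table_name,
               au.type_desc,      -- IN_ROW_DATA, LOB_DATA or ROW_OVERFLOW_DATA
               SUM(au.total_pages) * 8 / 1024 AS reserved_mb
        FROM   sys.allocation_units AS au
        JOIN   sys.partitions AS p
               ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
               OR (au.type = 2       AND au.container_id = p.partition_id)
        JOIN   sys.objects AS o ON o.object_id = p.object_id
        WHERE  o.is_ms_shipped = 0
        GROUP  BY o.name, au.type_desc
        ORDER  BY reserved_mb DESC;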


  • Best XML Based Database

    - by monmonja
    I've been assigned to develop a system where we receive XML from multiple sources (millions of XML documents) and put them in some kind of database. Judging from the XML I would receive, there won't be any concrete structure, even for documents from the same source. For this reason I don't think I can suggest an RDBMS, and I'm currently looking at NoSQL databases. We need a system that can do CRUD and is fast on reads. I've been looking at MarkLogic and eXist, which are both XML-based NoSQL databases. Has anyone had experience with them? Any other suggestions? Thanks


  • Php PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for Database Abstraction & Factory Methods. For background, my site is a special-interest social networking community currently in beta mode. Currently, I've started moving my old code for object retrieval to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure separated in each function/method. Questions:

    1.) Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2.) Can PEAR interface with the MySQLi prepared statements I currently use?
    3.) Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
    4.) Will it slow down my site once I have a significantly large member base?


  • Three level database - foreign keys

    - by poke
    I have a three-level database with the following structure (simplified to show only the primary keys):

        Table A: a_id
        Table B: a_id, b_id
        Table C: a_id, b_id, c_id

    So possible values for table C would be something like this:

        a_id  b_id  c_id
           1     1     1
           1     1     2
           1     1     3
           1     2     1
           1     2     2
           2     1     1
           2     2     1
           2     2     2
         ...

    I am now unsure how the foreign keys should be set, or whether they should be set on the primary keys at all. My idea was to have one foreign key on table B, B.a_id -> A.a_id, and two foreign keys on C: C.a_id -> A.a_id and (C.a_id, C.b_id) -> (B.a_id, B.b_id). Is that the way I should set up the foreign keys? Is the foreign key from C to A necessary? Do I even need foreign keys at all, given that all those columns are part of the primary keys? Thanks.
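
    A hedged sketch of the proposed setup in generic SQL (engine-specific options omitted), with the redundant C -> A constraint dropped:

        CREATE TABLE a (
            a_id INT NOT NULL,
            PRIMARY KEY (a_id)
        );

        CREATE TABLE b (
            a_id INT NOT NULL,
            b_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id),
            FOREIGN KEY (a_id) REFERENCES a (a_id)
        );

        CREATE TABLE c (
            a_id INT NOT NULL,
            b_id INT NOT NULL,
            c_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id, c_id),
            -- (a_id, b_id) must exist in b, and b.a_id must exist in a,
            -- so a separate c(a_id) -> a(a_id) constraint is already implied.
            FOREIGN KEY (a_id, b_id) REFERENCES b (a_id, b_id)
        );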


  • Order database results by bayesian rating

    - by One Trick Pony
    I'm not sure this is even possible, but I need confirmation before doing it the "ugly" way :) The "results" are posts stored in a database like this: the posts table contains all the important stuff (the ID, the title, the content), and the post meta table contains additional post data, like the rating (this_rating) and the number of votes (this_num_votes). The meta data is stored in pairs; the table has 3 columns: post ID / key / value. It's basically the WordPress table structure. What I want is to pull out the highest-rated posts, sorted by this formula:

        br = ( (avg_num_votes * avg_rating) + (this_num_votes * this_rating) )
             / (avg_num_votes + this_num_votes)

    which I stole from here. avg_num_votes and avg_rating are known variables (they get updated on each vote), so they don't need to be calculated. Can this be done with a MySQL query, or do I need to fetch all the posts and do the sorting in PHP?
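
    It can be done in a single MySQL query. A hedged sketch, assuming generic posts/postmeta table names for the structure described, with :avg_num_votes and :avg_rating standing in for the two known values that the PHP code substitutes:

        SELECT p.ID, p.post_title,
               ( (:avg_num_votes * :avg_rating) + (v.meta_value * r.meta_value) )
               / (:avg_num_votes + v.meta_value) AS bayesian_rating
        FROM   posts p
        JOIN   postmeta v ON v.post_id = p.ID AND v.meta_key = 'this_num_votes'
        JOIN   postmeta r ON r.post_id = p.ID AND r.meta_key = 'this_rating'
        ORDER  BY bayesian_rating DESC
        LIMIT  10;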


  • Database structure for versioning and multiple languages

    - by phobia
    Hi, I'm wondering how to best solve the issue of content existing in multiple versions and multiple languages. An image of my current structure can be seen here: http://i46.tinypic.com/72fx3k.png Each piece of content can only have one active version per language, and that constraint is what I'm not sure how to best enforce. Right now I have an "active" column on the contentversions table, which means that for each change of active version I have to run one update to set active = false on all versions and then another update to set active = true for the version in question. Any thoughts? :)
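
    A hedged sketch of the two-update pattern the question describes, wrapped in a transaction so the switch is atomic (column names are guesses from the description):

        START TRANSACTION;

        UPDATE contentversions
        SET    active = 0
        WHERE  content_id = 42 AND language_id = 1;

        UPDATE contentversions
        SET    active = 1
        WHERE  version_id = 1234;   -- the version being promoted

        COMMIT;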


  • How to use SOCI C++ Database library?

    - by NeDark
    I'm trying to use SOCI in my program but I don't know how. I'm using C++ on Linux, in a NetBeans project. I followed the steps at http://soci.sourceforge.net/doc/structure.html to install it, and I tried copying soci.h from /src/core and soci-mysql.h from /src/backends/mysql into my project, but that gives a compilation error (these headers include other SOCI files, and it seems wrong to copy every file into the directory...). I have read the guide several times but I don't understand what I'm doing wrong; the examples only include these two headers. Thanks. Edit: I have given more information in a comment below the answer. I don't know what steps I have to take to set up SOCI.


  • Database normalization and duplicate values

    - by bretddog
    Consider a Parent / Child / GrandChild structure in a database table schema, or an even deeper hierarchy, all within the same aggregate. One table, DAYS, keeps a single row per day and has a "Date" field. This is the root table, or maybe a child of the root, and no row can ever be deleted from it. In this case, however complex my table schema is, and however far away in the hierarchy any other table sits, is there any reason for that other table to hold a Date value of its own? Can't it instead just have a FK to the DAYS table? I'm obviously assuming that none of these date fields are created before the corresponding date exists in the DAYS table. I'm thinking only of the date part as relevant here, not the time part (I'm not sure whether all databases can store the two individually; that's maybe relevant, but not really the focus of the question).
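
    A hedged sketch of the idea in generic SQL (names invented): descendants carry only a key into DAYS, never their own DATE column:

        CREATE TABLE days (
            day_id INT  NOT NULL PRIMARY KEY,
            day    DATE NOT NULL UNIQUE
        );

        CREATE TABLE grandchild (
            grandchild_id INT NOT NULL PRIMARY KEY,
            day_id        INT NOT NULL,
            FOREIGN KEY (day_id) REFERENCES days (day_id)
        );

    A common variation is to make the DATE column itself the primary key of DAYS, which keeps the value readable without a join.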


  • PostgreSQL lots of writes

    - by strife911
    Hi, I am using PostgreSQL for a scientific application (unsupervised clustering). The Python program is multi-threaded, so that each thread manages its own postmaster process (one per core); hence, there is a lot of concurrency. Each thread/process loops infinitely through two SQL queries: the first is for reading, the second is for writing. The read operation touches 500 times as many rows as the write operation does. Here is the output of dstat:

        ----total-cpu-usage---- ------memory-usage----- -dsk/total- --paging-- --io/total-
        usr sys idl wai hiq siq| used  buff  cach  free| read  writ|  in   out| read  writ
          4   0  32  64   0   0|3599M   63M   57G 1893M|1524k   16M|   0     0|  98   2046
          1   0  35  64   0   0|3599M   63M   57G 1892M|1204k   17M|   0     0|  68   2062
          2   0  32  66   0   0|3599M   63M   57G 1890M|1132k   17M|   0     0|  62   2033
          2   1  32  65   0   0|3599M   63M   57G 1904M|1236k   18M|   0     0|  80   1994
          2   0  31  67   0   0|3599M   63M   57G 1903M|1312k   16M|   0     0|  70   1900
          2   0  37  60   0   0|3599M   63M   57G 1899M|1116k   15M|   0     0|  71   1594
          2   1  37  60   0   0|3599M   63M   57G 1898M| 448k   17M|   0     0|  39   2001
          2   0  25  72   0   0|3599M   63M   57G 1896M|1192k   17M|   0     0|  78   1946
          1   0  40  58   0   0|3599M   63M   57G 1895M| 432k   15M|   0     0|  38   1937

    I am pretty sure I could write more often than that, as I have seen it write up to 110-140M in dstat. How can I optimize this process?
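
    With the wai (I/O wait) column sitting at 58-72%, the workload looks disk-bound rather than CPU-bound. A heavily hedged sketch of the usual write-side knobs in postgresql.conf for this era of PostgreSQL; these are starting points to benchmark, not recommendations:

        shared_buffers = 4GB                  # the default is tiny relative to ~60GB of RAM
        checkpoint_segments = 32              # fewer, larger checkpoints
        checkpoint_completion_target = 0.9    # spread checkpoint I/O over time
        wal_buffers = 16MB
        synchronous_commit = off              # only if losing the last few commits in a crash is acceptable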


  • SQL Server 2005 jobs running twice in a row - using LiteSpeed

    - by Malnizzle
    Howdy! I have a SQL Server (2005) backing up to a network share, with a group of maintenance plans set up through LiteSpeed to back up different DBs. They were originally set up to run two sub-plans on different schedules for full/diff backups, and did that just fine for a couple of months. Then I added a "Clean Up" task to the sub-plans. Ever since that point, the backup creates a second .bak right after the first .bak job completes. I removed the clean-up item from the sub-plan, and it still creates two .baks when run. Both the SQL Activity Monitor and the machine's Windows application log show just one job being executed. I did the same thing to a couple of other servers backing up to the same location, and they are behaving correctly. Thoughts?


  • How to warehouse data that is not needed from MS SQL server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
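
    One hedged approach (database, table, and column names are all invented, as is the cutoff predicate): copy the cold rows into a separate archive database, back that archive up once and verify it, and only then delete the rows from the live table:

        -- SELECT ... INTO creates the archive table on the fly
        SELECT *
        INTO   ArchiveDB.dbo.BigTable_archive
        FROM   LiveDB.dbo.BigTable
        WHERE  CreatedDate < '20080101';

        BACKUP DATABASE ArchiveDB TO DISK = 'D:\Backups\ArchiveDB.bak';

        DELETE FROM LiveDB.dbo.BigTable
        WHERE  CreatedDate < '20080101';

    Since the rows never change, that archive backup only needs to be taken once and can be restored on demand, even if the restore takes a few hours.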


  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with the row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will update a row in these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57GB, I think) and only a small amount of it for shared memory (1GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (select unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
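
    (On the first question: effective_cache_size allocates nothing; it is only a planner hint describing how much data the OS file cache is expected to hold, while shared_buffers is memory PostgreSQL actually reserves for itself.) A hedged sketch of the unnest-and-multiply pattern described, with invented names, written with the set-returning unnest() in a subquery so it runs on pre-LATERAL versions of PostgreSQL:

        SELECT pk, sum(v * weight) AS weighted_sum
        FROM (
            SELECT pk,
                   weight,             -- the per-row FLOAT8 multiplier
                   unnest(vec) AS v    -- one output row per array element
            FROM   data_table
        ) AS s
        GROUP  BY pk;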


  • Column locking in innodb?

    - by mingyeow
    I know this sounds weird, but apparently one of my columns is locked:

        select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1;
        select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10;

    The first one never finishes running, while the second one completes smoothly; the only difference is the type_id. Thanks in advance for your help; I have an urgent data job to finish, and this problem is driving me crazy.
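
    InnoDB locks index rows, not columns, so the usual culprit is another open transaction holding locks on the rows that match type_id = 1. A hedged diagnostic sketch, with my_table standing in for the real table name:

        SHOW FULL PROCESSLIST;        -- is the stuck query waiting rather than working?
        SHOW ENGINE INNODB STATUS;    -- the TRANSACTIONS section lists lock waits
        -- Compare the plans the two queries actually choose:
        EXPLAIN SELECT * FROM my_table
        WHERE  type_id = 1 AND updated_at < '2010-03-14' LIMIT 1;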


  • Maintenance Plan Reporting - Append To File - Clean Up?

    - by Adam J.R. Erickson
    Background: (SQL Server 2005, Standard Ed.) I have a maintenance plan running backups, taking a full backup once a day and a t-log backup every 15 minutes. I have it set to create a text file report of each run, but that creates A LOT of files on the file server, which are hard to sort through and therefore less useful. Question: there is an option in the "Reporting and Logging" settings for appending all logs together, but how do you clean that out? If you're appending to the same log file every time, how should you make sure the file doesn't grow indefinitely? Is there a built-in function to clean out portions of appended logs, like there is for cleaning out individual old log files?


  • How do I restore a database on a remote SQL server 2005 from a local backup?

    - by MatsT
    I have been given access to (parts of) a remote SQL Server 2005 instance, with SQL Server authentication, in order to make changes to a database without involving other people who are not working on the project. The database has been created on my local machine. Is there any way to restore the remote database from a backup file on my local computer? I do not currently have access to the file system on the remote server. EDIT: To clarify, the access I have is that I can log in to the server via SQL Server Management Studio. I have one connection to my local database server and one connection to the remote server. What I basically want to do is copy the database from one connection to the other.
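
    One hedged route, assuming the remote SQL Server's service account can be granted read access to a share on the local machine (paths and logical file names below are invented; list the real ones with RESTORE FILELISTONLY):

        RESTORE FILELISTONLY
        FROM DISK = '\\myworkstation\share\MyDb.bak';

        RESTORE DATABASE MyDb
        FROM DISK = '\\myworkstation\share\MyDb.bak'
        WITH MOVE 'MyDb'     TO 'D:\SQLData\MyDb.mdf',
             MOVE 'MyDb_log' TO 'D:\SQLData\MyDb_log.ldf',
             REPLACE;

    The catch is that FROM DISK is always resolved by the server running the RESTORE, not by Management Studio, which is exactly why a purely local .bak file is not directly usable.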


  • ft_stopword_file not picked up

    - by Alex Holsgrove
    I have a VPS server with a company called Webfusion. I want to remove some or all of the FULLTEXT stopwords, because some specific words need to be searchable in my DB content. I opened /etc/mysql/my.cnf and added the line ft_stopword_file="". I restarted the mysql service, ran a repair table, and then tried my MATCH query, with no success. I ran SHOW VARIABLES LIKE 'ft_%' and it simply shows (built-in) next to the stopword file. I am running WAMP on my workstation, and while I realise this isn't configured the same as a commercial VPS, the above method worked just fine there. Could someone please offer some guidance?
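
    For reference, a hedged sketch of the full sequence MySQL documents for stopword changes, assuming a MyISAM table named articles (FULLTEXT indexes are MyISAM-only in this era of MySQL):

        -- In /etc/mysql/my.cnf, under the [mysqld] section (not [client] or [mysql]):
        --     ft_stopword_file = ''
        --     ft_min_word_len  = 4    -- words shorter than this are skipped too
        -- After restarting mysqld, existing FULLTEXT indexes must be rebuilt:
        REPAIR TABLE articles QUICK;
        -- Then verify the server actually picked the setting up:
        SHOW VARIABLES LIKE 'ft_%';

    A common gotcha is editing a my.cnf that the running server never reads, or placing the line under the wrong section header; (built-in) in the SHOW VARIABLES output means the override was not seen at startup.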


  • Multiple columns in a single index versus multiple indexes

    - by Tim Coker
    The short version of my question: what's the difference between three indexes each indexing a single column and one index indexing three columns? Background follows. I'm primarily a programmer but have to do DBA work because we don't have a DBA. I'm evaluating our indexes against the queries run on a particular table. The table has 3 columns that I'm often filtering on or getting the max value of. Most of the time the queries look like

        select max(col_a) from table where col_b = 'avalue'

    or

        select col_c from table where col_b = 'avalue' and col_a = 'anothervalue'

    All columns are independently indexed. My question is: would I see any difference if I had one index covering col_b and col_a together, since they can appear in a where clause together?
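
    A hedged sketch (my_table standing in for the real table, and generic CREATE INDEX syntax): one composite index with the equality column first can serve both query shapes, which three separate single-column indexes cannot do as efficiently:

        CREATE INDEX ix_b_a ON my_table (col_b, col_a);

        -- Answered from the index alone: seek to col_b = 'avalue',
        -- then read the highest col_a within that range.
        SELECT MAX(col_a) FROM my_table WHERE col_b = 'avalue';

        -- Both predicates collapse into a single index seek.
        SELECT col_c FROM my_table
        WHERE  col_b = 'avalue' AND col_a = 'anothervalue';

    Appending col_c as a trailing column would make the second query fully covered by the index, at the cost of a larger index.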


  • Looking for Firebird GUI

    - by EAMann
    I use phpMyAdmin to manage all of my MySQL databases and SQL Server Management Studio Express to manage my MS SQL databases. Now I need to start working with Firebird, and I'm looking for a tool along the lines of Management Studio to manage those databases as well. I can be flexible with the UI and can learn a new system, so if there's something freely available that will do the trick but isn't quite the same as Management Studio, I think I could adapt. Bottom line: what free tools are available that provide an in-depth GUI for Firebird?


  • Add game mechanics through equipment?

    - by Sidar
    In a game with different weapons and armor that affect more than just player stats, how would you achieve such an effect? (These are just examples, not concrete ideas.) For instance, we could have a handgun, an uzi, and then the graviton-gun. The first two just shoot bullets; the third does more than shoot a simple projectile: it could allow the player to grab an enemy and drag it around as a meat shield. Likewise, the player could wear generic armor, but at some point gets armor that absorbs projectiles, and after absorbing enough of them can release a giant blast. All these weapons and armors have different "behaviors" that either just raise stats or actually add new mechanics. In a simple case, most guns have similar properties and changing a few settings creates a new weapon (a handgun shoots at an interval of x seconds; lower that number and you have a machine gun). That obviously breaks down once a weapon does more than shoot projectiles. I'm pretty much stuck on writing the interface structure. While weapons and armor have different purposes, both should be able to carry effects that change or add mechanics in the game world.


  • How to structure an application that combines WCF and WPF

    - by CiaranG
    I'm in the process of learning how to use WCF (Windows Communication Foundation) to allow a client/server desktop application to communicate. The application's UI will be implemented using WPF, and we will probably use SQL Server for our database. What I'm struggling with is understanding how to structure such an application. From what I've read, there are three components of a WCF application (which in the examples I've seen have existed as separate projects):

        - A WCF service
        - A WCF service host
        - A WCF service client

    My question then is: should these projects solely implement the functionality of sending/receiving data between the client and server? Would it make sense to create a separate WPF project to implement the UI for the application, so that when I need to send/receive data I can simply invoke the operations provided by the WCF projects I have created? For anyone who has built similar applications before, could you explain what worked best for you in terms of structuring the application? For example, consider a user registration page: when the user clicks the 'Register' button, the client application needs to send the data to the server. In this case, could I just invoke the methods provided in the WCF projects to send it? Also, what data structures worked best for you when sending/receiving data? My initial thought is XML containing the data; is that easy to implement? I realise that answers to this question may well be a matter of opinion, unless there are specific best practices that I'm not aware of. Thank you


  • using php to list some files in folders

    - by Terix
    I have collected many free themes from around the internet. Each of them has a screenshot.jpg or screenshot.png file in its folder. I want to scan all the folders for that file and return the full file path, to be used in an img html tag. I am not interested in partial paths or in folders that have no screenshot. For example, if my folder structure is:

        ./a/b/
        ./c/d/e/screenshot.jpg
        ./f/
        ./g/screenshot.jpg
        ./h/i/j/k/
        ./l/m/screenshot.png
        ./n/o/
        ./p/screenshot.jpg

    I want to get:

        ./c/d/e/screenshot.jpg
        ./g/screenshot.jpg
        ./l/m/screenshot.png
        ./p/screenshot.jpg

    I managed to put together a recursive function, but I only figured out how to return a nested array, so I can't get rid of what I don't need, and I miss the .png files. Can anyone help me with that? The code I have so far is this:

        function getDirectoryTree( $outerDir, $x ){
            $dirs = array_diff( scandir( $outerDir ), array( ".", ".." ) );
            $dir_array = array();
            foreach( $dirs as $d ){
                if( is_dir( $outerDir . "/" . $d ) ){
                    $dir_array[ $d ] = getDirectoryTree( $outerDir . "/" . $d, $x );
                } else {
                    if( $d == $x ) $dir_array[ $d ] = $d;
                }
            }
            return $dir_array;
        }

        $dirlist = getDirectoryTree( '.', 'screenshot.jpg' );
        print_r( $dirlist );

