Search Results

Search found 8943 results on 358 pages for 'audit tables'.


  • When to use a foreign key in MySQL

    - by Mel
    Is there official guidance or a threshold to indicate when it is best practice to use a foreign key in a MySQL database? Suppose you created a table for movies. One way to do it is to integrate the producer and director data into the same table (movieID, movieName, directorName, producerName). However, suppose most directors and producers have worked on many movies. Would it be best to create two other tables for producers and directors, and use a foreign key in the movie table? When does it become best practice to do this? When many of the directors and producers appear several times in the column? Or is it best practice to employ a foreign key approach from the start? While it seems more efficient to use a foreign key, it also raises the complexity of the database. So when does the trade-off between complexity and normalization become worth it? I'm not sure if there is a threshold or a certain number of cell repetitions that makes it more sensible to use a foreign key. I'm thinking about a database that will be used by hundreds of users, many concurrently. Many thanks!
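
    A minimal sketch of the normalized layout the question is weighing, with illustrative table and column names (one common arrangement, not official MySQL guidance):

        -- Foreign keys are only enforced by the InnoDB engine in MySQL.
        CREATE TABLE directors (
            directorID   INT AUTO_INCREMENT PRIMARY KEY,
            directorName VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE producers (
            producerID   INT AUTO_INCREMENT PRIMARY KEY,
            producerName VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE movies (
            movieID    INT AUTO_INCREMENT PRIMARY KEY,
            movieName  VARCHAR(255) NOT NULL,
            directorID INT NOT NULL,
            producerID INT NOT NULL,
            FOREIGN KEY (directorID) REFERENCES directors (directorID),
            FOREIGN KEY (producerID) REFERENCES producers (producerID)
        ) ENGINE=InnoDB;

    Each director and producer is then stored once, and a name that repeats across many movies costs only an integer reference per row.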

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach: alter session set skip_unusable_indexes = true; alter index your_index unusable; [do import] alter index your_index rebuild; However, I get the following error on the first alter index statement: SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations ORA-06512: [...] 14048. 00000 - "a partition maintenance operation may not be combined with other operations" *Cause: ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or PCTFREE) which is illegal *Action: Ensure that a partition maintenance operation is the sole operation specified in ALTER TABLE or ALTER INDEX statement; operations other than those dealing with partitions, default attributes of partitioned tables/indices or specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will The problem index is defined like so: CREATE INDEX A11_IX1 ON STREETS ("SHAPE") INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX" PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2'); This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.
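
    One hedged workaround to try, assuming the vendor's spatial index may be dropped and recreated with its original PARAMETERS string: drop the domain index before the bulk operation and recreate it afterwards. DDL inside a PL/SQL block has to go through EXECUTE IMMEDIATE, so a sketch would look like this:

        BEGIN
          EXECUTE IMMEDIATE 'DROP INDEX A11_IX1';
          -- ... perform the high-volume import here ...
          EXECUTE IMMEDIATE q'[CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
            INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
            PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2')]';
        END;
        /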

    Read the article

  • css, fixed size

    - by gloris
    Hi, I need three tables (divs). The left and right sides should each occupy 50% of the free window width, and the center is fixed. Everything seems fine, but the right div drops down below the row. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <style type="text/css"> body{ margin:0; padding:0; } #left{ float: left; width: 50%; background: #FDA02E; margin-left: -300px; } #center{ float: left; width: 600px; margin-right: 300px; background: #C8FF98; } #right{ float: left; width: 50%; margin-left: -300px; background: #FDE95E; } </style> </head> <body> <div id="pag"> <div id="left"> Left </div> <div id="center"> Center </div> <div id="right"> Right </div> </div> </body> </html>

    Read the article

  • jQueryUI dialog width

    - by user35295
    Fiddle / full-screen example: I use jQuery dialog to open tables. Some of them have a large amount of text and they tend to be too long and go way off the screen. How can I make the dialog wider if the table is too long, like the first one in the fiddle? I've tried width:'auto' but it seems to just occupy the entire screen. HTML: <button class='label'>Click</button><div class='dialog'><p><table>.....</table></div> <button class='label'>Click</button><div class='dialog'><p><table>.....</table></div> JavaScript: $(document).ready(function(){ $('.label').each(function() { var dialogopen = $(this).next(".dialog"); dialogopen.dialog({width:'auto',autoOpen: false,modal: true, open: function(){ jQuery('.ui-widget-overlay').bind('click',function(){ dialogopen.dialog('close'); }) } }); $(this).click(function(){ dialogopen.dialog('open'); return false; } ); }); });

    Read the article

  • Why does this SELECT ... JOIN statement return no results?

    - by Stephen
    I have two tables: 1. tableA is a list of records with many columns. There is a timestamp column called "created". 2. tableB is used to track users in my application that have locked a record in tableA for review. It consists of four columns: id, user_id, record_id, and another timestamp column. I'm trying to select up to 10 records from tableA that have not been locked for review by anyone in tableB (I'm also filtering in the WHERE clause by a few other columns from tableA, like record status). Here's what I've come up with so far: SELECT tableA.* FROM tableA LEFT OUTER JOIN tableB ON tableA.id = tableB.record_id WHERE tableB.id = NULL AND tableA.status = 'new' AND tableA.project != 'someproject' AND tableA.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-06 23:59:59' ORDER BY tableA.created ASC LIMIT 0, 10; There are currently a few thousand records in tableA and zero records in tableB. There are definitely records that fall between those timestamps, and I've verified this with a simple SELECT * FROM tableA WHERE created BETWEEN '1999-01-01 00:00:00' AND '2010-05-06 23:59:59' The first statement above returns zero rows, and the second one returns over 2,000 rows.
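
    A likely culprit, offered as a hedged sketch rather than a confirmed diagnosis: in SQL, tableB.id = NULL is never true (any comparison with NULL yields NULL), so the anti-join filter has to use IS NULL:

        SELECT tableA.*
        FROM tableA
        LEFT OUTER JOIN tableB ON tableA.id = tableB.record_id
        WHERE tableB.id IS NULL        -- IS NULL, not = NULL
          AND tableA.status = 'new'
          AND tableA.project != 'someproject'
          AND tableA.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-06 23:59:59'
        ORDER BY tableA.created ASC
        LIMIT 0, 10;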

    Read the article

  • LINQ to SQL - database relationships won't update after submit

    - by Quantic Programming
    I have a database with the tables Users and Uploads. The important columns are: Users -> UserID; Uploads -> UploadID, UserID. The primary key in the relationship is Users -> UserID and the foreign key is Uploads -> UserID. In LINQ to SQL, I do the following operations to add an upload: var upload = new Upload(); upload.UserID = user.UserID; upload.UploadID = XXX; db.Uploads.InsertOnSubmit(upload) db.SubmitChanges(); If I do that and rerun the application (and the db object is re-built, of course), and I do something like this: foreach(var upload in user.Uploads) I get all the uploads with that user's ID (like the one added in the previous example). The problem is that my application, after adding an upload and submitting changes, doesn't update the user.Uploads collection, i.e. I don't get the newly added uploads. The user object is stored in the Session object. At first, I thought that the LINQ to SQL framework doesn't update the reference of the object, and therefore I should simply "reset" the user object with a new SQL request. I mean this: Session["user"] = db.Users.Where(u => u.UserID == user.UserID).SingleOrDefault(); (where user is the previous user) But it didn't help. Please note: after rerunning the application, user.Uploads does have the new upload! Did anyone experience this type of problem, or is it normal behavior? I am a newbie to this framework. I would gladly take any advice. Thank you!

    Read the article

  • Creating a procedure with PLSQL

    - by user1857460
    I am trying to write a PL/SQL function that implements a constraint that an employee cannot be both a driver and a mechanic at the same time; in other words, the same E# from TRKEMPLOYEE cannot be in TRKDRIVER and TRKMECHANIC at the same time. The abovementioned tables are something like the following: TRKEMPLOYEE(E# NUMBER(12) NOT NULL CONSTRAINT TRKEMPLOYEE_PKEY PRIMARY KEY(E#)) TRKDRIVER(E# NUMBER(12) NOT NULL CONSTRAINT TRKDRIVER_PKEY PRIMARY KEY(E#), CONSTRAINT TRKDRIVER_FKEY FOREIGN KEY(E#) REFERENCES TRKEMPLOYEE(E#)) TRKMECHANIC(E# NUMBER(12) NOT NULL CONSTRAINT TRKMECHANIC_PKEY PRIMARY KEY(E#), CONSTRAINT TRKMECHANIC_FKEY FOREIGN KEY(E#) REFERENCES TRKEMPLOYEE(E#)) I have attempted to write a function but keep getting a compile error at line 1, column 7. Can someone tell me why my code doesn't work? My code is as follows: CREATE OR REPLACE FUNCTION Verify() IS DECLARE E# TRKEMPLOYEE.E#%TYPE; CURSOR C1 IS SELECT E# FROM TRKEMPLOYEE; BEGIN OPEN C1; LOOP FETCH C1 INTO EMPNUM; IF(EMPNUM IN(SELECT E# FROM TRKMECHANIC )AND EMPNUM IN(SELECT E# FROM TRKDRIVER)) SELECT E#, NAME FROM TRKEMPLOYEE WHERE E#=EMPNUM; ELSE dbms_output.put_line(“OK”); ENDIF EXIT WHEN C1%NOTFOUND; END LOOP; CLOSE C1; END; / Any help would be appreciated. Thanks.
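
    For comparison, a hedged sketch of one way the check could be written so that it compiles. Assumptions: a procedure rather than a function (nothing is returned), a join instead of in-query sub-selects inside the IF, and straight quotes throughout, since the curly quotes around OK will not compile:

        CREATE OR REPLACE PROCEDURE verify_roles IS
        BEGIN
          -- list every employee who appears in both role tables
          FOR rec IN (SELECT e.e#, e.name
                        FROM trkemployee e
                        JOIN trkdriver   d ON d.e# = e.e#
                        JOIN trkmechanic m ON m.e# = e.e#)
          LOOP
            dbms_output.put_line('Employee ' || rec.e# || ' is both a driver and a mechanic');
          END LOOP;
        END;
        /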

    Read the article

  • Representing versions of objects with CakePHP

    - by user636901
    Hi people, I have just started to get into CakePHP a couple of weeks back. I have some experience with MVC frameworks, but this problem holds me back a bit. I am currently working on a model foo, containing a primary id and some attributes. Since a complete history of the changes of foo is necessary, the content of foo is saved in the table foo_content. The two tables are connected through foo_content.foo_id = foo.id, in Cake with a foo hasMany foo_content relationship. To track the versions of foo, foo_content also contains the column version, and foo itself the field currentVersion. The version is a number incremented by one every time the user updates foo. This is an older native PHP app, btw, to be rewritten on top of Cake. 9 times out of 10 in the app, the most recent version (foo.currentVersion) is the db entry that needs to be represented in the frontend. My question is simply: is there some way of representing this directly in the model? Or does this kind of logic simply need to be defined in the controller? Most grateful for your help!
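
    To make the versioning concrete, a hedged illustration of the fetch the app needs 9 times out of 10, with column names taken from the question (the actual Cake find would wrap the same join condition):

        SELECT f.id, fc.*
        FROM foo f
        JOIN foo_content fc
          ON fc.foo_id  = f.id
         AND fc.version = f.currentVersion;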

    Read the article

  • Postgresql Output column from another table

    - by muffin
    I'm using PostgreSQL; my question is specifically about querying from a table that's in another table, and I'm really having trouble with this one. In fact, I'm absolutely mentally blocked. I'll try to define the relations of the tables as much as I can. I have a table entry: each of the entries has a group_id; when they are 'advanced' to the next stage, the old entries are marked is_active = false and a new assignment is done, so C & D are advanced stages of A & B. I have another table (which acts as a record keeper) that the storage_log_id refers to; this is the storage_log table. But then I have another table, to really find out where the entries are actually stored - the storage table. To define my problem properly: each entry has a storage_log_id (but some don't have one yet), and a storage_log has a storage_id to refer to the actual table and find the storage label. The SQL query I'm trying to write should show the actual storage label instead of the log id. This is what I've done so far: select e.id, e.group_id, e.name, e.stage, s.label from operational.entry e, operational.storage_log sl, operational.storage s where e.storage_log_id = sl.id and sl.storage_id = s.id But this just returns 3 rows, showing only the ones that have the storage_log_id set; I should be able to see even those without logs, and especially the active ones. Adding e.is_active = true to the condition makes the results empty. So, yeah, I'm stuck. Need help, thanks guys!
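
    A hedged rewrite using LEFT JOINs, so entries without a storage log still appear with a NULL label (table and column names taken from the question):

        SELECT e.id, e.group_id, e.name, e.stage, s.label
        FROM operational.entry e
        LEFT JOIN operational.storage_log sl ON sl.id = e.storage_log_id
        LEFT JOIN operational.storage s      ON s.id  = sl.storage_id
        WHERE e.is_active = true;

    The comma-join form filters out any entry whose storage_log_id is NULL, which would explain both symptoms described above.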

    Read the article

  • Copy Word format into Outlook message

    - by Jaster
    Hi, I have an Outlook automation. I would like to use a Word document as a template for the message content. Let's say I have some formatted text containing tables, colors, sizes, etc. Now I'd like to copy/paste this content into an Outlook message object. I'm used to the interop stuff, I just have no idea how to copy/paste this correctly. Here is some sample code (no cleanup): String path = @"file.docx"; String savePath = @"file.msg"; Word.Application wordApp = new Word.Application(); Word.Document currentDoc = wordApp.Documents.Open(path); Word.Range range = currentDoc.Range(0, m_CurrentDoc.Characters.Count); String wordText = range.Text; oApp = new Outlook.Application(); Outlook.NameSpace ns = oApp.GetNamespace("MAPI"); ns.Logon("MailBox"); Outlook._MailItem oMsg = oApp.CreateItem(Outlook.OlItemType.olMailItem); oMsg.To = "[email protected]"; oMsg.Body = wordtext; oMsg.SaveAs(savePath); Using Outlook/Word 2007, however the Word files may still be in the 2000/2003 format (.doc). Visual Studio 2010 with .NET 4.0 (should be obvious from the sample code). Any suggestions?

    Read the article

  • Grails - Where to store properties related to domains

    - by GalmWing
    This is something I have been struggling with for some time now. The thing is: I have many (20 or so) static arrays of values. I say static because that is how I'm actually storing them, as static arrays inside some domains. For example, if I have a list of known websites, I do: class Website { ... static websites = ["web1", "web2" ...] } But I do this just while developing, because I can easily change the arrays if needed; what am I going to do when the application is ready for deployment? In my project it is very probable that, at some point, these arrays of values will change. I've been researching the matter: one can store application properties inside an external .properties file, but storing an array there is impractical, even futile, because if some array gets an additional value, the application can't recognize it until the name of the new property is added wherever it is needed. Another approach is to store this information in the database, but for some reason it seems like a waste to add 20 or more tables that will have just two columns, an id and a name. And the last option, as far as I know, would be XML, but I'm not very experienced with those. It seems Groovy has a way of creating and reading XML files relatively easily, but I don't know how difficult it would be to modify an XML whose layout is predefined in the application. Needless to say, storing them in Config.groovy is not an option since any change would require a recompile. I haven't come across a "standard" (maybe a best practice?) way of dealing with these. So the question is: where should I store these arrays?
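
    One hedged alternative to 20-odd tiny tables, sketched here with purely illustrative names and MySQL-flavoured DDL: a single key/value lookup table keyed by list name, so a new list or a new value is just another row rather than a schema change.

        CREATE TABLE list_value (
            id        BIGINT AUTO_INCREMENT PRIMARY KEY,
            list_name VARCHAR(64)  NOT NULL,
            value     VARCHAR(255) NOT NULL,
            UNIQUE (list_name, value)
        );

        -- e.g. the websites array becomes:
        -- SELECT value FROM list_value WHERE list_name = 'websites';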

    Read the article

  • How to close and open access to SQL Server 2008 in Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything). The number of users can be up to 1000. Windows Authentication is used. The question is: how do I handle modes, so that some users are allowed to work only in read-only mode, and some users won't have access to the db for some time? My ideas so far: Using a table with a mode id for every group of users that should work the same way; on Form Load the application will query that table for the mode id. Using triggers on the tables that must respect the mode; the trigger will query the mode value and reject the change if access is closed or the group is in read-only mode. I know these are not the best solutions, that's why I'm asking for your advice. There's one more point. If the mode is changed to "access-is-closed" for a group of users, that group must not be able to query the DB from that moment on. The first solution I wrote won't work for this, because a user can already be in the application at that moment and no Form Load event will fire. How can I do this? Is there an optimal solution? Thank you. Any help would be appreciated.
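
    A hedged sketch of enforcing the modes with database permissions rather than an application-side flag (SQL Server 2008 syntax; the group names are illustrative and are assumed to already exist as database users mapped to Windows groups). Statement-level permissions are checked on every query, so they affect users who are already connected; DENY CONNECT only blocks new connections.

        -- Read-only group: keep SELECT, deny writes on the dbo schema
        CREATE ROLE read_only_users;
        EXEC sp_addrolemember 'read_only_users', 'DOMAIN\ReportingGroup';
        DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO read_only_users;

        -- Group whose access is closed for a while: block new connections entirely
        DENY CONNECT TO [DOMAIN\BlockedGroup];
        -- later, when the window reopens:
        -- GRANT CONNECT TO [DOMAIN\BlockedGroup];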

    Read the article

  • Storing Number in an integer from sql Database

    - by ar31an
    I am using a database with a table RESUME and a column PageIndex in it, whose type is Number in the database, but when I want to store this PageIndex value in an integer I get the exception "Specified cast is not valid." Here is the code: string sql; string conString = "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=D:\\Deliverable4.accdb"; protected OleDbConnection rMSConnection; protected OleDbDataAdapter rMSDataAdapter; protected DataSet dataSet; protected DataTable dataTable; protected DataRow dataRow; On button click: sql = "select PageIndex from RESUME"; rMSConnection = new OleDbConnection(conString); rMSDataAdapter = new OleDbDataAdapter(sql, rMSConnection); dataSet = new DataSet("pInDex"); rMSDataAdapter.Fill(dataSet, "RESUME"); dataTable = dataSet.Tables["RESUME"]; int pIndex = (int)dataTable.Rows[0][0]; rMSConnection.Close(); if (pIndex == 0) { Response.Redirect("Create Resume-1.aspx"); } else if (pIndex == 1) { Response.Redirect("Create Resume-2.aspx"); } else if (pIndex == 2) { Response.Redirect("Create Resume-3.aspx"); } } I am getting the error on this line: int pIndex = (int)dataTable.Rows[0][0];

    Read the article

  • Problem with multiple :dependent => :destroy when multiple polymorphic is true

    - by piemesons
    I have four models: question, answer, comment and vote. Consider it the same as Stack Overflow. Question has_many comments, Answer has_many comments, Question has_many votes, Answer has_many votes, Comment has_many votes. Here are the models (only the relevant parts): class Comment < ActiveRecord::Base belongs_to :commentable, :polymorphic => true has_many :votes, :as => :votable, :dependent => :destroy end class Question < ActiveRecord::Base has_many :comments, :as => :commentable, :dependent => :destroy has_many :answers, :dependent => :destroy has_many :votes, :as => :votable, :dependent => :destroy end class Vote < ActiveRecord::Base belongs_to :votable, :polymorphic => true end class Answer < ActiveRecord::Base belongs_to :question, :counter_cache => true has_many :comments, :as => :commentable , :dependent => :destroy end Now the problem is that whenever I try to delete any question/answer/comment, it gives me an error: NoMethodError in QuestionsController#destroy undefined method `each' for 0:Fixnum If I remove this line from any of the models (question/answer/comment): has_many :votes, :as => :votable, :dependent => :destroy then it works perfectly. It seems there is a problem while deleting the records; ActiveRecord is not able to find the proper path because of the multiple joins within the tables.

    Read the article

  • [Ruby on Rails] complex model relationship

    - by siulamvictor
    I am not sure whether I am doing this correctly. I have 3 models: Account, User, and Event. Account contains a group of Users. Each User has its own username and password for login, but they can access the same Account data under the same Account. Events are created by a User, and other Users in the same Account can also read or edit them. I created the following migrations and models. User migration: class CreateUsers < ActiveRecord::Migration def self.up create_table :users do |t| t.integer :account_id t.string :username t.string :password t.timestamps end end def self.down drop_table :users end end Account migration: class CreateAccounts < ActiveRecord::Migration def self.up create_table :accounts do |t| t.string :name t.timestamps end end def self.down drop_table :accounts end end Event migration: class CreateEvents < ActiveRecord::Migration def self.up create_table :events do |t| t.integer :account_id t.integer :user_id t.string :name t.string :location t.timestamps end end def self.down drop_table :events end end Account model: class Account < ActiveRecord::Base has_many :users has_many :events end User model: class User < ActiveRecord::Base belongs_to :account end Event model: class Event < ActiveRecord::Base belongs_to :account belongs_to :user end So.... Is this setup correct? Every time a user creates a new account, the system will ask for the user information, i.e. username and password. How can I add them into the correct tables? How can I add a new event? I am sorry for such a long question. I don't really understand the Rails way of handling such a data structure. Thank you guys for answering me. :)

    Read the article

  • performance issue: difference between select s.* vs select *

    - by kamil
    Recently I had some problem in performance of my query. The thing is described here: poor Hibernate select performance comparing to running directly - how debug? After long time of struggling, I've finally discovered that the query with select prefix like: select sth.* from Something as sth... Is 300x times slower then query started this way: select * from Something as sth.. Could somebody help me, and asnwer why is that so? Some external documents on this would be really useful. The table used for testing was: SALES_UNIT table contains some basic info abot sales unit node such as name and etc. The only association is to table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID and field VALID_FROM_DTTM which is date. SALES_UNIT_RELATION contains relation PARENT-CHILD between sales unit nodes. Consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. No association with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM The actual query I've done was: select s.* from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null select * from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null Same query, both uses left join and only difference is with select.

    Read the article

  • MySQL subquery and bracketing

    - by text
    Here are my tables (field : sample value) - respondents: respondentid : 1, age : 2, gender : male; survey_questions: id : 1, question : Q1, answer : sample answer; answers: respondentid : 1, question : Q1, answer : 1 (the id of the survey question). I want to display all respondents who answered a certain survey, display all the answers, total all the answers, and group them according to age bracket. I tried using this query: SELECT res.Age, res.Gender, answer.id, answer.respondentid, SUM(CASE WHEN res.Gender='Male' THEN 1 else 0 END) AS males, SUM(CASE WHEN res.Gender='Female' THEN 1 else 0 END) AS females, CASE WHEN res.Age < 1 THEN 'age1' WHEN res.Age BETWEEN 1 AND 4 THEN 'age2' WHEN res.Age BETWEEN 4 AND 9 THEN 'age3' WHEN res.Age BETWEEN 10 AND 14 THEN 'age4' WHEN res.Age BETWEEN 15 AND 19 THEN 'age5' WHEN res.Age BETWEEN 20 AND 29 THEN 'age6' WHEN res.Age BETWEEN 30 AND 39 THEN 'age7' WHEN res.Age BETWEEN 40 AND 49 THEN 'age8' ELSE 'age9' END AS ageband FROM Respondents AS res INNER JOIN Answers as answer ON answer.respondentid=res.respondentid INNER JOIN Questions as question ON answer.Answer=question.id WHERE answer.Question='Q1' GROUP BY ageband ORDER BY res.Age ASC I was able to get the data, but the listing of all answers is not present. Do I have to put a subquery SELECT into my current SELECT statement to show the answers? I want to produce something like this: e.g. the number of respondents is 3, ages 2, 3 and 6; Question: what are your favorite subjects? Ages 1-4: subject 1: 1, subject 2: 2, subject 3: 2, total respondents for ages 1-4 : 2. Ages 5-10: subject 1: 1, subject 2: 1, subject 3: 0, total respondents for ages 5-10 : 1.
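
    A hedged sketch of one way to get a per-answer count inside each bracket: group by both the age band and the answer text, rather than by the band alone (table and column names are taken from the question as written and may need adjusting to the real schema):

        SELECT CASE
                 WHEN res.Age < 1             THEN 'age1'
                 WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
                 WHEN res.Age BETWEEN 4 AND 9 THEN 'age3'
                 -- remaining brackets as in the original query
                 ELSE 'age9'
               END AS ageband,
               q.answer,
               COUNT(*) AS respondents
        FROM Respondents res
        INNER JOIN Answers a          ON a.respondentid = res.respondentid
        INNER JOIN survey_questions q ON q.id = a.answer
        WHERE a.question = 'Q1'
        GROUP BY ageband, q.answer
        ORDER BY ageband, q.answer;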

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes in a text file with a | separator. I currently use Java to do this. What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through 5000 lines of data fast, and when I filter it, it is still extremely fast. However, now management wants the following: they added 3 more text files with the | separator, and now want me to also read those files, match the information on certain fields, and if there is a match, display that information to the end user as well. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information at once and I have to run through 4 files in total to match the information. What can I do to make this process super fast? My creative solutions so far: Leave it this way, since Java is fast and end users can wait (probably less than 1 second). Have a background process that dumps the new data into a MySQL database every 5 minutes, since databases are extremely good at getting matching data from multiple tables. Thank you!
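
    A hedged sketch of that second option (MySQL syntax; the file, table and column names here are purely illustrative): load each pipe-separated file into its own staging table on the 5-minute refresh, then let the database do the matching with ordinary joins.

        LOAD DATA LOCAL INFILE '/data/feed_main.txt'
        INTO TABLE feed_main
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n';

        -- repeat for the three extra files, then match on the shared fields:
        SELECT m.*, a.extra_field
        FROM feed_main m
        JOIN feed_a a ON a.match_field = m.match_field;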

    Read the article

  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting the data with a symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a huge number of iterations to derive a secure enough key/IV for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that when an attacker gets his hands on a hash he can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, or the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from a password string (which is a very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would result in a random bit distribution in the key (as opposed to using the string bytes directly), and an attacker would have to brute-force the whole key range (e.g. if the key length is 256 bits he would have to try 2^256 combinations) anyway. So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except the colour scheme and site name are different, plus it's in PHP - so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different). The database is called tvguide, and it has these tables: Programmes - House M.D., Medium, Police Stop!, American Dad! The table names of the above programmes are: housemdonair, mediumonair, policestopair, americandad1. Episodes - the table names for the above programmes' episode guides are: housemdepidata, mediumepidata, policestopepidata, americandad1epidata. All of them have the following columns: id (not an auto-increment, since I wish to dynamically generate a page from this), episodename, seriesnumber, episodenumber, episodesynopsis (the four after id do exactly as stated). I have a pagination script that works; it displays 20 records per page as I want it to. This is called pmcPagination.php - but I won't post it in full since it would take up too much space. However, I'm trying to get it so that variables are filled in like this (ok, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate this!!): http://library.digiguide.com/lib/episode/741168 http://library.digiguide.com/lib/episode/714829 with the episode detail and data. My site works, but it's fairly basic, and it won't go online until my bugs are fixed. Mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456 or http://mytvguide.com/programme/123456 http://mytvguide.com/WorldInAction/123456/Documentary/ I've tried looking on Google, but am not sure how to get this TV guide script to work at its best - I think .tpl and PHP/MySQL are the way to go. Any advice anyone has on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining this project! P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • Database/Object Mapping

    - by Eric
    Hello everyone, This is a beginner question, but it's been frustrating me... I am using C#, by the way. I'd like to make a few classes, each with their own properties and methods. I would also like to have a database to store certain instances of these classes in case I would ever need to look at them again. So, for example... class Polygon { String name; Double perimiter; int numSides; public Double GetArea() { // ... } } class Circle { String name; Double radius; public void PrintName() { // ... } } Say I've got these classes. I also want a database that has the TABLES "Polygon" and "Circle" with the COLUMNS "name" "perimeter" "radius" etc. And I want an easy way to save a class instance into the database, or pull a class instance out of the database. I have previously been using MS Access for my database stuff, which I don't mind using, but I would prefer if nothing other than .NET need to be installed. I've been researching online a bit, but I wanted to get some opinions on here. I have looked at Linq-to-Sql, but it seems you need Sql-Server. Is this true? If so, I'd really rather not use it because I don't want to have to have it installed everywhere. Anway, I'm just fishing for some ideas/insights/suggestions/etc. so please help me out if you can. Thanks.

    Read the article

  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built and I'm being brought in to help with front-end work. The problem is I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this: ... == 20081220084043 CreateTimeDimension: migrating ============================== -- create_table(:time_dimension) - 0.0870s INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter) VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1) rake aborted! Could not load driver (uninitialized constant Mysql::Driver) ... I'm building on an MBP with Snow Leopard. I've installed Xcode from the disk that comes with the Mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition) The mysql gem is installed using: sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config The migration creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great.

    Read the article

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid is a varchar, not an integer - that's OK (please do not mention that's not very good, I can't change it). So, there are 3 tables: members, orders and order_items. Orders and order_items are below: CREATE TABLE `orders` ( `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT, `memberid` VARCHAR( 20 ), `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP , `info` VARCHAR( 3200 ) NULL , PRIMARY KEY (orderid) , FOREIGN KEY (memberid) REFERENCES members(memberid) ) ENGINE = InnoDB; CREATE TABLE `order_items` ( `orderid` INT(11) UNSIGNED NOT NULL, `item_number_in_cart` tinyint(1) NOT NULL , --- 5 items in cart= 5 rows `price` DECIMAL (6,2) NOT NULL, FOREIGN KEY (orderid) REFERENCES orders(orderid) ) ENGINE = InnoDB; So, the order_items table looks like: orderid - item_number_in_cart - price: ... 1000456 - 1 - 24.99 1000456 - 2 - 39.99 1000456 - 3 - 4.99 1000456 - 4 - 17.97 1000457 - 1 - 20.00 1000458 - 1 - 99.99 1000459 - 1 - 2.99 1000459 - 2 - 69.99 1000460 - 1 - 4.99 ... As you see, the order_items table has no primary key (and I think there is no sense in creating an auto_increment id for this table, because when we want to extract data, we always extract it as WHERE orderid='1000456' ORDER BY item_number_in_cart ASC - the whole block, so an id wouldn't be helpful in queries). Once data is inserted into order_items, it's not UPDATEd, just SELECTed. The questions are: I think it's a good idea to put an index on item_number_in_cart. Could anybody please confirm that? Is there anything else I have to do with order_items to increase performance, or does that look pretty good? I could be missing something because I'm a newbie. Thank you in advance.
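
    A hedged sketch of what usually matches this access pattern better than an index on item_number_in_cart alone: a composite key on (orderid, item_number_in_cart), which serves both the WHERE and the ORDER BY in one index scan (assuming the pair is unique within an order):

        ALTER TABLE order_items
          ADD PRIMARY KEY (orderid, item_number_in_cart);

        -- or, if duplicate item numbers must stay possible:
        -- CREATE INDEX idx_order_items ON order_items (orderid, item_number_in_cart);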

    Read the article

  • Why do I get this exception? ("An item with the same key has already been added.")

    - by Alan
    Aknittel NewSellerID is the result of a lookup on tblSellers. These tables (tblSellerListings and tblSellers) are not "officially" joined with a foreign key relationship, either in the model or in the database, but I want some referential integrity maintained for the future. So my issue remains. Why do I get the exception ({"An item with the same key has already been added."}) with this code, if I don't begin each iteration of the foreach loop with a new ObjectContext and end it with SaveChanges, which I think will affect performance. Also, could you tell me why ORCSolutionsDataService.tblSellerListings (An ADO.NET DataServices/WCF object is not IDisposable, like LINQ to Entities?? ============================================== // Add listings to previous seller int NewSellerID = 0; // Look up existing Seller key using SellerUniqueEBAYID var qryCurrentSeller = from s in service.tblSellers where s.SellerEBAYUserID == SellerUserID select s; foreach (var s in qryCurrentSeller) NewSellerID = s.SellerID; // Save the selected listings for this seller foreach (DataGridViewRow dgr in dgvRows) { ORCSolutionsDataService.tblSellerListings NewSellerListing = new ORCSolutionsDataService.tblSellerListings(); NewSellerListing.ItemID = dgr.Cells["txtSellerItemID"].Value.ToString(); NewSellerListing.Title = dgr.Cells["txtSellerItemTitle"].Value.ToString(); NewSellerListing.CurrentPrice = Convert.ToDecimal(dgr.Cells["txtSellerItemPrice"].Value); NewSellerListing.QuantitySold = Convert.ToInt32(dgr.Cells["txtSellerItemSold"].Value); NewSellerListing.EndTime = Convert.ToDateTime(dgr.Cells["txtSellerItemEnds"].Value); NewSellerListing.CategoryName = dgr.Cells["txtSellerItemCategory"].Value.ToString(); NewSellerListing.ExtendedPrice = Convert.ToDecimal(dgr.Cells["txtExtendedReceipts"].Value); NewSellerListing.RetrievedDtime = Convert.ToDateTime(dtSellerDataRetrieved.ToString()); NewSellerListing.SellerID = NewSellerID; service.AddTotblSellerListings(NewSellerListing); } service.SaveChanges(); } catch (Exception ex) { MessageBox.Show("Unable to add a new case. Exception: " + ex.Message); }

    Read the article

  • get_browser not working

    - by tazphoenix
    It's not working. I mean, I have many scripts to get the IP and OS, but get_browser is a built-in function and should work, yet it doesn't. When I run print_r on its result I get: Array ( [browser_name_regex] => §^.*$§ [browser_name_pattern] => * [browser] => Default Browser [version] => 0 [majorver] => 0 [minorver] => 0 [platform] => unknown [alpha] => [beta] => [win16] => [win32] => [win64] => [frames] => 1 [iframes] => [tables] => 1 [cookies] => [backgroundsounds] => [cdf] => [vbscript] => [javaapplets] => [javascript] => [activexcontrols] => [isbanned] => [ismobiledevice] => [issyndicationreader] => [crawler] => [cssversion] => 0 [supportscss] => [aol] => [aolversion] => 0 ) I'm using Windows 7 and Firefox.

    Read the article
