Search Results

Search found 8267 results on 331 pages for 'insert'.

Page 131/331 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • MySQLdb InterfaceError

    - by Johanna
    I have a very weird problem with MySQLdb (the MySQL module for Python). I have a file with queries for inserting records into tables. If I call the functions from within that file, it works just fine. But when I try to call one of the functions from another file it throws me a _mysql_exceptions.InterfaceError: (0, ''). I really don't get what I'm doing wrong here. I call the function from buildDB.py:

        import create
        create.newFormat("HD", 0, 0, 0)

    The function newFormat(..) is in create.py (imported):

        from Database import Database

        db = Database()

        def newFormat(name, width=0, height=0, fps=0):
            format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('" + name + "'," + str(width) + "," + str(height) + "," + str(fps) + ");"
            db.execute(format_query)

    And the class Database is the following:

        import MySQLdb
        from MySQLdb.constants import FIELD_TYPE

        class Database():
            def __init__(self):
                server = "localhost"
                login = "seq"
                password = "seqmanager"
                database = "Sequence"
                my_conv = { FIELD_TYPE.LONG: int }
                self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv)
                # self.cursor = self.conn.cursor()

            def close(self):
                self.conn.close()

            def execute(self, query):
                self.conn.query(query)

    (I put only the relevant code.) Traceback:

        Z:\sequenceManager\mysql>python buildDB.py
        D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
          from sets import ImmutableSet
        INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0,0,0);
        Traceback (most recent call last):
          File "buildDB.py", line 182, in <module>
            create.newFormat("HD")
          File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat
            db.execute(format_query)
          File "Z:\sequenceManager\mysql\Database.py", line 19, in execute
            self.conn.query(query)
        _mysql_exceptions.InterfaceError: (0, '')

    The warning has never been a problem before, so I don't think it's related.
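    A detail worth noting in the code above: the connection is created with MySQLdb.connection, which is the low-level _mysql connection class, rather than the documented entry point MySQLdb.connect. Below is a sketch of the Database class rewritten around MySQLdb.connect, a cursor, and a parameterized INSERT; whether this resolves the InterfaceError in this exact setup is an assumption, but it avoids both the low-level API and the hand-built SQL string.

        # Sketch only: MySQLdb.connect (not MySQLdb.connection) returns the
        # high-level Connection object with cursor() support. Connection
        # settings are taken from the question.
        import MySQLdb
        from MySQLdb.constants import FIELD_TYPE

        class Database(object):
            def __init__(self):
                my_conv = {FIELD_TYPE.LONG: int}
                self.conn = MySQLdb.connect(host="localhost", user="seq",
                                            passwd="seqmanager", db="Sequence",
                                            conv=my_conv)

            def execute(self, query, params=None):
                # The cursor handles quoting; no string concatenation needed.
                cursor = self.conn.cursor()
                cursor.execute(query, params)
                self.conn.commit()
                cursor.close()

            def close(self):
                self.conn.close()

        def newFormat(db, name, width=0, height=0, fps=0):
            db.execute(
                "INSERT INTO Format (form_name, form_width, form_height, form_fps)"
                " VALUES (%s, %s, %s, %s)",
                (name, width, height, fps))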

    Read the article

  • Copy rows before updating them to preserve an archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query how the rows looked at any point in time, even if the query has JOINs. Consider a system where the primary resource is books: books are queried for, and author info comes along for the ride.

        CREATE TABLE authors (
            author_id INTEGER NOT NULL,
            version INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2;

    and end up with

        2, 1, Jack,   t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC
        LIMIT 1;

    to get

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because the (author_id, version) PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
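    One pattern that sidesteps the primary-key collision is to split the work across two triggers: a BEFORE UPDATE trigger bumps the version on the live row, and an AFTER UPDATE trigger inserts the OLD values, whose (author_id, version) pair is free by then. A sketch in Python using psycopg2 follows; the function and trigger names and the connection string are assumptions.

        # Sketch: BEFORE UPDATE bumps the live row's version; AFTER UPDATE
        # archives the OLD row, whose PK slot is now vacant.
        import psycopg2

        DDL = """
        CREATE OR REPLACE FUNCTION bump_author_version() RETURNS trigger AS $$
        BEGIN
            NEW.version := OLD.version + 1;
            NEW.modified_on := CURRENT_TIMESTAMP;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE OR REPLACE FUNCTION archive_author_row() RETURNS trigger AS $$
        BEGIN
            INSERT INTO authors (author_id, version, author_name, is_active, modified_on)
            VALUES (OLD.author_id, OLD.version, OLD.author_name, OLD.is_active, OLD.modified_on);
            RETURN NULL;  -- result is ignored in an AFTER trigger
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER authors_bump_version BEFORE UPDATE ON authors
            FOR EACH ROW EXECUTE PROCEDURE bump_author_version();

        CREATE TRIGGER authors_archive AFTER UPDATE ON authors
            FOR EACH ROW EXECUTE PROCEDURE archive_author_row();
        """

        conn = psycopg2.connect("dbname=test")  # assumed connection string
        with conn, conn.cursor() as cur:
            cur.execute(DDL)

    One caveat with in-table versioning: once archive rows exist, an UPDATE with only WHERE author_id = 2 matches every version of that author, so updates must also pin the current version (for example, AND version = (SELECT MAX(version) FROM authors a WHERE a.author_id = 2)).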

    Read the article

  • NHibernate collections: many-to-many relationships

    - by Brad Heller
    I've got two models, a Product model and a ShoppingCart model. The ShoppingCart model has a collection of products as a property called Products (a List<Product>). Here is the mapping for my ShoppingCart model.

        <class name="MyProject.ShoppingCart, MyProject" table="ShoppingCarts">
            <id name="Id" column="Id">
                <generator class="native" />
            </id>
            <many-to-one name="Company" class="MyProject.Company, MyProject" column="CompanyId" />
            <property name="ExternalId" column="GUID" generated="insert" />
            <property name="Name" column="Name" />
            <property name="Total" column="Total" />
            <property name="CreationDate" column="CreationDate" generated="insert" />
            <property name="UpdatedDate" column="UpdatedDate" generated="always" />
            <bag name="Products" table="ShoppingCartContents" lazy="false">
                <key column="ShoppingCartId" />
                <many-to-many column="ProductId" class="MyProject.Product, MyProject" fetch="join" />
            </bag>
        </class>

    When I try to save to the DB, the ShoppingCart is saved, but the mapping rows in ShoppingCartContents aren't saved, making me think there's an issue with the mapping. Where am I going wrong here?

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible during peak hours (around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes, which will normalize the data and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it, you smart people can think of... please let me know :)
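    For what it's worth, the flat-file idea from the question is a common way to get the per-hit cost down: the tracker only appends a line, and a cron job drains the file into MySQL in bulk. A sketch in Python follows (the question is PHP, but the pattern is language-agnostic); the file path, table, and column names are assumptions.

        import os
        import MySQLdb

        LOG = "/var/log/tracker/hits.log"  # assumed path

        def record_hit(site_id, url):
            # Fast path: one appended line per page load, no DB round-trip.
            with open(LOG, "a") as f:
                f.write("%d\t%s\n" % (site_id, url))

        def flush_hits():
            # Cron path: rename first so new hits land in a fresh file.
            batch = LOG + ".loading"
            os.rename(LOG, batch)
            rows = []
            for line in open(batch):
                site_id, url = line.rstrip("\n").split("\t", 1)
                rows.append((int(site_id), url))
            conn = MySQLdb.connect(host="localhost", user="tracker",
                                   passwd="secret", db="stats")  # assumed
            cur = conn.cursor()
            # executemany batches the rows into multi-row INSERT statements,
            # which is far cheaper than one INSERT per hit.
            cur.executemany(
                "INSERT INTO raw_hits (site_id, url) VALUES (%s, %s)", rows)
            conn.commit()
            conn.close()
            os.remove(batch)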

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions; has many Payments. In my result set, I need the following aggregate values: number of unique contributors (DonorID on the Contribution table), total contributed (SUM of Amount on the Contribution table), and total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
               (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
               (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
               (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
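    A third option worth benchmarking: aggregate each child table once in a derived table and join the results, which avoids both the per-row correlated subqueries and the temporary table. A sketch follows, run here through Python's MySQLdb so it is self-contained; table and column names are the ones from the question, and the connection settings are assumptions.

        import MySQLdb

        QUERY = """
        SELECT p.ID AS project_id,
               COALESCE(pay.total_paid, 0)        AS TotalPaidBack,
               COALESCE(con.contributor_count, 0) AS ContributorCount,
               COALESCE(con.total_received, 0)    AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
                   FROM Payment
                   GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS contributor_count,
                          SUM(Amount)             AS total_received
                   FROM Contribution
                   GROUP BY RecipientID) con ON con.RecipientID = p.ID
        """

        conn = MySQLdb.connect(db="mydb")  # assumed connection settings
        cur = conn.cursor()
        cur.execute(QUERY)  # each child table is scanned exactly once
        for row in cur.fetchall():
            print(row)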

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following: get the record with the minimum distance for the source string as well as for a trimmed version of the source string; pick the record with the minimum distance; if the minimum distances are equal (original vs trimmed), choose the trimmed one with the lowest distance; and if there are still multiple records that fall under the above two categories, pick the one with the highest frequency. Here's my working version:

        DECLARE @Results TABLE
        (
            ID int,
            [Name] nvarchar(200),
            Distance int,
            Frequency int,
            Trimmed bit
        )

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) AS Distance, Frequency, 'False' AS Trimmed
        FROM MyTable

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) AS Distance, Frequency, 'True' AS Trimmed
        FROM MyTable

        SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is to not dump the results into a temporary table, do only one SELECT from MyTable, and set the variables right in the initial SELECT statement (since a SELECT can set multiple variables in one statement). I know there has to be a good implementation of this, but I can't figure it out... this is as far as I got:

        SELECT TOP 1 @ResultID = ID, @Result = [Name],
               (dbo.Levenshtein(@Source, [Name])) AS distOrig,
               (dbo.Levenshtein(@SourceTrimmed, [Name])) AS distTrimmed,
               Frequency
        FROM MyTable
        WHERE /* ... yeah I'm lost */
        ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?
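    One way to collapse this into a single statement is a UNION ALL that tags each row as original or trimmed, wrapped in one TOP 1 query; the variables can then be assigned in a single SELECT, or, as here, the row simply fetched by the caller. A sketch using Python's pyodbc; the DSN is an assumption, dbo.Levenshtein is the question's own function, and note the Frequency DESC, which matches the stated "highest frequency" rule (the original ORDER BY sorted it ascending). This still computes two distances per row overall, but it does away with the table variable and the four repeated TOP 1 queries.

        import pyodbc

        SQL = """
        SELECT TOP 1 ID, [Name], Distance, Trimmed
        FROM (
            SELECT ID, [Name],
                   dbo.Levenshtein(?, [Name]) AS Distance,
                   Frequency, CAST(0 AS bit) AS Trimmed
            FROM MyTable
            UNION ALL
            SELECT ID, [Name],
                   dbo.Levenshtein(?, [Name]),
                   Frequency, CAST(1 AS bit)
            FROM MyTable
        ) AS d
        ORDER BY Distance, Trimmed, Frequency DESC
        """

        source = "example string"        # stands in for @Source
        source_trimmed = source.strip()  # stands in for @SourceTrimmed

        conn = pyodbc.connect("DSN=mydb")  # assumed DSN
        row = conn.cursor().execute(SQL, source, source_trimmed).fetchone()
        print(row.ID, row.Name, row.Distance, row.Trimmed)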

    Read the article

  • Need help fixing a unique key in Rails: Rails is adding its own id, causing a duplicate key

    - by railsnew
    I need some help fixing the issue below. I had "transaction blocks" (raw SQL inserts) in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id, \"cid\", \"hphone\", mphone, provider, cemail, email, sms, mail, phone) VALUES ('" + @id1 + "','" + @id1 + "', '" + params[:hphone] + "', '" + params[:mphone] + "', '" + params[:provider] + "', '" + params[:cemail] + "', '" + @varemail + "', '" + @varsms + "', '" + @varmail + "', '" + @varphone + "')"

    My app is deployed to Heroku, so I was advised by them to remove these blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always include its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction blocks?

    Read the article

  • Tree data in a MySQL database table

    - by Robert Koritnik
    I have a table that uses the adjacency list model for hierarchy storage. The most relevant columns in this table are:

        ItemId      -- is auto_increment
        ParentId
        Level
        ParentTrail -- in the form of "parentId/../parentId/itemId"

    I then created a BEFORE INSERT trigger that populates the Level and ParentTrail columns. Since the last column also includes the current item's ID, I had to use a trick in my trigger, because auto_increment values are not available in a BEFORE INSERT trigger, so I get that value from the information_schema.tables table. All works fine until I try to write an UPDATE trigger that would update my item and its descendants when the item changes its parent (ParentId has changed). But I can't run an UPDATE against my own table inside the update trigger. All I can do is change the current record's values, not other records'. I could use a separate table for the hierarchy data, but that would mean I would also have to create a view that combines the two tables (a 1:1 relation), and I would like to avoid this if at all possible. Is there a way to keep all of this in the same table, so that these fields (Level and ParentTrail) set/update themselves automagically using triggers?
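    MySQL triggers cannot UPDATE other rows of the table they are attached to (that raises the "Can't update table ... because it is already used by statement" error), so the descendant rewrite has to live in a stored procedure or in application code. A sketch of the application-side version in Python; the table name items and the MySQLdb driver are assumptions, and the columns are the ones above.

        import MySQLdb

        def move_item(conn, item_id, new_parent_id):
            # Reparent item_id and rewrite Level/ParentTrail for it and all
            # of its descendants in one UPDATE.
            cur = conn.cursor()
            cur.execute("SELECT Level, ParentTrail FROM items WHERE ItemId = %s",
                        (new_parent_id,))
            parent_level, parent_trail = cur.fetchone()
            cur.execute("SELECT Level, ParentTrail FROM items WHERE ItemId = %s",
                        (item_id,))
            old_level, old_trail = cur.fetchone()

            new_trail = "%s/%d" % (parent_trail, item_id)
            level_shift = (parent_level + 1) - old_level

            # The subtree is the item itself plus every row whose trail starts
            # with "old_trail/"; splice the new prefix onto each trail.
            cur.execute(
                "UPDATE items"
                " SET ParentTrail = CONCAT(%s, SUBSTRING(ParentTrail, %s)),"
                "     Level = Level + %s"
                " WHERE ParentTrail = %s OR ParentTrail LIKE %s",
                (new_trail, len(old_trail) + 1, level_shift,
                 old_trail, old_trail + "/%"))
            cur.execute("UPDATE items SET ParentId = %s WHERE ItemId = %s",
                        (new_parent_id, item_id))
            conn.commit()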

    Read the article

  • How to append \line into RTF using RichTextBox control

    - by Steve Sheldon
    When using the Microsoft RichTextBox control it is possible to add new lines like this...

        richtextbox.AppendText(System.Environment.NewLine); // appends \r\n

    However, if you now view the generated RTF, the \r\n characters are converted to \par, not \line. How do I insert a \line control code into the generated RTF? What doesn't work:

    Token replacement. Hacks like inserting a token at the end of the string and then replacing it after the fact, something like this:

        string text = "my text";
        text = text.Replace("|", "||");       // double up any '|' chars so they aren't confused in the output
        text = text.Replace("\r\n", "_|0|_"); // replace \r\n with a placeholder of |0|
        richtextbox.AppendText(text);
        string rtf = richtextbox.Rtf;
        rtf = rtf.Replace("_|0|_", "\\line"); // replace placeholder with \line
        rtf = rtf.Replace("||", "|");         // set any || chars back to |

    This almost worked, but it breaks down if you have to support right-to-left text, as the right-to-left control sequence always ends up in the middle of the placeholder.

    Sending key messages:

        public void AppendNewLine()
        {
            Keys[] keys = new Keys[] { Keys.Shift, Keys.Return };
            SendKeys(keys);
        }

        private void SendKeys(Keys[] keys)
        {
            foreach (Keys key in keys)
            {
                SendKeyDown(key);
            }
        }

        private void SendKeyDown(Keys key)
        {
            user32.SendMessage(this.Handle, Messages.WM_KEYDOWN, (int)key, 0);
        }

        private void SendKeyUp(Keys key)
        {
            user32.SendMessage(this.Handle, Messages.WM_KEYUP, (int)key, 0);
        }

    This also ends up being converted to a \par. Is there a way to post a message directly to the msftedit control to insert a control character? I am totally stumped; any ideas, guys? Thanks for your help!

    Read the article

  • :first-child fails when an element of a different class is dynamically inserted above

    - by koko
    So, I've encountered a situation where inserting an element of a different class/id breaks all CSS rules on that :first-child.

        <div id="nav">
            <div class="nSub">abcdef</div>
            <div class="nSub">abcdef</div>
            <div class="nSub">abcdef</div>
            <div class="nSub">abcdef</div>
            <div class="nSub">abcdef</div>
        </div>

        .nSub:first-child {
            margin-top: 15px;
            -moz-border-radius-topleft: 5px;
            /* ... */
        }
        .nSub {
            background: #666;
            /* ... */
        }
        .nSub:last-child {
            -moz-border-radius-bottomleft: 5px;
            /* ... */
        }

    As soon as I insert an element of another class/id above, like this:

        $('nav').insert({top:'<div id="newWF"></div>'});

    all declarations for .nSub:first-child are being ignored in both FF 3.6 and Safari 4.

    Read the article

  • Object-oriented GUI development in python

    - by ptabatt
    Hey guys, new programmer here. I have an assignment for class and I'm stuck... What I need to do is create a GUI that shows a basic arithmetic problem in one box, asks the person to answer it, evaluates the answer, and tells you if you're right or wrong... Basically, what I have is this:

        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                Lesson.make_widgets(self)

            def make_widgets(self):
                Label(self, text="").pack(side=TOP)
                ent = Entry(self)
                self.a = randrange(1, 10)
                self.b = randrange(1, 10)
                self.expr = choice(["+", "-"])
                ent.insert(END, str(self.a) + str(self.expr) + str(self.a))

    I've broken this down into many little steps, and basically what I'm trying to do right now is insert a default random expression into the first Entry widget. When I run this code, I just get a blank Label. Why is that? How can I put something like "7+7" into the box? If you need background to the problem, it's question #3 at this link: http://reed.cs.depaul.edu/lperkovic/csc242/homeworks/Homework8.html Thanks for all help in advance.
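    For what it's worth, the blank window has a mundane cause: the Label is created with text="" and the Entry is never packed, so neither shows anything. A sketch of a fixed version in Python 2 Tkinter follows; it also uses self.b for the second operand, where the original repeated self.a, and the widget caption is made up.

        from Tkinter import Frame, Label, Entry, Tk, END, TOP
        from random import randrange, choice

        class Lesson(Frame):
            def __init__(self, parent=None):
                Frame.__init__(self, parent)
                self.pack()
                self.make_widgets()

            def make_widgets(self):
                Label(self, text="Solve:").pack(side=TOP)  # non-empty caption
                self.ent = Entry(self)
                self.ent.pack(side=TOP)  # the original never packed the Entry
                a = randrange(1, 10)
                b = randrange(1, 10)
                op = choice(["+", "-"])
                # e.g. "7+3": uses both operands
                self.ent.insert(END, "%d%s%d" % (a, op, b))

        root = Tk()
        Lesson(root)
        root.mainloop()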

    Read the article

  • How to detect how many times a user has clicked a button

    - by Jerry
    Hello guys. I just want to know if there is a way to detect how many times a user has clicked a button using jQuery. My main application has a button that can add input fields; the user can append as many input fields as they need. When they submit the form, the "add" page inserts the data into my database. My current idea is to create a hidden input field and set its value to zero. Every time the user clicks the button, jQuery updates the hidden input field's value. Then the "add" page can tell how many times to loop. See the example below. I just want to know if there are better practices for this. Thanks for the help.

    Main page:

        <form method='post' action='add.php'>
            //omit
            <input type="hidden" id="add" name="add" value="0"/>
            <input type="button" id="addMatch" value="Add a match"/>
            //omit
        </form>

    jQuery:

        $(document).ready(function(){
            var a = 0;
            $("#addMatch").live('click', function(){
                // the input field will append as many times as the user wants
                $('#table').append("<input name='match" + a + "Name' />");
                a++;
                $('#add').attr('name', 'a'); // pass the a value to the hidden input field
                return false;
            });
        });

    Add page:

        $a = $_POST['a'];
        for($k = 0; $k < $a; $k++){
            // get all matchName input fields
            $matchName = $_POST['match' . $k . 'Name'];
            // insert the match
            $updateQuery = mysql_query("INSERT INTO game (team) values('$matchName')", $connection);
            if(!$updateQuery){
                DIE('mysql Error:'+mysql_error());
            }
        }

    Read the article

  • Error creating Google Calendar

    - by Don
    Hi, I'm trying to use the Google Calendar Java API to create a secondary calendar for a Google Apps user. The relevant code is:

        // Initialise the API client
        CalendarService googleCalendar = new CalendarService("canimo.ca");
        googleCalendar.setUserCredentials("[email protected]", "secret");

        // Create the calendar
        CalendarEntry cal = new CalendarEntry();
        cal.title = new PlainTextConstruct(user.email);
        cal.summary = new PlainTextConstruct("Collection calendar");
        cal.timeZone = new TimeZoneProperty("America/Montreal");
        cal.hidden = HiddenProperty.FALSE;
        googleCalendar.insert(CALENDAR_URL, cal);

    The call to insert() above results in

        com.google.gdata.util.ServiceException: Internal Server Error

    and no calendar is created. If I try to perform the same operation through the Google Calendar website, it also fails, with the error message: "We could not save changes. Please try again in a few minutes." An obvious conclusion is that there's some problem on Google's side, but this has been going on for several days now. Strangely, if I create a new user, everything works fine for a while. I can create calendars via the website, and download them via the API. But as soon as I try to create a new calendar using the API, I get the exception above, and thereafter can't create new calendars using either the website or the API. Thanks, Don

    Read the article

  • Unable to persist objects in GAE JDO

    - by Basil Dsouza
    Hello guys, I am completely new to both JDO and GAE, and have been struggling to get my data layer to persist anything at all! The issues I am facing may be very simple, but I just can't seem to find a way through, no matter what solution I try. Firstly, the problem (slightly simplified, but it still contains all the necessary info). My data model is as such:

        User:
            (primary key) String emailID
            String firstName

        Car:
            (primary key) User user
            (primary key) String registration
            String model

    This was the initial data model. I implemented a CarPK object to get a composite primary key of the User and the registration. However, that ran into a variety of issues (which I will save for another time/question). I then changed the design as such:

        User: (unchanged)

        Car:
            (primary key) String fauxPK  (where fauxPK = user.getEmailID() + SEP + registration)
            User user
            String registration
            String model

    This works fine for the User, and I can insert and retrieve User objects. However, when I try to insert a Car object, I get the following error:

        Cannot have a java.lang.String primary key and be a child object

    I found the following helpful link about it: http://stackoverflow.com/questions/2063467/persist-list-of-objects. I went to the link suggested there, which explains how to create Keys, but it keeps talking about "entity groups" and "entity group parents", and I can't seem to find any articles or sites that explain what an "entity group" or an "entity group parent" is. I could try fiddling around some more to figure out if I can store an object somehow, but I am running short on patience and would rather understand than just make it work. So I would appreciate any docs (even huge ones) that cover these points, and preferably some examples that go beyond the very basic data modeling. And thanks for reading such a long post :)

    Read the article

  • Python GUI events out of order

    - by dave
    from Tkinter import *
        from tkMessageBox import *

        class Gui:
            def __init__(self, root):
                self.container = Frame(root)
                self.container.grid()
                self.inputText = Text(self.container, width=50, height=8)
                self.outputText = Text(self.container, width=50, height=8, bg='#E0E0E0', state=DISABLED)
                self.inputText.grid(row=0, column=0)
                self.outputText.grid(row=0, column=1)
                self.inputText.bind("<Key>", self.translate)

            def translate(self, event):
                input = self.inputText.get(0.0, END)
                output = self.outputText.get(0.0, END)
                self.outputText.config(state=NORMAL)
                self.outputText.delete(0.0, END)
                self.outputText.insert(INSERT, input)
                self.outputText.config(state=DISABLED)
                showinfo(message="Input: %s characters\nOutput: %s characters" % (len(input), len(input)))

        root = Tk()      # toplevel object
        app = Gui(root)  # call to the class where the gui is defined
        root.mainloop()  # enter event loop

    Working on a GUI in Tkinter, I'm a little confused about the sequence in which event handlers run. If you run the above code, you'll hopefully see that: 1) editing the Text widget triggers the event handler, but the handler seems to fire before the change is registered; 2) even when the Text widget is cleared (i.e., you keep pressing BackSpace) it still seems to report a one-character string; 3) the output widget only receives its update when the NEXT event fires, despite the fact that the data came in on the previous event. Is this just how bindings work in Tkinter, or am I missing something here? The behaviour I would like when updating the input widget is: 1) show the change, 2) enter the event handler, 3) update the output widget, 4) show the message box.
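    What the question describes is standard Tk behavior rather than a bug: a <Key> binding on a widget fires before the widget's class binding inserts the character, so the handler always sees the text as it was one keystroke earlier (and a Text widget's get() always includes a trailing newline, which explains the phantom one-character string). Binding to <KeyRelease>, or deferring the work with after_idle as sketched below, lets the insertion happen first.

        from Tkinter import *

        class Gui:
            def __init__(self, root):
                self.inputText = Text(root, width=50, height=8)
                self.outputText = Text(root, width=50, height=8, state=DISABLED)
                self.inputText.grid(row=0, column=0)
                self.outputText.grid(row=0, column=1)
                # Defer the handler until Tk has processed the keystroke, so
                # inputText already contains the new character when we read it.
                self.inputText.bind("<Key>",
                                    lambda e: root.after_idle(self.translate))

            def translate(self):
                text = self.inputText.get("1.0", "end-1c")  # drop trailing \n
                self.outputText.config(state=NORMAL)
                self.outputText.delete("1.0", END)
                self.outputText.insert(INSERT, text)
                self.outputText.config(state=DISABLED)

        root = Tk()
        Gui(root)
        root.mainloop()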

    Read the article

  • How to use a parameterized function for the Default Binding of a SQL Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table ("primary_duplicate") that records each entry as 'P' (primary) or 'D' (duplicate). I would like to provide a Default Binding for this column that checks for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see "GetPrimaryDuplicate" below), but I don't know how to bind this function, which requires three parameters, to the table's "primary_duplicate" column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check, but I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted?

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[FileCatalog](
            [id] [uniqueidentifier] NOT NULL,
            [catalog_timestamp] [datetime] NOT NULL,
            [primary_duplicate] [nchar](1) NOT NULL,
            [name] [nvarchar](255) NULL,
            [length] [bigint] NULL,
            [timestamp] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id] DEFAULT (newid()) FOR [id]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp] DEFAULT (getdate()) FOR [catalog_timestamp]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate] DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate]
        GO

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE FUNCTION [dbo].[GetPrimaryDuplicate]
        (
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        )
        RETURNS nchar(1)
        AS
        BEGIN
            DECLARE @c int
            SELECT @c = COUNT(*)
            FROM FileCatalog
            WHERE name = @name AND length = @length AND timestamp = @timestamp
              AND primary_duplicate = 'P'

            IF @c > 0
                RETURN 'D' -- Duplicate

            RETURN 'P' -- Primary
        END
        GO
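    For context: a default constraint in SQL Server is evaluated without access to the other columns of the row being inserted, so it cannot call a function with name/length/timestamp arguments; the binding above just stores a literal string. If inserts can bypass the stored procedure, the reliable hook is, despite the stated preference, a trigger. A sketch of an AFTER INSERT trigger follows, run here through Python's pyodbc; the trigger name and DSN are assumptions, and the self-exclusion on id keeps the new row from counting itself.

        import pyodbc

        TRIGGER = """
        CREATE TRIGGER trg_FileCatalog_primary_duplicate
        ON dbo.FileCatalog AFTER INSERT AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE fc
            SET fc.primary_duplicate =
                CASE WHEN EXISTS (SELECT 1 FROM dbo.FileCatalog AS x
                                  WHERE x.name = fc.name
                                    AND x.length = fc.length
                                    AND x.[timestamp] = fc.[timestamp]
                                    AND x.primary_duplicate = 'P'
                                    AND x.id <> fc.id)  -- don't match the new row itself
                     THEN 'D' ELSE 'P' END
            FROM dbo.FileCatalog AS fc
            JOIN inserted AS i ON i.id = fc.id;
        END
        """

        conn = pyodbc.connect("DSN=mydb", autocommit=True)  # assumed DSN
        conn.cursor().execute(TRIGGER)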

    Read the article

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird". The Linux/MySQL 5.1.22 machine shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    while the FreeBSD machine shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or on the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
            `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When I run "INSERT INTO test VALUES ('æøå')" on the master from a latin1-based terminal, this becomes, on the slave, when I select it from a latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a utf-8-based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that it is converted to utf8 even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and run "INSERT INTO test VALUES ('æøå')" again on the master, on the slave with utf-8 I get:

        +------------+
        | test       |
        +------------+
        | æøà     |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that the replication reads latin1, treats it as latin1, and writes it on the slave as latin1? Cheers, Nik

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented, which use the value of @@NESTLEVEL to determine whether or not to run the trigger body (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e., the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
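    One widely used alternative to @@NESTLEVEL is a session flag in CONTEXT_INFO: every stored procedure sets it on entry (its OUTPUT clause already handles auditing), and the trigger audits only when the flag is absent. Unlike @@NESTLEVEL, this survives dynamic SQL, which executes at a deeper nesting level. A sketch follows, run through Python's pyodbc; the table, audit columns, flag value, and DSN are all assumptions, and UPDATE/DELETE auditing would read the deleted table analogously.

        import pyodbc

        TRIGGER = """
        CREATE TRIGGER trg_MyTable_audit ON dbo.MyTable
        AFTER INSERT AS
        BEGIN
            SET NOCOUNT ON;
            -- Procedures run SET CONTEXT_INFO 0x01 on entry, so anything
            -- arriving without the flag is direct DML and gets audited here.
            IF SUBSTRING(ISNULL(CONTEXT_INFO(), 0x00), 1, 1) <> 0x01
            BEGIN
                INSERT INTO dbo.MyTable_audit (id, col1, audited_at)
                SELECT id, col1, GETDATE() FROM inserted;
            END
        END
        """

        conn = pyodbc.connect("DSN=mydb", autocommit=True)  # assumed DSN
        conn.cursor().execute(TRIGGER)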

    Read the article

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.

    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
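    The same-timestamp problem in approach 2 can be handled by making the boundary inclusive and idempotent: delete the aux rows at the newest copied timestamp, then re-copy everything at or after it, so late arrivals that share the boundary timestamp are picked up without duplicating rows. A sketch in Python; aux_table, original_table, the ts column, and the MySQLdb driver are assumptions, and both tables are assumed to share a schema.

        import MySQLdb

        def refresh_aux(conn):
            cur = conn.cursor()
            cur.execute("SELECT COALESCE(MAX(ts), '1970-01-01') FROM aux_table")
            (last_ts,) = cur.fetchone()
            # Re-copy the boundary slice: drop it, then insert everything at
            # or after it, catching late rows with the same timestamp.
            cur.execute("DELETE FROM aux_table WHERE ts >= %s", (last_ts,))
            cur.execute(
                "INSERT INTO aux_table"
                " SELECT * FROM original_table"
                " WHERE ts >= %s AND ts > NOW() - INTERVAL 7 DAY", (last_ts,))
            # Trim rows that have aged out of the one-week window.
            cur.execute("DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY")
            conn.commit()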

    Read the article

  • New record may be written twice in a clustered index structure

    - by Cupidvogel
    As per the article at Microsoft, under the "Test 1: INSERT Performance" section, it is written that:

        For the table with the clustered index, only a single write operation is required since the leaf nodes of the clustered index are data pages (as explained in the section Clustered Indexes and Heaps), whereas for the table with the nonclustered index, two write operations are required—one for the entry into the index B-tree and another for the insert of the data itself.

    I don't think that is necessarily true. Clustered indexes are implemented through B+ tree structures, right? If you look at this article, which gives a simple example of inserting into a B+ tree, we can see that when 8 is initially inserted, it is written only once, but then when 5 comes in, it is written to the root node as well (thus written twice, albeit not at the time of its own insertion). Also, when the next 8 comes in, it is written twice: once at the root and then at the leaf. So wouldn't it be correct to say that the number of rewrites in the case of a clustered index is much less than for a nonclustered index structure (where it must occur every time), instead of saying that no rewrite occurs in a clustered index at all?

    Read the article

  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. From experience, I know how big the log table can get, and I have noticed that errors and exceptions repeat. For instance, I just queried a log table with more than 132,000 records, and using DISTINCT I found that only 2,500 records (~2%) are unique; the others (~98%) are just duplicates. So I came up with this idea to improve logging: have a couple of new columns, counter and updated_dt, that are updated every time an insert of the same record is attempted. If you want to track the user that caused the exception, you need a user_log (or log_user) table to map the N:N relationship. Creating this model may make the system slow and inefficient, though, trying to compare all these long text columns. Here is the trick: we should also have a hash column (binary, 16 or 32 bytes) that hashes the message and the exception, and configure an index on it. We can use HASHBYTES to help us. I am not an expert in databases, but I think this would be the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it will help to locate those similar records much faster; the message or exception can then be compared directly to make sure they are really the same. This is a theoretical/practical solution, but will it work, or will it just bring more complexity? What aspects am I leaving out, or what other considerations should I have? A trigger could do the insert-or-update job, but is a trigger the best way to do it?
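    The counter idea does not need a trigger: with a unique index on the hash column, the insert-or-increment can be a single atomic statement. A sketch follows, with the hashing done application-side in Python and the upsert as a SQL Server MERGE (the post mentions HASHBYTES, so SQL Server is assumed; the table, column names, and DSN are made up).

        import hashlib
        import pyodbc

        MERGE = """
        MERGE dbo.ErrorLog WITH (HOLDLOCK) AS t
        USING (SELECT ? AS log_hash, ? AS message, ? AS exception_text) AS s
        ON t.log_hash = s.log_hash
        WHEN MATCHED THEN
            UPDATE SET counter = t.counter + 1, updated_dt = GETDATE()
        WHEN NOT MATCHED THEN
            INSERT (log_hash, message, exception_text, counter, updated_dt)
            VALUES (s.log_hash, s.message, s.exception_text, 1, GETDATE());
        """

        def log_exception(conn, message, exception_text):
            # Hash the two long text columns once; a unique index on log_hash
            # (binary(32)) keeps the MERGE lookup cheap.
            digest = hashlib.sha256(
                (message + "\0" + exception_text).encode("utf-8")).digest()
            conn.cursor().execute(
                MERGE, pyodbc.Binary(digest), message, exception_text)
            conn.commit()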

    Read the article

  • Duplicate rows in a join table with has_many :through and accepts_nested_attributes_for

    - by shalako
    An event has many artists, and an artist has many events. The join of an artist and an event is called a performance. I want to add artists to an event. This works, except that I'm getting duplicate entries in my join table when creating a new event, which causes problems elsewhere.

    event.rb:

        has_many :performances, :dependent => :destroy
        has_many :artists, :through => :performances
        accepts_nested_attributes_for :artists, :reject_if => proc {|a| a['name'].blank?}
        accepts_nested_attributes_for :performances, :reject_if => proc { |a| a['artist_id'].blank? }, :allow_destroy => true

    artist.rb:

        has_many :performances, :dependent => :destroy
        has_many :artists, :through => :performances

    performance.rb:

        belongs_to :artist
        belongs_to :event

    events_controller.rb:

        def new
          @event = Event.new
          @event.artists.build
          respond_to do |format|
            format.html # new.html.erb
            format.xml  { render :xml => @event }
          end
        end

        def create
          @event = Event.new(params[:event])
          respond_to do |format|
            if @event.save
              flash[:notice] = 'Event was successfully created.'
              format.html { redirect_to(admin_events_url) }
              format.xml  { render :xml => @event, :status => :created, :location => @event }
            else
              format.html { render :action => "new" }
              format.xml  { render :xml => @event.errors, :status => :unprocessable_entity }
            end
          end
        end

    output:

        Performance Create (0.2ms) INSERT INTO `performances` (`event_id`, `artist_id`) VALUES(7, 19)
        Performance Create (0.1ms) INSERT INTO `performances` (`event_id`, `artist_id`) VALUES(7, 19)

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that reports can point to the replicated copy instead of locking our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers (instantaneously, I hope). What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating on to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons: 1) the SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers; 2) the subscribers are just too busy for replication to apply the changes. What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal is to have the data syncing (ideally) continuously, or every minute. Perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

    Read the article

  • In Corona SDK the background image always covers other images

    - by user1446126
    I'm currently making a tower defense game with Corona SDK. However, while I'm building the game scene, the background image always covers the monster spawn. I've tried background:toBack(), but it doesn't work. Here is my code:

        module(..., package.seeall)

        function new()
            local localGroup = display.newGroup();
            local level = require(data.levelSelected);
            local currentDes = 1;
            monsters_list = display.newGroup()

            -- The background
            local bg = display.newImage("image/levels/1/bg.png");
            bg.x = _W/2; bg.y = _H/2;
            bg:toBack();

            -- generate the monsters
            function spawn_monster(kind)
                local monster = require("monsters." .. kind);
                newMonster = monster.new()
                -- read the spawn (starting point) in level, and spawn the monster there
                newMonster.x = level.route[1][1]; newMonster.y = level.route[1][2];
                monsters_list:insert(newMonster);
                localGroup:insert(monsters_list);
                return monsters_list;
            end

            function move(monster, x, y)
                -- Use Pythagoras to calculate the moving distance, and hence the
                -- time consumed according to speed
                transition.to(monster, {time = math.sqrt(math.abs(monster.x - x)^2 + math.abs(monster.y - y)^2) / (monster.speed / 30), x = x, y = y, onComplete = newDes})
            end

            function newDes()
                currentDes = currentDes + 1;
            end

            -- make the monsters move according to the route
            function move_monster()
                for i = 1, monsters_list.numChildren do
                    move(monsters_list[i], 200, 200);
                    print(currentDes);
                end
            end

            function agent()
                spawn_monster("basic");
            end

            -- Execute the functions above.
            timer2 = timer.performWithDelay(1000, agent, 10);
            timer.performWithDelay(100, move_monster, -1);
            timer.performWithDelay(10, update, -1);
            move_monster();

            return localGroup;
        end

    The monsters just get stuck at the spawn point and stay there. But when I comment out these 3 lines of code:

        --local bg = display.newImage("image/levels/1/bg.png");
        --bg.x = _W/2; bg.y = _H/2;
        --bg:toBack();

    the problem disappears. Any ideas? Thanks for helping.

    Read the article

  • How to Update with LINQ?

    - by DaveDev
    Currently, I'm doing an update similar to the following, because I can't see a better way of doing it. I've tried suggestions that I've read in blogs, such as http://msdn.microsoft.com/en-us/library/bb425822.aspx and http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx, but none work. Maybe they do work and I'm missing some point. Has anyone else had luck with them?

        // please note this isn't the actual code.
        // I've modified it to clarify the point I wanted to make
        // and also I didn't want to post our code here!
        public bool UpdateMyStuff(int myId, List<int> funds)
        {
            // get MyTypes that correspond to the ID I want to update
            IQueryable<MyType> myTypes = database.MyTypes.Where(xx => xx.MyType_MyId == myId);

            // delete them from the database
            foreach (db.MyType mt in myTypes)
            {
                database.MyTypes.DeleteOnSubmit(mt);
            }
            database.SubmitChanges();

            // create a new row for each item I wanted to update, and insert it
            foreach (int fund in funds)
            {
                database.MyType mt = new database.MyType
                {
                    MyType_MyId = myId,
                    fund_id = fund
                };
                database.MyTypes.InsertOnSubmit(mt);
            }

            // try to commit the insert
            try
            {
                database.SubmitChanges();
                return true;
            }
            catch (Exception)
            {
                return false;
                throw;
            }
        }

    Unfortunately, there isn't a database.MyTypes.Update() method, so I don't know a better way to do it. Can somebody suggest what I could do? Thanks. ...As a side note, why isn't there an Update() method?

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >