Search Results

Search found 27118 results on 1085 pages for 'mysql python'.

Page 46 of 1085

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

        /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, but not from within a script. When I try to execute a script that contains that command, I get the following error:

        ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron.

    EDIT: Here is a stripped-down version of the script I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.

        #!/bin/sh

        #Run download script to download product data
        cd /home/dir/Scripts/Linux
        /bin/sh script1.sh

        #Run import script to import product data to MySQL
        cd /home/dir/Mysql
        /bin/sh script2.sh

        #Download inventory stats spreadsheet and rename it
        cd /home/dir
        /usr/bin/wget http://www.url.com/file1.txt
        mv file1.txt sheet1.csv

        #Remove existing export spreadsheet
        rm /tmp/sheet2.csv

        #Run MySQL queries in "here document" format
        /usr/bin/mysql -u username -ppassword -h localhost database << EOF
        --Drop old inventory stats table
        truncate table table_name1;
        --Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        --MySQL queries to combine product data and inventory stats here
        --Export combined data in spreadsheet format
        group by p.value
        into outfile '/tmp/sheet2.csv'
        fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When it is removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
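
    One way to sidestep the here-document entirely is to drive the same statements from Python with MySQLdb, which also keeps the password off the command line. A minimal sketch, assuming the MySQL-python package is available and reusing the question's placeholder credentials and paths:

        # Sketch: replacing the shell heredoc with MySQLdb calls.
        # 'username'/'password'/'database' are the placeholders above.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="username",
                               passwd="password", db="database",
                               local_infile=1)  # needed for LOAD DATA LOCAL INFILE
        cur = conn.cursor()

        cur.execute("TRUNCATE TABLE table_name1")
        cur.execute("""
            LOAD DATA LOCAL INFILE '/home/dir/sheet1.csv'
            INTO TABLE table_name1
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            LINES TERMINATED BY '\\r\\n'
        """)

        conn.commit()
        conn.close()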

    Read the article

  • Mac OS X: Update Python for Shell

    - by Nathan G.
    So, I see similar questions, but none of the answers work for me. I updated Python to 3.1.3 from 2.6.1. Everything works, except: when I type python into Terminal, I get:

        Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>>

    So, how do I change the version of Python that runs in the shell? I've tried the script that they provide. It adds their directory to my $PATH, but it still doesn't change the version that's displayed in Terminal. Here's what I get when I echo $PATH:

        /Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/Library/Frameworks/Python.framework/Versions/3.1/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

    It appears that the script has added their directory every time I ran it (I tried it a few times, naturally). I'll give links to caps of what is in the other relevant folders it mentions:

        /Library/Frameworks/Python.framework/Versions/3.1/bin
        /usr/local/bin
        /usr/bin

    Thanks in advance for any ideas!
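
    A quick way to see which interpreter the shell will actually resolve, and what the duplicated $PATH entries look like once collapsed, is to ask Python itself. A diagnostic sketch, not specific to this machine:

        # Diagnostic sketch: report the running interpreter, collapse the
        # duplicate PATH entries the installer kept appending, and find the
        # first executable named 'python' the shell would pick up.
        import os
        import sys

        print("running interpreter:", sys.executable)

        deduped = []
        for d in os.environ.get("PATH", "").split(os.pathsep):
            if d not in deduped:
                deduped.append(d)
        print("deduplicated PATH:", os.pathsep.join(deduped))

        for d in deduped:
            candidate = os.path.join(d, "python")
            if os.access(candidate, os.X_OK):
                print("first 'python' on PATH:", candidate)
                break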

    Read the article

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3 GB, with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory).

    The table is fairly active, with new records getting added all through the day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory?

    So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

      - Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file size, so it might not solve the problem completely, but it would be better than now.
      - Upgrade to Server 200x 64-bit and MySQL 64-bit.
      - Switch to a *nix 64-bit server.

    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks

    Read the article

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server:

        Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I cannot find the cause: most of the time it works, but for some hours every week I get this error. I googled, but there seem to be only reports from users who have this error permanently; in my case it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied.

    Sometimes I also get the error:

        File './database/table.MYD' not found (Errcode: 24)

    It occurs very rarely, but when it does it lasts for some hours, sometimes on multiple days, and then the problem suddenly disappears again.

    I have checked the open files limit. It's 2048, which should be absolutely enough, and I tried increasing it anyway, with no effect. I thought perhaps the process does not close some tables, but that seems impossible, because after a while everything is OK again, and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 open files. I cannot explain that; after a while it's 129.

    I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host, both with the command-line tool "mysql" and through the MySQL C connector. The MySQL server is version 5.0.67.

    Read the article

  • MySQL Cluster problem in Ubuntu

    - by Firman
    I have a problem while installing and configuring MySQL Cluster running on Ubuntu 10.10. This is the configuration for cluster management:

        [NDBD DEFAULT]
        NoOfReplicas=2
        DataMemory=10MB
        IndexMemory=25MB
        MaxNoOfTables=256
        MaxNoOfOrderedIndexes=256
        MaxNoOfUniqueHashIndexes=128

        [MYSQLD DEFAULT]
        [NDB_MGMD DEFAULT]
        [TCP DEFAULT]

        [NDB_MGMD]
        Id=1                       # the NDB Management Node (this one)
        HostName=192.168.10.101

        [NDBD]
        Id=2                       # the first NDB Data Node
        HostName=192.168.10.11
        DataDir=/var/lib/mysql-cluster

        [NDBD]
        Id=3                       # the second NDB Data Node
        HostName=192.168.10.12
        DataDir=/var/lib/mysql-cluster

        [MYSQLD]
        [MYSQLD]

    and this is the configuration for both nodes:

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.10.101   # the IP of the MANAGEMENT (THIRD) SERVER

        [mysql_cluster]
        ndb-connectstring=192.168.10.101   # the IP of the MANAGEMENT (THIRD) SERVER

    After starting all the nodes and the management server, I run ndb_mgm, type the 'show' command, and get this:

        ndb_mgm> show
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=2    @192.168.10.11  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
        id=3    @192.168.10.12  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1    @192.168.10.101  (mysql-5.1.39 ndb-7.0.9)

        [mysqld(API)]   1 node(s)
        id=4 (not connected, accepting connect from 192.168.10.101)

    Look at the last two lines: this is not what http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-servers.html shows (see point 4). Has anyone ever had this problem?

    Read the article

  • python pdb not breaking in files properly?

    - by YGA
    Hi Folks,

    I wish I could provide a simple sample case that occurs using standard library code, but unfortunately it only happens when using one of our in-house libraries, which in turn is built on top of SQLAlchemy. Basically, the problem is that this break command:

        (Pdb) print sqlalchemy.engine.base.__file__
        /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py
        (Pdb) break /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py:946

    is just being totally ignored, it seems, by pdb. Even though I am positive the code is being hit (both because I can see log messages and because I've used sys.settrace to check which lines in which files are being hit), pdb is just not breaking there. I suspect that somehow the use of an egg is confusing pdb as to which files are being used (I can't reproduce the error with a non-egged library, like pickle; there everything works fine). It's a shot in the dark, but has anyone come across this before?

    Thanks,
    /YGA
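
    One workaround that avoids pdb's file-name matching entirely (which is the part an egg path can confuse) is to enter the debugger from code rather than by path. A hedged sketch; the patched method is illustrative, not necessarily the function at base.py:946:

        # Sketch: break on entry to a function without a file:line
        # breakpoint, sidestepping any egg-path mismatch in pdb's
        # filename lookup. The target method here is illustrative.
        import pdb

        from sqlalchemy.engine import base

        _original = base.Connection.execute

        def traced_execute(self, *args, **kwargs):
            pdb.set_trace()          # drops into the debugger right here
            return _original(self, *args, **kwargs)

        base.Connection.execute = traced_execute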

    Read the article

  • Python MySQLdb placeholders syntax

    - by ensnare
    I'd like to use placeholders as seen in this example:

        cursor.execute("""
            UPDATE animal SET name = %s
            WHERE name = %s
        """, ("snake", "turtle"))

    Except I'd like the query to be its own variable, as I need to run the query against multiple databases, as in:

        query = """UPDATE animal SET name = %s
                   WHERE name = %s""", ("snake", "turtle"))
        cursor.execute(query)
        cursor2.execute(query)
        cursor3.execute(query)

    What would be the proper syntax for doing something like this?
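
    The usual pattern is to keep the SQL string and the parameter tuple in separate variables and hand both to each cursor's execute(), since MySQLdb takes the parameters as a second argument. A short sketch:

        # Sketch: one statement, one params tuple, executed on each
        # cursor. MySQLdb substitutes the %s placeholders from the second
        # argument to execute(); nothing is pasted into the SQL string.
        query = """UPDATE animal SET name = %s
                   WHERE name = %s"""
        params = ("snake", "turtle")

        for cur in (cursor, cursor2, cursor3):   # the question's three cursors
            cur.execute(query, params)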

    Read the article

  • How to optimize this MySQL query

    - by James Simpson
    This query was working fine when the database was small, but now that there are millions of rows in the database, I am realizing I should have looked at optimizing this earlier. It is looking at over 600,000 rows and shows "Using where; Using temporary; Using filesort" (which leads to an execution time of 5-10 seconds). It is using an index on the field battle_type.

        SELECT username,
               SUM( outcome ) AS wins,
               COUNT( * ) - SUM( outcome ) AS losses
        FROM tblBattleHistory
        WHERE battle_type = '0' && outcome < '2'
        GROUP BY username
        ORDER BY wins DESC , losses ASC , username ASC
        LIMIT 0 , 50
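
    A common first step for a query shaped like this is a composite index covering the equality filter and the GROUP BY column, so grouping can follow index order. A hedged sketch, driven from Python like the other examples here; the index name and connection details are made up, and whether the temporary table disappears depends on the optimizer and the data:

        # Sketch: add a composite index covering the filter and the
        # grouping column, then EXPLAIN to see whether "Using temporary"
        # is gone. Index and connection names are illustrative.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="game")
        cur = conn.cursor()
        cur.execute("""
            ALTER TABLE tblBattleHistory
            ADD INDEX idx_type_user_outcome (battle_type, username, outcome)
        """)
        cur.execute("""
            EXPLAIN SELECT username, SUM(outcome) AS wins,
                           COUNT(*) - SUM(outcome) AS losses
                    FROM tblBattleHistory
                    WHERE battle_type = '0' AND outcome < '2'
                    GROUP BY username
        """)
        for row in cur.fetchall():
            print(row)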

    Read the article

  • Python - CSV: Large file with rows of different lengths

    - by dassouki
    In short, I have a 20,000,000-line CSV file with rows of different lengths. This is due to archaic data loggers and proprietary formats. We get the end result as a CSV file in the following format. My goal is to insert this file into a Postgres database. How can I do the following:

      - Keep the first 8 columns and my last 2 columns, to have a consistent CSV file
      - Add a column to the file

        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0 img_id.jpg, -50
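
    Because the variable-length filler always sits between a fixed prefix and a fixed suffix, the file can be streamed through Python's csv module, keeping the first 8 and last 2 fields of each row and appending the new column. A sketch; file names and the appended value are placeholders:

        # Sketch: normalize rows to first 8 + last 2 fields and append a
        # new column. 'input.csv', 'output.csv', and 'new_value' are
        # placeholders, not names from the question.
        import csv

        with open('input.csv', 'rb') as src, open('output.csv', 'wb') as dst:
            reader = csv.reader(src, skipinitialspace=True)
            writer = csv.writer(dst)
            for row in reader:
                if len(row) < 10:    # too short for 8 + 2 fields; skip or log
                    continue
                writer.writerow(row[:8] + row[-2:] + ['new_value'])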

    Read the article

  • Problem installing MySQL gem on Fedora

    - by Shreyas Satish
    When I try rake db:migrate, I get the following error:

        !!! The bundled mysql.rb driver has been removed from Rails 2.2.
        Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        no such file to load -- mysql

    And when I try gem install mysql:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql:
            ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        Can't find header files for ruby.
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.

    I have also tried

        $ sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    but I get the same error. I'm on Fedora 10. Help will be much appreciated. Cheers!

    Read the article

  • MySQL query from subquery not working

    - by James Goodwin
    I am trying to return a number based on the count of results from a table, and to avoid having to count the results twice in the IF statement I am using a subquery. However, I get a syntax error when trying to run the query; the subquery by itself runs fine. Any ideas what is wrong with the query? The syntax looks correct to me.

        SELECT IF(daily_count>8000,0,IF(daily_count>6000,1,2))
        FROM
        (
            SELECT count(*) as daily_count
            FROM orders201003
            WHERE DATE_FORMAT(date_sub(curdate(), INTERVAL 1 DAY),"%d-%m-%y") = DATE_FORMAT(reqDate,"%d-%m-%y")
        ) q

    Read the article

  • mySQL date range problem

    - by StealthRT
    Hey all, I am in need of a little help figuring out how to get a range of days for my select query. Here is the code I am trying out:

        select id, idNumber, theDateStart, theDateEnd
        from clients
        WHERE idNumber = '010203'
        AND theDateStart >= '2010-04-09'
        AND theDateEnd <= '2010-04-09';

    This is what the data in the table looks like:

        TheDateStart = 2010-04-09
        TheDateEnd   = 2010-04-11

    When testing the code above, it does not return anything. If I take out theDateEnd, it returns rows, but with some other records' data as well, which it should not do (it should only get one record). I know the problem is within the two dates. I'm not sure how to go about getting a date range for theDateStart and theDateEnd, since if someone tries it, say, on 2010-04-10, it still needs to know it's within range of 2010-04-09 - 2010-04-11. But right now, it does not... Any help would be great! :o)

    David
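
    If the goal is "find the row whose range contains a given day", the comparisons need to point the other way around: the start must be on or before the day and the end on or after it. A sketch of that query through MySQLdb, with placeholder connection details:

        # Sketch: a date falls inside a row's range when the range starts
        # on or before it AND ends on or after it. Connection details are
        # placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="mydb")
        cur = conn.cursor()

        day = '2010-04-10'   # any day inside 2010-04-09 .. 2010-04-11 matches
        cur.execute("""
            SELECT id, idNumber, theDateStart, theDateEnd
            FROM clients
            WHERE idNumber = %s
              AND theDateStart <= %s
              AND theDateEnd >= %s
        """, ('010203', day, day))
        print(cur.fetchall())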

    Read the article

  • Using httplib2 in python 3 properly? (Timeout problems)

    - by Sho Minamimoto
    Hey, first-time post; I'm really stuck on httplib2. I've been reading up on it from diveintopython3.org, but it mentions nothing about a timeout function. I looked up the documentation, but the only thing I see is the ability to pass a timeout int, with no units specified (seconds? milliseconds? What's the default if None?). This is what I have (I also have code to check what the response is and try again, but it's never tried more than once):

        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            response, content = h.request(url)
            # more stuff...

    So the Http object stays around until some arbitrary time, but I'm downloading a ton of pages from the same server, and after a while it hangs on getting a page. No errors are thrown; the thing just hangs at a page. So then I try:

        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            try:
                response, content = h.request(url)
            except:
                h = httplib2.Http('.cache', timeout=None)
            # more stuff...

    But then it recreates another Http object every time (goes down the 'except' path)... I don't understand how to keep using the same object until it expires and I make another. Also, is there a way to set a timeout on an individual request? Thanks for the help!
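
    For what it's worth, httplib2 hands the timeout to the underlying socket, so it is in seconds, and None means a request may block indefinitely, which is consistent with the hangs described. A sketch that keeps one Http object and retries a timed-out request; the 10-second value, retry count, and URL list are arbitrary:

        # Sketch: give the Http object a socket timeout (in seconds) and
        # retry a failed request with the SAME object; it only needs
        # recreating if you want to drop its cached connections.
        import socket
        import httplib2

        urls = ["http://example.com/page1", "http://example.com/page2"]

        h = httplib2.Http('.cache', timeout=10)   # 10-second socket timeout

        for url in urls:
            for attempt in range(3):
                try:
                    response, content = h.request(url)
                    break                   # success: stop retrying
                except socket.timeout:
                    continue                # same Http object, try again
            else:
                print('giving up on', url)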

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name)
        AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without prefix: the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

      - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
      - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
      - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
      - When I keep the FirstName, LastName, FullName, and State searches but use LIKE for each, it is fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
      - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
      - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is: though there are many, many problems with this design, what is causing this specific problem? My guess is that the index trees are just too large and it takes a very long time traversing them (FirstName, LastName, FullName). Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that would be fantastic.
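
    One rewrite worth testing when an OR spans different indexed columns is splitting the statement into a UNION, so each branch can be served by a single composite index such as (FirstName, State). A hedged sketch; the index choice and connection details are assumptions, and the payoff depends on the optimizer:

        # Sketch: each UNION branch can use one composite index, e.g.
        # (FirstName, State), (LastName, State), (FullName, State).
        # UNION also removes duplicate rows across the branches.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="mydb")
        cur = conn.cursor()
        cur.execute("""
            SELECT FirstName, LastName, FullName, State
              FROM Activity WHERE FirstName = %(name)s AND State = %(state)s
            UNION
            SELECT FirstName, LastName, FullName, State
              FROM Activity WHERE LastName = %(name)s AND State = %(state)s
            UNION
            SELECT FirstName, LastName, FullName, State
              FROM Activity WHERE FullName = %(name)s AND State = %(state)s
        """, {'name': 'John Smith', 'state': 'FL'})
        print(cur.fetchall())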

    Read the article

  • MySQL query for the latest date

    - by user295189
    I am running this query:

        SELECT sh.*, u.initials AS initals
        FROM database1.table1 AS sh
        JOIN database2.user AS u ON u.userID = sh.userid
        WHERE id = 123456
        AND dts = (SELECT MAX(dts) FROM database1.table1)
        ORDER BY sort_by, category

    In table1 I have records like this:

        dts                  status           category        sort_by
        2010-04-29 12:20:27  Civil Engineers  Occupation      1
        2010-04-28 12:20:27  Civil Engineers  Occupation      1
        2010-04-28 12:20:54  Married          Marital Status  2
        2010-04-28 12:21:15  Smoker           Tobbaco         3
        2010-04-27 12:20:27  Civil Engineers  Occupation      1
        2010-04-27 12:20:54  Married          Marital Status  2
        2010-04-27 12:21:15  Smoker           Tobbaco         3
        2010-04-26 12:20:27  Civil Engineers  Occupation      1
        2010-04-26 12:20:54  Married          Marital Status  2
        2010-04-26 12:21:15  Smoker           Tobbaco         3

    If you look at my data, I am choosing the latest entry by category and sort_id. However, in some cases, such as on the 29th (2010-04-29 12:20:27), I have only one record. In that case I want to show the latest occupation and then the latest entries for the rest of the categories. But currently it displays only one row. Thanks
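
    This is the classic "latest row per group" problem: instead of one global MAX(dts), take the maximum per category in a derived table and join back on it. A hedged sketch; that id and userid are columns of table1 is an assumption carried over from the question:

        # Sketch: groupwise maximum; pick the newest row per category,
        # then join the user table as before. Assumes id and userid live
        # on table1, as the original query implies.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="database1")
        cur = conn.cursor()
        cur.execute("""
            SELECT sh.*, u.initials AS initals
            FROM database1.table1 AS sh
            JOIN (
                SELECT category, MAX(dts) AS max_dts
                FROM database1.table1
                WHERE id = %s
                GROUP BY category
            ) latest ON latest.category = sh.category
                    AND latest.max_dts = sh.dts
            JOIN database2.user AS u ON u.userID = sh.userid
            WHERE sh.id = %s
            ORDER BY sh.sort_by, sh.category
        """, (123456, 123456))
        for row in cur.fetchall():
            print(row)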

    Read the article

  • Generating permutation in Python with specific rule

    - by twfx
    Let's say a = [A, B, C, D], where each element has a weight w and is set to 1 if selected, 0 otherwise. I'd like to generate the permutations in the order below:

        1,1,1,1
        1,1,1,0
        1,1,0,1
        1,1,0,0
        1,0,1,1
        1,0,1,0
        1,0,0,1
        1,0,0,0

    Let w = [1,2,3,4] for items A,B,C,D, and max_weight = 4. For each permutation, if the accumulated weight exceeds max_weight, stop calculating that permutation and move to the next one. For example:

        1,1,1    --> 6 > 4, exceeded, stop, move to next
        1,1,1    --> 6 > 4, exceeded, stop, move to next
        1,1,0,1  --> 7 > 4, finished, move to next
        1,1,0,0  --> 3, finished, move to next
        1,0,1,1  --> 8 > 4, finished, move to next
        1,0,1,0  --> 4, finished, move to next
        1,0,0,1  --> 5 > 4, finished, move to next
        1,0,0,0  --> 1, finished, move to next

    [1,0,1,0] is the best combination, which does not exceed max_weight = 4.

    My questions are:

      1. What algorithm generates the required permutations? Any suggestion on how I could generate them?
      2. Since the number of elements can be up to 10,000, and the calculation for a branch stops as soon as its accumulated weight exceeds max_weight, it is not necessary to generate all permutations before the calculation. How can the algorithm in (1) generate the permutations on the fly?
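
    A depth-first recursive generator produces exactly this ordering (trying 1 before 0 at each position) and abandons a branch the moment its running weight passes max_weight, so nothing over the limit is ever extended. A sketch using the weights from the question; for inputs anywhere near 10,000 elements the recursion would need to be converted to an explicit stack:

        # Sketch: depth-first enumeration, trying 1 before 0 at each
        # position, pruning any branch whose accumulated weight already
        # exceeds the limit.
        def selections(weights, max_weight, prefix=(), total=0):
            if total > max_weight:        # prune: this branch is already over
                return
            if len(prefix) == len(weights):
                yield prefix, total       # a complete selection within the limit
                return
            w = weights[len(prefix)]
            for bit in (1, 0):            # 1 first reproduces the ordering above
                for result in selections(weights, max_weight,
                                         prefix + (bit,), total + bit * w):
                    yield result

        weights = [1, 2, 3, 4]            # A, B, C, D
        best = max(selections(weights, 4), key=lambda item: item[1])
        print(best)                       # ((1, 0, 1, 0), 4)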

    Read the article

  • How do I handle mysql replication in EC2 using private IPs?

    - by chris
    I am trying to set up a mysql master/slave configuration in two EC2 instances. However, every time I reboot an instance, the IP address (and hostname) changes. I could assign an Elastic IP address, but would prefer to use the internal IP address. I can't be the first person to do this, but I can't seem to find a solution. There are a lot of "getting started" guides, but none of them mention how to handle changing IP addresses. So what are the best practices to manage master/slave replication in EC2?

    Read the article

  • Two n x m relationships with the same table in mysql

    - by Christian
    I want to create a database in which there's an n x m relationship between the table drugs and the table article, and an n x m relationship between the table targets and the table article. I get the error:

        Cannot delete or update a parent row: a foreign key constraint fails

    What do I have to change in my code?

        DROP TABLE IF EXISTS `textmine`.`article`;
        CREATE TABLE `textmine`.`article` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Pubmed ID',
          `abstract` blob NOT NULL,
          `authors` blob NOT NULL,
          `journal` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`drugs`;
        CREATE TABLE `textmine`.`drugs` (
          `id` int(10) unsigned NOT NULL COMMENT 'This ID is taken from the biosemantics dictionary',
          `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`targets`;
        CREATE TABLE `textmine`.`targets` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`containstarget`;
        CREATE TABLE `textmine`.`containstarget` (
          `targetid` int(10) unsigned NOT NULL,
          `articleid` int(10) unsigned NOT NULL,
          KEY `target` (`targetid`),
          KEY `article` (`articleid`),
          CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
          CONSTRAINT `target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`contiansdrug`;
        CREATE TABLE `textmine`.`contiansdrug` (
          `drugid` int(10) unsigned NOT NULL,
          `articleid` int(10) unsigned NOT NULL,
          KEY `drug` (`drugid`),
          KEY `article` (`articleid`),
          CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
          CONSTRAINT `drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
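
    Two details stand out. First, the script drops the parent tables (article, drugs, targets) while previously created link tables still reference them, which is precisely the "Cannot delete or update a parent row" error; dropping the child tables first, or disabling foreign key checks during the rebuild, avoids it. Second, InnoDB constraint names must be unique per database, and `article` is used as a constraint name in both link tables. A hedged sketch of the child-first rebuild order:

        # Sketch: run the drops children-first so no foreign key ever
        # points at a missing parent. Shown driven from MySQLdb;
        # connection details are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="textmine")
        cur = conn.cursor()

        for stmt in (
            "DROP TABLE IF EXISTS `containstarget`",
            "DROP TABLE IF EXISTS `contiansdrug`",
            "DROP TABLE IF EXISTS `article`",
            "DROP TABLE IF EXISTS `drugs`",
            "DROP TABLE IF EXISTS `targets`",
        ):
            cur.execute(stmt)

        # ... then the CREATE TABLE statements as above, but with unique
        # constraint names, e.g. `fk_containstarget_article` and
        # `fk_contiansdrug_article` instead of `article` twice.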

    Read the article

  • Optimize this MySQL query?

    - by HipHop-opatamus
    The following query takes FOREVER to execute (30+ hours on a MacBook w/ 4 gigs of RAM); I'm looking for ways to make it run more efficiently. Any thoughts are appreciated!

        CREATE TABLE fc AS
        SELECT threadid, title, body, date, userlogin
        FROM f
        WHERE pid NOT IN (SELECT pid FROM ft)
        ORDER BY date;

    (Table "f" is ~1 gig / 1,843,000 rows; table "ft" is 168MB, 216,000 rows.)
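
    On MySQL versions of that era, NOT IN (subquery) was often executed per outer row; the standard rewrites are an index on ft.pid plus a LEFT JOIN ... IS NULL anti-join, which is equivalent as long as ft.pid contains no NULLs. A hedged sketch, with placeholder connection details:

        # Sketch: anti-join rewrite of the NOT IN subquery. Assumes
        # ft.pid contains no NULLs; the index on ft.pid does the work.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="forum")  # placeholder db
        cur = conn.cursor()
        cur.execute("CREATE INDEX idx_ft_pid ON ft (pid)")  # skip if one exists
        cur.execute("""
            CREATE TABLE fc AS
            SELECT f.threadid, f.title, f.body, f.date, f.userlogin
            FROM f
            LEFT JOIN ft ON ft.pid = f.pid
            WHERE ft.pid IS NULL
            ORDER BY f.date
        """)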

    Read the article

  • In python, changing MySQL query based on function variables

    - by ensnare
    I'd like to be able to add a restriction to the query if user_id != None (for example, "AND user_id = 5"), but I am not sure how to add this into the function below. Thank you.

        def get(id, user_id=None):
            query = """SELECT * FROM USERS
                       WHERE text LIKE %s AND
                       id = %s
                    """
            values = (search_text, id)
            results = DB.get(query, values)

    This way I can call:

        get(5)
        get(5, 103524234)   # contains user_id restriction
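
    The usual pattern is to grow the SQL string and the parameter list side by side, appending to both when the optional argument is present, so the extra filter stays a bound placeholder. A sketch along the lines of the function above; search_text is promoted to a parameter here, since it is otherwise undefined, and DB.get is the question's own helper:

        # Sketch: build the statement and its parameters together, so the
        # optional filter stays a %s placeholder rather than being
        # string-pasted into the SQL.
        def get(id, search_text, user_id=None):
            query = "SELECT * FROM USERS WHERE text LIKE %s AND id = %s"
            values = [search_text, id]
            if user_id is not None:
                query += " AND user_id = %s"
                values.append(user_id)
            return DB.get(query, tuple(values))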

    Read the article

  • Help with MySQL query - Product orders report without duplicate shipping charges

    - by Paul
    Hello, I have an issue creating a custom report for an e-commerce store running on osCommerce. The client wants the report to have the following columns:

        Date, Order ID, Product Class, Product Price, Product Tax, Shipping, Order Total

    The criteria for generating the report are Date Range and Product Class (Textbooks, for example). The client wants the report to list each textbook purchased on its own line; orders with multiple textbooks would display a separate line for each textbook in the order. I have it all working except for one part: the shipping amount is order-specific (based on the order total), not product-specific, and is displaying for each product. I need it to display only for the first product of each order, so it is not counted more than once. My current query is:

        SELECT op.date_funds_captured as 'Date',
               op.orders_id as 'Order ID',
               pc.class as 'Product Class',
               round(op.products_price,2) as 'Product Price',
               round(op.products_tax*op.products_price/100,2) as 'Product Tax',
               round(otship.value,2) as 'Shipping',
               round(ot.value,2) as 'Order Total'
        from orders_products op, orders_total ot, orders_total otship, productclasses pc, products p
        where ot.orders_id = op.orders_id
          and ot.class='ot_total'
          and op.orders_id = otship.orders_id
          and otship.class = 'ot_shipping'
          and p.products_class_id = pc.id
          and op.products_id = p.products_id
          and pc.id = 1 -- Product class = Textbook

    Here is an example of the current report output. You can see the problem with order 2256 showing the shipping value three times instead of once:

        Date        Order  Product Class  Price  Tax   Shipping  Total
        2010-01-04  2253   Textbook       24.95  2.43  10.03     37.41
        2010-01-04  2256   Textbook       34.95  0.00  18.09     240.37
        2010-01-04  2256   Textbook       55.50  0.00  18.09     240.37
        2010-01-04  2256   Textbook       36.95  0.00  18.09     240.37
        2010-01-04  2258   Textbook       55.50  5.41  12.17     124.24

    Please help!!! Thanks, Paul
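
    Since the rows for an order come back together, one simple fix is to blank the order-level columns after the first line of each order while post-processing the result set in the report script. A sketch in Python, demonstrated on the sample rows from the question; an ORDER BY op.orders_id would be needed if the rows weren't already grouped:

        # Sketch: keep Shipping and Order Total only on the first line of
        # each order. Rows are tuples in the report's column order.
        def dedupe_shipping(rows):
            seen_orders = set()
            for date, order_id, pclass, price, tax, shipping, total in rows:
                if order_id in seen_orders:
                    shipping, total = '', ''   # already reported for this order
                else:
                    seen_orders.add(order_id)
                yield date, order_id, pclass, price, tax, shipping, total

        rows = [   # the sample output above
            ('2010-01-04', 2253, 'Textbook', 24.95, 2.43, 10.03, 37.41),
            ('2010-01-04', 2256, 'Textbook', 34.95, 0.00, 18.09, 240.37),
            ('2010-01-04', 2256, 'Textbook', 55.50, 0.00, 18.09, 240.37),
            ('2010-01-04', 2256, 'Textbook', 36.95, 0.00, 18.09, 240.37),
            ('2010-01-04', 2258, 'Textbook', 55.50, 5.41, 12.17, 124.24),
        ]
        for row in dedupe_shipping(rows):
            print(row)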

    Read the article

  • MySQL Trigger with Update

    - by Matthew
    I'm trying to update a row when it gets updated, to keep one of the columns consistent:

        CREATE TRIGGER user_country BEFORE UPDATE ON user_billing
        FOR EACH ROW BEGIN
            IF NEW.billing_country = OLD.billing_country AND NEW.country_id != OLD.country_id THEN
                SET NEW.billing_country = cms.country.country_name
                WHERE cms.country.country_id = NEW.country_id;
            END IF;
        END

    But I keep receiving error #1064. Is there a way to update a row based on another row's data when the row is getting updated?
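
    The #1064 parse error comes from the WHERE clause: SET in a trigger body takes an expression, not a filtered update, so the lookup has to be written as a scalar subquery. A hedged sketch of the corrected trigger, created from Python for consistency with the other examples here; sent as a single statement, no DELIMITER juggling is needed:

        # Sketch: replace "SET col = table.col WHERE ..." (invalid) with
        # a scalar subquery. Connection details are placeholders.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="password", db="cms")
        cur = conn.cursor()
        cur.execute("""
            CREATE TRIGGER user_country BEFORE UPDATE ON user_billing
            FOR EACH ROW
            BEGIN
                IF NEW.billing_country = OLD.billing_country
                   AND NEW.country_id != OLD.country_id THEN
                    SET NEW.billing_country = (SELECT country_name
                                               FROM cms.country
                                               WHERE country_id = NEW.country_id);
                END IF;
            END
        """)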

    Read the article

  • How to build sqlite for Python 2.4?

    - by Verrtex
    I would like to use the pysqlite interface between Python and an SQLite database. I already have Python and SQLite on my computer, but I am having trouble installing pysqlite. During the installation I get the following error message:

        error: command 'gcc' failed with exit status 1

    As far as I understand, the problem appears because my Python version is 2.4.3 and SQLite has been integrated into Python since 2.5. However, I also found out that it IS possible to build sqlite for Python 2.4 (using some tricks, probably). Does anybody know how to build sqlite for Python 2.4?

    As another option I could try to install a higher version of Python. However, I do not have root privileges. Does anybody know the easiest way to solve the problem (build SQLite for Python 2.4, or install a newer version of Python)? I have to mention that I would not like to overwrite the old version of Python. Thank you in advance.

    Read the article
