Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.


  • Getting a KeyError in DB backend of django-digest

    - by rtmie
    I have just started to integrate django_digest into my app. As a start I have added the @httpdigest decorator to one of my views. If I try to connect to it I get a KeyError exception thrown in django_digest/backend/db.py. Depending on which db I configure I get a different KeyError in a different location. I am using Django 1.2.1 with MySQL (also tested with SQLite), and the default values for all the settings options. As far as I can see I have followed all the instructions, but I have been struggling with this all day. I am using the repository versions of django-digest and python-digest. Any steer would be greatly appreciated. Tracebacks for SQLite and MySQL below.

    With SQLite:

        Traceback (most recent call last):
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__
            return self.application(environ, start_response)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 248, in __call__
            signals.request_finished.send(sender=self.__class__)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/dispatch/dispatcher.py", line 162, in send
            response = receiver(signal=self, sender=sender, **named)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 16, in close_connection
            _connection.close()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/sqlite3/base.py", line 186, in close
            if self.settings_dict['NAME'] != ":memory:":
        KeyError: 'NAME'

    With MySQL:

        Traceback (most recent call last):
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/servers/basehttp.py", line 674, in __call__
            return self.application(environ, start_response)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 142, in get_response
            return self.handle_uncaught_exception(request, resolver, exc_info)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 166, in handle_uncaught_exception
            return debug.technical_500_response(request, *exc_info)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/core/handlers/base.py", line 80, in get_response
            response = middleware_method(request)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/middleware.py", line 13, in process_request
            if (not self._authenticator.authenticate(request) and
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/__init__.py", line 86, in authenticate
            partial_digest = self._account_storage.get_partial_digest(digest_response.username)
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django_digest-1.8-py2.5.egg/django_digest/backend/db.py", line 97, in get_partial_digest
            cursor = get_connection().cursor()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/__init__.py", line 75, in cursor
            cursor = self._cursor()
          File "/home/robm/projects/gcs/server/gcs2.5/lib/python2.5/site-packages/django/db/backends/mysql/base.py", line 281, in _cursor
            if settings_dict['USER']:
        KeyError: 'USER'


  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system of its own, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This serves more than just solving my issue with Sitefinity; more often I wonder what's sent to the MySQL server, without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?
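
    Before reaching for an external tool, MySQL itself can show most of this. A minimal sketch, assuming MySQL 5.1+ and a privileged account: the general query log records every statement and connection attempt the server receives (it does not mark which ones failed, but a missing table usually shows up as the statement issued right before the application error), and the Aborted_connects counter tracks failed logons.

        SET GLOBAL log_output  = 'TABLE';   -- write the log to mysql.general_log instead of a file
        SET GLOBAL general_log = 'ON';      -- start logging every statement and connection

        -- Inspect the most recent activity (connections appear as command_type = 'Connect'):
        SELECT event_time, user_host, command_type, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 50;

        -- Failed logon attempts increment this status counter:
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';

        SET GLOBAL general_log = 'OFF';     -- switch it off again; the log is expensive on a busy server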


  • Is it possible to find records that form a straight run in MySQL, based on date or sequence?

    - by user1987816
    I want to get the records that have a straight run of Sell more than 3 times. Is that possible in MySQL? If not, how do I get it right? I need it in MySQL or PHP. My data:

        +----------+---------------------+--------+
        | Username | Date                | Action |
        +----------+---------------------+--------+
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Buy    |
        | Adam     | 2014-08-20 22:30:20 | Buy    |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Adam     | 2014-08-20 22:30:20 | Sell   |
        | Nick     | 2014-08-20 22:30:20 | Sell   |
        | Nick     | 2014-08-20 22:30:20 | Sell   |
        | Nick     | 2014-08-20 22:30:20 | Sell   |
        | Nick     | 2014-08-20 22:30:20 | Sell   |
        | Nick     | 2014-08-20 22:30:20 | Buy    |
        +----------+---------------------+--------+

    From the table above, I need to list all records that have a straight Sell run of 3 or more. Expected result:

        +----------+---------------------+--------+-------------+
        | Username | Date                | Action | Straight 3+ |
        +----------+---------------------+--------+-------------+
        | Adam     | 2014-08-20 22:30:20 | Sell   | 3           |
        | Adam     | 2014-08-20 22:30:20 | Sell   | 4           |
        | Nick     | 2014-08-20 22:30:20 | Sell   | 4           |
        +----------+---------------------+--------+-------------+
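
    MySQL of this era has no window functions, but a common workaround is user variables that assign a run number to each stretch of identical (user, action) rows, then group by that run. A sketch only: the table name trades and the auto-increment id column are assumptions (the sample rows all share one timestamp, so the Date column alone cannot give a stable order).

        SELECT Username, MIN(`Date`) AS `Date`, Action, COUNT(*) AS `Straight 3+`
        FROM (
            SELECT t.Username, t.`Date`, t.Action,
                   -- bump the run counter whenever the user or the action changes
                   @run := IF(t.Username = @prev_user AND t.Action = @prev_action, @run, @run + 1) AS run_id,
                   @prev_user   := t.Username AS prev_user,
                   @prev_action := t.Action   AS prev_action
            FROM trades AS t
            CROSS JOIN (SELECT @run := 0, @prev_user := '', @prev_action := '') AS vars
            ORDER BY t.Username, t.id          -- id assumed to reflect insertion order
        ) AS runs
        WHERE Action = 'Sell'
        GROUP BY run_id
        HAVING COUNT(*) >= 3;

    The outer GROUP BY collapses each qualifying run into one row with its length, matching the expected result above.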


  • How to bulk insert data into a MySQL table from ASP.NET at once?

    - by kranthi
    I have a requirement to read an Excel sheet using ASP.NET/C# and insert all the records into a MySQL table. The sheet consists of around 2000 rows and 50 columns. Currently, upon reading the records, I insert them one by one using a prepared statement, but it takes around 70 seconds because of the amount of data. I've also thought of creating a new DataRow, assigning values to each cell, adding the resulting DataRow to a DataTable and finally calling DataAdapter.Update(...), but that seems complex because with 50 columns I'd have to assign 50 values per row. Could someone please suggest an alternative that improves the insertion performance? Thanks
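
    If the rows can first be written to a CSV file (or the sheet saved as CSV), MySQL's bulk loader is usually far faster than 2000 individual INSERTs; batching many rows into one multi-row INSERT also cuts the round trips. A sketch with invented table, column and file names; LOAD DATA LOCAL has to be enabled on both the client and the server.

        LOAD DATA LOCAL INFILE 'C:/temp/sheet.csv'
        INTO TABLE imported_rows
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n'
        IGNORE 1 LINES;              -- skip the header row

        -- Alternative: many rows per statement instead of one INSERT per row
        INSERT INTO imported_rows (col1, col2, col3)
        VALUES (1, 'a', 'b'), (2, 'c', 'd'), (3, 'e', 'f');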


  • Is hierarchical data in MySQL as fast to retrieve as XML?

    - by ajsie
    I've got a list of all countries, states and cities (plus subcities/villages etc.) in an XML file, and retrieving, for example, all of a state's cities is really quick with XML (using an XML parser). I wonder: if I put all this information in MySQL, is retrieving all of a state's cities as fast as with XML? XML is designed to store hierarchical data, while relational databases like MySQL are not. The list contains around 500,000 entities, so I wonder whether it would be as fast as XML using either of these models:

    - Adjacency list model
    - Nested set model

    Which one should I use? There could (theoretically) be unlimited levels under a state (I've heard adjacency lists aren't good for unlimited child levels). And which is fastest for this huge dataset? Thanks!
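
    For reference, this is roughly what the "one state, all descendants" lookup looks like under the nested set model; the table and column names here are invented for the sketch, and an index on (lft, rgt) is what keeps it fast at ~500,000 rows.

        -- Nested set: every descendant sits between its ancestor's lft and rgt values.
        SELECT child.name
        FROM places AS parent
        JOIN places AS child
          ON child.lft BETWEEN parent.lft AND parent.rgt
        WHERE parent.name = 'Bavaria'          -- the state being expanded
          AND child.node_type = 'city';

    Under the adjacency list model the same lookup needs one query or self-join per level, which is why unlimited depth is awkward there; the trade-off is that inserts and moves are much cheaper than in a nested set.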


  • MySQL SET NAMES utf8 - how to get rid of it?

    - by Nir
    In a very busy PHP script we have a call at the beginning to SET NAMES utf8, which sets the character set MySQL uses to interpret data coming from the client and to send results back to it (http://dev.mysql.com/doc/refman/5.0/en/charset-applications.html). I want to get rid of it, so I set default-character-set=utf8 in our server ini file (see the link above). The setting seems to be working, since the relevant server parameters are:

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    latin1
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      latin1
        character_set_system      utf8

    But after this change, with the SET NAMES utf8 call commented out, the data still comes out garbled. Please advise.
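
    It may help to check what a fresh PHP connection actually negotiates and what the columns themselves are declared as; character_set_database and character_set_server still being latin1 suggests the schema (or its columns) may be latin1, in which case data is converted on the way in regardless of SET NAMES. A diagnostic sketch, with the table and column names invented:

        -- What did this particular connection negotiate?
        SHOW SESSION VARIABLES LIKE 'character_set%';

        -- What are the table and its columns actually declared as?
        SHOW CREATE TABLE my_table;

        -- Bytes on disk vs. what the client renders: HEX() shows whether the stored
        -- data is valid UTF-8 (e.g. C3A9 for 'é') or latin1/double-encoded.
        SELECT my_column, HEX(my_column) FROM my_table LIMIT 5;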


  • How to SELECT the last 10 rows of an SQL table which has no ID field?

    - by Edward Tanguay
    I have a MySQL table with 25,000 rows. It was imported from a CSV file, so I want to look at the last ten rows to make sure it imported everything. However, since there is no ID column, I can't say:

        SELECT * FROM big_table ORDER BY id DESC

    What SQL statement would show me the last 10 rows of this table? The structure of the table is simply this:

    - columns are: A, B, C, D, ..., AA, AB, AC, ... (like Excel)
    - all fields are of type TEXT
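
    Without a column to order by, MySQL makes no promise about row order, so "the last 10 rows" isn't well defined. A common workaround sketch: bolt a surrogate key onto the table after the import and order by that; the new values are generally assigned in the order the table is scanned, which for a fresh single-pass CSV import usually matches insertion order, though that is not guaranteed.

        -- Add an auto-increment key, then read the highest values:
        ALTER TABLE big_table
          ADD COLUMN row_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

        SELECT * FROM big_table ORDER BY row_id DESC LIMIT 10;

        -- If the goal is only to confirm the import was complete, a row count may be enough:
        SELECT COUNT(*) FROM big_table;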


  • Automatically create a table on the MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like:

        SELECT * FROM data_2010_1

    What I have been doing until now is: every time the script executes, it checks for the table and, if it exists, does the work; if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.

    Update: Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions: if I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten? And what are the potential problems with my home-grown method I originally thought up?
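
    MySQL 5.1+ does ship a cron-like facility, the event scheduler. A sketch of a monthly event that clones an empty template table; the table names follow the question's convention, and since CREATE TABLE cannot take a computed name directly, a real version would build the name with PREPARE inside a stored procedure called by the event.

        SET GLOBAL event_scheduler = ON;   -- also enable it in my.cnf so it survives restarts

        CREATE EVENT create_monthly_table
          ON SCHEDULE EVERY 1 MONTH
          STARTS '2010-06-01 00:00:00'
        DO
          CREATE TABLE IF NOT EXISTS data_2010_6 LIKE data_template;

    For the updated question, RANGE partitioning on the date column is the usual alternative to one-table-per-month: the application keeps querying a single table name while MySQL stores and prunes by month internally.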


  • How to connect to a MySQL database using PHP?

    - by Karthik
    This is the code I am using to check whether I can connect to MySQL:

        <?php
        mysql_connect("localhost", "username", "password") or die(mysql_error());
        echo "Connected to MySQL<br />";
        ?>

    I am having problems connecting with this code. Should I change localhost to the name of the website? I tried ("www.abc.com", "login username", "password"), but that is not working either.
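
    The first argument to mysql_connect() is the host of the MySQL server (on typical shared hosting that really is localhost or a host name the provider gives you, not your website's address), and the user and password are a MySQL account, not the site login. If you have SQL access, this sketch shows which accounts exist and which hosts they may connect from:

        -- Run as a privileged user: each account and the host mask it may connect from.
        SELECT user, host FROM mysql.user;

        -- Confirms which account/host the server resolved the current session to:
        SELECT CURRENT_USER();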


  • Hopefully a simple topic to spark some good opinions: MySQL or SQL Server?

    - by magellings
    I'm beginning development of a website, and a high priority is for it to be extremely optimized, with quick responses, etc. The main tables will ultimately contain large numbers of rows (millions), so scalability is also important. It will need a database on the back end for data storage, and my web hosting service supports either MySQL or SQL Server. The site will be developed with ASP.NET MVC and NHibernate (hopefully it can run in medium trust mode, as my web hosting requires that, and NHibernate's reflection requirements may be problematic; maybe someone has a comment on this too). I'd also prefer to use the database that requires the least attention in terms of management; I don't want to have to be a DBA here. :) I wanted to throw this topic out to the public to see what the community thinks. So, MySQL or SQL Server: generally, which one would be better to use?


  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming from my WordPress site. It's making everything slow, and I think it is eating up CPU. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to run at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated by WordPress core code, not by anything specific to my site (like the theme). I have a vague feeling that the database is not always using the indexes, or is not using them properly. Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer; it is hugely appreciated.

    Query (from wp-blog-header.php(14): wp()):

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

        Query time: 34.2829 (ms)

    Query (from wp-content/themes/LMHR/index.php(40): query_posts()):

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
            SELECT tr.object_id
            FROM wp_term_relationships AS tr
            INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
            WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                       rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                     427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                      1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id  1     Using where

        Query time: 70.3900 (ms)
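
    Independent of fixing these two statements, MySQL's slow query log is the usual way to keep catching queries like this as the site changes. A sketch, assuming MySQL 5.1+ where these variables can be set at runtime:

        SET GLOBAL slow_query_log = ON;
        SET GLOBAL long_query_time = 0.05;                 -- log anything slower than 50 ms
        SET GLOBAL log_queries_not_using_indexes = ON;     -- also log full scans, whatever their runtime

    Note that both EXPLAIN plans already use the type_status_date index; the "Using filesort" most likely comes from ordering by post_date while matching two post_status values, so the sort step, rather than a missing index, may be the main cost here.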


  • Which collation should I use to store these country names in MySQL?

    - by morpheous
    I am trying to store a list of countries in a MySQL database. I am having problems storing (non-English) names like these:

    - São Tomé and Príncipe
    - República de El Salvador

    They are stored with strange characters in the db (and therefore output strangely in my HTML pages). I have tried using different combinations of collations for the database and the MySQL connection. The "obvious" setting was to use utf8_unicode_ci for both the database and the connection information; to my utter surprise, that did not solve the problem. Does anyone know how to resolve this issue?
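
    Collation mostly affects sorting and comparison; mangled output is usually a character set problem somewhere along the chain (column, connection or page encoding). A sketch of checking and fixing the column itself; the table and column names are invented:

        -- See whether the stored bytes are valid UTF-8 (São Tomé should contain C3A3 and C3A9):
        SELECT country_name, HEX(country_name) FROM countries WHERE country_name LIKE 'S%';

        -- Convert the table's text columns to utf8 if they are still latin1:
        ALTER TABLE countries CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        -- And make sure every connection that reads or writes the table uses utf8 as well:
        SET NAMES utf8;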


  • How do I get phpMyAdmin to connect to a local mysql server?

    - by bryan
    Hi everyone, I have my config.inc.php file and have set my host name to localhost. Unfortunately, I have no idea what my username/password should be. Is that something I need to configure on the MySQL side? I tried creating an arbitrary username/password (admin/password), but when I try to log into phpMyAdmin with those credentials, I get an error: #1045 - Access denied for user 'admin'@'localhost' (using password: YES). Can anyone point me in the right direction? (Sorry for the dumb question; I just haven't had to install MySQL before. I've always had a host name/username/password given to me.) Thanks!
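
    Yes: the username/password in config.inc.php have to match an account that already exists inside MySQL, since phpMyAdmin only passes them through. On a fresh install you either use root with the password chosen when MySQL was set up, or create a dedicated account from the mysql command-line client. A sketch; the account name and password are just examples:

        -- Run from the mysql command-line client as root:
        CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
        GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;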


  • Connect to a MySQL database and count the number of rows.

    - by Hugo
    Hi there! I need to connect to a MySQL database and then show the number of rows. This is what I've got so far:

        <?php
        include "connect.php";
        db_connect();

        $result = mysql_query("SELECT * FROM hacker");
        $num_rows = mysql_num_rows($result);

        echo $num_rows;
        ?>

    When I use that code I end up with this error:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in C:\Documents and Settings\username\Desktop\xammp\htdocs\news2\results.php on line 10

    Thanks in advance :D
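
    That warning means mysql_query() returned FALSE instead of a result resource, i.e. the query itself failed (wrong table name, no database selected, or no connection); echoing mysql_error() right after the query usually reveals which. Separately, if only the count is needed, it is cheaper to let MySQL count than to fetch every row; a sketch:

        -- Returns a single row with the count, instead of transferring the whole table:
        SELECT COUNT(*) AS row_count FROM hacker;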


  • Does it make sense to use BOTH MongoDB and MySQL in the same Rails application?

    - by Brian Armstrong
    I have a good reason to use MongoDB for part of my app. But people generally describe it as not a good fit for "transactional" applications like a bank, where transactions have to be exact/consistent, etc. Does it make sense to split the models up in Rails and have some of them use MySQL and others Mongo? Or will this generally cause more problems than it's worth? I'm not building a banking app or anything, but I was thinking it might make sense to keep my users table or my transactions table (recording revenue) in MySQL.


  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    Update: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
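
    For concreteness, here is the proposed schema rendered as InnoDB DDL; this is only a sketch of the question's own tables, with integer widths and VARCHAR lengths guessed (the question lists only logical types) and `index` quoted because INDEX is a reserved word.

        CREATE TABLE runs (
          id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          start_time TIMESTAMP NOT NULL,
          name       VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE spectra (
          id             BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          name           VARCHAR(255) NOT NULL,
          `index`        INT NOT NULL,
          spectrum_type  INT NOT NULL,
          representation INT NOT NULL,
          run_id         BIGINT UNSIGNED NOT NULL,
          FOREIGN KEY (run_id) REFERENCES runs (id)
        ) ENGINE=InnoDB;

        -- At ~200 billion rows this table dominates everything: a BIGINT key, a foreign
        -- key and two DOUBLEs per point already put it in the multi-TiB range before indexes.
        CREATE TABLE datapoints (
          id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          spectrum_id BIGINT UNSIGNED NOT NULL,
          mz          DOUBLE NOT NULL,
          num_counts  DOUBLE NOT NULL,
          `index`     INT NOT NULL,
          FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
        ) ENGINE=InnoDB;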

