Search Results

Search found 48190 results on 1928 pages for 'mysql slow query log'.

Page 188/1928 | < Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >

  • How to create HTML tables from MySQL?

    - by Chris
    Hi guys, I'm building part of a system where the user can define "views" over a MySQL database. I want some way of generating simple HTML tables/reports from the data, but not just the plain output of SQL queries - simple joins probably aren't enough. Any ideas?
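
    A minimal sketch of one approach, assuming a hypothetical people(name, age) table: MySQL itself can assemble the markup with CONCAT and GROUP_CONCAT, which may be enough for simple reports without any templating layer.

        -- Note: GROUP_CONCAT output is capped by group_concat_max_len
        -- (1024 bytes by default), so raise it for larger reports.
        SELECT CONCAT(
                 '<table><tr><th>Name</th><th>Age</th></tr>',
                 GROUP_CONCAT(
                   CONCAT('<tr><td>', name, '</td><td>', age, '</td></tr>')
                   SEPARATOR ''),
                 '</table>') AS html
        FROM people;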

    Read the article

  • MySQL Grouping of data

    - by text
    I want to group my data by age and by gender, like this sample output:

        Age: 1  Male: 2  Female: 3  Age 1 Total: 5
        Age: 2  Male: 6  Female: 3  Age 2 Total: 9

    How can I group the data by age and count all the males and females at each age from a MySQL database?
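
    A sketch, assuming a hypothetical people(age, gender) table: in MySQL a comparison evaluates to 0 or 1, so SUM can count each gender in a single pass over the table.

        SELECT age,
               SUM(gender = 'M') AS males,
               SUM(gender = 'F') AS females,
               COUNT(*)          AS total
        FROM people
        GROUP BY age;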

    Read the article

  • mysql query problem

    - by PiePowa
    Hey everybody, I'm trying to do something simple like this in the phpMyAdmin SQL query box:

        SET i = 0;
        WHILE (i <= 2230686) DO
          INSERT INTO customers(check) VALUE(0);
          SET i = i + 1;
        END WHILE;

    It doesn't work, and MySQL doesn't give an understandable reason. Any clues?
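
    Two likely culprits, offered as a guess rather than a confirmed diagnosis: WHILE is only valid inside a stored program, not as a standalone statement, and CHECK is a reserved word in MySQL, so the column name needs backticks. A sketch that wraps the loop in a procedure (the procedure name is made up):

        DELIMITER //
        CREATE PROCEDURE seed_customers()
        BEGIN
          DECLARE i INT DEFAULT 0;
          WHILE i <= 2230686 DO
            INSERT INTO customers(`check`) VALUES (0);
            SET i = i + 1;
          END WHILE;
        END //
        DELIMITER ;

        CALL seed_customers();

    In phpMyAdmin, the Delimiter field under the query box should be set to // so the procedure body isn't split on its internal semicolons.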

    Read the article

  • SQL Server 05: which is optimal for searching a large column, LIKE '%<term>%' or CONTAINS()?

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by a stored procedure to check whether a certain phrase exists in a text document stored in the DB, returning 1 if the value is found and 0 if it is not. This is the query:

        SELECT @mres=1 FROM documents WHERE id=@DocumentID AND CONTAINS(text, @search_term)

    The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version, commented out:

        SELECT @mres=1 FROM documents WHERE id=@DocumentID AND text LIKE '%' + @search_term + '%'

    This version runs MUCH quicker, taking 4ms compared to 15ms for the first example. So, my question is: why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing.) I am not that clued up on FTI really, so I'm not quite sure at the moment. Thoughts/ideas?
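
    One hedged guess about the CONTAINS timing: the hyphens in a GUID act as word breakers for the full-text engine, so an unquoted term can fan out into several word matches. CONTAINS accepts only a constant or a variable as its search condition, so the phrase-quoted form has to be built first; it is worth timing against both versions above:

        DECLARE @quoted NVARCHAR(44);
        SET @quoted = N'"' + @search_term + N'"';  -- treat the whole GUID as one phrase
        SELECT @mres = 1
        FROM documents
        WHERE id = @DocumentID
          AND CONTAINS(text, @quoted);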

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find several interesting details just by querying a few of the DMVs, and when I present this info to my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems of the DMVs with you, one which I frequently use to understand what's going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server.

        SELECT * FROM sys.dm_exec_query_optimizer_info
        WHERE counter IN ('optimizations', 'elapsed time', 'final cost',
                          'insert stmt', 'delete stmt', 'update stmt',
                          'merge stmt', 'contains subquery', 'tables',
                          'hints', 'order hint', 'join hint',
                          'view reference', 'remote query', 'maximum DOP',
                          'maximum recursion level', 'indexed views loaded',
                          'indexed views matched', 'indexed views used',
                          'indexed views updated', 'dynamic cursor request',
                          'fast forward cursor request')

    All occurrence values are cumulative and are set to 0 at system restart. All values for the value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above and kept only documented details. Let us check the result of the above query. As you can see, there is much vital information revealed by it. I can easily say how many times the optimizer was triggered and what the average time taken to optimize my queries was. Additionally, I can determine how many times update, insert or delete statements were optimized. Using this dynamic management view, I was able to quickly figure out that my client was overusing query hints. If you have been reading my blog, I am sure you are aware of my series related to SQL Server views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times views were used in various solutions within the query. Moreover, you can easily learn what fraction of the optimizations involved views. For example, the following query tells me what fraction of all optimizations included a view reference. As this also counts system views and DMVs, the number is a bit higher on my machine.

        SELECT (SELECT CAST(occurrence AS FLOAT)
                FROM sys.dm_exec_query_optimizer_info
                WHERE counter = 'view reference')
             / (SELECT CAST(occurrence AS FLOAT)
                FROM sys.dm_exec_query_optimizer_info
                WHERE counter = 'optimizations') AS ViewReferencedFraction

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last value from a list. It would be difficult to explain this in words alone, so I'd like to attempt to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimenting. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The most interesting thing in the result is that as we go from row 1 to row 10, the value of FIRST_VALUE() remains the same but the value of LAST_VALUE() keeps increasing. The reason is that at every line the window covers that line and all the lines before it, so the last value is always the row we are currently looking at. This may be useful in some cases, but not always. However, when we use the same thing with PARTITION BY, the same query starts producing results which can easily be used for analytical needs. Let us have fun with the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Let us understand how PARTITION BY windows the result set. I have used PARTITION BY SalesOrderID in my query. This creates small windows within the original result set and applies the FIRST_VALUE and LAST_VALUE logic inside each of them. Well, this is just an introduction to these functions; in future blog posts we will go deeper into their usage. By the way, these functions can be applied to VARCHAR fields as well and are not limited to numeric fields only.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
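
    A side note on why the un-partitioned LAST_VALUE keeps growing: with an ORDER BY and no explicit frame, the window defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. Spelling out a full frame makes LAST_VALUE return the true last row of each partition - a quick sketch against the same rows:

        SELECT s.SalesOrderID, s.SalesOrderDetailID,
               LAST_VALUE(SalesOrderDetailID) OVER
                   (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)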

    Read the article

  • uWSGI log file...permission denied to read file

    - by bkev
    I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) has a continual permissions error every time it spawns a new worker, like so:

        Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
        Error opening file for reading: Permission denied

    Problem is, I don't know which file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and uWSGI is writing to it without issue. Any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak... hence my looking at the log file.

    My Upstart conf file:

        description "uWSGI"
        start on runlevel [2345]
        stop on runlevel [06]
        respawn

        env UWSGI=/usr/bin/uwsgi
        env LOGTO=/var/log/uwsgi/emperor.log

        exec $UWSGI \
            --master \
            --emperor /etc/uwsgi/vassals \
            --die-on-term \
            --auto-procname \
            --no-orphans \
            --logto $LOGTO \
            --logdate

    My vassal ini file:

        [uwsgi]
        # Variables
        base = /srv/env/mysiteenv
        # Generic Config
        uid = uwsgi
        gid = uwsgi
        socket = 127.0.0.1:5050
        master = true
        processes = 2
        reload-on-as = 128
        harakiri = 60
        harakiri-verbose = true
        auto-procname = true
        plugins = http,python
        cache = 2000
        home = %(base)
        pythonpath = %(base)/mysite
        module = wsgi
        logto = /srv/log/mysite/uwsgi_error.log
        logdate = true

    Read the article

  • Configured MySQL for SSL, but SSL is still not in use!

    - by Sunrays
    I configured SSL for MySQL using the following script:

        #!/bin/bash
        #
        mkdir -p /root/abc/ssl_certs
        cd /root/abc/ssl_certs
        #
        echo "--> 1. Create CA cert, private key"
        openssl genrsa 2048 > ca-key.pem
        echo "--> 2. Create CA cert, certificate"
        openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
        echo "--> 3. Create Server certificate, key"
        openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem
        echo "--> 4. Create Server certificate, cert"
        openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem
        echo ""
        echo "--> 5. Create client certificate, key. Use a DIFFERENT common name than the server!!!!"
        echo ""
        openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem
        echo "--> 6. Create client certificate, cert"
        openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem
        exit 0

    The following files were created: ca-key.pem, ca-cert.pem, server-req.pem, server-key.pem, server-cert.pem, client-req.pem, client-key.pem, client-cert.pem. Then I combined server-cert.pem and client-cert.pem into ca.pem (I read in a post to do so). I created an SSL user in MySQL:

        GRANT ALL ON *.* TO ssluser@hostname IDENTIFIED BY 'pwd' REQUIRE SSL;

    Next I added the following to my.cnf:

        [mysqld]
        ssl-ca = /root/abc/ssl_certs/ca.pem
        ssl-cert = /root/abc/ssl_certs/server-cert.pem
        ssl-key = /root/abc/ssl_certs/server-key.pem

    After restarting the server, I connected to mysql, but SSL was still not in use:

        mysql -u ssluser -p
        SSL: Not in use

    Even the have_ssl parameter was still showing DISABLED:

        mysql> show variables like '%ssl%';
        +---------------+-------------------------------------+
        | Variable_name | Value                               |
        +---------------+-------------------------------------+
        | have_openssl  | DISABLED                            |
        | have_ssl      | DISABLED                            |
        | ssl_ca        | /root/abc/ssl_certs/ca.pem          |
        | ssl_capath    |                                     |
        | ssl_cert      | /root/abc/ssl_certs/server-cert.pem |
        | ssl_cipher    |                                     |
        | ssl_key       | /root/abc/ssl_certs/server-key.pem  |
        +---------------+-------------------------------------+

    Have I missed any step, or what's wrong? Answers with the missed steps in detail will be highly appreciated.

    Read the article

  • Ubuntu, user can't write to a directory and I don't see why not.

    - by Peter
    I've got a directory, /var/www/someProject/backup/mysql, and I want the user mysql to be able to write to it. Each time I try to write to it as the mysql user, I get a "can't read/write" error. Yet the directory is 777, as you can see here:

        drwxrwxrwx 2 aUser users 4096 2010-03-17 17:14 mysql

    I also tried to chown the directory to mysql:mysql, just like the home dir of the mysql user, but no luck; that changed nothing. What am I doing wrong here? Or is the mysql user limited to its home dir in some other way in Ubuntu? This problem has been bugging me for days now, so any help is greatly appreciated.

    Read the article

  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it really only occupies a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity for on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications. This logging needs to stop to preserve my backup strategy. I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing:

        # Prints the last login info upon succesful login
        # (Replaces the `LASTLOG_ENAB' option from login.defs)
        #session    optional    pam_lastlog.so

    Any ideas?

    EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID at the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system:

        ~$ ll /var/log/lastlog
        -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
        ~$ du -h /var/log/lastlog
        20K     /var/log/lastlog

    More info: sparse files.

    Read the article

  • connection to apache server switches sockets connection

    - by Newben
    I just posted a question, but I am posting another one because the problem is not the one I had in mind when asking it. I am running a Rails app on OS X; when I run rails s, everything works fine. If I shut down the Apache server (MAMP) and run rails s again, I get the message Can't connect to local MySQL server through socket '/Applications/MAMP/tmp/mysql/mysql.sock', which is certainly normal. For info, my MAMP server is running, and the connection must pass through /Applications/MAMP/Library/bin/mysql, so I aliased it by setting this in my bash profile:

        alias mysql="/Applications/MAMP/Library/bin/mysql"

    Now, when I launch a rails generate command, I get this message:

        /$root/vendor/bundle/ruby/2.0.0/gems/mysql2-0.3.11/lib/mysql2/client.rb:44:in `connect': Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (Mysql2::Error)

    So how can that be?

    Read the article

  • PHP Runs Very Slow on IIS7. Need Help optimizing our config

    - by Kendor
    I am running a PHP-based web app on our Windows 2008 cloud-based server. The app runs fine outside of our environment (e.g., on a different IIS server), but is VERY slow in our environment. Based on googling, this is a relatively common situation. I installed PHP and MySQL via the IIS web deployment method. Here's our setup:

        Windows 2008 Server Enterprise SP2 (32-bit)
        Microsoft-IIS/7.0
        MySQL client version: mysqlnd 5.0.8-dev - 20102224 $Revision: 321634 $
        PHP extension: mysqli
        Update for IIS 7.0 FastCGI
        Windows Cache Extension 1.1 for PHP 5.3

    I had read elsewhere that IPv6 might be an issue, so I turned it off on the network adapter. The app is using localhost as its connection. Be easy on me, as I'm a bit green about some of these components... Also, rewriting or modifying the PHP app is NOT an option. I'm reasonably SURE that our config is the issue.

    Read the article

  • How can I grant privileges on a DB to a user? [ERROR 1044 (42000): Access denied for user ''@'localhost']

    - by Ahn
    I have created a user in MySQL 5.1 and granted it ALL privileges; details given below:

        mysql> show GRANTS FOR test;
        +-------------------------------------------------------------+
        | Grants for test@%                                           |
        +-------------------------------------------------------------+
        | GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION |
        | GRANT ALL PRIVILEGES ON `tt`.* TO 'test'@'%'                |
        +-------------------------------------------------------------+
        2 rows in set (0.00 sec)

    But show databases is not showing all the databases on the server; it only shows the ones below. How can I give the user 'test' privileges on the other databases' tables as well?

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+

    The error I got when I tried to use the mysql DB as user test:

        mysql> use mysql;
        ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'mysql'
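
    A hedged reading of that error: it names ''@'localhost', which means the connection matched MySQL's anonymous account rather than 'test'@'%' (a more specific host wins when accounts are matched). One way around it is to drop the anonymous account and create a localhost-specific user - a sketch, with a placeholder password:

        DROP USER ''@'localhost';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'localhost' IDENTIFIED BY 'pwd';
        FLUSH PRIVILEGES;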

    Read the article

  • Trouble Starting MySQL Community Server on Windows 7

    - by CodeAngel
    I have installed NetBeans 7 on my Windows 7 machine. In addition, MySQL Community Server 5.6.12 is installed with the MSI installer on the same PC, and the MySQL server is integrated with the NetBeans IDE. However, it is not possible to start or stop the MySQL server from the command prompt or the NetBeans IDE; I am only able to start or stop the server from the Windows 7 services tool. Also, it is difficult to run SQL queries from the NetBeans IDE even though it shows there is a connection to the MySQL server. I have added the my.ini file to the installed directory of the MySQL server, that is, C:\Program Files\MySQL\MySQL Server 5.6. Below is the my.ini file:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        port = 3306
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Any suggestion is welcomed.

    Read the article

  • Index page in Magento is way too slow, what can I do?

    - by Jonathan
    Weirdly, the index page of my Magento commerce site is very slow, while navigating products, brands, searches, etc. is very fast. But every time you click the banner to go home or enter the website, it takes ages to load. I wonder what I can do about this? I don't know where to start, since I am new to Magento. I thought I could go and read the code, but that would take ages too, since Magento is very big. Maybe I can analyze it somehow? Thanks, Jonathan
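
    One place to start, offered as a suggestion rather than a Magento-specific fix: turn on MySQL's slow query log and load the home page a few times, which will show whether the time is going into slow queries at all. The log path is an assumption, and these variables are dynamic in MySQL 5.1 and later:

        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
        SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second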

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging-related topics, so why break the habit, I ask? Anyway, here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless - you know why they are there and are happy to ignore them - but at the same time you either can't or don't want to risk changing the deployed code to "fix" it and remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively let a message through or suppress it. This is the technique outlined below.

    First Create Your Filter

    You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface which basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record, of type java.util.logging.LogRecord, which provides you with access to everything you need to decide whether to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:

        import java.util.logging.Filter;
        import java.util.logging.Level;
        import java.util.logging.LogRecord;

        public class MyLoggingFilter implements Filter {
            public boolean isLoggable(LogRecord record) {
                // Suppress "DEBUG..." messages unless we are logging at FINEST
                if (!record.getLevel().equals(Level.FINEST)
                        && record.getMessage().startsWith("DEBUG")) {
                    return false;
                }
                return true;
            }
        }

    Deploying

    This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS, of course.

    Using

    The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case:

        <logger name="some.vendor.adf.ClassICantChange"
                filter="oracle.demo.MyLoggingFilter"/>

    You can also apply the filter using WLST if you want a more script-y solution.

    Read the article

  • Optimizing Mysql to avoid redundancy but still have fast access to calculable data

    - by diglettpotato
    An example for the sake of the question: I have a database which contains users, questions, and answers. Each user has a score which can be calculated from the data in the questions and answers tables. Therefore, if I had a score field in the users table, it would be redundant. However, if I don't use a score field, then calculating the score on every request would significantly slow down the website. My current solution is to keep a score field and have a cron job running every few hours which recalculates everybody's score and updates the field. Is there a better way to handle this?
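
    One alternative to the periodic cron, sketched with assumed table and column names: keep the denormalized score field but maintain it incrementally with triggers, so it is never hours stale and reads stay cheap.

        -- Bump the cached score whenever an answer is added or removed.
        CREATE TRIGGER answers_ai AFTER INSERT ON answers
        FOR EACH ROW
          UPDATE users SET score = score + NEW.points WHERE users.id = NEW.user_id;

        CREATE TRIGGER answers_ad AFTER DELETE ON answers
        FOR EACH ROW
          UPDATE users SET score = score - OLD.points WHERE users.id = OLD.user_id;

    The cron can stay around as an occasional consistency check that recomputes scores from scratch and repairs any drift.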

    Read the article

  • Alternative to 'where col in (list)' for MySQL

    - by user210481
    Hi, I have the following table T:

        id | col
        ---+----
        1  | a
        2  | b
        3  | a
        4  | c

    I want to do a select that returns id, col for the rows where grouping by col gives COUNT(col) > 1. One way of doing it is:

        SELECT id, col FROM T
        WHERE col IN (SELECT col FROM T GROUP BY col HAVING COUNT(col) > 1);

    The inner select (on the right) returns 'a', and the main one (on the left) returns 1,a and 3,a. The problem is that the WHERE ... IN statement seems to be extremely slow. In my real case, the result from the inner select has many cols, around 70,000, and it's taking hours. Right now it's much faster to run the inner select and the main select separately, getting all the ids and upcs, and do the intersection locally. MySQL should be able to handle this kind of query efficiently. Can I substitute the WHERE ... IN for a join or something faster? Thanks
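
    A hedged rewrite as a join: older MySQL versions tend to re-execute an IN (subquery) once per outer row, while a derived table is materialized a single time and then joined.

        SELECT t.id, t.col
        FROM T AS t
        JOIN (SELECT col
              FROM T
              GROUP BY col
              HAVING COUNT(col) > 1) AS dup ON dup.col = t.col;

    An index on T(col) should help both the grouping and the join.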

    Read the article

  • group_concat on an empty join in MySQL

    - by Yossarian
    Hello, I've got the following problem. I have two tables (simplified):

        +--------+      +-----------+
        | User   |      | Role      |
        +--------+      +-----------+
        | ID<PK> |      | ID <PK>   |
        +--------+      | Name      |
                        +-----------+

    and an M:N relationship between them:

        +-----------+
        | User_Role |
        +-----------+
        | User<FK>  |
        | Role<FK>  |
        +-----------+

    I need to create a view which selects each User and, in one column, all of his Roles (this is done by group_concat). I've tried the following:

        SELECT u.*, group_concat(r.Name SEPARATOR ',') AS Roles
        FROM User u
        LEFT JOIN User_Role ur ON ur.User = u.ID
        LEFT JOIN Role r ON ur.Role = r.ID
        GROUP BY u.ID;

    However, this works only for users with some defined roles; users without a role aren't returned. How can I modify the statement to return the User with an empty string in the Roles column when the User doesn't have any Role? Explanation: I'm passing the SQL data directly to a grid, which then formats itself, and it is easier for me to create a slow and complicated view than to format it in my code. I'm using MySQL.
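
    A hedged suggestion: with both joins as LEFT JOINs, a role-less user should come back with a NULL aggregate, so wrapping GROUP_CONCAT in COALESCE turns that into the empty string the grid expects.

        SELECT u.*, COALESCE(GROUP_CONCAT(r.Name SEPARATOR ','), '') AS Roles
        FROM User u
        LEFT JOIN User_Role ur ON ur.User = u.ID
        LEFT JOIN Role r ON ur.Role = r.ID
        GROUP BY u.ID;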

    Read the article

  • Replacing keywords in text with php & mysql

    - by intacto
    Hello, I have a news site containing an archive with more than 1 million news items. I created a word-definitions database with about 3,000 entries, consisting of word-definition pairs. What I want to do is add a definition next to every occurrence of these words in the news items. I can't make a static change, as I may add a new keyword every day, so it has to be real-time or cached. The question is: a str_replace or a preg_replace would be very slow for searching 3,000 keywords in a text and replacing them. Are there any fast alternatives?

    Read the article

  • Storing dates in a train schedule in MySQL

    - by App_beginner
    Hi, I have created a train schedule database in MySQL. There are several thousand routes for each day, but with a few exceptions most of the routes are the same for every working day and differ only on weekends. At the moment I basically update my SQL tables at midnight each day to get the departures for the next 24 hours. This is, however, very inconvenient, so I need a way to store dates in my tables so that I don't have to do this every day. I tried to create a separate table which stored dates for each route number (route numbers are reset each day), but this made my query so slow that it was impossible to use. Does this mean I would have to store my departure and arrival times as datetimes? In that case the main table containing routes would have several million entries. Or is there another way? My route table looks like this:

        StnCode (referenced in a separate Station table)
        DepTime
        ArrTime
        Routenumber
        LegNumber
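
    A hedged sketch of a calendar pattern (similar in spirit to the GTFS feed format), with all names assumed: store each route once, attach a day-of-week mask plus an exceptions table, and resolve "what runs today" with a query instead of rewriting tables at midnight.

        CREATE TABLE route_calendar (
          routenumber INT NOT NULL,
          mon TINYINT(1) NOT NULL, tue TINYINT(1) NOT NULL,
          wed TINYINT(1) NOT NULL, thu TINYINT(1) NOT NULL,
          fri TINYINT(1) NOT NULL, sat TINYINT(1) NOT NULL,
          sun TINYINT(1) NOT NULL,
          valid_from DATE NOT NULL,
          valid_to   DATE NOT NULL,
          PRIMARY KEY (routenumber, valid_from)
        );

        -- One row per date where a route deviates from its weekly pattern.
        CREATE TABLE route_exception (
          routenumber INT  NOT NULL,
          exc_date    DATE NOT NULL,
          runs        TINYINT(1) NOT NULL,  -- 1 = extra run, 0 = cancelled
          PRIMARY KEY (routenumber, exc_date)
        );

        -- Routes running on a given date: weekly pattern, overridden by exceptions.
        -- WEEKDAY() returns 0 for Monday ... 6 for Sunday.
        SELECT c.routenumber
        FROM route_calendar c
        LEFT JOIN route_exception e
          ON e.routenumber = c.routenumber AND e.exc_date = '2010-05-17'
        WHERE '2010-05-17' BETWEEN c.valid_from AND c.valid_to
          AND COALESCE(e.runs,
                CASE WEEKDAY('2010-05-17')
                  WHEN 0 THEN c.mon WHEN 1 THEN c.tue WHEN 2 THEN c.wed
                  WHEN 3 THEN c.thu WHEN 4 THEN c.fri WHEN 5 THEN c.sat
                  ELSE c.sun
                END) = 1;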

    Read the article
