Search Results

Search found 48190 results on 1928 pages for 'mysql slow query log'.

  • Why is MySQL with InnoDB doing a table scan when a key exists, choosing to examine 70 times more rows?

    - by andysk
    I'm troubleshooting a query performance problem. Here's the expected query plan from EXPLAIN:

        mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected by this query. However, if I query for just one more minute of data, I get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes the scan will be better, but it means examining over 70x more rows, so I have a hard time believing the table scan is faster. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help; I'm more than happy to provide more info and answer questions.
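    One thing worth trying here: USE INDEX is only a hint the optimizer may ignore, while FORCE INDEX tells it a table scan is prohibitively expensive. A minimal sketch for comparing the two plans, assuming a reachable server and the table/column names from the question (connection details are placeholders, and draining ~5M rows is heavy, so run it somewhere safe):

        import time
        import mysql.connector  # any DB-API driver works the same way

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()

        queries = {
            "optimizer's choice": "SELECT * FROM table1 WHERE tdcol BETWEEN %s AND %s",
            "forced index": "SELECT * FROM table1 FORCE INDEX (tdcol) "
                            "WHERE tdcol BETWEEN %s AND %s",
        }
        bounds = ("2010-04-13 00:00", "2010-04-14 03:17")

        for label, sql in queries.items():
            start = time.time()
            cur.execute(sql, bounds)
            cur.fetchall()  # drain the result set so timing covers the full read
            print("%s: %.2fs" % (label, time.time() - start))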

  • How do I insert data from a Python dictionary to MySQL?

    - by NJTechie
    I manipulated some data from MySQL and the resulting dictionary "data" (print data) displays something like this:

        {'1': ['1', 'K', 'abc', 'xyz', None, None, datetime.date(2009, 6, 18)],
         '2': ['2', 'K', 'efg', 'xyz', None, None, None, None],
         '3': ['3', 'K', 'ijk', 'xyz', None, None, None, datetime.datetime(2010, 2, 5, 16, 31, 2)]}

    How do I create a table and insert these values into a MySQL table? In other words, how do I dump them to MySQL or CSV? I'm not sure how to deal with the datetime.date and None values. Any help is appreciated.
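    For what it's worth, most Python MySQL drivers handle both concerns automatically: None is sent as NULL, and date/datetime objects are serialized for you. A rough sketch, assuming the dictionary above is bound to data and using hypothetical column names:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS mytable (
                           id INT, code CHAR(1), name VARCHAR(50), label VARCHAR(50),
                           extra1 VARCHAR(50), extra2 VARCHAR(50), created DATETIME)""")

        # The sample rows differ in length, so keep the first six fields plus the
        # trailing date slot. The driver converts None to NULL and handles dates.
        rows = [tuple(row[:6]) + (row[-1],) for row in data.values()]
        cur.executemany("INSERT INTO mytable VALUES (%s, %s, %s, %s, %s, %s, %s)",
                        rows)
        conn.commit()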

  • When writing to the Windows Event Log, is it possible to create a custom link text for URLs?

    - by thomasnguyencom
    I'm just using the vanilla EventLog.WriteEntry method:

        EventLog.WriteEntry(EVENT_SOURCE, message, EventLogEntryType.Error, id);

    Here's how the message shows up in the Event Log with raw URLs in parentheses; the links work just fine, but it's ugly:

        Example 1: Please contact us via email (mailto:[email protected]) or online (http://example.com).

    Here's how the message shows up with HTML markup; the Event Log doesn't render it at all:

        Example 2: Please contact us via <a href="mailto:[email protected]">email</a> or <a href="http://example.com">online</a>.

    This is how I would like the message to show up, with "email" and "online" as the link texts:

        Example 3: Please contact us via email or online.

    I've tried the <a href>...</a> HTML tags with no success.

  • LINQ Query using UDF that receives parameters from the query

    - by Ben Fidge
    I need help using a UDF in a LINQ query which calculates a user's distance from a fixed point.

        int pointX = 567, pointY = 534; // random points on a square grid

        var q = from n in _context.Users
                join m in _context.GetUserDistance(n.posY, n.posY, pointX, pointY, n.UserId)
                    on n.UserId equals m.UserId
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };

    GetUserDistance is just a UDF that returns a single row in a table-valued function with that user's distance from the point designated by the pointX and pointY variables, and the designer generates the following for it:

        [global::System.Data.Linq.Mapping.FunctionAttribute(Name="dbo.GetUserDistance", IsComposable=true)]
        public IQueryable<GetUserDistanceResult> GetUserDistance(
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X1", DbType="Int")] System.Nullable<int> x1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X2", DbType="Int")] System.Nullable<int> x2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y1", DbType="Int")] System.Nullable<int> y1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y2", DbType="Int")] System.Nullable<int> y2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="UserId", DbType="Int")] System.Nullable<int> userId)
        {
            return this.CreateMethodCallQuery<GetUserDistanceResult>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), x1, x2, y1, y2, userId);
        }

    When I try to compile I get: The name 'n' does not exist in the current context

  • mysql - how do I load data from a different configuration...

    - by Ronedog
    I'm not sure if this will make sense, but I'll give it a shot. My hard drive went down and I had to reinstall the OS along with all my webserver configuration, etc. I kept a backup of the MySQL database, but it doesn't contain all the tables; I added a couple of tables after my last backup. I still have access to the old hard drive and the directory where the MySQL data files are stored, but I don't know how to retrieve that data into my new MySQL database. Is it even possible to take the raw data files from MySQL and load them into a different instance? I'd even be happy if there were some way for phpMyAdmin to show the data files, so I could dump them out to a backup text file and reload them into my new configuration. Any help will be appreciated. Thanks.
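    If the missing tables use the MyISAM engine, each table lives in three self-contained files (.frm, .MYD, .MYI) that can simply be copied from the old data directory into the matching schema directory of the new server while mysqld is stopped; InnoDB tables cannot be moved this way because they also depend on the shared ibdata/ib_logfile files. A minimal sketch, with the paths and table name as placeholders (remember to chown the files to the mysql user and run CHECK TABLE afterwards):

        import glob
        import shutil

        OLD = "/mnt/olddrive/var/lib/mysql/mydb"  # data dir on the failed drive
        NEW = "/var/lib/mysql/mydb"               # same schema dir on the new install

        # Copy the three files backing one MyISAM table (stop mysqld first).
        for path in glob.glob(OLD + "/mytable.*"):  # .frm, .MYD, .MYI
            shutil.copy2(path, NEW)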

  • How can I have MySQL write outfiles as a different user?

    - by David Locke
    I'm working with a MySQL query that writes into an outfile. I run this query once every day or two, so I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of making that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible? Edit: I am not redirecting output to a file; I am using the INTO OUTFILE clause of a SELECT query to write to a file. If it helps:

        $ mysql --version
        mysql  Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2
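    Since mysqld itself creates INTO OUTFILE files, they are always owned by the server's user. A common workaround is to drop INTO OUTFILE and have the client fetch the rows and write the file, so it is owned by whoever runs the script. A sketch, with connection details, query, and path as placeholders:

        import csv
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        cur.execute("SELECT * FROM mytable")  # the same SELECT, minus INTO OUTFILE

        # This process creates the file, so it is owned by you, not by mysql.
        with open("/tmp/report.csv", "w") as f:
            csv.writer(f).writerows(cur.fetchall())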

  • In MySQL how can I tell what character set a particular table is using?

    - by muudscope
    I have a large MySQL table that I think might be using the wrong character set. If so, I'll need to change it using:

        ALTER TABLE mytable CONVERT TO CHARACTER SET utf8

    But since this is a very large table, I'd rather not run this command unless I have to. So my question is: how can I ask MySQL what the character set is on a particular table? I can run status in the mysql client to see the database's character set, but that doesn't necessarily mean all the tables have the same character set, right?
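    Right, tables can differ from the database default. SHOW TABLE STATUS reveals each table's collation, and the character set can be joined in from information_schema. A sketch, assuming database mydb and table mytable:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret",
                                       database="information_schema")
        cur = conn.cursor()
        cur.execute("""
            SELECT T.table_name, CCSA.character_set_name, T.table_collation
            FROM TABLES AS T
            JOIN COLLATION_CHARACTER_SET_APPLICABILITY AS CCSA
                 ON T.table_collation = CCSA.collation_name
            WHERE T.table_schema = %s AND T.table_name = %s
        """, ("mydb", "mytable"))
        print(cur.fetchone())  # e.g. ('mytable', 'latin1', 'latin1_swedish_ci')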

  • I'm installing MySQL with Homebrew; how do I set a value for 'log_output'?

    - by imjared
    I've been following the directions here: brew install mysql on Mac OS, and have now run into the same issue on both my home and work machines (both MacBook Pros running OS X 10.7.5 and zsh):

        121031 12:18:03 [ERROR] /usr/local/opt/mysql/bin/mysqld: Error while setting value '/Users/me/.mysql/logs/' to 'log_output'

    The full output of the error can be found in this gist: https://gist.github.com/3988275 I'm really at a loss as to what I'm doing incorrectly or what I need to change. It should be noted that there are no logs for me to check in /usr/local/var/mysql. Any help is greatly appreciated.
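    The error itself points at the likely cause: log_output only accepts the values FILE, TABLE, or NONE, while a path like the one in the message belongs in general_log_file or slow_query_log_file instead. A minimal my.cnf sketch (the paths are just examples):

        [mysqld]
        log_output          = FILE
        general_log         = 1
        general_log_file    = /Users/me/.mysql/logs/general.log
        slow_query_log      = 1
        slow_query_log_file = /Users/me/.mysql/logs/slow.log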

  • Is copying /var/lib/mysql a good alternative to mysqldump?

    - by kemp
    Since I'm making a full backup of my entire Debian system, I was wondering whether keeping a copy of the /var/lib/mysql directory is a viable alternative to dumping tables with mysqldump. Is all the information needed contained in that directory? Can single tables be imported into another MySQL instance? Can there be problems restoring those files on a (probably slightly) different MySQL server version?
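    One caveat worth adding: a raw copy of /var/lib/mysql is only consistent if the server is stopped or writes are quiesced while you copy. For MyISAM, holding a read lock during the copy is generally enough; for InnoDB, pages may still sit in the buffer pool, so a truly safe raw copy means shutting mysqld down (and copying ibdata/ib_logfile along with the table files). A sketch of the lock-and-copy variant, with paths as placeholders:

        import shutil
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret")
        cur = conn.cursor()

        cur.execute("FLUSH TABLES WITH READ LOCK")  # quiesce writes to data files
        try:
            shutil.copytree("/var/lib/mysql", "/backup/mysql-files")
        finally:
            cur.execute("UNLOCK TABLES")  # lock is held while this session lives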

  • Which connection string for MySql ODBC connector 5.2.6?

    - by stighy
    It seems I can't get a connection to MySQL working using ODBC connector 5.2.6. In a 64-bit environment, in a VBA Excel application, I use this string, but it does not work:

        "Driver={MySQL ODBC 5.2 Driver}; Server=myserver;Database=mydb;User=readonly;Password=mypass;Option=3"

    I have also used Driver={MySQL ODBC 5.2w Driver} and Driver={MySQL ODBC 5.2a Driver}, but the error is: ODBC driver unknown. Can someone help me? PS: it works with a DSN set up, but I would like to use a connection string so I don't have to go to each user's computer and set up a DSN. Thanks
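    The driver name in the string must match the name registered in the ODBC administrator exactly, and if memory serves, later 5.2.x releases renamed the drivers from 5.2a/5.2w to "ANSI" and "Unicode". Two candidate strings worth trying (verify the exact names in odbcad32.exe, and note that 64-bit Office needs the 64-bit connector installed):

        Driver={MySQL ODBC 5.2 ANSI Driver};Server=myserver;Database=mydb;User=readonly;Password=mypass;Option=3
        Driver={MySQL ODBC 5.2 Unicode Driver};Server=myserver;Database=mydb;User=readonly;Password=mypass;Option=3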

  • How to insert into two different tables in one statement with Java and MySQL?

    - by MalcomTucker
    I am using Java, Spring (NamedParameterJdbcTemplate) and MySQL. My statement looks like this:

        INSERT INTO Table1 (Name) VALUES (?);
        INSERT INTO Table2 (Path, Table1Id) VALUES (?, LAST_INSERT_ID())

    But it is throwing the following error:

        PreparedStatementCallback; bad SQL grammar [INSERT INTO Table1 (Name) VALUES (?); INSERT INTO Table2 (Path, Table1Id) VALUES (?, LAST_INSERT_ID())]

    The nested exception is:

        com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO Table2 (Path, Table1Id' at line 1

    The syntax works fine in MySQL but something is up when combining via the Spring template. Thanks!
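    Connector/J rejects multiple statements per call by default (it needs allowMultiQueries=true in the JDBC URL), so the usual fix is to run the two INSERTs separately inside one transaction, feeding the first statement's generated key into the second; with Spring that means two updates plus a GeneratedKeyHolder. A sketch of the same pattern in Python terms, assuming the table layout above:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        try:
            cur.execute("INSERT INTO Table1 (Name) VALUES (%s)", ("some name",))
            table1_id = cur.lastrowid  # key generated by the first INSERT
            cur.execute("INSERT INTO Table2 (Path, Table1Id) VALUES (%s, %s)",
                        ("/some/path", table1_id))
            conn.commit()  # both rows, or neither
        except Exception:
            conn.rollback()
            raise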

  • How can I check if a binary string is UTF-8 in mysql?

    - by Piotr Czapla
    I've found a Perl regexp that can check if a string is UTF-8 (the regexp is from the W3C site):

        $field =~
          m/\A(
             [\x09\x0A\x0D\x20-\x7E]            # ASCII
           | [\xC2-\xDF][\x80-\xBF]             # non-overlong 2-byte
           |  \xE0[\xA0-\xBF][\x80-\xBF]        # excluding overlongs
           | [\xE1-\xEC\xEE\xEF][\x80-\xBF]{2}  # straight 3-byte
           |  \xED[\x80-\x9F][\x80-\xBF]        # excluding surrogates
           |  \xF0[\x90-\xBF][\x80-\xBF]{2}     # planes 1-3
           | [\xF1-\xF3][\x80-\xBF]{3}          # planes 4-15
           |  \xF4[\x80-\x8F][\x80-\xBF]{2}     # plane 16
          )*\z/x;

    But I'm not sure how to port it to MySQL, as it seems that MySQL doesn't support hex representation of characters (see this question). Any thoughts on how to port the regexp to MySQL? Or maybe you know some other way to check whether a string is valid UTF-8? UPDATE: I need this check working inside MySQL, because I need to run it on the server to correct broken tables. I can't pass the data through a script, as the database is around 1TB.
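    One server-side heuristic that avoids the regexp entirely, offered as a sketch rather than a proven port: round-trip the bytes through the utf8 character set and flag rows whose bytes change. If a value is valid UTF-8, reinterpreting it as utf8 and back to binary is a no-op; invalid sequences tend to be truncated or substituted during the conversion (with warnings), so changed rows are suspect. Table and column names are placeholders, and this should be tested on a copy first:

        SELECT id
        FROM tbl
        WHERE CONVERT(CONVERT(CAST(col AS BINARY) USING utf8) USING binary)
              <> CAST(col AS BINARY);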

  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming from my WordPress site. They're making everything slow and I think they're eating up CPU. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to run at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, not from anything specific to my site like the theme. I have a vague feeling that the database is not always using the indexes, or is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer; it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1 AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

    Query time: 34.2829 (ms)

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
              SELECT tr.object_id
              FROM wp_term_relationships AS tr
              INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
              WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                       rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                     427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                      1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id  1     Using where

    Query time: 70.3900 (ms)
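    If these aren't already being captured systematically, the slow query log is the right tool for confirming how often they fire and how long they really take under load. A minimal my.cnf sketch for MySQL 5.1+ (on 5.0 the option is log-slow-queries and long_query_time is whole seconds; the path is an example):

        [mysqld]
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql/slow.log
        long_query_time     = 1            # seconds; fractions allowed from 5.1.21
        log_queries_not_using_indexes = 1  # also catches filesort-style offenders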

  • MySQL replication - Should I handle load balancing from my client code (PHP) ?

    - by pirostraktor
    In a MySQL master-slave replication environment, if I have 4 slave servers, how can I execute load-balanced SELECT queries? Should I write a PHP class to deal with the 4 slaves, or is it possible to direct queries to MySQL's own load balancer? Does MySQL have a load-balancing solution? Can I use some other tool to distribute my queries? What is the typical setup in situations like this? Thanks for all answers!
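    MySQL itself does not load-balance across slaves; the typical setups are either a proxy in front of the slaves (HAProxy, MySQL Proxy) or read/write splitting in application code. The simplest application-side pattern, sketched in Python terms (the same shape ports directly to a PHP class; hostnames are placeholders):

        import random
        import mysql.connector

        MASTER = {"host": "master.db", "user": "app",
                  "password": "secret", "database": "mydb"}
        SLAVES = [dict(MASTER, host=h) for h in
                  ("slave1.db", "slave2.db", "slave3.db", "slave4.db")]

        def connection(for_write=False):
            """Writes go to the master; reads go to a randomly chosen slave."""
            params = MASTER if for_write else random.choice(SLAVES)
            return mysql.connector.connect(**params)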

  • Need a better way to execute console commands from python and log the results

    - by Wim Coenen
    I have a Python script which needs to execute several command-line utilities. The stdout output is sometimes used for further processing. In all cases, I want to log the results and raise an exception if an error is detected. I use the following function to achieve this:

        def execute(cmd, logsink):
            logsink.log("executing: %s\n" % cmd)
            popen_obj = subprocess.Popen(
                cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            (stdout, stderr) = popen_obj.communicate()
            returncode = popen_obj.returncode
            if returncode != 0:
                logsink.log("  RETURN CODE: %s\n" % str(returncode))
            if len(stdout.strip()) > 0:
                logsink.log("  STDOUT:\n%s\n" % stdout)
            if len(stderr.strip()) > 0:
                logsink.log("  STDERR:\n%s\n" % stderr)
            if returncode != 0:
                raise Exception("execute failed with error output:\n%s" % stderr)
            return stdout

    "logsink" can be any Python object with a log method. I typically use this to forward the logging data to a specific file, or echo it to the console, or both, or something else... This works pretty well, except for three problems where I need more fine-grained control than the communicate() method provides:

    1. stdout and stderr output can be interleaved on the console, but the above function logs them separately, which can complicate interpretation of the log. How do I log stdout and stderr lines interleaved, in the same order as they were output?
    2. The above function only logs the command output once the command has completed. This complicates diagnosis when commands get stuck in an infinite loop or take a very long time for some other reason. How do I get the log in real time, while the command is still executing?
    3. If the logs are large, it can get hard to tell which command generated which output. Is there a way to prefix each line with something (e.g. the first word of the cmd string followed by ':')?
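    All three points can be addressed by merging stderr into stdout and reading the pipe line by line instead of calling communicate(). Merging loses the stdout/stderr distinction, and "interleaved exactly as output" is only as accurate as the child's own buffering allows, but this is the standard trade-off. A sketch under those assumptions:

        import subprocess

        def execute(cmd, logsink):
            """Run cmd, logging merged stdout/stderr line by line as it arrives."""
            prefix = cmd.split()[0]  # problem 3: tag each line with the command
            popen_obj = subprocess.Popen(
                cmd, shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT)  # problems 1+2: one merged, live stream
            lines = []
            for raw in iter(popen_obj.stdout.readline, b""):
                line = raw.decode(errors="replace")
                logsink.log("%s: %s" % (prefix, line))  # logged in real time
                lines.append(line)
            returncode = popen_obj.wait()
            if returncode != 0:
                raise Exception("%r failed with return code %s" % (cmd, returncode))
            return "".join(lines)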

  • Why is this MySQL FULLTEXT query returning 0 rows when matching rows are present?

    - by Don MacAskill
    I have a MySQL table with 200M rows which has a FULLTEXT index on two columns (Title, Body). When I do a simple FULLTEXT query in the default NATURAL LANGUAGE mode for some popular terms (they'd return 2M+ rows), I'm getting zero rows back:

        SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('fubar');

    But when I do a FULLTEXT query in BOOLEAN mode, I can see the rows in question do exist (I get 2M+ back, depending):

        SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('+fubar' IN BOOLEAN MODE);

    I have some queries which return ~500K rows that work fine in either mode, so if it's result-size related, it seems to crop up somewhere between 500K and a little north of 2M. I've tried playing with the various buffer size variables, to no avail. It's clearly not the 50% threshold, since we're not getting 100M rows back for any result. Any ideas?
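    Two low-effort things worth ruling out before digging deeper: the full-text tuning variables and a stale index. A diagnostic sketch (REPAIR TABLE applies to MyISAM, and rebuilding the index on 200M rows is not cheap, so schedule it accordingly):

        SHOW VARIABLES LIKE 'ft%';       -- ft_min_word_len, stopword file, etc.
        REPAIR TABLE itemsearch QUICK;   -- rebuilds the FULLTEXT index only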

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though) and roughly how many records are in a table, or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert and there are surely features of MySQL that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider:

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And could somefunction(...) then be written such that it modifies data?

  • MySQL: Search for a field, then replace another field in the same row.

    - by Francisco
    Sorry if the question is stupid, but I'm a newbie to MySQL and got stuck on this. Let's suppose I have the following table in MySQL:

        City    Country    Restaurants
        Rome    Italy      3032
        Paris   France     5220

    I want to search for the city "Paris" and update the field "Restaurants" (replace 5220 with 5300). What would be the right MySQL query? Thanks in advance!
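    What's described is a single UPDATE with a WHERE clause: the WHERE part does the searching and SET does the replacing. A sketch, assuming the table is named cities:

        UPDATE cities
        SET Restaurants = 5300
        WHERE City = 'Paris';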

  • How to obtain value of auto_increment field in Access linked to MySQL?

    - by Cruachan
    I'm trying to modify an existing Access application to use MySQL as the database via ODBC, with the minimal amount of recoding. The current code will often insert a new record using DAO, then obtain the ID by using LastModified. This doesn't work with MySQL. Instead I'm trying to use the approach suggested for Access in the MySQL documentation:

        SELECT * FROM tbl_name WHERE auto_col IS NULL

    However, if I set up a sample table consisting of just an id and a text data field and execute this:

        CurrentDb.Execute ("INSERT INTO tbl_scratch (DATA) VALUES ('X')")
        Set rst = CurrentDb.OpenRecordset("SELECT id FROM tbl_scratch WHERE id IS NULL")
        myid = rst!id

    the id is returned as null. However, if I execute

        INSERT INTO tbl_scratch (DATA) VALUES ('X');
        SELECT id FROM tbl_scratch WHERE id IS NULL;

    using a direct MySQL editor, then the id is returned correctly. So my database and approach are fine, but my implementation inside Access must be incorrect. Frustratingly, the MySQL documentation gives this SQL statement for retrieving the id as an example that works in Access (as it states LAST_INSERT_ID() doesn't), but gives no further details. How might I fix this?
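    One avenue worth testing, as a sketch only: the Jet database engine understands @@IDENTITY, and Access-with-MySQL writeups commonly retrieve the new key this way immediately after the insert, in the same code path (the table name matches the sample above):

        CurrentDb.Execute ("INSERT INTO tbl_scratch (DATA) VALUES ('X')")
        Set rst = CurrentDb.OpenRecordset("SELECT @@IDENTITY")
        myid = rst(0)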

  • Why are only 2 of my queries working? (PHP/MySQL)

    - by ggfan
    When I remove a posting, I want it to remove the posting itself, any comments related to it, any likes/dislikes of the comments, and any favorites that people added. So there are 4 queries that do this, but somehow only 2 of the 4 work. When I swap, say, query 2 for query 3, then query 2 works and query 3 doesn't... The table names are correct and the data arrives via $_GET. (Note: the original code mixed APIs by calling die(mysql_error()) after mysqli_query(); corrected below so the real error is reported.)

        // Grab the data from the GET
        $posting_id = $_GET['posting_id'];

        // Delete the posting from the database
        $query = "DELETE FROM posting WHERE posting_id=$posting_id LIMIT 1";
        mysqli_query($dbc, $query) or die(mysqli_error($dbc));

        // Delete the comments
        $query2 = "DELETE FROM comments WHERE posting_id=$posting_id";
        mysqli_query($dbc, $query2) or die(mysqli_error($dbc));

        // Delete the like/dislike
        $query3 = "DELETE FROM comment_likedislike WHERE posting_id=$posting_id";
        mysqli_query($dbc, $query3) or die(mysqli_error($dbc));

        // Delete the favorites
        $query4 = "DELETE FROM favorite WHERE posting_id=$posting_id";
        mysqli_query($dbc, $query4) or die(mysqli_error($dbc));

  • MySQL: How to consume/discard the result of a query?

    - by GetFree
    I have a stored procedure which executes an OPTIMIZE TABLE statement for every table in a DB. Those OPTIMIZE TABLE statements are prepared statements, of course (they have to be built at runtime), and I need to call that procedure from PHP using the ext/mysql API. Unfortunately, ext/mysql doesn't support doing such a thing, because OPTIMIZE TABLE returns a result set, and handling that requires the newer MySQL protocol, which is supported by the "new" ext/mysqli API. Well... there are several things I don't have control over, so it's not within my power to upgrade to ext/mysqli any time soon, nor can I implement the procedure as PHP code rather than SQL code. So I thought: is it possible somehow to consume/discard the result of OPTIMIZE TABLE inside the stored procedure, so that ext/mysql doesn't complain about it? One thing to consider is that since the OPTIMIZE TABLE statements are prepared statements, you can't use a cursor over them.

  • MySQL very slow compared to MS Access when inserting hundreds of thousands of rows! So is there a myth that a database server is more performant?

    - by asksuperuser
    I am currently adding hundreds of thousands of rows of data to a table, first in an MS Access table, then in a MySQL table. I first tried MS Access: it took less than 40 seconds. Then I tried exactly the same source with the same table structure in MySQL, and it took 6 minutes and 40 seconds, that is 1000% slower! So is it a myth that a database server is more performant?
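    The usual culprit for exactly this pattern is autocommit: with InnoDB, each single-row INSERT is its own transaction and forces a log flush to disk, so hundreds of thousands of inserts pay hundreds of thousands of flushes, while Access just appends to a local file. Batching the rows into one transaction (or using LOAD DATA INFILE) usually closes the gap. A sketch with stand-in data and a hypothetical table:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()

        rows = [("value-%d" % i,) for i in range(300000)]  # stand-in data

        # One transaction for the whole batch instead of one per row.
        cur.executemany("INSERT INTO mytable (col) VALUES (%s)", rows)
        conn.commit()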

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a Django website I host with lighttpd + fastcgi. It works great, but it seems the first request always takes up to 3 seconds; subsequent requests are much faster (<1s). I activated access logs in lighttpd to track the issue, but I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21):

        2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi
        2012-12-01 10:04:17: (response.c.470) -- before doc_root
        2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.473) Path     :
        2012-12-01 10:04:17: (response.c.521) -- after doc_root
        2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.524) Path     : /var/www/finderauto.fcgi
        2012-12-01 10:04:17: (response.c.541) -- logical -> physical
        2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.544) Path     : /var/www/finderauto.fcgi
        2012-12-01 10:04:21: (response.c.128) Response-Header:
        HTTP/1.1 200 OK
        Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT
        Expires: Sat, 01 Dec 2012 09:14:21 GMT
        Content-Type: text/html; charset=utf-8
        Cache-Control: max-age=600
        Transfer-Encoding: chunked
        Date: Sat, 01 Dec 2012 09:04:21 GMT
        Server: lighttpd/1.4.28

    I guess that if there is a problem, it's with my configuration. So here is the way I launch my Django app:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=3033

    And here is my lighttpd conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_accesslog",
        )
        server.document-root  = "/var/www"
        server.upload-dirs    = ( "/var/cache/lighttpd/uploads" )
        server.errorlog       = "/var/log/lighttpd/error.log"
        server.pid-file       = "/var/run/lighttpd.pid"
        server.username       = "www-data"
        server.groupname      = "www-data"
        accesslog.filename    = "/var/log/lighttpd/access.log"
        debug.log-request-header            = "enable"
        debug.log-response-header           = "enable"
        debug.log-file-not-found            = "enable"
        debug.log-request-handling          = "enable"
        debug.log-timeouts                  = "enable"
        debug.log-ssl-noise                 = "enable"
        debug.log-condition-cache-handling  = "enable"
        debug.log-condition-handling        = "enable"
        fastcgi.server = ( "/finderauto.fcgi" => (
            "main" => (
                # Use host / port instead of socket for TCP fastcgi
                "host" => "127.0.0.1",
                "port" => 3033,
                #"socket" => "/home/finderadmin/finderauto.sock",
                "check-local" => "disable",
                "fix-root-scriptname" => "enable",
            )
        ),
        )
        alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", )
        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/finderauto.fcgi$1",
        )
        index-file.names = ( "index.php", "index.html", "index.htm",
                             "default.htm", "index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing   = "enable"
        compress.cache-dir   = "/var/cache/lighttpd/compress/"
        compress.filetype    = ( "application/x-javascript", "text/css",
                                 "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!
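    A guess, offered only as a sketch: the 4s gap sits between lighttpd handing the request to FastCGI and the backend responding, which matches a freshly spawned worker paying Django's import cost on its first request. Keeping warm spare workers around and warming them after restarts is the usual mitigation. The options below are flup/runfcgi options, but the exact values are assumptions to tune, and the warm-up URL is hypothetical:

        python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 \
            minspare=2 maxspare=4 maxchildren=6 maxrequests=0

        # warm-up request after deploys/restarts, e.g. from cron:
        curl -s -o /dev/null http://127.0.0.1/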
