Search Results

Search found 14828 results on 594 pages for 'settings py'.


  • exe files and Python

    - by Sorush Rabiee
    I have some questions about Python:
    1. How do I build .exe files from .py files?
    2. How do I run a program with arguments and receive its result from Python code?
    3. How do I load a .NET library in Python code (or write Python in the VS.NET IDE [!?])?
    4. Is the built-in integer of Python 3.1 something like a string? It calculates 200! in less than one second, and computes 2^1 through 2^7036 (with a simple recursive algorithm, writing the results to a text file) on a 1.75 GHz CPU in 4 minutes. If it were a string, how could it be so fast? Is there a great difference between the memory representation and arithmetic of Python and C++?
    5. What is the best Python practice? How can I become an expert?
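
    For question 2, a minimal sketch using the standard subprocess module (the program name and arguments below are placeholders). As for question 4, Python integers are arbitrary-precision binary numbers, not strings, which is why 200! is fast:

        import subprocess

        # Run an external program with arguments and capture what it prints.
        proc = subprocess.Popen(['someprogram', '--arg1', 'value'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()   # blocks until the program exits
        print(out)
        print(proc.returncode)          # the program's exit status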

    Read the article

  • How to read formatted input in python?

    - by eSKay
    I want to read five numbers from stdin, entered as follows: 3, 4, 5, 1, 8, into separate variables a, b, c, d and e. How do I do this in Python? I tried this:

        import string
        a = input()
        b = a.split(', ')

    for two integers, but it does not work. I get:

        Traceback (most recent call last):
          File "C:\Users\Desktop\comb.py", line 3, in <module>
            b = a.split(', ')
        AttributeError: 'tuple' object has no attribute 'split'

    How do I do this? And suppose I have not a fixed but a variable number n of integers. What then?
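
    A minimal sketch of one common approach. (The AttributeError arises because Python 2's input() evaluates what is typed, so "3, 4, 5, 1, 8" arrives as a tuple already; raw_input(), or input() on Python 3, returns a string. int() tolerates surrounding whitespace, so splitting on ',' is enough.)

        line = raw_input()   # on Python 3, use: line = input()

        # Fixed number of values:
        a, b, c, d, e = [int(x) for x in line.split(',')]

        # Variable number n of values:
        numbers = [int(x) for x in line.split(',')]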

    Read the article

  • Having py2exe include my data files (like include_package_data)

    - by cool-RR
    I have a Python app which includes non-Python data files in some of its subpackages. I've been using the include_package_data option in my setup.py to include all these files automatically when making distributions. It works well. Now I'm starting to use py2exe. I expected it to see that I have include_package_data=True and to include all the files. But it doesn't. It puts only my Python files in the library.zip, so my app doesn't work. How do I make py2exe include my data files?
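
    A likely cause: include_package_data is a setuptools feature, and py2exe does not honor it. A common workaround is to list the files explicitly via distutils' data_files option, which py2exe does copy into the dist folder; a sketch with placeholder paths and entry point:

        from distutils.core import setup
        import py2exe  # registers the py2exe command

        setup(
            console=['myapp.py'],  # placeholder entry point
            data_files=[
                # (target directory inside dist/, [source files])
                ('templates', ['mypkg/templates/base.html']),
                ('images', ['mypkg/images/logo.png']),
            ],
        )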

    Read the article

  • Django-pyodbc mssql/freetds server connection problems on linux

    - by wizard
    Error:

        ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found,
        and no default driver specified (0) (SQLDriverConnectW)')

    I'm migrating from developing on a Windows machine to a Linux machine in production, and I'm having issues with the FreeTDS driver. As far as I can tell, that error message means it can't find the driver. I can connect via the CLI with sqsh and tsql. I've set up my settings.py like this:

        'bc2db': {
            'ENGINE': 'sql_server.pyodbc',
            'NAME': 'DataTEST',
            'USER': 'appuser',
            'PASSWORD': 'PASS',
            'HOST': 'bc2.domain.com',
            'options': {
                'driver': 'FreeTDS',
            }
        },

    Does anyone have any MSSQL experience with Django? Do I have to use a DSN? (How would I format that?) Thanks
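
    One thing worth checking: django-pyodbc of that era read an uppercase OPTIONS key, and unixODBC can only find a driver whose name is registered in /etc/odbcinst.ini (here, a [FreeTDS] section pointing at libtdsodbc.so). A sketch of the settings entry under those assumptions:

        DATABASES = {
            'bc2db': {
                'ENGINE': 'sql_server.pyodbc',
                'NAME': 'DataTEST',
                'USER': 'appuser',
                'PASSWORD': 'PASS',
                'HOST': 'bc2.domain.com',
                'PORT': '1433',                      # assumption: default MSSQL port
                'OPTIONS': {
                    'driver': 'FreeTDS',             # must match the [FreeTDS] name in odbcinst.ini
                    'host_is_server': True,          # connect by host instead of an ODBC DSN
                },
            },
        }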

    Read the article

  • How to get Locale from its String representation in Java?

    - by Joonas Pulakka
    Is there a neat way of getting a Locale instance from its "programmatic name" as returned by Locale's toString() method? An obvious and ugly solution would be parsing the String and then constructing a new Locale instance according to that, but maybe there's a better way / ready solution for that? The need is that I want to store some locale specific settings in a SQL database, including Locales themselves, but it would be ugly to use serialized Locale objects there. I would rather store their String representations, which seem to be quite adequate in detail.

    Read the article

  • Kill explorer.exe by window title

    - by matthias
    Hello, I'm new to programming, and my question is how I can close some specific explorer.exe windows. My problem is that I have a program that opens some windows:

        Option Explicit
        Dim shell, expl1, expl2, expl3, Terminate
        Dim uprgExplorer
        set shell = WScript.CreateObject("WScript.Shell")
        set expl1 = shell.exec("C:\WINDOWS\explorer.exe c:\Documents and Settings")
        set expl2 = shell.exec("C:\WINDOWS\explorer.exe C:\WINDOWS\system32\CCM\Cache")
        set expl3 = shell.exec("C:\WINDOWS\explorer.exe c:\SCRIPTS\LOG")

    Now I want to kill only these three windows, NOT explorer.exe itself. Can someone help me? Greetings, matthias

    Read the article

  • need help figuring out dynamic menu generation in django

    - by photographer
    I need to dynamically generate code like this in the resulting HTML:

        <p>>> gallery one</p>
        <p><a href="../gallery2">gallery two</a></p>
        <p><a href="../about">about the author</a></p>
        <p><a href="../news">our news</a></p>

    I have a menu_code string variable created in views.py (it is generated from the item number of the current page that is passed in, 1 in the case above), which contains that long string with the code shown above. It is passed by locals() into the HTML template (all other variables are passed that way successfully):

        return render_to_response('gallery_page.html', locals())

    I have this inside the template HTML:

        {% include menu_code %}

    But instead of being interpreted as code it is just shown as text in the browser. What am I doing wrong? How do I make it work as a dynamically generated menu?
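
    The likely catch is that {% include %} takes the name of a template to load, not markup to inject. A minimal sketch of two alternatives (names are illustrative): render the prebuilt string with {{ menu_code|safe }} so autoescaping leaves it alone, or pass structured data and build the markup in the template:

        # views.py -- pass data rather than prebuilt HTML
        def gallery_page(request):
            current = 1  # item number of the current page
            menu_items = [
                {'url': '../gallery2', 'title': 'gallery two'},
                {'url': '../about', 'title': 'about the author'},
                {'url': '../news', 'title': 'our news'},
            ]
            return render_to_response('gallery_page.html', locals())

        {# gallery_page.html -- or simply: {{ menu_code|safe }} #}
        {% for item in menu_items %}
        <p><a href="{{ item.url }}">{{ item.title }}</a></p>
        {% endfor %}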

    Read the article

  • jquery-Ajax call on tornado handlers waits for previous ajax call to return

    - by harshh
    Hey all. I recently started testing TornadoWeb for a home project, which uses jQuery's getJSON function to call my Tornado handlers, and I found something strange that I'd like an explanation for. I fire an Ajax request to Handler1 on Tornado, and in some cases a request to Handler2 is initiated before Handler1 returns. It appears from the development-server logs and Firebug console debugging that the Handler2 request waits for the Handler1 request to finish, and only then returns. So basically, an XHR call is waiting for earlier XHRs. They are supposed to be asynchronous/non-blocking, right? Or am I missing something? You can check the test-case environment, called testtornado, at http://github.com/harshh/Harsh-Projects/ with main.py as the server-triggering file. I would appreciate help from anyone who can throw some light on this.
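
    Tornado runs all handlers on a single IOLoop, so a handler that does blocking work (a slow computation, a synchronous DB or HTTP call) holds up every other request until it returns. A minimal sketch of the non-blocking pattern Tornado of that era expected (the backend URL is a placeholder):

        import tornado.httpclient
        import tornado.web

        class Handler1(tornado.web.RequestHandler):
            @tornado.web.asynchronous
            def get(self):
                client = tornado.httpclient.AsyncHTTPClient()
                # Non-blocking fetch: the IOLoop stays free to run Handler2 meanwhile.
                client.fetch("http://example.com/slow-backend",
                             callback=self.on_response)

            def on_response(self, response):
                self.write(response.body)
                self.finish()  # required when using @asynchronous

    Note too that browsers cap concurrent connections per host, which can serialize XHRs on its own.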

    Read the article

  • jquery json null when using localhost

    - by Eeyore
    I am trying to load JSON generated by my Django app. It works when I save the JSON output and load it from a static file. However, when I make a call to the server it returns null. JSON:

        {"users": [
            { "id": 1, "name": "arnold" },
            { "id": 2, "name": "frankie" }
        ]}

    Ajax call:

        $.ajax({
            url: "http://localhost:8000/json",  // vs. json.js
            dataType: 'json',
            type: 'get',
            timeout: 20000,
            error: function() { alert("error"); },
            beforeSend: function() { alert("beforeSend"); },
            complete: function() { alert("complete"); },
            success: function(data) { alert(data.users[0].name); }
        });

    view.py:

        return HttpResponse(simplejson.dumps(data), content_type = 'application/json; charset=utf8')
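
    A null result here often means the request was cross-origin: the page and http://localhost:8000 differ in host or port, so the browser discards the response. A sketch of a JSONP-style workaround on the Django side (view and parameter names are illustrative; jQuery sends ?callback=... when dataType is 'jsonp'):

        from django.http import HttpResponse
        from django.utils import simplejson

        def json_view(request):
            data = {'users': [{'id': 1, 'name': 'arnold'},
                              {'id': 2, 'name': 'frankie'}]}
            payload = simplejson.dumps(data)
            callback = request.GET.get('callback')
            if callback:
                # Wrap the JSON in the callback so a cross-origin <script> can consume it.
                payload = '%s(%s)' % (callback, payload)
                return HttpResponse(payload, content_type='application/javascript')
            return HttpResponse(payload, content_type='application/json; charset=utf8')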

    Read the article

  • Initial Genetic Programming Parameters

    - by cmptrer
    I did a little GP work (note: very little) in college and have been playing around with it recently. My question is about the initial run settings (population size, number of generations, min/max depth of trees, min/max depth of initial trees, percentages to use for different reproduction operations, etc.). What is the normal practice for setting these parameters? What papers/sites do people use as a good guide?
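
    As a rough reference point, many papers start from the parameters in Koza's books and tune from there; a sketch of such a baseline as a Python dict (values are the commonly cited Koza-style defaults, not universal constants):

        # Typical starting parameters for tree-based GP (after Koza, 1992).
        gp_params = {
            'population_size': 500,
            'generations': 51,               # initial generation plus 50
            'crossover_rate': 0.90,
            'reproduction_rate': 0.10,
            'mutation_rate': 0.0,            # Koza's runs; many modern setups use ~0.01-0.1
            'init_method': 'ramped half-and-half',
            'init_tree_depth': (2, 6),       # min/max depth of initial trees
            'max_tree_depth': 17,            # depth limit enforced after crossover
            'selection': 'fitness-proportionate or tournament (size ~7)',
        }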

    Read the article

  • Clean stop of Python bottle webserver when started from subprocess

    - by luc
    Hello all, I would like to embed the great Bottle web framework into a small application (the first target is Windows). This app starts the Bottle web server via the subprocess module:

        import subprocess
        p = subprocess.Popen('python websrv.py')

    The Bottle app is quite simple:

        @route("/")
        def index():
            return template('index')

        run(reloader=True)

    It starts the default web server in a Windows console. All seems OK, except that I must press Ctrl-C to close the Bottle web server. I would like the master app to terminate the web server when it shuts down. I can't find a way to do that (p.terminate() unfortunately doesn't work in this case). Any idea? Thanks in advance
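
    One likely culprit: run(reloader=True) makes Bottle spawn a child process to do the actual serving, so p.terminate() kills only the supervising parent while the child keeps listening. A minimal sketch that avoids the extra process (file name as in the question; websrv.py should call run(reloader=False)):

        import subprocess
        import sys

        # Launch the Bottle server with the same interpreter, no shell involved.
        p = subprocess.Popen([sys.executable, 'websrv.py'])

        # ... later, when the master app shuts down:
        p.terminate()   # now reaches the actual server process
        p.wait()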

    Read the article

  • Boost.Python tutorial in Ubuntu 10.04

    - by Doughy
    I downloaded the latest version of Boost and I'm trying to get the Boost.Python tutorial up and running on Ubuntu 10.04: http://www.boost.org/doc/libs/1_43_0/libs/python/doc/tutorial/doc/html/python/hello.html I navigated to the correct directory, ran "bjam", and it compiled using default settings. I have not yet created a bjam config file. The compilation appears to have worked, but now I have no idea how to include the files in my Python script. When I try to run the Python hello-world script, it gives me this error:

        Traceback (most recent call last):
          File "./hello.py", line 6, in <module>
            import hello_ext
        ImportError: libboost_python.so.1.43.0: cannot open shared object file: No such file or directory

    Anyone know what is going on?

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at the maximum, with only 500 MB free, and MySQL accounts for most of the memory use. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory, it stops; nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why they are InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert       = ALWAYS
        wait_timeout            = 600
        interactive_timeout     = 600
        #normal
        key_buffer_size         = 2024M
        #key_buffer_size        = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet      = 32M
        #1-16M
        thread_stack            = 8M
        #40-50
        thread_cache_size       = 50
        #orderby groupby sort
        sort_buffer_size        = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size          = 3000M
        #tables in memory
        max_heap_table_size     = 3000M
        #on disk
        open_files_limit        = 10000
        table_cache             = 10000
        join_buffer_size        = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        #myisam_use_mmap = 1
        max_connections         = 200
        thread_concurrency      = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit       = 50M
        query_cache_size        = 210M
        #on query cache
        query_cache_type        = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries        = /var/log/mysql/mysql-slow.log
        long_query_time         = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id               = 1
        log-bin                 = /var/lib/mysql/mysql-bin
        #replicate-do-db        = gate
        log-bin-index           = /var/lib/mysql/mysql-bin.index
        log-error               = /var/lib/mysql/mysql-bin.err
        relay-log               = /var/lib/mysql/relay-bin
        relay-log-info-file     = /var/lib/mysql/relay-bin.info
        relay-log-index         = /var/lib/mysql/relay-bin.index
        binlog_do_db            = 24avia
        expire_logs_days        = 10
        max_binlog_size         = 100M
        read_buffer_size        = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache  = 2000
        group_concat_max_len    = 16M
        #binlog_do_db           = gate
        #binlog_ignore_db       = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet      = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer              = 32M
        key_buffer_size         = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory in use:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL still stops every day. As you can see, 24 GB of memory sits in cache. (Thanks to Michael Hampton for the correction.) Load average on the server is 3.5. Maybe the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they mark everything green. mysqltuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    tuning-primer.sh output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'

    Read the article

  • How to know VirtualStore is applied in my application

    - by Lu Lu
    Hello everyone, I am developing an application which will run on Windows. However, when it runs on Windows Vista, my application's settings are stored in the VirtualStore. How can I check whether the VirtualStore is being applied to my application? I need a function to check, and it must work on both XP and Vista. And how do I get the path of my application's VirtualStore? Thanks.

    Read the article

  • Tweepy + App Engine Example OAuth Help

    - by Wasauce
    Hi, I am trying to follow the Tweepy App Engine OAuth example app in my own app but am running into trouble. Here is a link to the Tweepy example code: http://github.com/joshthecoder/tweepy-examples Specifically, look at: http://github.com/joshthecoder/tweepy-examples/blob/master/appengine/oauth_example/handlers.py Here is the relevant snippet of my code:

        try:
            authurl = auth.get_authorization_url()
            request_token = auth.request_token
            db_user.token_key = request_token.key
            db_user.token_secret = request_token.secret
            db_user.put()
        except tweepy.TweepError, e:
            # Failed to get a request token
            self.generate('error.html', {
                'error': e,
            })
            return
        self.generate('signup.html', {
            'authurl': authurl,
            'request_token': request_token,
            'request_token.key': request_token.key,
            'request_token.secret': request_token.secret,
        })

    As you can see, my code is very similar to the example. However, when I compare the request_token.key and request_token.secret rendered on my signup page against the request_token.key and request_token.secret found in the datastore, they do not match. Any guidance on what I am doing wrong here? Thanks!
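
    For reference, in the older tweepy API the example repo is built on, the request token saved at redirect time has to be restored verbatim in the callback handler before exchanging it for an access token; a sketch (consumer keys, callback URL, and datastore field names are illustrative):

        import tweepy

        # Step 1: redirect the user, saving the request token.
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET, CALLBACK_URL)
        redirect_url = auth.get_authorization_url()
        db_user.token_key = auth.request_token.key
        db_user.token_secret = auth.request_token.secret
        db_user.put()

        # Step 2: in the OAuth callback, restore the *same* token, then exchange it.
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_request_token(db_user.token_key, db_user.token_secret)
        auth.get_access_token(oauth_verifier)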

    Read the article

  • Django ImageField validation & PIL

    - by Zayatzz
    Hello. On Sunday I had problems with Python modules after I installed Stackless Python. I have since compiled and installed setuptools and python-mysqldb, and got my Django project up and running again (I also reinstalled django-1.1). Then I compiled and installed jpeg, freetype2 and PIL. I also started using mod_wsgi instead of mod_python. But when uploading through a form's ImageField I get a ValidationError: "Upload a valid image. The file you uploaded was either not an image or a corrupted image." Searchmonkey shows that it comes from the ImageField validation in fields.py; before raising this error it imports Image from PIL, opens the file and verifies it. I tried importing PIL from the Python prompt manually, and it worked just fine. Same with Image.open and Image.verify. So what could be causing this problem? Alan
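
    A frequent cause of exactly this symptom is PIL compiled without libjpeg found at build time: PNG uploads pass while JPEG uploads fail validation. A quick sketch to exercise the decoder in the same interpreter mod_wsgi uses (the file path is a placeholder for a known-good JPEG):

        import Image  # PIL, imported the way Django's ImageField did at the time

        im = Image.open('/tmp/test.jpg')
        im.load()  # forces a full decode; raises "decoder jpeg not available" if libjpeg is missing
        print(im.format)

    If that raises, rebuilding PIL after installing the libjpeg development headers is the usual fix.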

    Read the article

  • Using SCons as a build engine for distutils

    - by pygabriel
    I have a Python package with some C code needed to build an extension (with some non-trivial build requirements). I have used SCons as my build system because it's really good and flexible. I'm looking for a way to compile my Python extensions with SCons, ready to be distributed with distutils. I want the user to simply type setup.py install and get the extension compiled with SCons instead of the default distutils build engine. An idea that comes to mind is to redefine the build_ext command in distutils, but I can't find extensive documentation for it. Any suggestions?
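
    One workable pattern along exactly those lines: subclass build_ext so it shells out to SCons and then copies the artifact to where distutils expects it. A minimal sketch (package, extension, and artifact names are placeholders; it assumes the SConstruct knows how to build each target):

        import os
        import subprocess
        from distutils.core import setup, Extension
        from distutils.command.build_ext import build_ext

        class SConsBuildExt(build_ext):
            """Delegate extension compilation to SCons instead of distutils' compiler."""
            def run(self):
                for ext in self.extensions:
                    subprocess.check_call(['scons', ext.name])
                    built = ext.name + '.so'          # assumption: where SCons leaves it
                    dest = self.get_ext_fullpath(ext.name)
                    self.mkpath(os.path.dirname(dest))
                    self.copy_file(built, dest)       # put it where distutils expects

        setup(
            name='mypkg',
            version='0.1',
            ext_modules=[Extension('myext', sources=[])],  # sources handled by SCons
            cmdclass={'build_ext': SConsBuildExt},
        )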

    Read the article

  • problem while finding iPhone memory programmatically

    - by sneha
    I am facing a strange problem with my iPhone. It shows available memory as 278 MB in Settings and also in iTunes. But when I query it programmatically like this:

        NSDictionary *fileSystemAttributes = [[NSFileManager defaultManager]
            attributesOfFileSystemForPath:NSHomeDirectory() error:&error];
        double availableSpace = [[fileSystemAttributes objectForKey:NSFileSystemFreeSize] floatValue];

    I get 458.0 MB. Can anyone help me understand why there is so much difference between the two values? They should be the same. Thanks in advance

    Read the article

  • Reverse Engineer a .pyo python file

    - by Brian
    I have 2 .pyo Python files that I can convert to .py source files, but they don't decompile perfectly, as hinted by decompyle's verify. Looking at the recovered source code, I can tell that config.pyo simply had variables in a list:

        ADMIN_USERIDS = [116901, 141, 349244, 39, 1159488]

    I would like to take the original .pyo and disassemble it, or do whatever I need to do, in order to change one of these IDs. And in model.pyo, the source indicates:

        if (productsDeveloperId != self.getUserId()):

    All I want to do is hex-edit the != to be a ==. That would be simple with a Windows exe, but I can't find a good Python disassembler anywhere. Any suggestions are welcome; I am new to reading bytecode and new to Python as well.
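
    One caveat before hex-editing: in compiled Python, != is not stored as ASCII text but as the argument of a COMPARE_OP instruction, so a textual search will not find it. The standard library can at least show what to change; a sketch for a Python 2 .pyo, whose file format is an 8-byte header followed by a marshalled code object:

        import dis
        import marshal

        f = open('config.pyo', 'rb')
        f.read(8)              # skip magic number + timestamp (Python 2 header)
        code = marshal.load(f) # the module's top-level code object
        f.close()

        dis.dis(code)          # human-readable bytecode, including COMPARE_OP operands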

    Read the article

  • IIS7 folder permissions for web application

    - by Andrew
    I am using windows authentication without impersonation on my company's intranet website with IIS7. Under IIS7, what account is used to access the folder which contains my web app using these settings? Would it be IIS_IUSRS? Or NETWORK SERVICE? Or another I don't know about?

    Read the article

  • Android: Unregister camera button

    - by niko
    Hi, I tried to bind some actions to the camera button:

        videoPreview.setOnKeyListener(new OnKeyListener(){
            public boolean onKey(View v, int keyCode, KeyEvent event){
                if(event.getAction() == KeyEvent.ACTION_DOWN) {
                    switch(keyCode) {
                        case KeyEvent.KEYCODE_CAMERA:
                            //videoPreview.onCapture(settings);
                            onCaptureButton();
                            ...
                    }
                }
                return false;
            }
        });

    When the button is pressed, however, the application crashes because the original Camera application starts. Does anyone know how to prevent the Camera application from starting when the camera button is pressed?

    Read the article

  • Sharepoint calculated Date field shows incorrectly in non-US region

    - by Proforce
    If a SharePoint user (with Regional Settings set to UK) views a calculated date field in a View details form, the field shows incorrectly: the date appears in US format, thus 01-Apr-2010 for 03-Jan-2010, and unresolvable dates such as 31-Dec-2010 don't show at all. This applies even with a simple =[Modified] formula. The server is set up for the US locale. Does anyone know of a workaround for this, or whether a patch is available? Thanks in advance.

    Read the article

  • Google Chrome audit on caching

    - by Álvaro G. Vicario
    If I run an audit on my sites with Google Chrome, I get this message in the "Leverage browser caching" section:

        The following resources are missing a cache expiration. Resources that do not
        specify an expiration may not be cached by browsers:

    A list of all the pictures follows. I get a similar notice in "Leverage proxy caching":

        Consider adding a "Cache-Control: public" header to the following resources:

    Apart from pictures, I also get a notice about HTML, CSS and JavaScript files:

        The following resources are explicitly non-cacheable. Consider making them
        cacheable if possible:

    It's funny because I've worked hard to cache all static content (except for pictures, where I just left Apache's default settings). Firefox does indeed store all these items in cache. Is there anything I should improve in my HTTP headers? Here's the complete header set of some items as loaded after clearing the browser cache. Pictures use default settings I didn't really check before; the rest should be cached for three hours. I can set headers with both .htaccess and PHP.

    PNG:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:14 GMT
        Server: Apache
        Last-Modified: Thu, 18 Mar 2010 21:40:54 GMT
        Etag: "c48024-230-4821a15d6c580"
        Accept-Ranges: bytes
        Content-Length: 560
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Content-Type: image/png

    HTML:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:13 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:46:13 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Wed, 24 Mar 2010 20:30:36 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-15

    CSS:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/css

    JavaScript:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: application/x-javascript

    Update: I've tested Jumby's suggestion and set my CSS's expiration to 1 year:

        Cache-Control: max-age=31536000, s-maxage=31536000, must-revalidate, proxy-revalidate
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 4198
        Content-Type: text/css
        Date: Mon, 02 Aug 2010 20:48:56 GMT
        Expires: Tue, 02 Aug 2011 20:48:56 GMT
        Keep-Alive: timeout=5, max=99
        Last-Modified: Thu, 18 Mar 2010 20:40:12 GMT
        Server: Apache/2.2.14 (Win32) PHP/5.3.1
        Vary: Accept-Encoding
        X-Powered-By: PHP/5.3.1

    However, Chrome still claims "explicitly non-cacheable".

    Read the article

  • Where are Eclipse Bookmarks stored?

    - by Tom Tresansky
    Does anyone know where bookmarks are stored in the Eclipse IDE? I had to delete a Java project from my workspace, then use the Import Existing project option to reset some configuration settings, and now my bookmarks are gone. I'm trying to understand how this affected my bookmarks, since everything on the file system except for the Eclipse .project and .classpath files should have remained unchanged.

    Read the article
