Search Results

Search found 19701 results on 789 pages for 'disk images'.

Page 10 of 789

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns: If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose. If root has a keychain (does it?), how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access? Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
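    For illustration only, here is a minimal sketch of the unlock step such a boot-time LaunchDaemon could run. The volume UUID and the key file path are placeholders, and the -passphrase argument of diskutil coreStorage unlockVolume is assumed from the asker's own command; note this does not solve the key-storage concern, since the passphrase still lives on the boot disk.

        #!/usr/bin/env python3
        # Sketch only: unlock an encrypted CoreStorage volume at boot.
        # Both values below are placeholders, not real identifiers.
        import subprocess

        VOLUME_UUID = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"  # from `diskutil coreStorage list`
        KEY_FILE = "/var/root/.external-disk-key"             # hypothetical root-only file (chmod 600)

        with open(KEY_FILE) as f:
            passphrase = f.read().strip()

        # A LaunchDaemon running as root could invoke this script once at startup.
        subprocess.run(
            ["diskutil", "coreStorage", "unlockVolume", VOLUME_UUID,
             "-passphrase", passphrase],
            check=True,
        )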

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks. The actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space and need to clear some up, so I want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However they treat the symlink as essentially empty, and only count the file that it is pointing to as using any space. Which means when I use git-annex, it shows no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easier) that is capable (through options or not) of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some might offer the same for symlinks. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
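    If no existing tool fits, a rough Python sketch of the idea (not one of the tools above): walk each top-level directory and charge every file, symlink targets included, to the directory holding the link, by using os.stat(), which follows symlinks.

        import os
        import sys

        def usage_following_symlinks(root):
            """Total size of files under root, charging symlinks with their target's size."""
            total = 0
            for dirpath, dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        # os.stat() follows symlinks, so an annexed file is charged to the
                        # directory holding the link as well as to .git/annex itself.
                        total += os.stat(path).st_size
                    except OSError:
                        # Broken symlink or permission problem: count it as zero.
                        pass
            return total

        if __name__ == "__main__":
            base = sys.argv[1]  # e.g. the root of the git-annex working tree
            for entry in sorted(os.listdir(base)):
                path = os.path.join(base, entry)
                if os.path.isdir(path):
                    print(f"{usage_following_symlinks(path) / 2**20:10.1f} MiB  {entry}")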

    Read the article

  • How can I display images on an MS Access 2007+ form with a hyperlink source?

    - by Yaaqov
    I am looking to improve the efficiency of an Access 2010 database by using a web server for images and only storing the hyperlink source (e.g., http://www.images.com/images/image1.jpg) in the table. I know that one can save images as "attachments", using a "blob" object type, but when you're dealing with thousands of images, queries are bogged down and performance suffers. So in short, is there a relatively simple way of displaying images on MS Access forms with a source that is a hyperlink address? (Storing files locally and using file paths is not preferable.) Thanks.

    Read the article

  • Showing items as images in a WPF ListView

    - by Joan Venge
    So I have bound a List to a ListView, where the List has items of type Album with lots of properties, including .Cover, which is an image on disk. Since I don't know what type of image is needed or how it has to be loaded (I only know how to use Image types in WinForms), I don't know the type yet. Can someone show or post me a quick sample where this sort of item is shown as images of a certain fixed size using the .Cover property? In essence this would show: what type .Cover should be, how to open images from disk for WPF (assuming it's different than WinForms image loading), and how to show them in a ListView as images of a certain fixed size, scaling the images if necessary.

    Read the article

  • Table view lags and glitches while scrolling due to images loading in each cell

    - by Luis Tovar
    Hey guys! Me again! Can someone provide me with some example code of how to properly load images in a table cell without making the table view glitch while scrolling? For example, if you look at Fandango's app, when scrolling through their movies you can see that the images are loaded asynchronously so that the scrolling isn't jumping, lagging, or glitchy. Thanks in advance. Right now I have the images loading just fine, but it is glitchy as I scroll because the images are loading on the main thread. I am creating an app very similar to Fandango. P.S. I am downloading these images via XML (you know what I mean). Thanks!!

    Read the article

  • Problem loading images from the DOM

    - by bappyiub
    I have a problem loading images using jQuery. My program inserts as well as deletes images from the same form. When I delete an image and then insert another, after the jQuery function runs the deleted image is still shown. I have found an inconsistency between the DOM and the actual location: the browser loads the images from the DOM, not from the actual location. Is there any function that will force it to re-read the images, or a way to delete the stale images in the DOM?

    Read the article

  • Can anyone recommend a decent tool for optimizing images other than Photoshop

    - by toomanyairmiles
    Can anyone recommend a decent tool for optimising images other than Adobe Photoshop, the GIMP, etc.? I'm looking to optimise images for the web, preferably online and free. Basically I have a client who can't install additional software on their work PC but needs to optimise photographs and other images for their website, and is presently uploading 1 or 2 MB files. On a personal level I'm interested to see what other people are using...

    Read the article

  • Does anyone know how Facebook detects the images when adding a link

    - by Edelcom
    When you add a link to your Facebook page, after some processing, Facebook presents you with next/prev buttons to choose an image linked to the URL you are inserting. Obviously, Facebook reads the HTML page and displays the images found at the URL you insert. Does anyone know what algorithm Facebook uses to decide which images to show? If I insert a link to http://www.staplijst.be/lachende-wandelaars-aalter-aktivia-003.asp, only 11 images are detected. The one I want, the one at the top right corner, is not included in the list. If I insert a link to http://www.staplijst.be/stichting-kennedymars-rijsbergen-zundert-nederland-knblo-nl-81996.asp, 19 images are displayed (including the one I want, the one at the top right corner of the text area). Both pages are built using ASP code but are functionally the same. I thought that it had something to do with the image size, but I can't find any deciding factor there. I will investigate some further, because if I know what Facebook is looking for, I can make sure that the correct images are included on the page (since they are dynamic pages built with classic ASP). But if anyone has any idea, help would be appreciated.
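    Facebook's exact selection logic isn't published, but two things are worth checking: the og:image Open Graph meta tag lets a page state explicitly which image a scraper should prefer, and link scrapers commonly skip very small images (icons, spacers), which is a plausible reason a particular picture never shows up. A rough Python sketch of that kind of extraction (the size filtering is left as a comment because it is an assumption, not Facebook's documented behaviour):

        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class ImageCollector(HTMLParser):
            """Collect the og:image meta tag (if any) and all <img> sources."""
            def __init__(self):
                super().__init__()
                self.og_image = None
                self.img_srcs = []

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "meta" and attrs.get("property") == "og:image":
                    self.og_image = attrs.get("content")
                elif tag == "img" and attrs.get("src"):
                    self.img_srcs.append(attrs["src"])

        def candidate_images(url):
            html = urlopen(url).read().decode("utf-8", errors="replace")
            parser = ImageCollector()
            parser.feed(html)
            if parser.og_image:
                # An explicit og:image wins; this is the reliable way to control the preview.
                return [urljoin(url, parser.og_image)]
            # Otherwise return every <img>; a real scraper would likely also drop tiny
            # images (spacers, icons), which is only a guess at Facebook's behaviour.
            return [urljoin(url, src) for src in parser.img_srcs]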

    Read the article

  • What would be a good way to scale images in C#

    - by Younes
    I have a website which contains a lot of projects, with each project containing a sidebar. In this sidebar it is possible to attach images to a project. The attached images are shown in a gallery with 3 small thumbs at the bottom and one bigger image at the top of the gallery. The big image refreshes to another image when a visitor clicks on a small thumb at the bottom of the gallery. The thumbs are no problem; they are shown correctly. My problem is the bigger image at the top of the gallery. The uploaded images come in a wide variety of sizes, while my holder has a width of 239 and a height of 179 pixels. What would be the best way to scale the images so that they are shown correctly to the visitors of the website?
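    Whatever imaging API ends up doing the resize, the core of it is the aspect-fit arithmetic. A small illustrative sketch in Python (not C#) of scaling an arbitrary source size into the 239x179 holder without distortion:

        def fit_size(src_w, src_h, box_w=239, box_h=179):
            """Return the largest (w, h) that fits inside the box and keeps the aspect ratio."""
            scale = min(box_w / src_w, box_h / src_h)
            return max(1, int(src_w * scale)), max(1, int(src_h * scale))

        # Example: a 1024x768 upload scales to roughly 238x179,
        # and a 150x200 upload scales to 134x179.
        print(fit_size(1024, 768))
        print(fit_size(150, 200))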

    Read the article

  • Images inside of UILabel

    - by enby
    Hello everyone! I'm trying to find the best way to display images inside of UILabels (in fact, I wouldn't mind switching to something other than UILabel if it supports images with no hassle). The scenario is: I have a table view with hundreds of cells, and a UILabel is the main component of each cell. The text I assign to each cell contains sequences of characters that need to be parsed out and represented as an image. In simpler words, imagine the table view of an instant messenger that parses and replaces all ":)", ":(", ":D" etc. with the corresponding smiley images. Any input would be greatly appreciated!

    Read the article

  • Quality for images in LaTeX documents

    - by Ladislav
    Hey all, what are some of the pointers that I need to follow if I want to have good-quality images in a LaTeX document? These images are mostly screenshots of a software application or flow charts. Below are two such images (a flow chart and a screenshot). Thanks, Ladislav

    Read the article

  • Adding images to a JAR file in Eclipse?

    - by ChrisFreeley
    I'm trying to make a game in Java, but I'm a bit of a newb. The game has images, and when I run the application from Eclipse, they all show up fine. But when I export the project as an application, the images don't show up. When I put the application in the same folder as the images, they show up when I run the application, so someone suggested that I just need to put the image files inside my JAR. Can someone tell me how? Thank you

    Read the article

  • Storing images in SQL Server using C#

    - by barq
    I want to store images of my employees with their profiles in a SQL Server database. I have the following reservations: Should I compress the images or not? If yes, please provide sample code or an article. How should I retrieve the images efficiently? I am afraid of ASP.NET application performance issues; I think with ten thousand employee records it will halt or slow down.
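    On the compression question, the usual approach is to shrink and re-encode each photo once, before it ever reaches the database, so every later read pays only for a small blob. A rough sketch of that step in Python with Pillow (illustrative only, not C#/ASP.NET; the 640-pixel cap and JPEG quality of 80 are arbitrary choices):

        from io import BytesIO
        from PIL import Image  # third-party package "Pillow", assumed installed

        def prepare_photo(path, max_side=640, quality=80):
            """Shrink and re-encode an employee photo, returning bytes for a BLOB column."""
            img = Image.open(path)
            img.thumbnail((max_side, max_side))  # shrinks in place, keeps aspect ratio
            if img.mode != "RGB":
                img = img.convert("RGB")         # JPEG cannot store an alpha channel
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            return buf.getvalue()

    For retrieval, serving the images through a handler with HTTP caching, or storing only file paths/URLs as in the Access question above, generally scales better than pulling blobs from the database on every page view.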

    Read the article

  • Unable to format disk: 'The system cannot find the file specified'

    - by ACarter
    I have a USB flash drive which I may have mucked up, so I used DISKPART's CLEAN to clean it up. I created a simple volume and tried to format it. (This is all using Windows' disk management.) I was told "The system cannot find the file specified." So I tried using DISKPART (as an admin): DISKPART select volume 9 Volume 9 is the selected volume. DISKPART format recommended DiskPart has encountered an error: The system cannot find the file specified. See the System Event Log for more information. DISKPART As you can see, no luck. When I plug the drive in, the computer makes a beep noise as though it has recognised something, but nothing appears in My Computer. How can I format the disk so I can use it again?

    Read the article

  • How is an HP D2700 disk enclosure monitored for alarms via SNMP?

    - by VSAC
    We have an HP D2700 disk enclosure and we would like to monitor the D2700 (connected to an HP ProLiant DL360G8) for alarms. I have the following questions regarding this. What are the options available for reporting D2700 hardware alarms (disk failure, power failure) via SNMP? We understand the D2700 to have an Ethernet interface for controllers A and B, and that alarms are available via SNMP on this interface. Can anyone provide the actual alarms available via this interface (MIB and alarm list)? As we have a number of D2700s and would like to minimize the number of physical connections to the switch and the associated IP addresses: is there a mechanism to monitor the D2700 from the SCSI-connected HP DL360 and raise SNMP alarms from the DL360 for hardware failures on the D2700? If so, can anyone provide details and the actual alarms and MIB available via this mechanism? Thanks!

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches, 2) geographically distributed backups to protect against local disasters, and 3) testing backups to ensure that they can be restored as needed. Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, then the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss was not noticed for a few weeks (such as after a bug/security breach/virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk of complete data loss would be one where the local backups, the remote backups, and the servers all experienced a data loss in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups, because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots). Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in terms of $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups. My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.
Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question, and were trying to go off answering a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize if this was lengthy, as it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS with 512 MB of RAM and a 1 GHz CPU, running Nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo Team's WordPress blog. The blog gets about 3~4k daily hits and uses about 180 MB and 8~20% CPU. Everything seems to be working insanely fast... page load is really good and is about 16x faster than any of our competitors... but there is one problem. When we upload an image, WordPress doesn't resize it, so all we can do is insert the full image in the post. If the image has, let's say, 30 KB, it resizes fine... but if the image has 100 KB+, it won't... In the Nginx error logs I see this: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&" It seems to be related to the issue, but I don't know. Once that timeout happens, I start getting it when I'm trying to view a post too: upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/" And only a restart of php5-fpm fixes it. I tried increasing some timeouts and other settings but it did not work, so I guess it's some kind of limitation I have not figured out yet. Could someone help me with it, please? /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 15; keepalive_requests 2000; types_hash_max_size 2048; server_tokens off; server_name_in_redirect off; open_file_cache max=1000 inactive=300s; open_file_cache_valid 360s; open_file_cache_min_uses 2; open_file_cache_errors off; server_names_hash_bucket_size 64; # server_name_in_redirect off; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 2m; large_client_header_buffers 4 8k; client_body_timeout 10m; client_header_timeout 10m; send_timeout 10m; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## error_log /var/log/nginx/error.log; access_log off; ## # CloudFlare's IPs (uncomment when site goes live) ## set_real_ip_from 204.93.240.0/24; set_real_ip_from 204.93.177.0/24; set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.22.200.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; real_ip_header CF-Connecting-IP; set_real_ip_from 127.0.0.1/32; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 9; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_buffers 32 8k; # gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## #
Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; /etc/nginx/sites-avaiable/default: ## # DEFAULT HANDLER # ubuntubrsc.com ## server { listen 8080; # Make site available from main domain server_name www.ubuntubrsc.com; # Root directory root /var/www; index index.php index.html index.htm; include /var/www/nginx.conf; access_log off; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location ~ /\. { deny all; access_log off; log_not_found off; } location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; } rewrite /wp-admin$ $scheme://$host$uri/ permanent; error_page 404 = @wordpress; log_not_found off; location @wordpress { include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME $document_root/index.php; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; if (-f $request_filename) { fastcgi_pass unix:/var/run/php5-fpm.sock; } } } server { listen 8080; server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in; return 301 $scheme://www.ubuntubrsc.com$request_uri; } /var/www/nginx.conf: # BEGIN W3TC Minify cache location ~ /wp-content/w3tc/min.*\.js$ { types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*\.css$ { types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ /wp-content/w3tc/min.*js\.gzip$ { gzip off; types {} default_type application/x-javascript; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } location ~ 
/wp-content/w3tc/min.*css\.gzip$ { gzip off; types {} default_type text/css; expires modified 31536000s; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; add_header Vary "Accept-Encoding"; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header Content-Encoding gzip; } # END W3TC Minify cache # BEGIN W3TC Browser Cache gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; try_files $uri $uri/ $uri.html /index.php?$args; } location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; add_header X-Powered-By "W3 Total Cache/0.9.2.5b"; } # END W3TC Browser Cache # BEGIN W3TC Minify core rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last; set $w3tc_enc ""; if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; } if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; } rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; # END W3TC Minify core # BEGIN W3TC Skip 404 error handling by WordPress for static files if (-f $request_filename) { break; } if (-d $request_filename) { break; } if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; } if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; } # END W3TC Skip 404 error handling by WordPress for static files # BEGIN Better WP Security location ~ /\.ht { deny all; } location ~ wp-config.php { deny all; } location ~ readme.html { deny all; } location ~ readme.txt { deny all; } location ~ /install.php { deny all; } set $susquery 0; set $rule_2 0; set $rule_3 0; rewrite ^wp-includes/(.*).php /not_found last; rewrite ^/wp-admin/includes(.*)$ /not_found last; if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; } set $rule_0 0; if ($request_method ~ "POST"){ set $rule_0 1; } if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; } if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; } if ($rule_0 = "421"){ return 403; } if ($args ~* "\.\./") { set $susquery 1; } if ($args ~* "boot.ini") { set $susquery 1; } if ($args ~* "tag=") { set $susquery 1; } if ($args ~* "ftp:") { set $susquery 1; } if ($args ~* "http:") { set $susquery 1; } if ($args ~* "https:") { set $susquery 1; } if ($args ~* 
"(<|%3C).*script.*(>|%3E)") { set $susquery 1; } if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; } if ($args ~* "base64_encode") { set $susquery 1; } if ($args ~* "(%24&x)") { set $susquery 1; } if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; } if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; } if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; } if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; } if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; } if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; } if ($susquery = 12) { return 403; } # END Better WP Security /etc/php5/fpm/php-fpm.conf: pid = /var/run/php5-fpm.pid error_log = /var/log/php5-fpm.log emergency_restart_threshold = 3 emergency_restart_interval = 1m process_control_timeout = 10s events.mechanism = epoll /etc/php5/fpm/php.ini (only options i changed): open_basedir ="/var/www/" disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec ,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpA ds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget expose_php = off max_execution_time = 30 max_input_time = 60 memory_limit = 128M display_errors = Off post_max_size = 2M allow_url_fopen = off default_socket_timeout = 60 APC settings: [APC] apc.enabled = 1 apc.shm_segments = 1 apc.shm_size = 64M apc.optimization = 0 apc.num_files_hint = 4096 apc.ttl = 60 apc.user_ttl = 7200 apc.gc_ttl = 0 apc.cache_by_default = 1 apc.filters = "" apc.mmap_file_mask = "/tmp/apc.XXXXXX" apc.slam_defense = 0 apc.file_update_protection = 2 apc.enable_cli = 0 apc.max_file_size = 10M apc.stat = 1 apc.write_lock = 1 apc.report_autofilter = 0 apc.include_once_override = 0 apc.localcache = 0 apc.localcache.size = 512 apc.coredump_unmap = 0 apc.stat_ctime = 0 /etc/php5/fpm/pool.d/www.conf user = www-data group = www-data listen = /var/run/php5-fpm.sock listen.owner = www-data listen.group = www-data listen.mode = 0666 pm = ondemand pm.max_children = 5 pm.process_idle_timeout = 3s; pm.max_requests = 50 I also started to get 404 errors in front page if i use W3 Total Cache's Page Cache (Disk Enhanced). It worked fine untill somedays ago, and then, out of nowhere, it started to happen. Tonight i will disable my mobile plugin and activate only W3 Total Cache to see if it's a conflict with them... And to finish all this, i have been getting this error: PHP Warning: apc_store(): Unable to allocate memory for pool. 
in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41 I already modified my APC settings, but no success. So... could anyone help me with those issues, please? Ooohh... if it helps, I installed PHP like this: sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl And Nginx from the official PPA. Sorry for my bad English and thanks for your time, people! (:

    Read the article

  • Munin "Disk usage" is too high?

    - by f-aminov
    I've recently installed Munin on my VMware guest server and saw that the "Disk usage" graph shows about 80-90%. Everything else (CPU load, RAM, etc.) seems to be running fine. I have only two virtual hosts on my server with 1000 users/day in total, so I don't think that's too much. Here is the graph for the disk usage. Server info: Debian Lenny, CPU 510 MHz, RAM 512 MB. Is it bad? What could possibly cause this? Thank you for any suggestions.

    Read the article

  • Using dd for disk cloning

    - by roe
    There have been a number of questions regarding disk cloning tools (yes, I did search for it) :) and dd has been suggested at least once. I've already considered using dd myself, mainly because of its ease of use and because it's readily available on pretty much all run-of-the-mill bootable Linux distributions. To my question: what is the best way to use dd for cloning a disk? I don't have much time to spare, and no test hardware to play with, so I need to be prepared, and it's pretty much a one-shot chance to get it done. I did a quick Google search, and the first result was an apparent failed attempt. Is there anything I need to do after using dd, i.e. is there anything that CAN'T be read using dd? Thanks!
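    For reference, a minimal sketch of a common whole-disk invocation, wrapped in Python only so it can carry comments. The device names are placeholders and everything on the destination is overwritten, so identify both disks first (lsblk or fdisk -l) and run with both unmounted, for example from a live CD.

        #!/usr/bin/env python3
        # Sketch: clone one whole disk onto another with dd.
        import subprocess

        SRC = "/dev/sdX"  # placeholder: source disk
        DST = "/dev/sdY"  # placeholder: destination disk (will be overwritten completely)

        subprocess.run(
            ["dd", f"if={SRC}", f"of={DST}",
             "bs=4M",               # read/write in 4 MiB blocks instead of the 512-byte default
             "conv=noerror,sync"],  # keep going past read errors, padding the failed block
            check=True,
        )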

    Read the article

  • Ubuntu hard disk problem

    - by Henadzy
    Hello! I have got the error with a hard disk on Ubuntu 9.10. It slows down my system, applications have not been responding for a long time. But when I mount and use filesystem which placed on this hard disk at other computer it works properly. disk: SAMSUNG HD161HJ (SATA) syslog: Apr 25 00:28:25 vare6gin kernel: [ 885.773839] ata3.00: exception Emask 0x1 SAct 0x1e SErr 0x0 action 0x6 frozen Apr 25 00:28:25 vare6gin kernel: [ 885.773845] ata3.00: Ata error. fis:0x21 Apr 25 00:28:25 vare6gin kernel: [ 885.773861] ata3.00: cmd 60/08:08:3f:00:ad/00:00:10:00:00/40 tag 1 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773864] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773871] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773877] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773890] ata3.00: cmd 60/18:10:9f:6b:ed/00:00:0e:00:00/40 tag 2 ncq 12288 in Apr 25 00:28:25 vare6gin kernel: [ 885.773893] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773900] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773904] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773918] ata3.00: cmd 60/08:18:3f:5f:ed/00:00:0e:00:00/40 tag 3 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773921] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773927] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773932] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773946] ata3.00: cmd 60/08:20:67:c8:91/00:00:05:00:00/40 tag 4 ncq 4096 in Apr 25 00:28:25 vare6gin kernel: [ 885.773948] res 51/40:24:67:c8:91/40:00:05:00:00/40 Emask 0x9 (media error) Apr 25 00:28:25 vare6gin kernel: [ 885.773955] ata3.00: status: { DRDY ERR } Apr 25 00:28:25 vare6gin kernel: [ 885.773960] ata3.00: error: { UNC } Apr 25 00:28:25 vare6gin kernel: [ 885.773970] ata3: hard resetting link Apr 25 00:28:25 vare6gin kernel: [ 885.773974] ata3: nv: skipping hardreset on occupied port Apr 25 00:28:25 vare6gin kernel: [ 886.240073] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 25 00:28:25 vare6gin kernel: [ 886.256277] ata3.00: configured for UDMA/133 Apr 25 00:28:25 vare6gin kernel: [ 886.256305] ata3: EH complete Apr 25 00:28:27 vare6gin kernel: [ 888.176088] ata3: EH in SWNCQ mode,QC:qc_active 0xF sactive 0xF Apr 25 00:28:27 vare6gin kernel: [ 888.176099] ata3: SWNCQ:qc_active 0xF defer_bits 0x0 last_issue_tag 0x3 Apr 25 00:28:27 vare6gin kernel: [ 888.176102] dhfis 0xF dmafis 0x1 sdbfis 0x0 Apr 25 00:28:27 vare6gin kernel: [ 888.176109] ata3: ATA_REG 0x51 ERR_REG 0x40 Apr 25 00:28:27 vare6gin kernel: [ 888.176113] ata3: tag : dhfis dmafis sdbfis sacitve Apr 25 00:28:27 vare6gin kernel: [ 888.176120] ata3: tag 0x0: 1 1 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176126] ata3: tag 0x1: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176131] ata3: tag 0x2: 1 0 0 1 Apr 25 00:28:27 vare6gin kernel: [ 888.176136] ata3: tag 0x3: 1 0 0 1

    Read the article

  • No space left on device with encrypted disk that takes all space

    - by Yosef
    I use Ubuntu 11.04. There's no space left on the device. I have encrypted the disk that takes up the space (maybe it would be good to disable the encryption, but I don't know how). In a shell I get this message: No space left on device. I run df -i:

        Filesystem            Inodes   IUsed   IFree IUse% Mounted on
        /dev/sda1            3055616  602499 2453117   20% /
        none                  210161     890  209271    1% /dev
        none                  214789       8  214781    1% /dev/shm
        none                  214789      53  214736    1% /var/run
        none                  214789       3  214786    1% /var/lock
        /home/myuser/.Private 3055616  602499 2453117   20% /home/myuser

    Edit: When I run only df:

        Filesystem            1K-blocks     Used Available Use% Mounted on
        /dev/sda1              48060296 45618928         0 100% /
        none                    1538340      684   1537656   1% /dev
        none                    1547596      808   1546788   1% /dev/shm
        none                    1547596      104   1547492   1% /var/run
        none                    1547596        0   1547596   0% /var/lock
        /home/myuser/.Private  48060296 45618928         0 100% /home/myuser

    Edit: I am thinking about a few solutions, but I don't know which is better or how exactly to do them: enlarge the partition size (I can't install GParted - no more disk space), or remove the encryption of the partition - I really don't need it.

    Read the article
