Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • Windows Search Indexing SSD

    - by user654628
    I just bought a new SSD (an OCZ Vertex 4, 256 GB) and installed Windows 8 on it on my laptop. I am not using an external hard drive to keep extra data (paging, temp files, etc.) because this is a laptop and I do not want to carry one around with me. My question is: if I disable the Windows indexer (the Windows Search service), can Windows still search files (via the Windows 7 Start menu search / the Windows 8 Metro UI search)? Since the indexer is meant to speed up searching by indexing files, does disabling it mean that Windows will not find new files? Thanks in advance.
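
    A minimal sketch of the disabling step being asked about (WSearch is the service name for Windows Search; run from an elevated prompt):

        sc config WSearch start= disabled
        net stop WSearch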


  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check whether script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php containing <?php system('id'); ?>, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thanks.
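
    One hedged guess at the cause, sketched below: the vhost maps .php to application/x-httpd-php via AddType/AddHandler, so if mod_php is still loaded it serves the script as www-data before suPHP is ever consulted. Module and handler names here are assumptions for Ubuntu 10.04:

        # disable mod_php and hand .php to the suPHP handler instead
        sudo a2dismod php5
        # in the vhost, replace the AddType/AddHandler pair with:
        #   AddHandler application/x-httpd-suphp .php
        sudo /etc/init.d/apache2 restart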


  • Which Linux distro to use for Hyper-V hosting?

    - by TomTom
    Not being a Linux geek, I am looking for a recommendation on which Linux distro to use for a Hyper-V-based hosting environment (so easy access to the enlightenment/integration components is important). I would also love something that allows me to split the operating system's read-only files and the user files onto two discs without too much tinkering, so that the boot disc can be read-only. (Reasoning: this would allow me to set up a read-only disc that is shared between multiple server instances, with each server's own disc containing basically only the user files.)


  • rsync windows to linux permission denied

    - by user64908
    Using the command

        rsync -avzP --delete --omit-dir-times ../../ [email protected]:/var/www/mysite/

    I'm getting:

        rsync: mkstemp "/var/www/mysite/.." failed: Permission denied (13)

    If ext is in the www-data group, should I still set all the files to be owned by user www-data? I am trying to publish the files with rsync and then set the permissions using

        sudo chown -R www-data doc
        sudo chgrp -R www-data doc

    but I can't even rsync because of the permission denied. SSH works fine, and so does rsync, except when it tries to write over or update some of the files in /var/www.

    Client:
      * Windows 7
      * Cygwin 1.7.16 (GNU bash, version 4.1.10(4)-release (i686-pc-cygwin))
      * rsync version 3.0.9, protocol version 30

    Server:
      * Ubuntu 12.04
      * Apache2
      * Root accounts: [ubuntu, ext]
      * Groups: [www-data]
      * sudo vigr shows www-data:x:33:ubuntu,ext

    I have already configured this: http://stackoverflow.com/questions/2124169/cwrsync-ignores-nontsec-on-windows-7 and this article has also managed to confuse me: http://unix.stackexchange.com/questions/41687/how-should-i-rsync-files-in-var-www-if-i-want-them-to-be-owned-by-www-data. What is the right procedure?
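
    A hedged sketch of one common arrangement (a group-writable tree with the setgid bit so new files inherit the www-data group; the path and ownership are assumptions):

        # on the server, once: let ext own the tree, www-data have group access
        sudo chown -R ext:www-data /var/www/mysite
        sudo find /var/www/mysite -type d -exec chmod 2775 {} +
        sudo find /var/www/mysite -type f -exec chmod 664 {} +
        # rsync as ext should then be able to create temp files and update in place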


  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is currently running on a Linode 2048 server (2048 MB, i.e. 2 GB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 of memory per instance (RSS; presumably KB, so roughly 180 MB). Our traffic is < 1000 per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
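
    A rough sizing sketch (rules of thumb, not a definitive formula): pin worker_processes to the core count and keep worker_connections under the per-process file-descriptor limit; the theoretical client ceiling is the product of the two.

        # cores available -> worker_processes
        grep -c ^processor /proc/cpuinfo
        # fd limit -> upper bound for worker_connections
        ulimit -n
        # max simultaneous clients ~= worker_processes * worker_connections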


  • Untar with date filter

    - by Don
    Is there any way to untar and extract only those files that are newer than a certain date, including the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including directory structure) based on a date filter on the files, if possible.
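
    A hedged sketch with GNU tar: there is no extract-side --newer filter, but two approximations exist. The date and paths below are placeholders, and the awk filter assumes GNU tar's ISO-dated listing and no spaces in file names:

        # safe overwrite: files already newer on disk are left alone
        tar -xvf backup.tar --keep-newer-files -C /restore/target
        # or: list entries, filter on the date column, extract just those names
        tar -tvf backup.tar | awk '$4 >= "2012-06-01" {print $6}' > newer.list
        tar -xvf backup.tar -C /restore/target -T newer.list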


  • (Win7) Gets stuck at ~1% CPU, especially with multithreading

    - by meow
    Windows 7 32-bit, up to date, Intel i7 860. (For some reason the company runs 32-bit Windows everywhere.) I tried to update all motherboard drivers etc. as far as possible. I have a performance issue with a machine which appears in connection with multithreading (or so I think). As an example (and where I most often see it, but it appears with other programs as well): ProteoWizard is a file conversion tool for mass spectrometry files. I can add a list of files and it will attempt to process up to 8 files in parallel (quad core x 2 threads/core). If I choose 1 to 6 files, I start the process and it goes straight through. If I have >= 7 files in the queue, conversion goes to ~20%, then gets stuck for 15 seconds, then continues again, always in "chunks" of a few percent before getting stuck again. While the process is stuck, CPU is at 1%. RAM is not limiting; it is at maybe 70% or so and not going up. I don't get the same problem on other, even slower machines. The computer also gets stuck at 1% CPU doing nothing on other occasions, but with multithreading it is most frequent. Where should I look for the problem?


  • Domain migration - 301 redirect of all contents of a directory

    - by Trufa
    Hi, I would like to know if the following is possible, considering that I would like to migrate domains. I have, let's say:

        one.com/files/one.html
        one.com/files/two.php
        one.com/other/three.html
        one.com/other/four.doc
        one.com/other/subdirectory/five.doc

    I am migrating to two.com, so I would like to make RESPECTIVE 301 redirects to the following:

        two.com/old/files/one.html
        two.com/old/files/two.php
        two.com/old/other/three.html
        two.com/old/other/four.doc
        two.com/old/other/subdirectory/five.doc

    I've tried with cPanel, and although I come "close" with the redirects option, I can't seem to make it happen. There are not many folders (10-12), but there are a lot of files, so doing it manually is obviously impossible. How would you proceed? Can/should this be done with a regex in .htaccess? Can you redirect all the contents of a directory in the manner described above? I hope the question is clear enough; if not, please ask for any clarification needed! Thanks in advance!!
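
    A sketch of the .htaccess approach on one.com (assuming mod_rewrite is enabled; one rule covers every path and preserves the structure under /old/):

        RewriteEngine On
        RewriteRule ^(.*)$ http://two.com/old/$1 [R=301,L]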


  • Javascript loading never completes on many sites

    - by Joe
    I recently moved country and have found that on many websites the page never finishes loading. In some cases no content is ever displayed, but the loading never times out. Loading Developer Tools in Chrome shows me that it is the JavaScript files which never load. For example, this BBC article will never load compatability.js, though it will load all the other JS files perfectly. Google Maps often fails to finish loading, meaning it's impossible to make searches. There seems to be no pattern to which files fail to load (i.e. they don't come from the same CDN). I have tried Chrome, Safari and Firefox on OS X 10.8, and Chrome on my girlfriend's OS X 10.7. I have similar issues on the iPad. In many cases, if I go to the mobile version of the page, that seems to load fine. I have run the browsers in private mode, disabled plugins, updated Flash, cleared the cache and flushed the DNS cache, though it would seem that if this is happening on other devices, none of this would work anyway. Is this an ISP issue? And if so, why would it be limited to certain JS files and not all? JS files from the same domain work fine, so I'm not really sure what I should be looking for.
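
    A diagnostic sketch from the command line, to separate the browser from the network (the URL is a placeholder; substitute the stalled script's address from Developer Tools):

        curl -v -o /dev/null --max-time 30 http://static.example.com/js/compatability.js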


  • Limit access on Apache 2.4 to ldap group

    - by jakobbg
    I've upgraded from Ubuntu 12.04 LTS to 14.04 LTS, and suddenly my Apache 2.4 (previously Apache 2.2) lets everybody into my virtual host, which is unfortunate :-). What am I doing wrong? Anything with the Order/Allow lines? Any help is greatly appreciated! Here's my current config:

        <VirtualHost *:443>
            DavLockDB /etc/apache2/var/DavLock
            ServerAdmin [email protected]
            ServerName foo.mydomain.com
            DocumentRoot /srv/www/foo
            Include ssl-vhosts.conf
            <Directory /srv/www/foo>
                Order allow,deny
                Allow from all
                Dav On
                Options FollowSymLinks Indexes
                AllowOverride None
                AuthBasicProvider ldap
                AuthType Basic
                AuthName "Domain foo"
                AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
                AuthLDAPBindDN "cn=searchUser, dc=mydomain, dc=com"
                AuthLDAPBindPassword "ThisIsThePwd"
                require ldap-group cn=users,dc=mydomain,dc=com
                <FilesMatch '^\.[Dd][Ss]_[Ss]'>
                    Order allow,deny
                    Deny from all
                </FilesMatch>
                <FilesMatch '\.[Dd][Bb]'>
                    Order allow,deny
                    Deny from all
                </FilesMatch>
            </Directory>
            ErrorLog /var/log/apache2/error-foo.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access-foo.log combined
        </VirtualHost>
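
    A hedged sketch of the likely fix: Apache 2.4 replaced the 2.2 Order/Allow access model, and with mod_access_compat loaded, "Order allow,deny" plus "Allow from all" can satisfy access on its own, bypassing the LDAP requirement. Dropping the 2.2-era lines and relying on Require alone is the 2.4 idiom:

        <Directory /srv/www/foo>
            # removed: Order allow,deny
            # removed: Allow from all
            AuthBasicProvider ldap
            AuthType Basic
            AuthName "Domain foo"
            Require ldap-group cn=users,dc=mydomain,dc=com
        </Directory>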


  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

        pfiles `fuser -c / 2>/dev/null`

    indicates one mysqld process having 485 file/device descriptors (almost all for files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue constantly floods the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.
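
    A diagnostic sketch, assuming the Solaris proc tools are available (pfiles already is): check the limit actually in force inside the running process, rather than the shell's ulimit, and count its open regular files.

        plimit `pgrep mysqld`                      # per-process nofile limit in effect
        pfiles `pgrep mysqld` | grep -c S_IFREG    # rough count of open regular files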


  • Minimal backup for Windows 7 system recovery

    - by JIm
    There might not be an answer to this, but for a home Win7 system, what files/directories must be backed up to recover after a Windows crash? I can reinstall software, and I keep data files elsewhere. When I use Acronis home backup software to back up my "critical" files, it seems to choose the entire partition; the updates it finds are mostly browser cache files and the like. Or, after a crash, should I just reinstall Windows? I dread the hours of Windows updates that would require. Thanks.


  • Remote I/O costs with a Content Delivery Network

    - by x711Li
    As far as I know, the time complexity of scanning a directory is correlated with the number of files in that directory, due to I/O costs. Would the administrative cost of placing the files in a hashed directory tree for uploading/downloading files through a CDN API be worth the added efficiency? For instance, given a filename foo.mp3, the MD5 hash for this is 10ebb1120767e9de166e0f5905077cb1. Thus, storing foo.mp3 in ./10/eb/foo.mp3 would allow for fewer files per directory (since MD5 produces hexadecimal digests, this allows for 16^2 = 256 root directories with 256 subdirectories each, and little chance of hash collision). Considering the directories themselves are not loaded, would the I/O cost of directory scanning still exist with direct uploading/downloading?
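
    A sketch of the bucketed layout in shell (hashing the file name for illustration, matching the example digest above; whether to hash the name or the content is a design choice):

        f=foo.mp3
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)          # first 4 hex chars, e.g. "10eb"
        mkdir -p "./${h:0:2}/${h:2:2}"                      # ./10/eb
        cp "$f" "./${h:0:2}/${h:2:2}/$f"                    # ./10/eb/foo.mp3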


  • How to redirect a name-based VirtualHost to a different port?

    - by Andra
    I have a Virtuoso SPARQL endpoint installed, which I want to make available through a hostname (e.g. www.virtuosoexample.com). The thing with Virtuoso is that there is no document root: the endpoint is started by the daemon and made available on a port (e.g. localhost:1234/). I know how to set up a virtual host pointing to a document root, but I don't know how to do this for a server on a port number. Any advice would be appreciated. Below is how I would do it with a document root; I tried to change that (naively) into localhost:1234/sparql, but that didn't work.

        <VirtualHost *>
            ServerName www.virtuosoexample.com
            ServerAlias www.virtuosoexample.com
            ErrorLog /var/log/apache2/error.wp-sparql.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.wp-sparql.log combined
            DocumentRoot /var/www/endpoint/sparql/
            <Directory /var/www/endpoint/sparql>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
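
    A sketch of the reverse-proxy approach (assuming mod_proxy and mod_proxy_http are enabled, e.g. via a2enmod):

        <VirtualHost *:80>
            ServerName www.virtuosoexample.com
            ProxyPass        / http://localhost:1234/
            ProxyPassReverse / http://localhost:1234/
        </VirtualHost>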


  • backing up ntfs disk using rsync on ubuntu

    - by user70366
    For a long time I was using Windows. I have a separate drive I use to keep copies of my media files, photos, etc., which I periodically back up to an external drive; in Windows I used SyncToy to do this. After my Windows stopped booting, I decided to switch to Linux (Ubuntu 10.10). That seems to be going fine, but now I want to back up my drive to the external drive like before. Mostly the two drives will already be the same, with maybe about 10 GB of extra files added. So I try to use rsync to synchronise the two drives like this:

        rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup /media/FREECOM\ HDD/Backup

    The problem is that the dry run indicates that every file on the drive will be copied, not just the files I have recently added. What is the correct command to sync two NTFS drives under Ubuntu so that files which already exist don't get copied again? Thanks.
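
    A hedged guess, sketched below: without a trailing slash, rsync copies the source directory itself into the destination, so this command compares against /media/FREECOM HDD/Backup/Backup (probably empty) and therefore wants to copy everything. With a trailing slash the contents are compared directly:

        rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup/ /media/FREECOM\ HDD/Backup/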


  • How to copy Windows Explorer file selections and paste filenames as text

    - by Ricky Williams
    Win7. How can I select multiple files in Explorer and get a string of text listing the file names, like I would get if I selected the files from within the "Open file" window of an application? For example, if I have a dir that contains 100 JPEGs and I select 01.jpg, 05.jpg and 10.jpg in Windows Explorer via Ctrl+click or Shift+click, how can I get a text string that reads "01.jpg" "05.jpg" "10.jpg" (with or without the selected files' full paths) without dragging the selected files to another window? I've tried copying and pasting into Notepad and WordPad, and it either pastes the pictures into the application or an empty clipboard.
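
    One sketch: hold Shift while right-clicking the selection and choose "Copy as path", which puts the quoted full paths on the clipboard as text. Alternatively, after a normal Ctrl+C in Explorer, PowerShell can read the file-drop list (Get-Clipboard needs PowerShell 5+, which a stock Win7 box may lack):

        Get-Clipboard -Format FileDropList | ForEach-Object { $_.Name }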


  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777, but I am not comfortable with it being this way. So at the advice of a serverfault specialist, I had my hosting provider install ACL on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the first octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACL so I don't have to set the permission to 6 (rw-) or even 7 (rwx)? [not sure what execute does or means] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
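
    A sketch with POSIX ACLs (the path is a placeholder; this grants the PHP user write access to one upload directory instead of opening the last octet everywhere):

        setfacl -R -m u:nobody:rwx /home/abc/public_html/uploads       # existing tree
        setfacl -R -d -m u:nobody:rwx /home/abc/public_html/uploads    # default ACL for new files
        getfacl /home/abc/public_html/uploads                          # verify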


  • Cygwin file and directory user and group

    - by dvanaria
    I use Cygwin as my main development environment on both my home and work computers. In order to share files between the two computers, I use Dropbox, which is installed in the following folder on both computers:

        c:\cygwin\home\dvanaria\dropbox

    Everything works great, except for one thing. When I'm working on my home computer and do an ls -l on any directory, all the files show up as owned by dvanaria, of group Users. But when I work from my work computer, an ls -l shows all files as being owned by Administrators, of group Domain Users. I know Cygwin uses some kind of mapping between Windows users and permissions, via the /etc/passwd file. But to be honest, I have no idea how this file works or how it maps to Windows under Cygwin. Could anyone help me figure this out? The main problem is that I can't edit any files when using my work computer, only read them.
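
    A sketch of regenerating the mapping files on the work machine (Cygwin 1.7-era tools; -l covers local accounts and -d domain accounts, which can be slow on a large domain):

        mkpasswd -l -d > /etc/passwd
        mkgroup  -l -d > /etc/group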


  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Then, later, I take a second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I try to compare with Compare-Object, like so:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a, but in others, files that were in both snapshots; and some files that were added to $b are not given in the Compare-Object output at all. Puzzlingly, $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -property param. I try to use that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    And $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is something like this:

        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and manuals are scarce... Please help me get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings, like this:

        import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
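
    A hedged sketch of a commonly suggested workaround: Compare-Object's -SyncWindow parameter bounds how far apart it looks for matching objects, so sorting both sides on the compared value first is the safe case and usually stabilizes the result:

        $a = Import-Clixml dir-20100129.xml | ForEach-Object { $_.FullName } | Sort-Object
        $b = Import-Clixml dir-20100130.xml | ForEach-Object { $_.FullName } | Sort-Object
        Compare-Object -ReferenceObject $a -DifferenceObject $b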


  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have 2 Ethernet cards on the same host, connected directly with a LAN cable. I set eth0 with IP 192.168.1.2 and eth1 with IP 192.168.1.1, and I set this rule:

        iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0

    There are no other rules (I ran iptables -X and -F). I send a TCP SYN packet (with a C++ program using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule above doesn't apply to this packet. When I send the packet to a remote host and apply this rule on the remote host, it works correctly. So I guess that this is due to the fact that both Ethernet cards are in the same host. I need to create an iptables INPUT rule for a local Ethernet card (dest and src on the same machine); I need it to simplify a test. Did I guess the problem correctly? Is there a way to bypass this? PS: connecting them via a switch didn't help; the rule wasn't applied. Running on Ubuntu. tcpdump shows the packet:

        10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0

    but logging with iptables like this shows nothing:

        iptables -A INPUT -p tcp -j LOG --log-prefix '*****************'
        iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
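
    A diagnostic sketch consistent with that guess: the Linux kernel routes local-to-local traffic through the loopback interface regardless of which NICs own the addresses, so the copy seen on the eth0 wire never traverses this host's INPUT path. Watching lo should show where the packets actually flow:

        sudo tcpdump -i lo 'tcp and host 192.168.1.1'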


  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run via a cron job daily to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may output. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running standard LAMP.

        #!/bin/sh

        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
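
    A sketch of the logging and fail-fast pieces to put at the top of the script (the ERR trap is bash-specific, hence the shebang change; the log path and mail address are placeholders, and a mail command is assumed to be configured):

        #!/bin/bash
        set -e                                      # abort on the first failing command
        exec >> /var/log/site-backup.log 2>&1       # append all stdout/stderr to one log
        trap 'echo "backup FAILED at $(date)" | mail -s "backup FAILED" admin@example.com' ERR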


  • Can "tar" backup incrementally?

    - by Somebody still uses you MS-DOS
    I have my home folder with a few GB in it. Is it possible to run tar on it, create a home.tar.gz, and then, for changed files, have it create home1.tar.gz containing only the files modified since the previous tar (thus an incremental backup)? I would also like to check the resulting checksum files and export them as well, e.g. home.md5, home1.md5, etc. (I know this could be a separate process, but it's interesting as well.)
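
    A sketch with GNU tar's snapshot mechanism (-g is --listed-incremental), plus md5sum for the checksums:

        tar -czg home.snar -f home.tar.gz  ~/    # level 0: full backup, state recorded in home.snar
        tar -czg home.snar -f home1.tar.gz ~/    # later: only files changed since the snapshot
        md5sum home.tar.gz  > home.md5
        md5sum home1.tar.gz > home1.md5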


  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running the Outlook 2010 Beta, and I've just moved to the trial version, hoping to solve the issue. I have tons of PST files, but one central one I'm mainly concerned with. As of now it seems none of it is indexing. I've been through all the sites and made all the changes: rebuilt the index, changed the name of the PST files, run scanpst, stopped and started the search services, made sure the indexing option is checked under Windows Features in Programs and Features, etc. Status now says 'zero items left to index', and 150,000 have been indexed. I think I have a lot more files than that, and also nothing shows up in any search. I'm not sure what else to do. Side question: I'm going to be moving away from Outlook, but I have 10+ GB of PST files accumulated over the years. I want to merge them and make them searchable in the easiest way possible. Any idea how to do that? Could I even move over to Thunderbird right now and be able to index and search my PST files? Also, Google Desktop won't index Outlook 2010 email either...


  • Accidental Extract Location - How to Clean Up?

    - by Gordon
    Sometimes I will run a command such as

        unzip tons_of_files.zip

    and forget to pass -d to point to a subdirectory. This causes the current folder to get filled with tons of files that are intermixed with the existing files. What is the best way to remove all these new files and/or move them to a new directory? I want to avoid having to manually examine the directory and determine whether each file was part of the archive or was already present.
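
    A cleanup sketch (GNU userland assumed; unzip -Z1 is zipinfo mode, listing one archive entry per line, and file names must not contain newlines):

        # remove exactly the files the archive listed
        unzip -Z1 tons_of_files.zip | xargs -d '\n' rm -f
        # then prune directories the archive created, deepest first, if now empty
        unzip -Z1 tons_of_files.zip | sort -r | xargs -d '\n' rmdir --ignore-fail-on-non-empty 2>/dev/null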

