  • backup software that ignores user rights

    - by Chris
    Hi. As a computer technician I have to reinstall systems almost daily (when they can't be repaired ;-)). My problem is that I recover user files by hand with an external mounting device. Most of the time this works fine, but weekly I get systems with passwords and personal files that are often not successfully recovered. I know you can change the owner, but when people have 30 GB of data, my backup computer takes ages to change the rights. Can anyone think of software (commercial is no problem) which does the following:
    * backs up user data without user-rights trouble
    * has an option to choose what to back up (email accounts, documents, etc.), even when the drive is externally mounted (in short, it recognizes the folder structure)
    * works on different OSes like XP, Vista, W7
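
    One workaround while looking for a dedicated product (a sketch, not a recommendation; all paths are placeholders) is robocopy's backup mode, which uses the backup privilege to read files regardless of their ACLs when run from an elevated prompt:

        robocopy E:\Users D:\Recovered\Users /E /B /XJ /COPY:DAT /R:1 /W:1

    /B reads in backup mode, /XJ skips junction points, and /COPY:DAT copies data, attributes and timestamps but not the security info, so the recovered files never carry the problem ACLs in the first place.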

  • PHP + IIS Application Pool Identity Windows\Temp permissions

    - by Matt Boothman
    I am currently running PHP (5.3) on IIS 7.5 on a Windows Server 2008 R2 Web Edition server and would like to know what problems or security vulnerabilities, if any, I may introduce into the system by assigning Read, Write, Modify and Execute permissions to either the IUSR account or the IIS_IUSRS group for %SystemRoot%\Temp. Should I be altering permissions on that folder at all (Windows reminds me I probably shouldn't when I attempt to change them)? Should I create a temp folder somewhere else and set permissions accordingly? The problem is that when I set Anonymous Authentication (which I'm guessing is the more secure option?) to use the app pool identity, PHP gets stuck in a loop when starting sessions, because it is unable to create session files in the %SystemRoot%\Temp folder due to the lack of permissions for the application pool user or IIS_IUSRS group. Another problem is that ImageMagick (PHP extension) is denied access to %SystemRoot%\Temp to write temporary files, so it throws exceptions. I have searched Google but have not found anything that touches on this subject specifically. Any help greatly appreciated.
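
    If you'd rather not touch %SystemRoot%\Temp at all, a common alternative (a sketch; the pool name and path are assumptions) is to point PHP at a dedicated temp folder and grant rights only there:

        ; php.ini
        session.save_path = "C:\inetpub\temp\php"
        upload_tmp_dir = "C:\inetpub\temp\php"

        rem elevated command prompt: give the app pool identity modify rights
        icacls "C:\inetpub\temp\php" /grant "IIS AppPool\MyAppPool:(OI)(CI)M"

    ImageMagick can be pointed at the same folder via the MAGICK_TMPDIR environment variable, which keeps %SystemRoot%\Temp locked down as Windows prefers.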

  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. Due to a tight budget, we use for the contest what one of the sponsors provides: a Magic Sing ET-23H. The video output of the ET-23H is broadcast on two big screens shown to the audience and event attendees. When a contestant provides a karaoke video, the video sits on a USB flash drive attached to the USB input of the Magic Sing. What really bugs me is that the device's file-selection interface is also broadcast on the big-screen video feeds, in full view of the audience and event-goers. I always bring my laptop, an Acer AS5742-7653, to the event; I use it for tallying the judges' scores and for playing audio files from contestants who did not provide a karaoke video. I personally use different Linux distros, but during the event I almost always run my Ubuntu Studio 12.04.3 64-bit partition. My question is this: is there a way to play a temporary video/audio file directly from the laptop through the Magic Sing ET-23H so that it broadcasts both the video and the audio? Something like how Windows handles AviSynth AVS files, or VirtualDub's temporary AVI files, or using ffplay (from ffmpeg), etc. I have researched the matter somewhat and found related links on SuperUser.com. I have a hunch it is possible, but I have not fully understood whether the device has any way to broadcast video and audio files besides its USB input. Any help with my current predicament is highly appreciated. Thank you. PS: Since I need at least 10 reputation to post more than 2 links and to post images, I will try to post the picture of the ET-23H's USB input and the links in the comments, if my sub-10 reputation allows it.

  • Solaris 11.1 smb share pam.conf

    - by websta
    I would like to enable an SMB share on Solaris 11.1 x64. My steps:
    1. pkg install service/filesystem/smb
    2. svcadm enable -r smb/server
    3. echo "other password required pam_smb_passwd.so.1 nowarn" >> /etc/pam.conf
    4. useradd public
    5. smbadm enable-user public
    6. zfs set share=name=fs1,path=/rpool/fs1,prot=smb rpool/fs1
    7. zfs set sharesmb=on rpool/fs1
    8. passwd -r files public
    Step 8 fails: it is not possible to enter a password. The output is:
    solaris passwd -r files public
    Please try again
    Please try again
    Permission denied
    If I comment out the new line in pam.conf, it is possible to change the password. Nevertheless, it is then not possible to access the share from Windows 7. The Solaris machine is reachable with ping. Access with another SMB-enabled user is denied too.
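
    One thing worth checking (an assumption, not a verified fix): Solaris 11.1 moved PAM configuration to per-service files under /etc/pam.d/, so a stack appended to /etc/pam.conf may not be evaluated the way it was on 11.0 and earlier. The equivalent per-service entry would be:

        # /etc/pam.d/other  (the service-name column is dropped in pam.d files)
        password required pam_smb_passwd.so.1 nowarn

    If passwd -r files then succeeds, the SMB password should also get generated, which would explain the Windows 7 access denials.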

  • How to remove the $DATA stream from a file in Windows 8

    - by chris.w.mclean
    For a while now, Windows has added an additional hidden stream to files downloaded from the internet. If you attempt to use these files, you get all kinds of odd behavior, as Windows detects the additional stream and denies the app/exe various security clearances. In previous versions of Windows you could right-click a file, go to Properties, then click 'Unblock', which removed the extra stream. Windows 8 still does the additional-streams trick, but I haven't yet found a way to remove them using the Windows 8 UI. Anyone know how to do this?
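
    On Windows 8 the stream in question is named Zone.Identifier, and PowerShell 3.0 (built into Windows 8) can strip it without touching the Properties dialog; a quick sketch:

        # unblock a single file
        Unblock-File -Path .\setup.exe
        # or delete the alternate data stream explicitly
        Remove-Item -Path .\setup.exe -Stream Zone.Identifier
        # unblock everything under a folder
        Get-ChildItem -Path .\Downloads -Recurse | Unblock-File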

  • How to disable Windows File Protection in Windows XP or 7 from Registry?

    - by SEARAS
    How do I disable Windows File Protection in Windows 7 and/or XP from the registry? I want to automatically replace a driver with one I created. I used the PendingFileRenameOperations key in HKLM\System\CurrentControlSet\Control\Session Manager, but I've found that it can ONLY be used for simple (non-system) files, because Windows File Protection blocks it for system files (see this post). Now I need to temporarily disable WFP (and turn it back on after changing the driver). If you know another way to disable it, that would help me too. Thanks in advance! Any ideas?
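
    If the end goal is just a one-off driver swap on Windows 7 (where WFP's successor, Windows Resource Protection, guards the files), taking ownership from an elevated prompt is often enough, with no registry work; a sketch with a placeholder driver name:

        takeown /f C:\Windows\System32\drivers\example.sys
        icacls C:\Windows\System32\drivers\example.sys /grant Administrators:F
        copy /y mydriver.sys C:\Windows\System32\drivers\example.sys
        rem caveat: WRP may later restore the original from the WinSxS component store

    On XP, WFP will likewise restore the file from %SystemRoot%\system32\dllcache unless the copy there is replaced as well.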

  • What is a good tool to scan a FTP directory and show disk usage visually ala KDirStat/WinDirStat?

    - by Wesley 'Nonapeptide'
    Is there a tool that can scan an FTP directory and build a visual representation of disk usage? I'm running on Windows, so that platform is my preference for this tool, but a *NIX tool would also be useful. I'm thinking along the lines of WinDirStat, KDirStat and TreeSize. At first I thought WinDirStat might be able to scan an FTP directory, but it cannot. FOSS is a plus but not a requirement (I'm not against paying for good software). I'd also like a simple report on how many files of which types are present, the largest files, etc., much like the simple file-type reporting in *DirStat.
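
    On the *NIX side, one low-effort approach (assuming you can install packages and reach the server) is to mount the FTP tree as a filesystem and point an ordinary disk-usage tool at it:

        curlftpfs ftp://user:password@ftp.example.com /mnt/ftp
        ncdu /mnt/ftp        # interactive ncurses du, with per-directory totals
        fusermount -u /mnt/ftp

    It is slow on large trees, since every size has to be fetched over FTP, but it gives the KDirStat-style drill-down without any FTP support in the tool itself.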

  • WordPress Automatic Updating/Installing Plugins Permissions

    - by karmic
    I am using the latest WordPress and I have always had issues with the automatic updater. For the files in the WordPress directory, I set permissions to 770 and add the web server user 'www-data' as the group owner. I use lighttpd. However, automatically updating or installing plugins does not work. It works if I chmod the files to 777, or if I make the web server the actual owner as well. What are the best permission settings for security that still allow the updating feature to work properly in WordPress? Also, by 'not work' I mean it goes to the screen that asks me for FTP credentials when I try to update.
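
    A middle ground that usually keeps the updater happy without 777 (a sketch, assuming your shell user is karmic, lighttpd runs as www-data, and WordPress lives in /var/www/wordpress): keep yourself as owner, give the web server group write access, and tell WordPress to write directly instead of asking for FTP credentials:

        chown -R karmic:www-data /var/www/wordpress
        find /var/www/wordpress -type d -exec chmod 775 {} \;
        find /var/www/wordpress -type f -exec chmod 664 {} \;
        # wp-config.php: skip the FTP-credentials screen when direct writes succeed
        #   define('FS_METHOD', 'direct');

    The FTP screen appears exactly when WordPress decides it cannot write the files itself, so forcing FS_METHOD only makes sense once the group permissions above are in place.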

  • Weird problem, special characters after formatting the hard drive, computer not starting 0_o

    - by m3taspl0it
    Hi friends. Last night my computer was working fine, but today, when I came back from college and started it, it would boot and then restart, again and again, at different points each time. I tried to boot into safe mode: same problem. After all this I finally decided to format drive C (80 GB) and load a fresh OS, Windows XP SP3. After formatting and copying the initial setup files, when the machine rebooted to copy the actual OS files, it hung and a weird screen appeared. I've attached a pic of the error: http://www.postimage.org/image.php?v=gxz1vKS My specs: P4 3.0 GHz; 2 GB RAM (2x 512 MB and 1 GB); 3 hard drives: 80 GB (around 5 years old), 320 GB (around 2 years old), 500 GB (recently bought); 256 MB graphics card. Any help is much appreciated, thanks.

  • Code deploy system [closed]

    - by Turnaev Evgeny
    Currently we deploy code to servers in various ways:
    * FreeBSD packages
    * FreeBSD ports
    * some config files and static assets are simply svn up'ed, and a symlink is switched to the newly updated folder
    The distribution of FreeBSD packages to target servers is done through a custom tool that uses ssh. I am looking for a code deploy system that will allow:
    * deploying several packages (FreeBSD or Linux) atomically (either all of them deploy to a server, or none)
    * keeping a history of the last stable versions, so in case of a bad deploy I can easily roll all servers back to the last working version
    * easy deployment of config and static files, integrated into the atomic deploy/rollback system
    * working with FreeBSD or Linux (apt-get systems)
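
    Before reaching for a full framework, the atomicity and rollback requirements alone can be covered by the release-directory pattern the third method already hints at; a minimal sketch (paths assumed; GNU coreutils on Linux, use mv -h instead of mv -T on FreeBSD):

        # unpack each release into its own directory, then flip one symlink
        DEST=/srv/app/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$DEST" && tar -xzf release.tar.gz -C "$DEST"
        ln -s "$DEST" /srv/app/current.new
        mv -T /srv/app/current.new /srv/app/current   # rename(2) is atomic
        # rollback: point the symlink back at any kept release
        ln -s /srv/app/releases/20130101120000 /srv/app/current.new
        mv -T /srv/app/current.new /srv/app/current

    Config and static files can ride along inside each release directory, which gives integrated rollback for free; the multi-server "all or none" part still needs an orchestrator (your ssh tool, or something like Fabric or Ansible) that flips the symlink everywhere only after every upload succeeded.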

  • Concurrent modification during backup: rsync vs dump vs tar vs ?

    - by pehrs
    I have a Linux log server where multiple applications write data. Data is written in bursts, and to a lot of different files. I need to make a backup of this mess, preferably preserving as much coherence between the file versions as possible and avoiding truncated files. The total amount of data on the server is about 100 GB. What I would really want (but can't have) is to shut down, back up the system cold, and then start it up again. What kind of guarantees against concurrent modification do the various backup tools give? When do they "freeze" the file versions? I am looking at rsync, dump and tar at the moment, but I am open to other (open source) alternatives. Changing the application or blocking writes during backups is sadly not an option. The system is not running LVM (yet), but I have considered that for rebuilding the system and then using snapshots.
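
    For reference, once the box is on LVM the snapshot route answers the "freeze" question exactly: every file version is frozen at the instant the snapshot is taken, and the backup tool reads the frozen view while the applications keep writing to the live volume. A sketch (volume group and names assumed):

        lvcreate --snapshot --size 5G --name logsnap /dev/vg0/logs
        mount -o ro /dev/vg0/logsnap /mnt/snap
        rsync -a /mnt/snap/ backuphost:/backups/logserver/
        umount /mnt/snap && lvremove -f /dev/vg0/logsnap

    Files being appended at the moment of the lvcreate can still be caught mid-write, but you get a single crash-consistent point in time instead of the minutes-wide window that rsync, dump or tar sweep across on their own.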

  • SQL Server 2008 - Error starting service - model.mdf not found?!

    - by alex
    My SQL Server 2008 was running fine. About an hour ago it suddenly stopped: the MSSQLSERVER service had stopped. I right-clicked it, clicked Start, and it said the service had started and then stopped. I looked in the event log and saw these two errors:
    17207: udopen: Operating system error 3 (error not found) during the creation/opening of physical device C:\Program Files\Microsoft SQL Server\MSSQL\data\model.mdf.
    17204: FCB::Open failed: Could not open device C:\Program Files\Microsoft SQL Server\MSSQL\data\model.mdf for virtual device number (VDN) 1.
    The model.mdf database has NEVER been in that location; I specified drive F: for data/log during install. I checked SQL Server Configuration Manager to try to set startup parameters, but SQL Server is not listed as one of the services.
    EDIT: I've now moved the database to where it is looking, the C:\Program Files\Microsoft SQL Server\MSSQL\data\ directory. If I start the service it still does not work; I get this error in the log:
    Could not find row in sysindexes for database ID 3, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
    Interestingly, around the time users reported problems, the error log contains:
    2010-01-08 17:11:26.44 spid51 Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
    2010-01-08 17:11:26.44 spid51 FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.
    2010-01-08 17:11:26.44 spid51 Configuration option 'Agent XPs' changed from 1 to 0. Run the RECONFIGURE statement to install.
    2010-01-08 17:11:26.44 spid51 FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.
    2010-01-08 17:11:26.44 spid51 Configuration option 'show advanced options' changed from 1 to 0. Run the RECONFIGURE statement to install.
    2010-01-08 17:11:26.44 spid51 FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.
    2010-01-08 17:11:44.89 spid10s Service Broker manager has shut down.
    2010-01-08 17:11:47.83 spid7s SQL Server is terminating in response to a 'stop' request from Service Control Manager. This is an informational message only. No user action is required.
    2010-01-08 17:11:47.83 spid7s SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
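
    For the next reader: when an instance will not start because model is missing, one documented escape hatch is to start SQL Server with minimal configuration plus trace flag 3608 (recover master only) and then point model back at its real files; a sketch, with the F: paths assumed from the original install:

        net start MSSQLSERVER /f /T3608
        sqlcmd -E -S . -Q "ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'F:\MSSQL\Data\model.mdf')"
        sqlcmd -E -S . -Q "ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'F:\MSSQL\Data\modellog.ldf')"
        net stop MSSQLSERVER && net start MSSQLSERVER

    The later sysindexes error suggests the copied model may itself be damaged, in which case restoring model.mdf/modellog.ldf from a backup, or from another SQL Server 2008 instance at the same build, is the usual fallback.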

  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a Drupal 7 site with the following setup: a VMware ESXi 4.1 host running a web VM and an NFS VM. The web VM uses Apache and mod_php. The site is still in development, so we have to turn off all forms of caching because of the frequently updated files. Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of the time (normally over 90%) is taken by all the is_dir() and is_file() calls that load the modules. I've increased PHP's realpath cache size to several megs, and an strace shows that the lstat calls then drop from over 200 to around 6, and stat() decreases a bit (around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10-seconds-per-request barrier. Is there a way to get better performance out of this setup that doesn't involve caching? Configs and stats:
    VMs:
    web - CentOS 6 64-bit, 2.5 GB RAM, normal CPU/HD prioritisation
    nfs - CentOS 6 64-bit, 2 GB RAM, normal CPU priority, high HD priority
    PHP: 32M realpath cache size (it's this high for testing purposes)
    NFS:
    ~]# egrep -v '#|^$' /etc/nfsmount.conf
    [ NFSMount_Global_Options ]
    Defaultvers=4
    Ac=False
    Rsize=32k
    Wsize=32k
    Bsize=32k
    Reading speeds via NFS are not an issue; a dd of a 100M test file using 32k blocks returns:
    3200+0 records in
    3200+0 records out
    104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s
    real 0m1.857s
    user 0m0.007s
    sys 0m0.330s
    Strace on the Apache process with an empty realpath cache:
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     50.78    1.157452         337      3434        28 stat
     32.58    0.742656         628      1182       425 open
      9.29    0.211788         762       278         1 lstat
      3.17    0.072322           0    237865           write
      2.45    0.055839         490       114        13 access
      0.45    0.010262          43       237           brk
      0.34    0.007725          10       811        74 read
      0.28    0.006340           9       679           fstat
      0.22    0.005069          18       281           poll
      0.20    0.004533           6       698           getdents
      0.09    0.001960          10       190           mmap
      0.05    0.001065          14        74           accept4
      0.04    0.001000         333         3           chdir
      0.03    0.000750           4       190           munmap
      0.01    0.000339           0       836           close
      0.01    0.000247           3        75           writev
      0.00    0.000068           0       611           fcntl
      0.00    0.000063           1        77           shutdown
      0.00    0.000000           0         1           lseek
      0.00    0.000000           0         5           rt_sigaction
      0.00    0.000000           0         1           rt_sigprocmask
      0.00    0.000000           0         3           setitimer
      0.00    0.000000           0         5           socket
      0.00    0.000000           0         5         5 connect
      0.00    0.000000           0        74           getsockname
      0.00    0.000000           0        15           setsockopt
      0.00    0.000000           0         5           getcwd
      0.00    0.000000           0         1           futex
    ------ ----------- ----------- --------- --------- ----------------
    Strace after the realpaths are cached:
    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------------
     60.14    1.371006         484      2831        28 stat
     31.79    0.724705         627      1155       425 open
      3.53    0.080354           0    237865           write
      2.65    0.060433         530       114        13 access
      0.43    0.009913          99       100           brk
      0.38    0.008730          11       804        74 read
      0.35    0.007910          12       675           fstat
      0.30    0.006775          10       654           getdents
      0.13    0.003065          11       281           poll
      0.09    0.002000         333         6         1 lstat
      0.07    0.001545           2       807           close
      0.05    0.001063          14        74           accept4
      0.04    0.001000           6       179           mmap
      0.02    0.000404           2       179           munmap
      0.01    0.000271           4        75           writev
      0.01    0.000212           0       611           fcntl
      0.01    0.000129           2        77           shutdown
      0.00    0.000022           0        74           getsockname
      0.00    0.000000           0         1           lseek
      0.00    0.000000           0         5           rt_sigaction
      0.00    0.000000           0         1           rt_sigprocmask
      0.00    0.000000           0         3           setitimer
      0.00    0.000000           0         3           socket
      0.00    0.000000           0         3         3 connect
      0.00    0.000000           0        15           setsockopt
      0.00    0.000000           0         5           getcwd
      0.00    0.000000           0         3           chdir
    ------ ----------- ----------- --------- --------- ----------------
    Mount:
    nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs (rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx)
    Any help is, naturally, appreciated.
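
    One detail stands out above: Ac=False (and noac on the mount) disables NFS attribute caching entirely, so every stat()/lstat() in those traces is a synchronous round trip to the NFS VM, and a Drupal module scan does thousands of them per request. If development can tolerate file attributes being up to a minute stale, re-mounting with attribute caching enabled (a sketch, not a tested fix; host and paths assumed) is worth trying before anything else:

        nfsvm:/path/to/website/files /path/to/website/files nfs4 rw,hard,intr,actimeo=60 0 0

    This is metadata caching, not the page or data caching you had to turn off, so freshly uploaded code is still read in full; only the stat results are reused for up to 60 seconds.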

  • Moving a Drupal between linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production web server. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It's important for me to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict web server admin; for the long history of looking for fixes and solutions, see this thread, and, even more interesting, this very long and impressive thread here. All the troubles I run into have to do with file ownership and permissions. This is my current setup. Note: this was just a quick, hacked installation (quick and dirty); my interest is in the general options I have for porting a Drupal site from Linux to Linux.
    linux-vi17:/srv/www/htdocs/com624 # ls -l
    insgesamt 224
    -rwxrwxrwx  1 root www 45285 19. Jan 00:54 CHANGELOG.txt
    -rwxrwxrwx  1 root www   925 19. Jan 00:54 COPYRIGHT.txt
    -rwxrwxrwx  1 root www   206 19. Jan 00:54 cron.php
    drwxrwxrwx  2 root www  4096 19. Jan 00:54 includes
    -rwxrwxrwx  1 root www   923 19. Jan 00:54 index.php
    -rwxrwxrwx  1 root www  1244 19. Jan 00:54 INSTALL.mysql.txt
    -rwxrwxrwx  1 root www  1011 19. Jan 00:54 INSTALL.pgsql.txt
    -rwxrwxrwx  1 root www 47073 19. Jan 00:54 install.php
    -rwxrwxrwx  1 root www 15572 19. Jan 00:54 INSTALL.txt
    -rwxrwxrwx  1 root www 14940 19. Jan 00:54 LICENSE.txt
    -rwxrwxrwx  1 root www  1858 19. Jan 00:54 MAINTAINERS.txt
    drwxrwxrwx  3 root www  4096 19. Jan 00:54 misc
    drwxrwxrwx 35 root www  4096 19. Jan 00:54 modules
    drwxrwxrwx  4 root www  4096 19. Jan 00:54 profiles
    -rwxrwxrwx  1 root www  1470 19. Jan 00:54 robots.txt
    drwxrwxrwx  2 root www  4096 19. Jan 00:54 scripts
    drwxrwxrwx  4 root www  4096 19. Jan 00:54 sites
    drwxrwxrwx  7 root www  4096 19. Jan 00:54 themes
    -rwxrwxrwx  1 root www 26250 19. Jan 00:54 update.php
    -rwxrwxrwx  1 root www  4864 19. Jan 00:54 UPGRADE.txt
    -rwxrwxrwx  1 root www   294 19. Jan 00:54 xmlrpc.php
    linux-vi17:/srv/www/htdocs/com624 #
    Thanks to BetaRide's answer, here is a quick overview of drush's rsync functionality (http://drush.ws/):
    core-rsync: Rsync the Drupal tree to/from another server using ssh.
    Examples:
      drush rsync @dev @stage            Rsync Drupal root from dev to stage (one of which must be local).
      drush rsync ./ @stage:%files/img   Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.
    Arguments:
      source         May be an rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
      destination    May be an rsync path or site alias. See rsync documentation and example.aliases.drushrc.php.
    Options:
      --mode                 The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
      --RSYNC-FLAG           Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation.
      --exclude-conf         Excludes settings.php from being rsynced. Default.
      --include-conf         Allow settings.php to be rsynced.
      --exclude-files        Exclude the files directory.
      --exclude-sites        Exclude all directories in "sites/" except for "sites/all".
      --exclude-other-sites  Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
      --exclude-paths        List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
      --include-paths        List of paths to include, separated by : (Unix-based systems) or ; (Windows).
    Topics:
      docs-aliases   Site aliases overview with examples
    Aliases: rsync
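
    With aliases defined for both machines, the whole move (code, database, then the permissions fix) can be scripted end to end; a sketch, assuming @dev and @prod aliases and the /srv/www/htdocs/com624 docroot shown above:

        drush rsync @dev @prod --exclude-conf
        drush sql-sync @dev @prod
        ssh prod "find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \; && \
            find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \; && \
            chown -R wwwrun:www /srv/www/htdocs/com624/sites/default/files"

    Only sites/default/files needs to be writable by wwwrun; the rest of the tree can stay owned by your own account with world-readable permissions, which sidesteps most of the ownership trouble described above.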

  • How to migrate exchange 2007 (sherweb) to Google Apps?

    - by Yoffe
    I need to migrate our Sherweb.com Exchange 2007 services to a Google Apps account, and I am really not sure about the process. I understand I should start by creating aliases in Google Apps for all email accounts on the Exchange server, but I'm not sure how I'm supposed to tell Exchange that the DNS has changed without losing emails. Second: how can I safely move the up-to-3-GB mailboxes from the Exchange server to the new Google Apps accounts? Must it be with Outlook data files? If so, how do I actually upload the data files into the Google Apps account? And if not, what would be a proper way to do so? Would really appreciate any kind of help.

  • How to combine wildcards and spaces (quotes) in an Windows command?

    - by Jan Fabry
    I want to remove directories of the following format: C:\Program Files\FogBugz\Plugins\cache\[email protected]_NN where NN is a number, so I want to use a wildcard (this is part of a post-build step in Visual Studio). The problem is that I need to combine quotes around the path name (for the space in Program Files) with a wildcard to match the end of the path. I already found out that rd is the remove command that accepts wildcards, but where do I put the quotes? I have tried no ending quote (which works for dir), ...example.com*", ...example.com"*, ...example.com_??", ...cache\"[email protected]*, and ...cache"\[email protected]*, but none of them work. (How many commands to remove a file/directory are there in Windows anyway? And why do they all differ in capabilities?)
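
    Since rd itself does not expand wildcards, the usual workaround is to let for /d do the globbing and hand rd one quoted, already-expanded path at a time (a sketch; the cache-folder pattern is a placeholder for the obfuscated name above). From a command prompt, with %%d instead of %d inside a batch file or Visual Studio post-build step:

        for /d %d in ("C:\Program Files\FogBugz\Plugins\cache\*example.com_*") do rd /s /q "%d"

    The set in parentheses may be quoted as a whole, and re-quoting "%d" in the body keeps the space in Program Files intact.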

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze (2.6.32-5-amd64) machine which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:
    192.168.1.75:/mydir /nfs4mounts/mydir nfs4 soft 0 0
    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.) I examined the server statistics and saw that whenever a file is written, it is "committed" (this also happens with NFSv3):
    root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt
    root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
    Server nfs v4 operations:
    op0-unused   op1-unused   op2-future   access       close        commit
    0         0% 0         0% 0         0% 10        4% 1         0% 1         0%
    root@debianvboxtest:~# echo 'hello' >/mnt/test1056
    root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
    Server nfs v4 operations:
    op0-unused   op1-unused   op2-future   access       close        commit
    0         0% 0         0% 0         0% 11        4% 2         0% 2         0%
    Now in the RFC, I read this: "The COMMIT operation is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk (file data and metadata is flushed to disk or stable storage). COMMIT performs the same operation for a client, flushing any unsynchronized data and metadata on the server to the server's disk or stable storage for the specified file."
    I don't understand why the client commits. I don't think the "echo" shell built-in runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to send a COMMIT upon completion of the echo. Why? I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and had to choose between syncing every file upon close and ignoring fsync altogether. What have I misunderstood?

  • WBAdmin SystemState Problems

    - by TheD
    I recently installed DHCP on my Server 2008 R2 machine, and now Backup Exec and WBAdmin/WSB are having System State issues. After some research I came across a handy tool called VSHADOW, which let me dump all the writer components (and their respective paths) to a text file, and hooray, I think I found the problem:
    * WRITER "Dhcp Jet Writer"
      - WriterId = {be9ac81e-3619-421f-920f-4c6fea9e93ad}
      - InstanceId = {0ed0a8f4-19b0-414d-a3a8-d51d6f4ac8e0}
      - Supports restore events = TRUE
      - Writer restore conditions = VSS_WRE_IF_REPLACE_FAILS
      - Restore method = VSS_RME_RESTORE_AT_REBOOT
      - Requires reboot after restore = TRUE
      - Excluded files:
      - Component "Dhcp Jet Writer:\C:_Windows_system32_dhcp\dhcp"
        - Name: 'dhcp'
        - Logical Path: 'C:_Windows_system32_dhcp'
        - Full Path: '\C:_Windows_system32_dhcp\dhcp'
    The logical path and full path for DHCP look completely wrong. However, I can't find where I would change this path; I assume in the registry, but I've had no luck finding the key!

  • Which Large File System Format to use for USB Flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the 3 OSes all the time and use a 1 TB USB drive to do so. I can't seem to find a format that is compatible across all systems and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I tried exFAT, but that messes up the filesystem when reading on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.

  • How to setup ssh's umask for all type of connections

    - by Unode
    I've been searching for a way to set OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to:
    1. sftp
    2. scp
    3. ssh hostname
    4. ssh hostname program
    The difference between 3 and 4 is that the former starts a shell, which usually reads /etc/profile, while the latter doesn't. In addition, by reading this post I became aware of the -u option present in newer versions of OpenSSH. However, this doesn't work. I must also add that /etc/profile now includes umask 0027. Going point by point:
    1. sftp - Setting -u 0027 in sshd_config, as mentioned here, is not enough. If I don't set this parameter, sftp uses umask 0022 by default. This means that if I have the two files:
    -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
    -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write
    when I use sftp to put them on the destination machine, I actually get:
    -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
    -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write
    However, when I set -u 0027 in sshd_config on the destination machine, I actually get:
    -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
    -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write
    which is not expected, since it should actually be:
    -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
    -rw-r----- 1 user user 0 2011-01-29 02:04 read-write
    Does anyone understand why this happens?
    2. scp - Independently of what is set up for sftp, permissions always follow umask 0022. I currently have no idea how to alter this.
    3. ssh hostname - No problem here, since the shell reads /etc/profile by default, which means umask 0027 in the current setup.
    4. ssh hostname program - Same situation as scp.
    In sum, setting the umask for sftp alters the result, but not as it should; ssh hostname works as expected, reading /etc/profile; and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome.
    EDIT: I would like to avoid patches that require manually compiling OpenSSH. The system is running Ubuntu Server 10.04.01 (lucid) LTS with OpenSSH packages from maverick.
    Answer: As indicated by poige, using pam_umask did the trick. The exact changes were these lines added to /etc/pam.d/sshd:
    # Setting UMASK for all ssh based connections (ssh, sftp, scp)
    session optional pam_umask.so umask=0027
    Also, in order to affect all login shells regardless of whether they source /etc/profile, the same lines were added to /etc/pam.d/login.
    EDIT: After some of the comments I retested this issue. At least on Ubuntu (where I tested), if the user has a different umask set in their shell's init files (.bashrc, .zshrc, ...), the PAM umask is ignored and the user-defined umask is used instead. Changes in /etc/profile didn't affect the outcome unless the user explicitly sourced those changes in the init files. It is unclear at this point whether this behavior happens in all distros.

  • Is there a way to bundle PDF files into a Kindle-friendly file?

    - by Maciej Swic
    I'm downloading PDF approach plates from Navigraph, and I have a folder per airport with files named after their corresponding approach/departure etc. Now I'd like to take such a folder with a bunch of PDF files, automatically generate an index, and combine them into a single .mobi file that I can send to my Kindle. The index can be very simple, consisting of just the file names (without the extension). Tapping an index item should jump to the correct page for that chart. I know there is a host of apps that combine comic-book JPEGs into ebooks, but is there anything that does the above, please?
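
    One command-line route (a sketch, assuming calibre and poppler-utils are installed and one folder per airport) is to merge the PDFs and let calibre build the .mobi:

        cd EGLL
        pdfunite *.pdf EGLL-combined.pdf
        ebook-convert EGLL-combined.pdf EGLL.mobi --title "EGLL charts"

    Be warned that the PDF-to-MOBI reflow often mangles vector chart graphics, so test one airport first; newer Kindles also read the merged PDF directly, which preserves the plates pixel-for-pixel at the cost of the clickable index.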

  • Is there any way to search within OneNote 2007 attachments

    - by jtolle
    I'm starting to use OneNote (2007) more. One thing I'd like to do is take notes on papers I have read: I attach, say, a PDF file, and then type in some notes about it. Sometimes I also copy some key text or figures from the paper, so OneNote is great for this, because all that plus my own notes plus the file itself can be in one place. However, OneNote's search doesn't seem to be able to search within said PDF files. Windows Search finds things, but only in the OneNote cache, not in the actual OneNote .one files. (Presumably that will only work for recently accessed material, and in any case it doesn't take me to my actual notes.) Is there a way to do what I want? If not, does anyone have a suggestion (or link) for how best to use OneNote to store (and later search for!) this kind of content and notes?

  • Apache Rewrite Rules breaking each other?

    - by neezer
    I have this rule:
    RewriteCond %{REQUEST_URI} ^/(manhattan|queens|westchester|new-jersey|bronx|brooklyn)-apartments/.*$
    RewriteCond %{REQUEST_URI} !^/guide/(.*)$
    RewriteRule ^(.*)$ /home/neezer/public-html/domain.com/guide/$1 [L]
    which works great on its own. Essentially, I have a bunch of directories full of files that I want to keep in the /guide folder, but I want them to appear at the web root for SEO reasons. This rule works, but unfortunately the original URLs still work too (with /guide). I want to 301-redirect the URLs containing /guide to those without it, without actually moving the files on the server. I tried adding this rule:
    RewriteCond %{REQUEST_URI} ^/guide/(manhattan|queens|westchester|new-jersey|bronx|brooklyn)-apartments/.*$
    RewriteRule ^guide/(.*)$ http://www.domain.com/$1 [R=301,L]
    ... but that breaks my first rule completely. Any thoughts about what I might be doing wrong? Please let me know if you need anything else from me to help with this issue.
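
    One way out (a sketch against the rules above) is to key the 301 off the original request line rather than the rewritten URI, so the internal rewrite to /guide/... can no longer re-trigger it, and to put the redirect first:

        # external redirect: fires only when the browser itself asked for /guide/...
        RewriteCond %{THE_REQUEST} \s/guide/(manhattan|queens|westchester|new-jersey|bronx|brooklyn)-apartments/ [NC]
        RewriteRule ^/?guide/(.*)$ http://www.domain.com/$1 [R=301,L]

        # internal mapping: serve the content from /guide without exposing it
        RewriteCond %{REQUEST_URI} ^/(manhattan|queens|westchester|new-jersey|bronx|brooklyn)-apartments/.*$
        RewriteCond %{REQUEST_URI} !^/guide/
        RewriteRule ^/?(.*)$ /home/neezer/public-html/domain.com/guide/$1 [L]

    %{THE_REQUEST} holds the raw request line ("GET /guide/... HTTP/1.1") and is not updated by earlier rewrites, which is exactly the property the second rule set was missing.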

  • Archive Outlook mail items into SQL Server

    - by marc_s
    I am looking for (and so far not finding) a solution to archive e-mail items from my Outlook into SQL Server. My PST is getting really, really big, and I'd love to extract my older e-mail into SQL Server in a way that still lets me find mails easily when needed. I would prefer SQL Server as the storage medium since I'm familiar with it and it's rock solid; I don't want a collection of PST files or CHM files or anything like that. Does anyone know of such a solution? I'm a power/home user; I can't afford $5,000 enterprise licenses. I need a sub-$100 solution for private use.
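
    For the do-it-yourself end of the budget, Outlook's COM automation plus a plain table gets surprisingly far; a minimal PowerShell sketch (the MailArchive database and dbo.Mail table are assumptions, created beforehand):

        $ol = New-Object -ComObject Outlook.Application
        $inbox = $ol.GetNamespace("MAPI").GetDefaultFolder(6)   # 6 = olFolderInbox
        $conn = New-Object System.Data.SqlClient.SqlConnection("Server=.;Database=MailArchive;Integrated Security=True")
        $conn.Open()
        foreach ($item in $inbox.Items) {
            if ($item.MessageClass -ne "IPM.Note") { continue }  # skip meeting requests etc.
            $cmd = $conn.CreateCommand()
            $cmd.CommandText = "INSERT INTO dbo.Mail (Subject, Sender, Received, Body) VALUES (@s, @f, @r, @b)"
            [void]$cmd.Parameters.AddWithValue("@s", [string]$item.Subject)
            [void]$cmd.Parameters.AddWithValue("@f", [string]$item.SenderName)
            [void]$cmd.Parameters.AddWithValue("@r", $item.ReceivedTime)
            [void]$cmd.Parameters.AddWithValue("@b", [string]$item.Body)
            [void]$cmd.ExecuteNonQuery()
        }
        $conn.Close()

    Full-text indexing the Body column then covers the "easily find mails" part; attachments would need a varbinary column and Attachment.SaveAsFile, which is where the packaged products earn their keep.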

  • nfs server on cygwin slow

    - by Weltenwanderer
    The setup: we run an instance of cygwin nfsd on a Windows 2008 server (Xeon 3.2 GHz). Several Sun Solaris and SunOS machines access the shares. This is the exports file:
    /disk3 (rw,all_squash)
    /disk2 (rw,all_squash)
    Those paths are soft-linked to the relevant /cygdrive/d/path/to/dir paths. Some of the folders contain up to 10k files. The problem: ls -la on the mounted folder on the Sun boxes takes 2-3 minutes, and general read performance is really bad. cat filename displays the file in slow bursts, and this hurts performance on tasks that access those shared files heavily. Processor load is not the issue: the NFS server idles most of the time, and the cygwin tasks never get over 1% load.
