Search Results

Search found 14413 results on 577 pages for 'vs 2012'.


  • Permissions restoring from Time Machine - Finder copy vs "cp" copy

    - by Ben Challenor
    Note: this question was starting to sprawl, so I rewrote it.

    I have a folder that I'm trying to restore from a Time Machine backup. Using cp -R works fine, but certain folders cannot be restored with either the Time Machine UI or Finder. Other users have reported similar errors, and the cp -R workaround was suggested (e.g. Restoring from Time Machine - Permissions Error). But I wanted to understand:

    1. Why cp -R works when the Finder and the Time Machine UI do not.
    2. Whether I could prevent the errors by changing file permissions before the backup.

    There do indeed seem to be some permissions that Finder works with and some that it does not. I've narrowed the errors down to folders with the user ben (that's me) and the group wheel. Here's a simplified reproduction. I have four folders with the owner/group combinations I've seen so far:

        ben ~/Desktop/test $ ls -lea
        total 16
        drwxr-xr-x   7 ben  staff  238 27 Nov 14:31 .
        drwx------+ 17 ben  staff  578 27 Nov 14:29 ..
         0: group:everyone deny delete
        -rw-r--r--@  1 ben  staff 6148 27 Nov 14:31 .DS_Store
        drwxr-xr-x   3 ben  staff  102 27 Nov 14:30 ben-staff
        drwxr-xr-x   3 ben  wheel  102 27 Nov 14:30 ben-wheel
        drwxr-xr-x   3 root admin  102 27 Nov 14:31 root-admin
        drwxr-xr-x   3 root wheel  102 27 Nov 14:31 root-wheel

    Each contains a single file called file with the same owner/group:

        ben ~/Desktop/test $ cd ben-staff
        ben ~/Desktop/test/ben-staff $ ls -lea
        total 0
        drwxr-xr-x 3 ben staff 102 27 Nov 14:30 .
        drwxr-xr-x 7 ben staff 238 27 Nov 14:31 ..
        -rw-r--r-- 1 ben staff   0 27 Nov 14:30 file

    In the backup, they look like this:

        ben /Volumes/Deimos/Backups.backupdb/Ben’s MacBook Air/Latest/Macintosh HD/Users/ben/Desktop/test $ ls -leA
        total 16
        -rw-r--r--@ 1 ben staff 6148 27 Nov 14:34 .DS_Store
         0: group:everyone deny write,delete,append,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben staff 102 27 Nov 14:51 ben-staff
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 ben wheel 102 27 Nov 14:51 ben-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root admin 102 27 Nov 14:52 root-admin
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown
        drwxr-xr-x@ 3 root wheel 102 27 Nov 14:52 root-wheel
         0: group:everyone deny add_file,delete,add_subdirectory,delete_child,writeattr,writeextattr,chown

    Of these, ben-staff can be restored with Finder without errors. root-wheel and root-admin ask for my password and then restore without errors. But ben-wheel does not prompt for my password, and gives the error:

        The operation can’t be completed because you don’t have permission to access “file”.

    Interestingly, I can restore the file from this folder by dragging it directly to my local drive (instead of dragging its parent folder), but when I do so its permissions are changed to ben/staff. Here are the permissions after the restore for the three folders that worked correctly, and the file from ben-wheel that was changed to ben/staff:

        ben ~/Desktop/test-restore $ ls -leA
        total 16
        -rw-r--r--@ 1 ben  staff 6148 27 Nov 14:46 .DS_Store
        drwxr-xr-x  3 ben  staff  102 27 Nov 14:30 ben-staff
        -rw-r--r--  1 ben  staff    0 27 Nov 14:30 file
        drwxr-xr-x  3 root admin  102 27 Nov 14:31 root-admin
        drwxr-xr-x  3 root wheel  102 27 Nov 14:31 root-wheel

    Can anyone explain this behaviour? Why do Finder and the Time Machine UI break with the ben/wheel permissions? And why does cp -R work (even without sudo)?
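
    A hedged way to test the second question (whether changing permissions before the backup avoids the errors) is to move everything owned by ben/wheel over to ben/staff before the next backup runs. A minimal sketch, assuming staff is an acceptable group for these folders and that ~/Desktop/test is the tree in question:

        # List everything under the tree owned by user ben and group wheel,
        # with ACLs shown (macOS ls -e), before touching anything
        find ~/Desktop/test -user ben -group wheel -exec ls -led {} +

        # Switch those items to the staff group (ben is a member of staff,
        # so no sudo should be needed), then let Time Machine back up again
        find ~/Desktop/test -user ben -group wheel -exec chgrp staff {} +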

    Read the article

  • Netbook performance - 1.33 GHz vs 1.6/1.66 GHz Atom

    - by Imran
    All new 11" netbooks seem to carry 1.33 GHz Atom Z520 CPU instead of 1.6/1.66 GHz Atom N270/N280. The screen resolution of 11" netbooks make them very appealing, but I'm a bit concerned about their performance as they carry a slower CPU than the 1.6GHz Atom, which isn't a great performer in the first place. Is there any significant difference in performance between 1.33 GHz and 1.6/1.66 GHz Atom processors in day to day usage? Are any of those fast enough to decode 720p x264 video? (When paired with typical Intel GMA platform and software decoder like ffdshow/CoreAVC of course, not with Nvidia Ion platform)

    Read the article

  • Linux Mint vs Kubuntu

    - by Hannes de Jager
    I'm currently running Kubuntu Karmic Koala and am eager to upgrade to 10.04 at the end of the month. But I've also spotted Linux Mint and heard a couple of good things about it. It looks snazzy, but I was wondering how it compares to Ubuntu/Kubuntu. For those who have run both, can you provide some pros and cons?

    Read the article

  • surgemail vs Exchange

    - by Gaz
    At work we are running Surgemail. The desktop mail client is Outlook, which downloads mail over POP3, so email is stored in PST files on users' desktops. Looking at the features of Surgemail compared to Exchange 2007, can anyone provide a convincing argument to change? The argument must be user related or disaster recovery related; it cannot be about administration of the system.

    Read the article

  • RAIDZ vs RAID1+0

    - by Hiro2k
    Hi guys, I just got 4 SSDs for my FreeNAS box. This server is only used to serve a single iSCSI extent to my Citrix XenServer pool, and I was wondering if I should set them up in a RAIDZ or a RAID 1+0 configuration. This isn't used for anything in production, just for my test lab, so I'm not sure which one is going to be better in this scenario. Will I see a major difference in speed or reliability? Currently the server has three 500GB Western Digital Blue drives, and it's dog slow when I deploy a new version of our software on it, hence the upgrade.
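
    For reference, the two layouts on four disks would look roughly like this from a FreeNAS shell. A sketch with hypothetical device names (da0 through da3); the FreeNAS UI would normally build these for you:

        # RAIDZ (single parity): usable capacity of 3 disks, any one disk may fail
        zpool create tank raidz da0 da1 da2 da3

        # Striped mirrors (the RAID 1+0 equivalent): usable capacity of 2 disks,
        # generally better random/small-block performance for iSCSI workloads
        zpool create tank mirror da0 da1 mirror da2 da3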

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and I remember a cool feature a tiny 32MB flash drive of mine had: a write lock switch. This seemed like an amazing feature to have as a shield against any nastiness happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature; instead, software write protection seems to be the more prominent method. This software protection causes me some uneasiness, as it seems it wouldn't be nearly as bulletproof as a physical switch, and the level of protection seems to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to wipe any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way to get a well-sized (~8GB) flash drive with a strong write protection implementation, for little or no more than the cost of a regular drive? Thanks.

    Read the article

  • Forefront 2010 Antispam vs Exchange 2010 Antispam?

    - by Jon
    They look pretty similar; do they work together or independently? For example, you have content filtering in Forefront where you can specify SCL thresholds, just like in Exchange. However, there is nowhere to specify the spam mailbox. So will the spam mailbox still be used if I configure this in Forefront?

    Read the article

  • OSX pdf-kit vs Linux poppler or pdf/x

    - by Tahnoon Pasha
    I keep reading and hearing that the reason there is no good PDF editing software for Linux is that the libraries are not as well developed, and that this is why there is no equivalent of Skim or Preview on Linux. I had a look at the PDFKit documentation and the poppler documentation, and they looked very similar to my admittedly non-technical eye. Could someone explain why the OSX libraries are so much easier to write projects like Skim in than the Linux ones? I'm not sure if the same applies to OSX projects like NVAlt, but it seems to be a common theme. I'd just like to understand what is behind the thesis that OSX is easier to code these projects in, and what would be involved in changing that. (I'm not disputing the value of Okular or Evince and the like, just noting that they don't have the richness of functionality of Skim, Preview, or even things like GoodReader on the iPad.)

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
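
    One way to turn the anecdote into numbers on any of the filesystems in question: create a few thousand small files and time the bulk operations. A bash sketch (on Windows this could run under Cygwin or Git Bash; paths and counts are arbitrary):

        # Create 10,000 tiny files
        mkdir src
        for i in $(seq 1 10000); do echo "$i" > "src/file$i.txt"; done

        # Time a bulk copy and a bulk delete; with files this small,
        # per-file overhead dominates rather than raw throughput
        time cp -r src dst
        time rm -rf dst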

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this, and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are:

    - A RAID6 array plus regular LVM and ext4
    - A RAID6 array plus ZFS (without redundancy)

    It's this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons I see for each option:

    - ZFS has snapshots without preallocated space. LVM requires preallocation (this might no longer be true).
    - ZFS has checksumming (very interested in this) and compression (nice bonus).
    - LVM has online filesystem growth (ZFS can do it offline with export / mdadm --grow / import).
    - LVM has encryption (ZFS-on-Linux has not). This is the only major con of this combo that I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not?

    So, to close with a proper question:

    1. Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this?
    2. Are there possibilities for checksumming and compression that would make ZFS unnecessary (while maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
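
    For what it's worth, the second setup is mechanically straightforward. A sketch with hypothetical devices (sdb through sdg), building a single-device ZFS pool on top of an md RAID6 array:

        # 6-disk RAID6 via mdadm (the redundancy layer)
        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

        # ZFS pool on the md device: checksumming and compression,
        # but no ZFS-level redundancy
        zpool create tank /dev/md0
        zfs set compression=on tank

        # Growing later (sketch): add a disk, reshape, let the pool expand
        # mdadm --add /dev/md0 /dev/sdh
        # mdadm --grow /dev/md0 --raid-devices=7
        # zpool set autoexpand=on tank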

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=022
        anon_upload_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        virtual_use_local_privs=YES
        guest_enable=YES
        guest_username=virtual
        user_sub_token=$USER
        local_root=/var/www/$USER
        hide_ids=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

    /etc/pam.d/vsftpd contains:

        auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
        account required pam_permit.so crypt=hash

    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user virtual. Using this configuration the user can log on, but cannot upload files. The error given in /var/log/vsftpd.log is:

        Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
        Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"

    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?
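
    One detail worth noticing in the log: the failing operation is a CHMOD issued by the client after the upload, not the STOR itself. Some hedged diagnostics, assuming the virtual users map to the local account virtual and the zac tree lives under /var/www/zac:

        # Confirm the guest account really owns the tree vsftpd drops users into
        ls -ld /var/www /var/www/zac
        stat -c '%U:%G %a %n' /var/www/zac

        # If it does not, hand the tree to the guest user and retry an upload
        chown -R virtual:virtual /var/www/zac
        chmod 755 /var/www/zac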

    Read the article

  • XAMPP vs WAMP security and other on Windows XP

    - by typoknig
    Not long ago I found WAMP and thought it was a godsend because it had all the things I wanted/needed (Apache, PHP, MySQL, and phpMyAdmin) built into one installer. One thing about WAMP that has been making me mad is an error I get in phpMyAdmin about the advanced features not working. I have tried to fix that error for long enough: http://stackoverflow.com/questions/2688385/problem-with-phpmyadmin-advanced-features I now read that most people prefer XAMPP over WAMP, but I am a bit concerned that XAMPP might have some extra security holes with Mercury and Perl, two things that I don't really need or want right now. Are my security concerns justified or not? Is there any other reason to go with XAMPP over WAMP, or vice versa?

    Read the article

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04, I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

        {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
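
    The {badmatch,{error,etimedout}} from inet_tcp_dist,listen suggests the Erlang VM never managed to set up its distribution listener, which is more often a hostname resolution or epmd problem than RabbitMQ itself. Some hedged first checks:

        # The node uses shortnames; make sure the short hostname resolves locally
        hostname -s
        grep "$(hostname -s)" /etc/hosts

        # Check whether the Erlang port mapper daemon is alive and listening
        epmd -names
        netstat -lnt | grep 4369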

    Read the article

  • LDAP Structure: dc=example,dc=com vs o=Example

    - by PAS
    I am relatively new to LDAP, and have seen two types of examples of how to set up your structure. One method has the base dc=example,dc=com, while other examples have the base o=Example. Continuing along, you can have a group looking like:

        dn: cn=team,ou=Group,dc=example,dc=com
        cn: team
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2
        ...

    or, using the "o" style:

        dn: cn=team, o=Example
        objectClass: posixGroup
        memberUid: user1
        memberUid: user2

    My questions are:

    - Are there any best practices that dictate using one method over the other?
    - Is it just a matter of preference which style you use?
    - Are there any advantages to using one over the other?
    - Is one method the old style, and one the new-and-improved version?

    So far, I have gone with the dc=example,dc=com style. Any advice the community could give on the matter would be greatly appreciated.
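
    Operationally the two styles behave the same; the choice mostly shows up in every DN and search base you type. For example, querying the group under each style. A sketch assuming a server on localhost that allows anonymous simple binds:

        # dc-style base (derived from the DNS domain example.com)
        ldapsearch -x -H ldap://localhost -b "ou=Group,dc=example,dc=com" "(cn=team)"

        # o-style base (a bare organization name)
        ldapsearch -x -H ldap://localhost -b "o=Example" "(cn=team)"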

    Read the article

  • OpenJDK vs. Sun Java6 on Ubuntu

    - by Mark Renouf
    Due to past bad experience with the GCJ stuff being provided by default on certain distributions, I've always installed the official Sun Java package on servers. On Ubuntu that has been easy, but now OpenJDK is the preferred option and easier to install... I wonder: is there any reason not to use it instead? As far as I understand, it's the open source version of the Sun JDK.
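
    On Ubuntu the two JDKs can coexist, so trying OpenJDK is low-risk. A sketch of installing it alongside the Sun package and switching between them (package names as of the Java 6 series):

        # Install OpenJDK from the main repository
        sudo apt-get install openjdk-6-jdk

        # If both JDKs are installed, choose which 'java' the system uses
        sudo update-alternatives --config java

        # Confirm which runtime is now active
        java -version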

    Read the article

  • On Server Disk Storage VS SAN Storage

    - by Justin
    Hello, I am looking at buying three servers and trying to figure out which storage solution makes the most sense in terms of performance and cost. The total budget is around $10,000.

    Option 1: Dell servers with RAID 10 (four 500GB 7200RPM SAS drives each), for a capacity of 1TB per server. Each server is approx. $3,000, so total storage across all three servers is 3TB.

    Option 2: the same Dell servers with a single cheap drive and no RAID for $2,000 each, plus a centralized SAN solution. The biggest problem is that I haven't been able to find a SAN solution at a reasonable price; Dell entry-level storage servers are something like $15,000. I am thinking just iSCSI, not fibre (too expensive).

    What do you guys recommend?

    Read the article

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example, I'm trying to write a puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from puppet), but there are many other similar use cases. I searched around and found that it could be done with the new parameterised class syntax; implemented that way, it would end up being used like this:

        node "myserver.example.com" {
          class { "network::iftab":
            interfaces => {
              "eth0" => { "mac" => "ab:cd:ef:98:76:54" },
              "eth1" => { "mac" => "98:76:de:ad:be:ef" },
            },
          }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file situation). In a similar question on SF, someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the included host type: its usage is simple, pretty, and clean, and it naturally maps to multiple resources that end up being configured in a single place. Transposed to my example, it would look like:

        node "myserver.example.com" {
          interface {
            "eth0":
              mac => "ab:cd:ef:98:76:54",
              foo => "bar",
              asd => "lol";
            "eth1":
              mac => "98:76:de:ad:be:ef",
              foo => "rab",
              asd => "olo";
          }
        }

    That looks much better to my eyes, even with three options per resource. Should I really be passing hashes to parameterised classes, or is there a better way to do this kind of thing? Is there some accepted consensus in the puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch, and I am not interested in compatibility with older versions.

    Read the article

  • Access Home Network Server via External Address (DSL vs Cable)

    - by Dominic Barnes
    For the last few months, I've been using a server on my home network for basic backups and hosting some small websites. Up until this past week I was using Comcast (cable) as my ISP; now that I've moved into an apartment, I'm using AT&T (DSL). I've set up dynamic DNS, and I can verify it works externally. However, I can't seem to access the public address from within the local network. Is there something DSL does differently from cable that would explain this?
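
    One hedged way to narrow this down: compare what the dynamic DNS name resolves to with what actually answers from inside the LAN. Many DSL gateways do not perform NAT loopback (hairpinning), which would produce exactly this symptom. A sketch, with myhost.example.com standing in for the dynamic DNS name and 192.168.1.10 for the server's hypothetical LAN address:

        # What does the dynamic DNS name resolve to right now?
        dig +short myhost.example.com

        # From inside the LAN: does the public address answer at all?
        curl -I --connect-timeout 5 http://myhost.example.com/

        # Compare with hitting the server's LAN address directly
        curl -I http://192.168.1.10/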

    Read the article

  • Reverse DNS does not match SMTP banner vs Reverse DNS mismatch

    - by MadBoy
    I have to decide whether my reverse DNS should match the SMTP banner while forward and reverse DNS stay mismatched, or the other way around. Which one should I choose? I have two Exchange 2010 servers with one SMTP sender, behind TMG 2010. TMG has two links connected so that we have two separate internet providers, and the problem is that I have no way to control which link TMG uses to send email; it picks one at random. I have two MX records:

    - mail.test.com, which resolves to IP, and IP resolves back to mail.test.com
    - mail2.test.com, which resolves to IP2, but IP2 resolves back to mail.test.com

    This was done to prevent SMTP banner issues, but it causes reverse DNS problems if the server on the other side is eager enough to do the comparison. I've checked with Google, and even they don't have this in perfect shape.
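
    For checking how a picky receiving server would judge either link, the three pieces to compare are forward DNS, reverse DNS, and the banner itself. A sketch, with 203.0.113.10 standing in for one of the real public addresses:

        # Forward and reverse lookups for the name a remote server will verify
        dig +short mail.test.com
        dig +short -x 203.0.113.10

        # Read the SMTP banner the TMG/Exchange pair actually presents
        nc mail.test.com 25
        # expect something like: 220 mail.test.com Microsoft ESMTP MAIL Service ready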

    Read the article

  • VMWare vmfs vs NFS datastore with vmdk?

    - by CarpeNoctem
    I want to add a new hard disk to an existing VM and want the best performance possible. The new hard disk will live on an NFS datastore. Currently I did the following:

    - Created a new vmdk on the NFS datastore
    - Created a new LVM partition using fdisk
    - Created a new physical volume, volume group, and logical volume (2TB)
    - Created an ext3 partition on the logical volume

    Is there a better way to do this? Should I be using some VMware-ish file system instead?
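
    For reference, the in-guest steps above map to roughly this sequence. A sketch assuming the new vmdk shows up as /dev/sdb inside the VM:

        # Partition the new disk and set the partition type to LVM (8e)
        fdisk /dev/sdb

        # Physical volume, volume group, and a 2TB logical volume
        pvcreate /dev/sdb1
        vgcreate datavg /dev/sdb1
        lvcreate -L 2T -n datalv datavg

        # ext3 on top, then mount it
        mkfs.ext3 /dev/datavg/datalv
        mkdir -p /mnt/data
        mount /dev/datavg/datalv /mnt/data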

    Read the article

  • Fiber Channel Loop vs Point to Point

    - by RandomInsano
    So, I'm playing with a couple of QLogic QLA2340s connected directly together. I've got options here to either have them act as a loop, or in point to point mode. What's the difference if I'm only going to have two machines connected together? Is point-to-point more efficient? The firmware has an option to prefer loop, then fall back to p2p. Anyone have any idea if there are performance benefits or drawbacks? It's pretty hard to find that information.

    Read the article

  • Windows 2008 CAL vs RDS CAL

    - by g8keepa82
    Looking at the Win2k8 licensing page here, it appears to me that if I want a server to accept Remote Desktop connections from, say, 30 concurrent users, I would require a Windows 2008 Server license plus Windows 2008 CALs. Is this logic correct? Or would I require RDS CALs instead, or RDS CALs on top of that? From what I can gather, RDS CALs are only required if I were to use the additional RDS services like App-V, etc. This question may have been answered here before, but I just wanted to clarify. Can anyone help?

    Read the article

  • Buying a Laptop Battery - OEM vs. 3rd Party

    - by pygorex1
    Looking at a replacement 9-cell battery for my Dell 1525 I've noticed that the OEM batteries that Dell sells are up to 3x more expensive than batteries sold by a 3rd party vendor. Is the Dell premium worth it? What experiences have you had buying replacement batteries?

    Read the article
