Search Results

Search found 8875 results on 355 pages for 'optimized solutions'.

Page 273/355

  • All commands stopped working in CentOS 6.5

    - by Michael
    I made a big mistake while removing some duplicate packages that appeared to be broken. From my shell history:

        yum
        1036  rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037  rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038  rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040  rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041  rpm -e glibc.x86_64
        1042  rpm -e --nodeps glibc.x86_64

    The issue appeared after step 1042. None of the commands work (including yum, rpm, ls, cp, etc.); every one of them fails with the error /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory. I thought that installing glibc after removing all the current copies would resolve the duplicate-package error :( I have since realised that glibc is the C library of the GNU system and of most systems with the Linux kernel; it provides the "system calls" and other basic facilities such as open, malloc, printf, exit, etc. Are there any possible solutions other than a reinstall? I have lost ssh access. Can anything be done using a rescue CD? Thanks
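
    If a rescue CD is an option, one hedged recovery path (an addition, not from the original post) is to boot the CentOS install media into rescue mode, let it mount the damaged system under /mnt/sysimage, and reinstall the removed glibc packages into that root from the rescue environment, whose own tools still have a working C library. A sketch, assuming the matching package files have been copied onto the machine:

        # from the rescue shell; do NOT chroot first, because the chroot would need
        # the very /lib64/ld-linux-x86-64.so.2 loader that was removed
        rpm -ivh --force --root /mnt/sysimage \
            glibc-2.12-1.132.el6_5.2.x86_64.rpm \
            glibc-common-2.12-1.132.el6_5.2.x86_64.rpm \
            nscd-2.12-1.132.el6_5.2.x86_64.rpm

    Once the loader is back, the remaining glibc-devel/glibc-headers packages can be reinstalled normally from inside the system.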


  • How to best convert a fully encrypted drive into a Virtual Machine?

    - by SiegeX
    I have a Windows XP laptop that uses GuardianEdge's Encryption Plus to fully encrypt the drive from bootup. What I would like to do is install a much larger (unencrypted) hard drive with Windows 7 on it and turn this fully encrypted drive into a virtual machine that can be run in either VirtualBox or VMware on the Windows 7 host. I've read many howtos that talk about using an imaging tool like Acronis True Image to image the drive, then passing that through VMware's vCenter Converter to turn it into a format that VMware can understand. Unfortunately this all seems to fall apart when you are dealing with a fully encrypted drive, because Acronis cannot recognize the file system and attempts to do a sector-by-sector copy of the entire hard drive. This is extremely wasteful since the drive is 120GB but the file system is only using 10GB of that. Even if I were OK with an inefficient 120GB sector-by-sector copy, I'm not sure that it would even work under VMware or VirtualBox. Unfortunately, the GuardianEdge boot-time login comes up only after the hard drive has been selected as the boot device, preventing me from decrypting the drive before booting an Acronis True Image CD so that it can recognize the underlying file system. I'm sure I'm not the first person to want to do this, but I am having a heck of a time finding solutions to this problem. All suggestions/answers welcome. Thanks
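
    Since the encryption forces a sector-by-sector copy anyway, one hedged fallback (an addition, not something from the original question; the device names and paths are placeholders) is to take the raw image from a live Linux environment and convert it into a virtual disk afterwards:

        # boot the laptop from a live Linux USB/CD with a large external disk attached
        dd if=/dev/sda of=/mnt/external/xp-encrypted.img bs=4M conv=noerror,sync
        # convert the raw image into a VMDK that VirtualBox or VMware can attach
        qemu-img convert -f raw -O vmdk /mnt/external/xp-encrypted.img /mnt/external/xp-encrypted.vmdk

    Whether the GuardianEdge pre-boot login then behaves inside the VM's emulated hardware is a separate question, so it is worth testing before wiping the original drive.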


  • Exchange 2003 ActiveSync problem with certificate

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything is verified and asks what you want to sync - Mail, Contacts, Calendars - so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message or feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here... I ran MS's Exchange connectivity testing tool (results screenshot not included here), and the report makes it look like the cert is somehow problematic. Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes. I don't know what to do now... (A shot of the cert details was attached as well, but is likewise not included.)
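
    Independent of the connectivity tool, it may help to check exactly what certificate and chain the server presents to the phone; a hedged sketch from any machine with OpenSSL (the host name is a placeholder):

        openssl s_client -connect mail.example.com:443 -showcerts
        # check that the verify return code is 0 (ok), that the CN/SAN matches the
        # host name configured on the iPhone, and that the intermediate certificate
        # from Network Solutions is actually being sent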


  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So go easy on me (and be very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files, and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
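
    For reference, the behaviour comes from the FTP daemon's umask: a 640 result means the effective umask is 027 rather than 022, and on a shared host only Network Solutions can change that setting. If their servers run vsftpd or ProFTPD, the relevant directive looks roughly like one of these (hedged, since the actual daemon and its config location are unknown):

        # vsftpd.conf
        local_umask=022

        # proftpd.conf
        Umask 022

    Asking their support to set the FTP umask to 022 for the account may be the quickest route, given that shell access isn't available.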


  • IIS 7.5 returning 404 for unknown host names

    - by WaldenL
    This just doesn't seem correct to me, so I'm looking for someone to tell me how I've misconfigured IIS... The configuration is IIS 7.5 (2008 R2), without SP1. I have IIS 7.5 configured w/several sites. ALL sites have host names defined in the bindings; there is NO site without a host name. However, if I request an unknown host name from the server, IIS (technically Microsoft-HTTPAPI/2.0) returns a 404 error, not a 400 error. I would expect a 400 (or some other major error) rather than a lowly 404. This causes a problem when I have nginx in front of multiple IIS boxes and want to stop a site so nginx takes it out of rotation: since IIS still returns a 404 for the request even when there is no active site for that name, nginx doesn't know the server is dead. NB: IIS returns the 404 regardless of whether the site exists but is stopped, or doesn't exist at all. Thoughts? Solutions? -- Additional info: OK, I added a site on a port other than 80 (5000) and then, on a connection to that port, asked for a site that doesn't exist, and I get the expected error 400 (Invalid hostname). So, while IIS isn't listening for generic (no host name) connections on port 80, it would seem that something is. Any ideas how to get HTTP.sys to dump the list of what it's listening for?
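
    On the closing question, HTTP.sys can be asked directly what it has registered, from an elevated command prompt; a hedged sketch:

        netsh http show servicestate view=requestq verbose=yes
        netsh http show urlacl

    The first command lists the registered URL groups and request queues (which should reveal what is answering the host-name-less port 80 requests), and the second lists the URL reservations.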


  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud); that user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
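
    Worth noting as a possible alternative (an addition, not from the original post): ssh itself can allocate the pseudoterminal, which normally satisfies requiretty without any helper script:

        # -t forces pseudo-terminal allocation; use -tt when there is no local tty,
        # e.g. when the ssh command itself runs from a script or cron job
        ssh -t remotehost sudo yum -y install puppet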


  • Looking for a host based network monitor solution

    - by Ole Martin Handeland
    Hi all!

    Problem: my hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: does anyone know of any host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want: I'm just guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG - I've also looked at those, but a graph of overall network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics, and I could list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, but not see it in the log afterwards.
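
    One hedged option for keeping per-client history without a heavyweight monitoring stack (an addition, not from the original post) is to export NetFlow records from the box to itself and query them later; the interface, port and paths are placeholders and the package names may differ per distribution:

        # export flows seen on eth0 to a local collector
        fprobe -i eth0 127.0.0.1:9995
        nfcapd -D -p 9995 -l /var/cache/nfdump
        # later: which addresses moved the most bytes, and on which ports
        nfdump -R /var/cache/nfdump -s ip/bytes
        nfdump -R /var/cache/nfdump -s port/bytes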


  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports. I am trying to map ports to pids. The problem is that lsof is not telling me what ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

        user@host% netstat -an|grep LISTEN|grep 80
        *.80            *.*             0      0  49152      0  LISTEN

    But when I try to map port 80 to a pid I get nothing:

        user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using I get:

        user@host% lsof -lnP -p31 -a -i
        COMMAND   PID USER   FD   TYPE        DEVICE SIZE/OFF NODE NAME
        libhttpd.  31    0  15u  IPv4 0x6002d970b80      0t0  TCP *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions:

        lsof v4.77 on Solaris 10 sparc
        lsof v4.72 on Redhat 4.2
        etc

    I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me expected data.
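
    On Solaris, a hedged alternative for the port-to-PID mapping is the native pfiles utility, which prints the bound address and port for every open socket of a process; scanning all of /proc this way is slow but tends to work even where lsof comes up empty:

        # run as root; "port: 80" will also match 800/8080, so refine the pattern if needed
        for p in /proc/[0-9]*; do
            p=${p#/proc/}
            if pfiles "$p" 2>/dev/null | grep "port: 80" >/dev/null; then
                echo "PID $p has a socket on port 80"
            fi
        done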


  • One server with multiple desktops "heads" with VNC

    - by Alexis K
    I am managing a system of kiosks. Each kiosk currently runs a web browser with the kiosk's application running in the browser. Each kiosk needs to be able to display separate content. At times, the application running in the web browser freezes, and I have to go out to the site to refresh the page. I want to see if there is a way to have one central server that hosts multiple browser "heads"; each kiosk would then run a program like VNC to display one of those heads. That way, when the program freezes, I just have to log in to the central server and refresh the page. Getting VNC or other remote desktop software installed on the clients is no problem. What I am looking for is a way to have VNC remote into a specific head running a web browser. Does such a thing exist? Or do I have to run a VM for each kiosk to remote into? Any advice, pointers, or solutions would be helpful.
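
    If the central server runs Linux (an assumption; the OS isn't stated), one hedged way to get independent heads without a VM per kiosk is to run one vncserver instance per kiosk, each on its own display and TCP port, with the browser started inside that display; each kiosk's VNC client then connects to "server:1", "server:2", and so on. The geometry and application URL below are placeholders:

        # display :1 listens on TCP 5901, :2 on 5902, ...
        vncserver :1 -geometry 1920x1080 -depth 24
        vncserver :2 -geometry 1920x1080 -depth 24
        # run a browser in a specific head
        DISPLAY=:1 firefox http://kiosk-app.example/ &
        DISPLAY=:2 firefox http://kiosk-app.example/ &

    When a page freezes, you can connect to that one display yourself (or just restart that browser) without touching the other kiosks.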


  • Running phpMyAdmin under XAMPP on Ubuntu 12.10

    - by Luigi Tiburzi
    I know it is a common problem and there are many solutions on the web, but I've tried everything and nothing is working: I can't get phpMyAdmin running on my machine. I installed XAMPP with:

        sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt

    Then I did the chmod trick that is supposed to put an end to the access issues, and I changed the default location for my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:

        sudo /opt/lampp/lampp start

    When I run one of my PHP projects the output in the browser is fine, but if, for example, I browse to localhost I get the default "It works!" page and not the usual XAMPP interface. Worst of all, when I try to access localhost/phpmyadmin I get the login page, enter the username (root) and password, and then get:

        You don't have permission to access /phpmyadmin/index.php on this server.
        Apache/2.2.22 (Ubuntu) Server at localhost Port 80

    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall phpmyadmin and reinstall it, but that isn't working either. I don't know how to proceed. Thanks for your help.
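
    The error page footer says "Apache/2.2.22 (Ubuntu)", which is Ubuntu's own Apache rather than the one XAMPP ships, so the stock apache2 service is probably the one answering on port 80. A hedged sketch of what to try (the config detail at the end is from memory of XAMPP 1.8.x and should be verified against your file):

        # stop Ubuntu's own Apache, then restart the XAMPP stack
        sudo service apache2 stop
        sudo /opt/lampp/lampp restart
        # if localhost/phpmyadmin still returns 403 after that, relax the phpMyAdmin
        # access block in /opt/lampp/etc/extra/httpd-xampp.conf ("Require all granted"
        # inside the phpmyadmin section) and restart lampp again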


  • linux container bridge filters ARP reply

    - by Dani Camps
    I am using kernel 3.0, and I have configured a Linux container that is bridged to a tap interface on my host computer. This is the bridge configuration:

        :~$ brctl show bridge-1
        bridge name     bridge id               STP enabled     interfaces
        bridge-1        8000.9249c78a510b       no              ns3-mesh-tap-1
                                                                vethjUErij

    My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. If, instead, I statically populate the ARP tables and ping directly, everything works, so it has to be something related to ARP. I have read about similar problems in related posts, and I have tried the solutions explained therein, but nothing seems to work. Specifically:

        ~$ grep net.bridge /etc/sysctl.conf
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-filter-vlan-tagged = 0
        net.bridge.bridge-nf-filter-pppoe-tagged = 0

    arptables and ebtables are not installed. The iptables FORWARD chain is set to accept everything:

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

    The bridged interfaces are set to PROMISC:

        ~$ ifconfig
        ns3-mesh-tap-1 Link encap:Ethernet  HWaddr 1a:c7:24:ef:36:1a
        ...
                  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
        vethjUErij Link encap:Ethernet  HWaddr aa:b0:d1:3b:9a:0a
        ....
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

    The MACs learned by the bridge are correct (checked with brctl showmacs). Any insight on what I am doing wrong would be greatly appreciated. Best Regards, Daniel
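
    One way to narrow down where the reply dies (a hedged suggestion, not from the original post) is to capture ARP on each leg and see which hop stops forwarding it:

        # watch ARP on the tap, on the container's veth, and on the bridge itself
        tcpdump -n -e -i ns3-mesh-tap-1 arp
        tcpdump -n -e -i vethjUErij arp
        tcpdump -n -e -i bridge-1 arp

    If the reply shows up on ns3-mesh-tap-1 but never on vethjUErij, the bridge really is eating it; if it never reaches the tap at all, the problem is upstream of the bridge.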


  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion - e.g. Nagios + Urchin. The biggest impediment to my endeavors is the following: the company application is deployed as a Java *.WAR file, uploaded to an Elastic Beanstalk application environment, load balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are being terminated and then auto-provisioned/auto-scaled on the fly -- I'd constantly be having to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists. Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment and have them automatically add new EC2 instances and remove terminated/failed ones from the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into one consolidated set of logs/events? If not Graylog, is there anything like it that can automatically detect which EC2 members are being added to or removed from the environment and collect their logs automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
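
    A hedged sketch of one approach (the environment name is a placeholder and this assumes the AWS CLI, or the equivalent API call, is available on the monitoring host): ask Elastic Beanstalk on a schedule which instances currently belong to the environment, and regenerate the monitoring host list from the answer.

        # instance IDs currently attached to the Beanstalk environment
        aws elasticbeanstalk describe-environment-resources \
            --environment-name my-env \
            --query 'EnvironmentResources.Instances[].Id' --output text
        # a cron job can diff this list against the current Nagios/Zabbix host
        # definitions, rewrite them, and reload the monitoring daemon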


  • ssd firmware, linux: updating large batch of drives

    - by wryfi
    I was recently hit with a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize. None of the affected machines has a Windows license. The story is roughly similar for other SSD mfrs, including Samsung and Intel. To resolve this issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily into my ThinkPad, flash the firmware, reverse, rinse, repeat. It took the better part of a day to get through all the affected devices. I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable, and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least make it so I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?


  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know whether it's possible to configure multiple IP addresses for a single host in Icinga. Here's a minimal example.

    Remote server:

        eth0: 1.2.3.4 (public IP)
        eth1: 10.1.2.3 (private IP, secure tunnel)
        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

        eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts to show up as a single host, yet provide an easy configuration to check the services on their proper address. What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
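
    One hedged pattern that keeps a single host object (an addition, not from the original question) is a custom host variable for the second address; Nagios/Icinga 1.x expose any host variable that starts with an underscore as a $_HOSTNAME$-style macro. The command name below is a placeholder and assumes a check_http command definition that passes its argument through to the plugin's -I option:

        define host {
            use              generic-host
            host_name        server1
            alias            server1.gertvandijk.net
            address          10.1.2.3
            _PUBLIC_ADDRESS  1.2.3.4    ; available as $_HOSTPUBLIC_ADDRESS$
        }

        define service {
            use                  generic-service
            host_name            server1
            service_description  HTTP on public interface
            check_command        check_http_on_addr!$_HOSTPUBLIC_ADDRESS$
        }

    All services still hang off the one host, so nothing shows up twice in the overview, while individual checks can be pointed at whichever interface actually serves them.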


  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently in the process of writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only). Part of the setup process is to make sure that dependent Perl modules are installed. This has been pretty easy on Linux as the Perl modules I require tend to be within the distro's packaging system (eg yum install perl-Cache-Cache). This is not the case on Solaris so I'm working on setup instructions that use the CPAN module to fetch dependent modules (eg perl -MCPAN -e 'install Cache::Cache'). This works ok but there are known problems with modules that require things to be built with a C compiler. The problem is that the C Makefile generated assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting teh Internetz has thrown up a number of solutions to this: Install and use Sun's compiler Use the perlgcc wrapper script Edit the makefiles by hand (yuk) All of these work. My question to those more familiar with Solaris than me is: Is one of these the 'best' or 'most commonly used' method?
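
    For what it's worth, a hedged sketch of the "use gcc without editing Makefiles" route; CC and LD are standard ExtUtils::MakeMaker overrides, but verify them against how your Perl was built (perl -V:cc):

        # one-off, for a single module build
        perl Makefile.PL CC=gcc LD=gcc
        # or have the CPAN shell pass the overrides for every build
        perl -MCPAN -e shell
        cpan> o conf makepl_arg "CC=gcc LD=gcc"
        cpan> o conf commit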


  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing - they continually grow, and changes are not necessarily appended to the end. So right now we back up every 24 hours, at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking that I might be able to accomplish the goal by taking a snapshot, rsync'ing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!
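
    The LVM snapshot idea is the usual answer for exactly this situation, provided the database files already live on an LVM logical volume. A hedged sketch of one 30-minute cycle (volume group, volume names, sizes and paths are placeholders; note the snapshot is only crash-consistent unless the application can be paused or flushed first):

        # snapshot with enough copy-on-write space to absorb ~30 minutes of changes
        lvcreate --snapshot --size 5G --name dbsnap /dev/vg0/data
        mkdir -p /mnt/dbsnap
        mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
        # ship the frozen view of the 1,723 files to the backup server
        rsync -a /mnt/dbsnap/ backupserver:/backups/db/$(date +%Y%m%d%H%M)/
        umount /mnt/dbsnap
        lvremove -f /dev/vg0/dbsnap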


  • Weird PCI bug: lots of missed packets, or data comes in "bursts"

    - by Thomas O
    I have an ABIT KN9 motherboard. It has one PCI-e x16 slot, three PCI-e x4 slots and two legacy PCI slots. My problem is with the legacy PCI (which I shall just call "PCI"). I currently have an Nvidia GeForce 8600 GT (a low-end card) installed in the x16 slot and a TV card in PCI #1; the x4 slots are unused, as is PCI #2. I plan to upgrade the graphics card soon; the current card was spare. I sometimes install a USB expander in PCI #2 but it causes a lot of problems - see below. The problem shows up under Linux (Ubuntu 10.10, Linux 2.6.35-22-generic), but probably affects all operating systems (I have not yet been able to test Windows, but I suspect it will do the same, as the problems occur on the BIOS/POST side too; e.g. a USB keyboard on the expander will not work at all). PCI has an enormous delay, and packets arrive in large chunks. For example, when using the USB expander, my USB mouse lags and jumps in large steps every second or so, while using the motherboard USB does not present this problem. My TV card will only do one or two frames per second, and the program (xawtv) usually times out and crashes. In dmesg, I'm getting messages like:

        bttv0: timeout: drop=74, irq=154/100476, risc=31f6256c, bits: VSYNC HSYNC OFLOW RISCI

    for my TV card, and similar timeout issues for my USB expander with a mouse. I received the motherboard, processor and RAM second hand and have only just got around to building the system, so I don't know if this problem has always existed or if it's a result of my setup. If anyone has any hints or solutions it would be appreciated - this is kind of a show-stopper for me.


  • Linux - preventing an application from failing due to lack of disk space [migrated]

    - by Jernej
    Due to an unforeseen scenario, I am currently in need of a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context: I have an application in Python that uses multiprocessing.Pool to start 5 worker processes. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU intensive and has been running for months. It still has a few days to go before it has written all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30GB of remaining disk space, and at the current rate of work it will surely be exhausted before the program finishes. Given the above points, I see the following solutions, each with its respective problems:

    1. Given that process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file?
    2. Is there a "command line" way to stop 4 of the 5 spawned workers and wait until one of them finishes, then resume their work? (I am sure one single worker would avoid hogging the disk.)
    3. Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files to remove the redundant lines?

    Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before its finish.
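
    On points 2 and 3, a hedged sketch (the script name and PIDs below are placeholders): the pool workers are ordinary child processes of the main script, so they can be paused and resumed individually with signals and without root, and CTRL+Z (SIGTSTP) in the controlling terminal suspends the whole process group, workers included. On point 1, a move within the same filesystem is just a rename, so the worker keeps writing to the same file under its new name; a move to a different filesystem copies and unlinks, so the worker keeps writing to a now-deleted inode and no space is actually freed.

        # find the main process, then its pool workers
        pgrep -fl my_script.py        # note the main PID, e.g. 4242
        pgrep -P 4242                 # worker PIDs, e.g. 4243-4247
        # pause four of the five workers; they keep their open file descriptors
        kill -STOP 4243 4244 4245 4246
        # later, after the remaining worker finishes and its file has been pruned
        kill -CONT 4243 4244 4245 4246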


  • Can somebody please recommend a good local file backup utility that will be Windows & OSX Compatible?

    - by JAG2007
    I have an external hard drive that I keep all of my work files on and transfer them back and forth between my Windows 7 box at work, and my Mac at home (I work from home frequently). Can someone recommend a really good backup utility that I can use on that external drive, to back the files up to my work computer locally, or the other external drive on my machine at work? I'm looking for preferably a free or open source software, and I'd prefer it to be cross system compatible, although I would also consider using a software that will only work on the Windows box. Also, I will consider a software that has a price assuming it is a really good piece of software and the price is reasonable (like under $50 or so). I checked out CrashPlan a bit, but not sure if that's gonna be really what I'm looking for. To reiterate I'm not looking for online backup solutions, just a piece of software that can back up my data to another drive locally. CrashPlan Free seems to offer this, but not sure how good it is (considering their goal is to get me to buy a pay for version). *NOTE: I'm running Windows 7 in 64bit so I need a piece of software that will be compatible with 64bit OS. My previous software, PC Backup, is not. That's partly why I'm in this boat.


  • How do I uncompress vmlinuz to vmlinux?

    - by Lord Loh.
    I have already tried uncompress, gzip, and all the other solutions that come up as Google results, and these have not worked for me. To find the start of the compressed image, search for the gzip signature 1f 8b 08 00:

        > od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
        0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45

    so the image begins at 24576+8 => 24584. Then just copy the image from that point and decompress it:

        > dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
        1450414+0 records in
        1450414+0 records out
        1450414 bytes (1.5 MB) copied, 6.78127 s, 214 kB/s

    I got these instructions verbatim from a forum online: http://www.codeguru.com/forum/showthread.php?t=415186. This process does not work for me and ends up giving errors stating "file not found" for 0024576 and all the subsequent numbers. How do I proceed with extracting vmlinux from vmlinuz? Thank you.

    EDITED: This is a reverse engineering question. I have no access to the distro to install any RPM or recompile. I start with nothing but vmlinuz.
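
    A hedged alternative to the manual offset hunt (an addition, not from the original post): the Linux kernel source tree ships a helper, scripts/extract-vmlinux, that tries every known compressor signature for you, and binwalk can also locate the embedded gzip stream. The offset placeholder below has to be replaced with whatever binwalk reports:

        # with any reasonably recent kernel source tree checked out
        ./scripts/extract-vmlinux vmlinuz > vmlinux
        # or find the gzip member first, then carve it out by hand
        binwalk vmlinuz
        dd if=vmlinuz bs=1 skip=OFFSET_FROM_BINWALK | zcat > vmlinux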


  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files newly written or not yet optimized can be accessed without issue - read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues: users start receiving read-access errors when copying files (see Error 1 below). This happens when copying to folders within the current folder tree or when copying somewhere on the local system. If you stop the copy operation and retry a few more times, it may eventually work for that specific instance but fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without issue. Many of the systems having issues are newer and thus cannot be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder but the results are the same. I know this is at least similar to the issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1 (when copying files after the P and L attributes have been set by deduplication):

        One or more items can't be copied to "Folder" because you don't have permissions to read them.

    Via system.log, I am also seeing the following error when accessing these deduplicated file shares; the reparse point tag listed below is IO_REPARSE_TAG_DEDUP:

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013


  • Give access to specific services on Windows 7 Professional machines?

    - by Chad Cook
    We have some machines running Windows 7 Professional at our office. The typical user needs to have access to stop and start a service for a local program they run. These machines have a local web server and database installed and we need to restrict access to certain folders and services related to the web server and database for these users. The setup I have tried so far is to add the typical user as a Power User. I have been able to successfully restrict them from accessing certain folders (as far as I can tell) but now they do not have access to the service needed for starting and stopping the local program. My thought was to give them access to the specific service but I have not had any luck yet. In searching the web for solutions the only results I have found relate to Windows Server 2000 and 2003 and involve creating security templates and databases through the Microsoft Management Console. I am hesitant to try an approach like this as these articles are typically older and I worry this method is outdated. Is there a better way to accomplish the end goal of giving the user permission to run the service and restrict their access to certain folders? If any clarification is needed on the setup or what we are trying to achieve, please let me know. Thanks in advance.
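
    For the specific "let the user start and stop one service" part, a hedged sketch from an elevated command prompt (the service name and account are placeholders, and SubInACL is a separate Microsoft download, so availability is an assumption):

        :: grant sTart and stOp rights on one service to one user or group
        subinacl /service MyLocalService /grant=OFFICEPC\KioskUser=TO
        :: the service's current security descriptor can be inspected with the built-in sc tool
        sc sdshow MyLocalService

    This avoids relying on the Power User group for the service part and leaves the folder restrictions to ordinary NTFS permissions.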


  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN. I'm trying to find a solution that uses disk space sparingly while still allowing individual machines to grow easily. In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to, e.g., 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include:

    - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    - Finding an easy method of reclaiming free space on the partition (re-thinning?).
    - Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
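
    For the "grow as needed without downtime" path of the current plan, a hedged sketch of the online steps (the SCSI address, device and volume names, and sizes are placeholders; ext3 supports online growth with resize2fs on a mounted filesystem):

        # after enlarging the virtual disk in vSphere, have the guest rescan it
        echo 1 > /sys/class/scsi_device/0:0:1:0/device/rescan
        # grow the physical volume, the logical volume and the filesystem, all online
        pvresize /dev/sdb          # simplest with a whole-disk PV - no partition to grow first
        lvextend -L +50G /dev/vg_data/lv_data
        resize2fs /dev/vg_data/lv_data

    The comment on pvresize is also most of the answer to the bonus question: a whole-disk PV avoids having to grow a partition before the PV, at the cost of breaking the "always partition" convention.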


  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem: I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before the two are in sync. I see two solutions to the problem of pictures not yet being available on the 'home' server:

    1. Write a script on EC2 (where the app resides) to fetch from the DB the pictures that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synch" to "synch".
    2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.

    So which approach do you suggest I take to solve this redundancy problem? Or what is the practice in production environments? To further clarify: I'd like to serve images from the 'home' server first and, if that fails, serve them from S3. Tnx, Alan
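
    The second approach maps naturally onto mod_rewrite: serve the file if it exists locally, otherwise send the request to the S3 bucket. A hedged sketch (the bucket name and path are placeholders, and this issues a 302 redirect rather than proxying the object through the home server):

        # in an .htaccess file at the document root (adjust the pattern if placed in the vhost config)
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^images/(.*)$ http://my-bucket.s3.amazonaws.com/images/$1 [R=302,L]

    Once the sync script has pulled a picture down, the file exists locally and the condition stops matching, so requests are served from the home server automatically.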


  • Own server, multiple website: most secure PHP setup

    - by plua
    Hi there, we have a company server with a variety of websites, maintained by different people from within our company. All websites are public, but server access is limited to our company only. This is NOT a shared hosting environment. We are looking into securing the server and are currently analyzing the risk related to file permissions. We feel the highest risk is when files are uploaded and then opened/executed by the public. This should not happen, but an error in a script might allow people to do so (there are image uploaders, file uploaders, etc.). The uploader scripts use PHP. So the question is: what is the best way of setting and organizing the permissions of files and processes? There seem to be several options for running PHP (and Apache) and for setting the permissions. What should we take into consideration? Any tips? We are considering mod_php and FastCGI, but perhaps, given our situation, other solutions are preferred?
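
    One pattern worth weighing alongside mod_php and FastCGI (a hedged sketch, not a full hardening guide) is PHP-FPM with one pool per site, each pool running as its own low-privilege user, so a script smuggled in through one site's uploader cannot read or write another site's files. The names and paths below are placeholders:

        ; /etc/php-fpm.d/site1.conf
        [site1]
        user = site1
        group = site1
        listen = /var/run/php-fpm-site1.sock
        listen.owner = www-data
        listen.group = www-data
        ; confine the interpreter to this site's tree
        php_admin_value[open_basedir] = /var/www/site1:/tmp

    On top of that, the upload directories themselves can have PHP execution switched off at the web-server level, so even a PHP file that slips past an uploader is served as plain content rather than run.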

