Search Results

Search found 5758 results on 231 pages for 'contents'.


  • Some doubts about the use of the usermod and groupmod commands

    - by AndreaNobili
    I am not yet a true "Linux guy" and I have some doubts about what exactly the following shell procedure (a list of command steps), found in a tutorial I am following, does. I want to understand deeply what I am doing before I do it:

        sudo passwd root
        # then log in again as root
        usermod -l miner pi
        usermod -m -d /home/miner miner
        groupmod -n miner pi
        exit

    So at the beginning it enables the root account, and I have to log in to the system again as root... this is perfectly clear to me. Now I have the following doubts:

    1) The usermod commands:

        usermod -l miner pi
        usermod -m -d /home/miner miner

    Reading the official documentation of usermod, I understand that this command modifies the information related to an existing account. It seems to me that the -l parameter changes the user name from pi to miner, and that the -m -d parameters move the contents of the old home directory to the new one (/home/miner) and make that new directory the home directory. My doubt is: what exactly does executing these operations do? I think that they rename the existing pi user to miner, then move the contents of the old home directory (the pi home directory? or what?) into a new directory (/home/miner) that is now the home directory for the miner user. Is that right?

    2) The second doubt is related to this command:

        groupmod -n miner pi

    It seems to me that it changes the group name from pi to miner. But what exactly is a group in Linux, and why is it used? Thanks
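
    For reference, a minimal annotated sketch of the same rename with verification steps added (getent is the standard way to inspect the passwd and group databases; the names are the tutorial's):

        # rename the account: the login name changes, the UID stays the same
        usermod -l miner pi
        # set the new home directory and move the old one's contents into it
        usermod -m -d /home/miner miner
        # rename the user's primary group to match
        groupmod -n miner pi
        # verify: both entries should now read "miner" with the original UID/GID
        getent passwd miner
        getent group miner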


  • Outlook 2007 - repainting (?) problems when copying and pasting between windows: have to switch focus before pasted text is visible

    - by Rory
    For some time now I've had a problem when copying and pasting between Outlook 2007 windows, or from other Office apps into Outlook 2007. When I paste, say into a new email, the email window's text area goes blank. The window isn't 'not responding' (the To and Subject contents are still visible), but it looks like all the text in the email has been deleted. Initially I thought it was just taking ages to paste, but it turns out I need to switch focus to another window and then switch focus back to the Outlook window. Only then does the body of the email repaint itself. It's at the point that I click onto the Outlook window that the body area changes from blank white to showing all the text that was there before, plus the pasted text.

    Any ideas? I've updated my graphics driver, and I'm not sure what else it could be. I do sometimes have similar problems in Visual Studio 2010 too: when I paste text into a code window it doesn't show immediately, but the rest of the window shows what was there before I pasted. I'm using Win XP with all updates applied, on a Dell Vostro 1510.


  • VirtualHost: one HTTPS site, the rest HTTP

    - by RJP1
    I have a Linode server with Apache2 running a handful of sites with virtual hosting. All sites work fine on port 80, and the one site that has an SSL certificate also runs okay. My problem is as follows: on the non-HTTPS sites, visiting https://domain.com shows the contents of the only secure site... Is there a way of disabling the *:443 match for these non-secure sites? Thanks!

    EDIT (more information): Here's a typical config in sites-available for a normal insecure HTTP site:

        <VirtualHost *:80>
            ServerName www.insecure.com
            ServerAlias insecure.com
            ...
        </VirtualHost>

    The secure HTTPS site is as follows:

        <VirtualHost *:80>
            ServerName www.secure.com
            Redirect permanent / https://secure.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secure.com
            RedirectMatch permanent ^/(.*) https://secure.com/$1
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLProtocol all
            SSLCertificateChainFile ...
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCACertificateFile ...
            ServerName secure.com
            ServerAlias secure.com
            ...
        </VirtualHost>

    So, visiting:

        http://insecure.com - works
        http://www.insecure.com - works
        http://secure.com - redirects to https://secure.com - works
        http://www.secure.com - redirects to https://secure.com - works
        https://insecure.com - shows https://secure.com - WRONG!
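
    One approach that may work here (a sketch, untested against this exact setup): give port 443 a catch-all default virtual host ahead of the real one, since Apache hands a request to the first vhost on a port when no ServerName matches. The placeholder name and the deny-all body are assumptions, and browsers will still see a certificate warning for the mismatched hostname:

        # listed first so it becomes the *:443 default
        <VirtualHost *:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            # refuse everything instead of serving secure.com's content
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>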


  • Cloning a failing disk (Win 7)

    - by daveh551
    I have a Windows 7 machine with several partitions on a 1.5 TB drive. Windows has been complaining about disk errors and imminent failure, so I have purchased a new 2 TB drive. The failing disk has not completely failed; in fact, I was able to boot Windows from it (after a couple of tries) and examine the SMART logs. The only red item was 1 sector being reallocated. But when I try to clone it to the new drive using Acronis True Image Home (2010), True Image can see the drive, the partitions, and the contents, but when it goes to actually do the clone, it says "Failed to move. Make sure the destination disk is not smaller than the source disk, and that there are no errors on the disk" (or something like that).

    What are some other options for simply cloning the failing drive? I'd like to clone the entire disk, but am willing to do it partition by partition if necessary. Was this a known failing of the 2010 edition of ATI, or is it really something hosed in my system? Would upgrading to the 2012 edition be likely to work any better? (I'd download the trial and try it out, but if I remember right, the cloning operation is disabled in the trial version, and I don't have enough free disk space to make an entire image.)

    What are some other cloning software packages if ATI won't work? Note that I'm only looking to clone the disk, not make an image as a backup; I use Ghost for that, and can fall back to it if I have to. It looks to me like CloneZilla would do the job. Any recommendations? Thanks, and if this duplicates other questions, I apologize.
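
    Since the drive already has a reallocated sector, a read-error-tolerant imager is the usual suggestion; a minimal GNU ddrescue sketch, run from a Linux live CD (device names are examples and must be verified against your own machine before running):

        # pass 1: copy everything readable, skipping bad areas quickly
        ddrescue -f -n /dev/sda /dev/sdb rescue.log
        # pass 2: go back and retry the skipped areas a few times
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.log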


  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500GB drive in my MacBook to another 500GB drive. But this is turning out to be an unexpected hassle, because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning.

    Clonezilla looked like it would do what I want, but apparently I can't figure out how to use it. After doing what I thought was a sector-to-sector copy, I found that only the NTFS partition had been migrated. The others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either.) Note: it takes over 2 hours using SATA to read/write all sectors on these drives, so I'm not up for using trial and error to narrow in on the right combination of Clonezilla options.

    I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. Trouble is, I don't know what command (or parameters) to use in order to do a sector-level copy from one drive to another. As far as I know the drives have the same number of sectors, so this should be trivial. Sigh.
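
    For reference, the "ancient BSD command" is dd; a minimal sketch for the Ubuntu-boot route (device names are examples, so double-check them with fdisk -l first, since swapping if= and of= destroys the source):

        # raw sector-for-sector copy; keep going past read errors and
        # pad unreadable blocks so sector offsets stay aligned
        dd if=/dev/sdc of=/dev/sdd bs=1M conv=noerror,sync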


  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect my current disk is MBR, and I need GPT on my new drive since it is 3TB.

    Can I simply copy the MBR partition onto the new disk and then convert it to GPT after the fact (can you even convert the type of a partition)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT partition? If it matters, my motherboard is 1 year old as of May, 2012.

    Edit: My motherboard is now 1 day old. My old one does not have UEFI compatibility, so I decided to upgrade to Intel today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to the Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
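
    On the conversion question: gdisk can convert an MBR disk to GPT in place (a sketch; back up first, the device name is an example, and note that booting Windows from GPT then requires UEFI boot files, typically rebuilt with bcdboot/bootrec from install media):

        # gdisk reads the MBR table and converts it to GPT in memory
        sudo gdisk /dev/sdb
        # inside gdisk: 'p' prints the converted table for review,
        # 'w' writes the new GPT to disk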


  • How can I take browser screenshots at a higher resolution than my browser supports?

    - by user53575
    I need to take a screenshot of a website as it would appear on a very high resolution monitor... say 16000x12800 pixels. My laptop's screen has a native resolution of 1280x800. Basically, I need to simulate having a monitor resolution much higher than my monitor and video card actually support. I want the screenshot of the site to look pretty much how it does when you hit Ctrl+Minus (zoom out) in Firefox repeatedly, but without any loss of pixels due to scaling.

    How can I do this? Is there some way to use virtual machine software to simulate a super-high-res display? If not, is there some way to open a browser window bigger than the screen and then capture its contents as a PNG somehow? Anything else that might work?

    Here was an answer: http://superuser.com/questions/120266/how-can-i-take-browser-screenshots-at-a-higher-resolution-than-my-browser-support But it doesn't work: Firefox remains at the resolution of the physical screen. The window blinks and shrinks back to normal resolution. Please help!
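
    On Linux, one hedged route is a virtual X server: Xvfb will create a screen far larger than the hardware supports, memory permitting (a 16000x12800x24 framebuffer is roughly 800 MB). The sleep-based wait and ImageMagick's import are crude assumptions in this sketch:

        # render the page on a 16000x12800 virtual screen and grab it
        xvfb-run --server-args="-screen 0 16000x12800x24" sh -c '
            firefox "http://example.com" &
            sleep 30                      # crude wait for the page to load
            import -window root shot.png  # ImageMagick full-screen capture
        '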


  • Pull network or power? (for containing a rooted server)

    - by Aleksandr Levchuk
    When a server gets rooted (e.g. a situation like this), one of the first things you may decide to do is containment. Some security specialists advise not to enter remediation immediately and to keep the server online until forensics are completed. That advice is usually for APTs; it's different if you have occasional script kiddie breaches. However, you may decide to remediate (fix things) early, and one of the steps in remediation is containment of the server. Quoting from Robert Moir's answer: "disconnect the victim from its muggers".

    A server can be contained by pulling the network cable or the power cable. Which method is better, taking into consideration the need for:

    Protecting victims from further damage
    Executing successful forensics
    (Possibly) Protecting valuable data on the server

    Edit: 5 assumptions. Assuming:

    You detected early: 24 hours.
    You want to recover early: 3 days of 1 systems admin on the job (forensics and recovery).
    The server is not a virtual machine or a container able to take a snapshot capturing the contents of the server's memory.
    You decide not to attempt prosecution.
    You suspect that the attacker may be using some form of software (possibly sophisticated) and this software is still running on the server.


  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it.

    The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition.

    My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size (the first 18GB compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem), but it was very slow on my admittedly somewhat derelict Pentium M processor: that first 18GB took about 30 minutes.

    So I've resorted to dumping the disk state and partition data separately. I've dumped the partition state with

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
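
    For completeness, a sketch of the matching restore, assuming the two dump files above (untested; note that sfdisk restores the partition table but not the rest of sector 0, so if the vendor gadgetry depends on custom boot code it may be worth also saving "dd if=/dev/sdb of=mbr.bin bs=512 count=1"):

        # restore the partition table, then the NTFS contents
        sfdisk /dev/sdb < sfdisk.-d.out
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -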


  • Never getting a JSON response when running server-side PHP proxy script but I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, by the way. So I'm using (or trying to use) Simple PHP Proxy. I enter a URL at his example page (SPP Example Page) and it works fine: I see the JSON response and all the headers. However, when I copy the exact URL, only changing it to use localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or to a setting in Apache and/or the PHP ini?

    So for example:

        benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

    That will get me a ton of info back. Now changing to localhost:

        http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

        {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some curl, and of course got empty objects back, other than false for my content and the url I set in my JSON. Any help is deeply appreciated, as are any ideas.
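
    One hypothesis worth testing, reading the status block above: http_code is 301 with contents null, meaning the local curl stopped at GitHub's redirect instead of following it, and PHP silently disables CURLOPT_FOLLOWLOCATION when safe_mode or open_basedir is in effect. A quick hedged check:

        # can this machine follow the redirect chain at all?
        curl -v -L "http://github.com/" > /dev/null
        # settings that silently disable CURLOPT_FOLLOWLOCATION in PHP
        php -r 'var_dump(ini_get("safe_mode"), ini_get("open_basedir"));'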


  • One workstation gets slow access to the server, but others are fast

    - by Mike Hanson
    I've just set up a machine with Windows Server 2008. It hosts various services, like IIS, POP3, SMTP, Music for Squeezeboxes, VNC. All was working well for the first week or so.

    One day I needed to create a mapped drive on the server, so it could access files on my workstation. Windows indicated that Network Discovery was needed, so I turned it on with the "Home / Office" option (rather than "Public"). This may be coincidence, but since that time I've been having trouble accessing various services from my main workstation (running Windows 7/64):

    POP3 continued working correctly, but SMTP was delayed or failed entirely. (Telnet took 20 seconds to connect, but Outlook would never send messages.)
    VNC failed entirely. I reinstalled it on the server, and now it works but feels sluggish.
    The music web server was extremely delayed and usually failed. I tried reinstalling, and now it takes about 30 seconds to show the page name on the browser tab, and another 30 seconds to display any page contents.

    Other machines on the local network seem fine, as do machines connected via the Internet. I don't believe I changed anything on my own machine that would cause this. I considered the possibility that my anti-virus was involved, so I uninstalled AVG (commercial version), but that didn't help. I installed Norton 360 after that; it didn't complain of viruses on my machine, and the delays remained.

    Because only my machine is affected, I'm tempted to blame it, except that reinstalling software on the server improved the situation, so there is almost certainly something going on with the server too. The firewall has all the necessary ports open, and it works fine for the other workstations (including external machines connected via the Internet), which indicates that it should be OK. Any ideas?


  • Apache file appearing in directory list, but giving 404 when attempting access?

    - by aayush
    Please forgive my lack of knowledge; this is more of a learning project than anything else. I have a Linux box, and it works pretty much fine. When I go to example.com/css it says there's one file in there, bootstrap.min.css. When I go to example.com/css/bootstrap.min.css, it gives me a 404 error. I have only one htaccess file, to remove the index.php from the URL. I renamed it to htaccess (instead of .htaccess, so Apache won't find it) and restarted the server, yet no help. I also tried to chmod the CSS file 755, but no help. Contents of the htaccess file:

        RewriteEngine On
        RewriteBase /
        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L,QSA]

    Please help, I am very confused about this. I tried to Google excessively but came up with nothing.

    Edit: I found the solution to be renaming the htaccess file to something entirely different and restarting. Is there any way I can still drop the index.php from the URL?


  • System randomly freezes yet mouse still moves

    - by user784446
    This problem has lasted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The processes at first were listed fine until hovered upon. Eventually, despite the mouse still being able to move, after a few persistent clicks the mouse finally stopped moving. The screen went blank shortly thereafter.

    The second time it occurred, items on the screen stopped responding; hovering over the taskbar or such wouldn't elicit a response. Sound would still play, however. Eventually the mouse became unresponsive and the system restarted itself.

    I suspect that it may be a problem with my SSD. After looking through some Google search results, I downloaded HDTunePro to determine if there's a problem with the drive. Results returned a problem of reallocated sector count, and an error scan also revealed 48 bad sectors. Also, an attempt to back up the contents of the most important areas of the drive returned a few explorer "Error: cannot read source from disk" errors.

    Should I ditch the drive and use another, or is there anything that can be done to repair it?

        SSD: OCZ Petrol 64gb
        CPU: AMD Athlon II X4 640
        RAM: Generic 3GB DDR2
        Motherboard: Gigabyte MA74GM-S2H
        OS: Windows 7 Ultimate x64

    Thanks!
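
    Given the reallocated-sector report and the read errors, a hedged triage sketch using smartmontools from a Linux live CD (the device name is an example; Windows builds of smartctl exist as well):

        smartctl -a /dev/sda           # attribute table and error log
        smartctl -t short /dev/sda     # kick off a short self-test
        smartctl -l selftest /dev/sda  # read the result a few minutes later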


  • How to circle something in a picture?

    - by T...
    What is the easiest way to circle something in a picture, like this example? This is accomplished in GIMP. Here are the steps necessary to draw an empty ellipse without clearing the contents of the image below it:

        1 - Layer > New Layer
        2 - Make the layer the same size as the image and the layer fill type transparency. This should already be selected by default.
        3 - In the toolbox, select the ellipse select tool and make an ellipse.
        4 - Use the bucket fill tool to paint the ellipse with your desired color.
        5 - Right click on it and go to Select > Shrink...
        6 - Type in how many pixels you want the border to be and click OK.
        7 - Go to the menu and click Edit > Clear.

    I feel this is very indirect, in the sense that you first fill the region enclosed by the ellipse, and then shrink the region to its boundary. I wonder if there is a quicker and more direct way to circle something, such as by directly drawing the boundary? My OS is Ubuntu. What I am asking for may be done outside of GIMP, but it must be some software under Ubuntu. Thanks!
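
    One more direct route under Ubuntu is ImageMagick's convert, which can stroke an ellipse outline in a single command without touching the pixels underneath (a sketch; the coordinates are invented, and the package comes from apt-get install imagemagick):

        # red 5px ellipse outline centered at (200,150) with radii 100x60
        convert in.png -fill none -stroke red -strokewidth 5 \
            -draw "ellipse 200,150 100,60 0,360" out.png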


  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS7 on a computer running Windows 7 Home 64-bit. Here is what I did:

    1. Installed IIS 7. Accessed the server and got the IIS welcome page.
    2. Created a directory named d:\MySite and copied the MVC application to it. (The MVC app is just the standard app that is created when you create a new MVC3 project in Visual Studio. It displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site, where it works fine too.)
    3. Started the IIS management console, stopped the default site, and added a new site named "MySite" with a physical directory of "d:\MySite".
    4. Changed the application pool named MySite to use .NET Framework 4.0, integrated pipeline.

    When I access the site in the browser I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?
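
    A frequent cause of a bare directory listing in this setup is that ASP.NET 4.0 isn't registered with IIS, so no managed handler claims the extensionless MVC routes. A hedged sketch of the usual repair (the framework path is the standard one; adjust for your system, and run from an elevated prompt):

        rem register/repair ASP.NET 4.0 in IIS, then restart IIS
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru
        iisreset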


  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, with everything executed from there. We chose to run everything off the initrd for two reasons:

    NFS was not an option to serve the filesystem's extra contents.
    Fast file access from RAM. No persistent storage is needed; data and config are pulled dynamically through a SOAP service.

    Now our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives, for financial reasons (the cheapest HDD at €30, times 15, is a lot of money saved ;-) ).

    So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
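
    On the compression question itself, a sketch of repacking the existing gzip initrd as XZ/LZMA2 (assumes the nodes' kernel was built with XZ initramfs support, which wants the crc32 integrity check; filenames are examples):

        # unpack the gzip wrapper, recompress the cpio archive with xz
        gzip -dc initrd.img | xz -9 --check=crc32 > initrd.img.xz
        # compare sizes to estimate the transfer-time saving
        ls -lh initrd.img initrd.img.xz

    PXELinux itself only moves bytes; the kernel is what unpacks the initrd, so the supported formats are a kernel-configuration question rather than a PXELinux one.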


  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup, and I'd like to be sure those moves won't create problems.

    My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

    1. basic OS installation
    2. basic network configuration, enough to reach the internet and resolve DNS
    3. install puppet and set up certname
    4. start puppet and let it manage the whole configuration
    5. test, fix problems in config (via puppet), re-test, and so on...
    6. manually stop puppet
    7. set up the new network configuration for the datacenter network
    8. move the machine to the DC
    9. turn it on; puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
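
    For reference, a minimal sketch of step 3 (certname is a standard puppet.conf setting; the section placement and the FQDN here are examples):

        # /etc/puppet/puppet.conf, written before the first agent run
        [agent]
        certname = node01.dc.example.com

    Since the agent's certificate is then issued for certname rather than for whatever reverse DNS happens to report, the office-to-datacenter move shouldn't invalidate it.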


  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP. It contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:

    The file actually exists on the file system. Using Terminal you can navigate to it, view its contents (which are fully intact), etc.
    PHP file_exists() will report that the file exists... which is correct, since it does :)
    Then I include() the file. But when actually parsing the file, PHP will just consider it an empty file. No fatal error, just no PHP code actually executed. Again, it's as if the file were completely empty (which, I assure you, it is not)...
    It is not a permissions issue. Permissions are set as needed.
    Workaround: open the file in Terminal via 'nano' or some other text editor and just save it to the disk again. After that (despite no changes to the content) PHP will run it just fine...

    As a clarification, I'd like to add that this happens rarely, but frequently enough to be a problem. And even when it does, there are hundreds of other similar files on the same system that work without a problem... If this were an issue affecting only my own scripts, I would consider that there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). The issue has been present on various servers, Debian and Ubuntu, running Zend Community Server.

    Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE)... could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
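
    If the cached-mid-write hypothesis is right, the usual defense is to write cache files atomically so a reader can never open a half-written file. A sketch (the helper name is hypothetical; rename() is atomic on POSIX when source and target are on the same filesystem):

        <?php
        // write the generated code to a temp file, then atomically swap it in
        function write_cache_atomic($path, $code) {
            $tmp = $path . '.' . uniqid('tmp', true);
            file_put_contents($tmp, $code);
            rename($tmp, $path); // atomic within a single filesystem
        }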


  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating that I don't have the necessary permissions. Likewise, entering \\webserver manually, or using \\webserver\intranet, states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server.

    Here is our smb.conf:

        [global]
            workgroup = Foobar
            server string = Webserver
            wins support = yes            ; commenting out these
            wins server = 192.168.101.3   ; two lines has no effect
            dns proxy = no
            guest account = nobody
            [... snipped some unrelated bits, like logging ...]
            security = share
            [... snipped some password-related things ...]
            domain master = no

        [intranet]
            comment = Intranet
            path = /srv/webserver/contents
            browseable = yes
            guest ok = yes
            guest only = yes
            read only = yes
            create mask = 0775
            directory mask = 0775
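
    One hedged thing to try: Samba may be registering a NetBIOS name other than the one the Windows boxes look up, and pinning it explicitly costs nothing (netbios name is a standard smb.conf parameter; the value is simply this host's name):

        [global]
            ; advertise an explicit NetBIOS name
            netbios name = WEBSERVER

    Running nmblookup WEBSERVER on the Samba box, or net view \\WEBSERVER from a Windows client, would then confirm whether the name resolves over NetBIOS rather than DNS.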


  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working, and the URL I use for my site is like this: devserver:9090

    Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory (that has an index.php), devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php, I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
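
    One hedged tweak for the directory 404s: the trailing-slash rule above rewrites every "name/" request, including real directories, to "name.php". Guarding it so it only fires when that .php file actually exists should let DirectoryIndex serve real directories instead (untested sketch):

        # map "name/" to "name.php" only when that file actually exists
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule (.*)/$ /$1.php [L]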


  • How to auto synchronize files with a network drive on Windows XP?

    - by stephenmm
    Windows XP: I would like to auto-synchronize files between a local drive and a network drive. I am aware of Windows Briefcase, but it is very slow and I have to tell it to synchronize. I really like the way Dropbox does its synchronization, as it is almost instantaneous. It is very impressive. I would just use Dropbox, but I cannot install it on the remote machine.

    Is there some tool or script I can create that will watch a particular folder for any changes and then sync those changes to the networked drive automatically and nearly instantaneously?

    CLARIFICATION: I would like this tool/script to be a daemon that starts when Windows starts and continually monitors a folder for any changes to its contents. Once it observes changes in the source or the destination, it synchronizes the files that changed (very similar to the way Dropbox works). I have a good idea of how I would do this in a Perl script, and if a tool that does this does not exist, I will write it myself in Perl. If someone has already done this, can they share the script?
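
    If writing the Perl daemon feels like overkill, one hedged alternative on XP is robocopy's monitor mode (robocopy comes from the Windows Server 2003 Resource Kit on XP; it polls rather than reacting instantly, so it is near-Dropbox rather than instantaneous, and /MIR deletes destination files that vanished from the source):

        rem mirror C:\work to the mapped drive, re-syncing after 1 change
        rem or 1 minute, whichever comes first
        robocopy C:\work Z:\work /MIR /MON:1 /MOT:1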


  • Custom attributes in Active Directory - determining usage/function and possible removal options?

    - by HopelessN00b
    I've bumped into a highly-customized Active Directory environment (2003 FL) that's got me wondering if there's any particularly easy way to figure out what a custom attribute's function is, and what, if anything, is "using" that particular attribute. And then what some good options for potentially removing custom attributes from the schema might be, aside from a restore or starting from scratch, if such an option even exists.

    For example, I think I can be fairly certain what the "isDumbass" attribute with a value of TRUE means, but not so much with "IRPextCONST", containing a value of 393684. Likewise, I'd think it should be pretty safe to delete the "isDumbass" attribute, but would like to a) be sure and b) find out what's querying or updating that value anyway, because I suspect that anything using that attribute might be next on the list of things to remove. Ideally, without having to run a search on the contents of every custom script and bit of source code I can get my hands on, of course.

    And finally, aside from rebuilding from scratch, or doing an authoritative AD restore from backups that don't exist... is there a way to delete a given custom attribute? (Not blank the value, but actually delete the attribute from the schema; some folks would rather not have attributes like "FaggotMeter" and "DouchebagCounter" hanging around.) I've been able to find and successfully test a method on Windows 2000, but it seems like Microsoft disabled this option in SP4, and the domain in question is at the 2003 functional level.
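
    For the "what is using it" half, a hedged first step is enumerating which objects carry the attribute at all (ldifde is built into Windows Server; the attribute name and base DN are examples from the question):

        rem dump every object with isDumbass set, output only that attribute
        ldifde -f isdumbass.ldf -d "DC=example,DC=com" -r "(isDumbass=*)" -l "isDumbass"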


  • Utilu IE Collection IE6 crashes under Windows 7

    - by Aron
    I've installed Utilu IE Collection under Windows 7 64-bit, but am unable to successfully run IE6 (specifically, "Internet Explorer 6.0 (6.00.2900.2180)"). I haven't tried other versions of IE from the package, so their behavior is unknown. It starts up fine, but the address bar isn't shown. (I attempted to post an image, but I'm a newbie on SU and it wouldn't let me; it can be found here.) View > Toolbars shows 'Address Bar' already selected. If I try to (de-)select Address Bar in that list, it crashes:

        Problem signature:
        Problem Event Name:       APPCRASH
        Application Name:         iexplore.exe
        Application Version:      6.0.2900.2180
        Application Timestamp:    41107b81
        Fault Module Name:        StackHash_e162
        Fault Module Version:     0.0.0.0
        Fault Module Timestamp:   00000000
        Exception Code:           c0000005
        Exception Offset:         00000000
        OS Version:               6.1.7600.2.0.0.256.48
        Locale ID:                1033
        Additional Information 1: e162
        Additional Information 2: e1625455580a00d3fda64a8cdf366fbb
        Additional Information 3: 58c7
        Additional Information 4: 58c75753c6745fad91929ab6430a83fa

    That's not all that's wrong (the 'Search' pane has no contents as well, and probably more), but it's the most glaring problem. I've seen some allusions to a registry change needed to fix this, but haven't been able to find any details. Anybody? I'm running the latest version of the collection, 1.7.0.6.


  • Copy a hard drive from a failed desktop machine using a second working one

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80gb IDE drive. I also have PC-B, running Windows Vista; this one is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B.

    So the question is: how can I "backup" the data, preferably without making any changes to the drive contents? The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
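
    Since PC-B runs Vista, a sketch of the usual way past the NTFS permissions (the drive letter and user name are placeholders; run from an elevated prompt, and note this does rewrite security descriptors on the old drive, so skip it if the disk must stay strictly untouched and copy from a Linux live CD instead):

        takeown /f "E:\Documents and Settings\username" /r /d y
        icacls "E:\Documents and Settings\username" /grant Administrators:F /t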


  • How can I create a macro that acts on a relative reference rather than an absolute reference to cell A1?

    - by Bruce
    I have a master rent statement in an Excel 2007 (macro-enabled) spreadsheet that shows all tenants in rows, with columns formed by the months. Each tenant then has a separate rent statement sheet like the one below that pulls its data through from the master rent statement. All I do then is copy the last 4 columns to the right and add them to the right, renaming the month labelled 'rent due' to the current month and then hiding the previous last 4 columns to the left, so that the statement always shows the previous month's activity and the amount due for the current month.

    I used a macro to speed up the creation of these statements, but then found that in some cases the result was wrong and needed major correction, because the macro used absolute references, i.e. its starting position was relative to cell A1, whereas some of my rent worksheets commence from a different column and in some cases from a different row. I have tried recording the macro with 'Use relative references', but when trying to use the macro it only gets part way through its operation before it stops and this message appears:

        Run time error '1004'
        Application defined or object defined error

    with the option to End or Debug or go to Help, and then I'm stuck, as I don't know how to debug or work in VBA, or understand what has gone wrong.

    I want to record a single macro that always remains relative to the last 'Total Due' column heading (in the sample it's cell FF3, but on another worksheet it could be cell GA26) and thus enables me, regardless of where on the worksheet the rent statement is placed, to add a further four columns with updated dates and a repositioned 'Total Due' summary (in the sample, in cells FE23 and FF23). The contents of cells FE23 and FE22 are always the same number of rows from the 'Sample Rent Statement, Service Charge and Sub Total' rows.

    I've searched the web and the help files of Excel 2007, but have been totally stumped by this, so currently I have to re-record a quantity of macros each month to cover all the permutations of the worksheets in my Excel rent workbook, which is starting to become pointless in terms of saving time. Does someone know a solution to this problem, please?!
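
    A hedged VBA sketch of the anchor-based idea: find the right-most 'Total Due' heading and address everything relative to it instead of to A1. The four-column copy and the offsets to the summary cells are assumptions to adapt to the real sheet:

        Sub ExtendRentStatement()
            Dim anchor As Range
            ' locate the last "Total Due" heading on the active sheet
            Set anchor = ActiveSheet.Cells.Find(What:="Total Due", _
                LookAt:=xlWhole, SearchOrder:=xlByColumns, _
                SearchDirection:=xlPrevious)
            If anchor Is Nothing Then Exit Sub
            ' copy the four columns ending at the anchor column and
            ' paste them immediately to its right
            anchor.EntireColumn.Offset(0, -3).Resize(, 4).Copy _
                Destination:=anchor.EntireColumn.Offset(0, 1)
            ' summary cells are then reachable by fixed offsets from the
            ' anchor (the row/column offsets here are assumed, not recorded)
            anchor.Offset(20, 4).Value = "Total Due"
        End Sub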

