Search Results

  • Remote desktop connection to network printer

    - by andand
    I'm trying to print a document from a remote WinXP machine to a network printer I use on a local Win7 machine using Remote Desktop. The network printer does not appear in the list of those available on the WinXP box. In more detail, the local machine runs Windows 7 (no admin rights) and connects to a network printer managed by a print server (i.e. not using a local TCP/IP Port). I have access to a Windows XP host on a separate network which I access using Remote Desktop. I would like to have print requests from the remote XP box forwarded to the network printer I use on the Windows 7 machine. The XP machine cannot access the print server I use on the Win7 machine nor can it create a TCP/IP port to connect directly to the printer (network configuration issues). After consulting KB312135, I confirmed the "Printers" option was selected in the Remote Desktop client's Local Resources tab, yet the network printer does not appear in the list of available printers on the XP box. Is this a lost cause or is there something else I haven't managed to locate yet?
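
    A hedged note rather than a confirmed fix: Remote Desktop printer redirection also requires that a driver matching the Win7 printer's driver name exist on the XP host; if it is missing, the redirected queue is silently skipped. Redirection can also be forced on in the .rdp file used to launch the session (a standard .rdp setting, shown here for completeness):

        redirectprinters:i:1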

    Read the article

  • Export NFS path containing "-" (dash)

    - by qdot
    I'm in a bit of a pinch with NFS exports file. Specifically, I can't find a way to export a directory containing "-" in the path name. Manual (exports(5)) states: Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only. It then states: If an export name contains spaces it should be quoted using double quotes. You can also specify spaces or other unusual character in the export name using a backslash followed by the character code as three octal digits. Unfortunately, that is not the case. Specifically, if the pathname contains "-", either verbatim, or with \055 or is enclosed in double quotes, it still refers to the name without "-" Any ideas? I have a large number of directories, all of the form /vol/buildsystem-s3c2440 /vol/buildsystem-tao3530 and I'd prefer to have them all available as nfs exports. Short of replacing the "-" with "_" everywhere in the scripts, can it be done with "-" ?
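
    For reference, a minimal /etc/exports sketch with dashes in the paths (the client network and options are placeholders); the dash is only special when a whitespace-separated token begins with it, so keeping each export on its own line and quoting the full path should be enough:

        # /etc/exports -- one export per line; quotes protect the full path
        "/vol/buildsystem-s3c2440"  192.168.0.0/24(ro,sync,no_subtree_check)
        "/vol/buildsystem-tao3530"  192.168.0.0/24(ro,sync,no_subtree_check)

        # re-export after editing
        exportfs -ra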

    Read the article

  • How to eliminate the downtime when a dynamic IP address changes?

    - by xenon
    We currently have a number of client computers linked up to a database server (MS SQL 2008) for replication. The database server recognises the computers based on their Windows hostnames. We are using dynamic IP addresses at this time because we tend to change the computers' hardware quite frequently, and so the MAC addresses may be different. Unless static IP addressing has a good way for us to manage frequently changing MAC addresses, we are keeping dynamic IP. The problem with dynamic IP addresses, however, is that when a client fetches a new IP from the DHCP server, i.e. when its IP address changes, there is downtime while the hostname record picks up the new IP address, while the client's DNS cache of the hostname reloads, and while the server's DNS cache reloads to see the new IP for the hostname. All of these have different timings and the delay can be really bad at times. Restarting the computer doesn't always work either. The clients are on Windows 7. How can I eliminate the downtime when the IP changes in the case of dynamic IP addresses?
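
    Assuming the hostnames are registered through dynamic DNS (the question implies this but does not state it), the stale records can usually be refreshed by hand instead of waiting for the timers; a sketch of the commands involved:

        :: on the client, after it picks up a new lease
        ipconfig /registerdns
        ipconfig /flushdns

        :: on the database server, drop its cached lookup of the client hostname
        ipconfig /flushdns

    Lowering the TTL on the dynamically registered records, or moving to DHCP reservations keyed to the new MAC addresses, reduces the window further.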

    Read the article

  • Can't connect to MS SQL Server database using SSMS

    - by Charles
    I have a database online with GoDaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and can apparently log in using SSMS with success (after giving my hostname and access data). However: Now from in SSMS, when attempting to expand the "Databases" folder tree, I get the following error: Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc) An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) The server principal "cmitchell" is not able to access the database "3pointdb" under the current security context. (Microsoft SQL Server, Error: 916) The irony is that 3pointdb isn't my database. It is just another in a long list of databases that show up when I access my GoDaddy backend. From SSMS, I selected the default database to be the name of my database, which it did locate on the list when I browsed. Still the same error message. It is trying to connect to a database that isn't mine! :( GoDaddy support, after a bit of testing, said the problem isn't on their end. It's on mine. – Charles
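
    A quick way to confirm the login can reach its own database outside SSMS (sqlcmd ships with the SSMS tools; the server name and database are placeholders for the GoDaddy values):

        sqlcmd -S yourhost.db.godaddy.example -U cmitchell -d your_database -Q "SELECT DB_NAME();"

    If that works, the error in Object Explorer is usually just SSMS trying to read properties of the other customers' databases it can see but cannot enter, rather than a problem with your own database.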

    Read the article

  • ash scripting: space-containing variable refuses to be grepped

    - by Luci Sandor
    I am trying to run the script listed at http://talk.maemo.org/showthread.php?t=70866&page=2 on its intended hardware, a Nokia Linux phone running BusyBox ash. The script receives the name of WiFi network as a parameter, and tries to connect the phone to it. I suspect the script works, but my SSID, BU (802.1x), has space and parentheses in it. So when I type at the command prompt autoconnect.sh BU\ \(802.1x\) I get various errors. First, LIST=`iwconfig wlan0 | awk -F":" '/ESSID/{print $2}'` if [ $LIST = "\"$1\"" ]; then ...fails, even I am connected to the network. The error is not avoided by using single or double quotes instead of escaping characters at the command prompt. Second, if [ -z `iwlist wlan0 scan | grep -m 1 -o \"$1\"` ]; then echo SSID \"$1\" not found; shows that grep does not find the string, although the same grep, typed directly into the command prompt, does find 'BU (802.1x)'. How do I quote $1 in the two circumstances above so that it will work with my network SSID, containing spaces and parentheses? Thank you.
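
    A hedged sketch of the quoting fix (written against POSIX sh; BusyBox ash should behave the same): the unquoted $LIST and $1 are split at the space inside the SSID, so every expansion needs double quotes, with the escaped quotes kept inside them:

        # quote the expansions so the embedded space survives word splitting
        LIST=$(iwconfig wlan0 | awk -F: '/ESSID/{print $2}')
        if [ "$LIST" = "\"$1\"" ]; then
            echo "already connected to $1"
        fi

        # same idea for the scan test: quote the command substitution and the pattern
        if [ -z "$(iwlist wlan0 scan | grep -m 1 -o "\"$1\"")" ]; then
            echo "SSID \"$1\" not found"
        fi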

    Read the article

  • Moving symlinks into a folder based on id3 tags.

    - by Reti
    I'm trying to get my music folder into something sensible. Right now, I have all my music stored in /home/foo so I have all of the albums soft linked to ~/music. I want the structure to be ~/music/<artist>/<album> I've got all of the symlinks into ~/music right now so I just need to get the symlinks into the proper structure. I'm trying to do this by delving into the symlinked album, getting the artist name with id3info. I can do this, but I can't seem to get it to work correctly. for i in $( find -L $i -name "*.mp3" -printf "%h\n") do echo "$i" #testing purposes #find its artist #the stuff after read file just cuts up id3info to get just the artist name #$artist = find -L $i -name "*.mp3" | read file; id3info $file | grep TPE | sed "s|.*: \(.*\)|\1|"|head -n1 #move it to correct artist folder #mv "$i" "$artist" done Now, it does find the correct folder, but every time there is a space in the dir name it makes it a newline. Here's a sample of what I'm trying to do $ ls DJ Exortius/ The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@ I'm trying to mv The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@ into the real directory DJ Exortius. DJ Exortius already exists, so it's just a matter of moving it into the correct directory that's based on the id3 tag of the mp3 inside. Thanks! PS: I've tried easytag, but when I restructure the album, it moves it from /home/foo which is not what I want.
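
    A hedged rework of the loop (the id3info parsing is carried over from the commented-out line, so treat the grep/sed part as an assumption about your tag layout): feeding find into a while read loop with NUL separators keeps directory names with spaces in one piece, which the unquoted for loop cannot do:

        # one directory per NUL-terminated record, so spaces never split anything
        find -L ~/music -name '*.mp3' -printf '%h\0' | sort -zu |
        while IFS= read -r -d '' dir; do
            first=$(find -L "$dir" -maxdepth 1 -name '*.mp3' | head -n 1)
            artist=$(id3info "$first" | grep TPE | head -n 1 | sed 's|.*: ||')
            mkdir -p "$HOME/music/$artist"
            echo mv "$dir" "$HOME/music/$artist/"   # drop the echo once the output looks right
        done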

    Read the article

  • Can't Install msysgit/tortoisegit

    - by Jay
    I ran msysGit-netinstall-1.7.0.2-preview20100407-2.exe.   (http://code.google.com/p/msysgit/downloads/list) Then I ran TortoiseGit-1.4.4.0-64bit.msi.   (http://code.google.com/p/tortoisegit/downloads/list) msysgit was installed in C:\ TortoiseGit appears to have been installed in C:\Program Files\TortoiseGit I have: "Git Clone..." "Git Create repository here" "TortoiseGit" in the Explorer context menu. When I try to clone, I get "git have not installed" [sic]. I have tried setting the MSysGit path, in the TortoiseGit settings, to everything imaginable. Nothing works. Neither C:\Program Files nor C:\Program Files (x86) has a Git folder. The git command gives "command not found" from both cmd.exe and the bash that msysgit installed. I don't see msysgit in Control Panel - Programs - Programs and Features, but I do see TortoiseGit in there. I would like a procedure for verifying that msysgit is properly installed. A procedure for uninstalling msysgit would be an added bonus. I would like a procedure for getting TortoiseGit to work. I am running Windows 7 on a MacBook Pro.
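
    A couple of sanity checks, hedged because the msysGit net installer's layout varies: TortoiseGit only needs its "MSysGit" path pointed at whichever directory actually contains git.exe, so first find out whether one exists at all:

        :: is any git.exe on the PATH?
        where git

        :: if not, search the install root the net installer used (C:\msysgit is a guess)
        dir /s /b C:\msysgit\git.exe

    If no git.exe turns up at all, the net install likely never finished building Git, which would also explain why it is missing from Programs and Features.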

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link. We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following: Would you go for a tape library, disk backup, or are there other options worth considering? Would you perform batch backups (i.e. nightly) or would you use continuous backup (i.e. while users are working)? Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there any other alternative way of doing things? My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.

    Read the article

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OSX 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640GB disk and replaced it with a second 1 TB disk, then synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640 GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

        -> diskutil list
        /dev/disk0
           #:  TYPE                    NAME         SIZE       IDENTIFIER
           0:  GUID_partition_scheme                *1.0 TB    disk0
           1:  EFI                                  209.7 MB   disk0s1
           2:  Apple_RAID                           999.9 GB   disk0s2
           3:  Apple_Boot              Boot OSX     134.2 MB   disk0s3
        /dev/disk1
           #:  TYPE                    NAME         SIZE       IDENTIFIER
           0:  GUID_partition_scheme                *1.0 TB    disk1
           1:  EFI                                  209.7 MB   disk1s1
           2:  Apple_RAID                           999.9 GB   disk1s2
           3:  Apple_Boot              Boot OSX     134.2 MB   disk1s3
        /dev/disk2
           #:  TYPE                    NAME         SIZE       IDENTIFIER
           0:  GUID_partition_scheme                *640.1 GB  disk2
           1:  EFI                                  209.7 MB   disk2s1
           2:  Apple_HFS               Mac Disk 2   536.7 GB   disk2s2
           3:  Microsoft Basic Data    BOOTCAMP     103.1 GB   disk2s3
        /dev/disk3
           #:  TYPE                    NAME         SIZE       IDENTIFIER
           0:  Apple_HFS               Mirror1      *639.8 GB  disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:         Macintosh HD
        Unique ID:    1953F864-B474-4EB6-8E69-41834EBD0247
        Type:         Mirror
        Status:       Online
        Size:         639.8 GB (639791038464 Bytes)
        Rebuild:      manual
        Device Node:  disk3
        -------------------------------------------------------------------------------
        #  Device Node  UUID                                  Status
        -------------------------------------------------------------------------------
        0  disk1s2      25109BAE-5697-40EA-B612-0217851444F7  Online
        1  disk0s2      11B83AB0-8148-4DB6-8761-DEF08C855F8D  Online
        ===============================================================================

    Thanks in advance.

    Read the article

  • copying folder and file permissions from one user to another after switching domains [closed]

    - by emptyspaces
    Please excuse the title, this was the best way I could think to describe this scenario without an entire paragraph. I am using C#. Currently I have a file server running Windows Server 2003 set up on a domain, which we will call oldDomain, and I have about 500 user accounts with various permissions on this server. Because of restrictions out of my control we are abandoning this domain and using another one that is more dominant within the organization, which we will call newDomain. All of the users that have accounts on oldDomain also have accounts on newDomain, but the usernames are completely different and there is no link between the two. What I am hoping to do is generate a list of all user accounts and their corresponding SIDs from AD on oldDomain; I already have this part done using dsquery and dsget. Then I will have someone go through and match all of the accounts from oldDomain to the correct username on newDomain. Ultimately this leaves me with a list of SIDs from oldDomain and the appropriate username from newDomain. Now I am hoping to copy the file and folder permissions from the old user from oldDomain to the new user on newDomain once I join the server to newDomain. Can anyone tell me the best way to copy permissions from the old SID to the matching user on newDomain? There are a bunch of articles out there about copying permissions from user A to user B, but I wanted to check and see what the recommended practice is here since there are a ton of directories.
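
    A minimal C# sketch of the re-permissioning step, assuming you end up with a dictionary that maps each oldDomain SID string to a NEWDOMAIN\username (the helper name, and the choice to add rather than replace rules, are illustrative assumptions, not a recommended-practice statement):

        using System.Collections.Generic;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        static class AclMigration
        {
            // path: a directory on the file server; sidToNewUser: oldDomain SID string -> "NEWDOMAIN\user"
            public static void CopyAclEntries(string path, IDictionary<string, string> sidToNewUser)
            {
                DirectoryInfo dir = new DirectoryInfo(path);
                DirectorySecurity acl = dir.GetAccessControl(AccessControlSections.Access);

                // walk the explicit (non-inherited) rules, keyed by raw SID
                foreach (FileSystemAccessRule rule in
                         acl.GetAccessRules(true, false, typeof(SecurityIdentifier)))
                {
                    string newUser;
                    if (!sidToNewUser.TryGetValue(rule.IdentityReference.Value, out newUser))
                        continue; // not one of the mapped accounts

                    // grant the matching newDomain account the same rights the old SID held
                    acl.AddAccessRule(new FileSystemAccessRule(
                        new NTAccount(newUser),
                        rule.FileSystemRights,
                        rule.InheritanceFlags,
                        rule.PropagationFlags,
                        rule.AccessControlType));
                }
                dir.SetAccessControl(acl);
            }
        }

    Recursing over the tree and deciding when to remove the old SID entries is left out on purpose, since that is where most of the risk sits.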

    Read the article

  • Reporting Services 2008: Virtual directories not visible in IIS7

    - by Ryan Barrett
    I'm having some problems with Reporting Services on Windows Server 2008 Standard. I've installed Server 2008 as a standalone webserver (with the roles/features of a web application server). On top of that, I've installed SQL Server 2008 Standard with Reporting Services (and the rest of the BI tools). Problem is, I want to modify the rights on the virtual directories. However, the virtual directories aren't appearing in the IIS 7 management tool. I can connect to Reporting Services, albeit only with the local Windows admin account. I can download Report Builder fine from a session on the server (but not from any clients). I've tried removing the default website from IIS, and that stops the Reporting Services website from working. The machine (a VM) isn't for production use - it's used on a closed network internally for testing and development purposes. I need to be able to let my fellow developers log in without a password, and they must be able to install Report Builder 2.0. It must not be linked to a domain or Active Directory in any form. Google isn't much help; the results suggest I modify the virtual directory in IIS. Does anyone have any suggestions?
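
    One possible explanation rather than a confirmed fix: Reporting Services 2008 no longer runs inside IIS at all; it registers its own URLs directly with HTTP.SYS, so there are no virtual directories for IIS Manager to show. URL reservations live in Reporting Services Configuration Manager, and authentication is switched in rsreportserver.config, along these lines (the element names are the documented ones; the choice of NTLM here is just an example):

        <!-- rsreportserver.config -->
        <Authentication>
          <AuthenticationTypes>
            <RSWindowsNTLM />
          </AuthenticationTypes>
        </Authentication>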

    Read the article

  • Windows Bluescreen - atikmpag.sys

    - by Mochan
    Information
    Name: atikmpag.sys bluescreen (BSOD, or Blue Screen of Death)
    Error code: 0x00000116
    Appears when: playing games, watching videos
    Can be reproduced: Yes
    Cause: the graphics card is the main assumption

    System Specifications
    Before we begin, I will inform you of my specifications.
    OS: Windows 7 x64 Home Edition
    Model: Dell Inspiron 15R Special Edition (aka Inspiron 7520) (add 2GB of RAM to the model linked)
    Hard Drive: 1TB
    CPU: Intel Quad-Core i7 Sandy Bridge (I think) processor at 2.10GHz (I think it can be clocked to 3GHz?)
    RAM: 6GB (I think 1 x 4GB and 1 x 2GB)
    Display: 15.6" HD (1366x768)
    Graphics: AMD Radeon HD 7500M 2GB

    Details
    So now that you know some basics about my computer, I'll get to the problem. Being an Ubuntu user I hardly use Windows, but occasionally I do, mostly to run Skyrim and other games that don't work under Linux and WINE; the new Sims 3 Seasons patch is also not supported now. The bluescreen appears when playing those two games (and reportedly others), and I have heard of it happening to other people while watching HD movies and video series. Watching the bluescreen as it happens, I can see it is the 'atikmpag.sys' error. I have not installed much, and nothing significant: I think I have downloaded Skyrim, Firefox and The Sims 3, and I haven't done much more, since Ubuntu is definitely the best in comparison! (No hate, just a joke :P). I can reproduce the crash easily (just by running a game for less than a minute); it happens every time, but never at a specific point. So far I have found that it may be caused by a lack of power to the graphics card, or the card may be damaged or fried. I have had the computer for a mere 4 months (and have had other problems with it too), and I have contacted Dell, but they are useless beyond belief. Anyone with any information, solutions or details is encouraged to share, as it would be immensely appreciated.

    Read the article

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing som

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
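
    One free option that fits the bill on both Windows and Mac (a suggestion, not something from the original post): ExifTool can rewrite the PDF Info dictionary in place, e.g.

        # set Title and Author on a DRM-free PDF (the values are illustrative)
        exiftool -Title="Windows PowerShell in Action" -Author="Author Name" "Windows PowerShell In Action.pdf"

    ExifTool keeps a *_original backup of the file by default, which is handy before syncing the corrected copy to the Reader.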

    Read the article

  • who has files open on a linux server

    - by Robert
    I have the fairly common task of finding who has files open on our Linux (Ubuntu ) file server in our Windows environment. We use Samba on the network and I use Putty from my workstation to establish a shell window to run bash scripts. I have been using something like this to find what files are open: (this returns a list of process ids with each open file) Robert:$ sudo lsof | grep "/srv/office/some/folder" Then, I follow up with something like this to show who owns the process: (this returns the name of the machine on the network using the IP4 protocol who owns the process) Robert:$ sudo lsof -p 27295 | grep "IPv4" Now I know the windows client who has a file open and can take action from there. As you can tell this is not difficult but time consuming. I would prefer to have a windows application I can run that would just give me what I want. So, I have been thinking about creating some process I can run on Linux that listens on a port and then returns a clean list of all open files with the IP address of the host who has the file open. Then, a small windows client application that can send the request on the port. It seems like this should be a very common need but I can not find anything like this that has been done before. Any suggestions?
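
    Since the clients come in over Samba, smbstatus may already give the same answer in one step; a hedged sketch (the folder path is the same illustrative one as above):

        # locked files: pid, user and path -- grep for the folder in question
        sudo smbstatus -L | grep "/srv/office/some/folder"

        # connections: maps each pid to the client machine name and address
        sudo smbstatus -p

    Cross-referencing the pid from the first list against the second gives the machine name without the lsof plumbing.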

    Read the article

  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into another external LDAP server to authenticate with externally; basically trying to be able to use the same credentials internally and externally. I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" \
            objectClass uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import to my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is a list of users from the top example filtered based on their group memberships, but it looks like membership is set from the Group side, rather than the user account side. There must be a way to filter this down and only export what I need, right?
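
    A hedged way to get exactly that, built from the pieces already shown (the server, base DNs and group name follow the examples above): pull memberUid from the group, turn the values into an OR filter, and run the user export with that filter:

        # 1. member list from the group entry
        MEMBERS=$(ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=groupname,cn=groups,dc=server,dc=domain,dc=net" memberUid \
            | awk '/^memberUid:/{print $2}')

        # 2. build (|(uid=a)(uid=b)...) from the list
        FILTER="(|$(for u in $MEMBERS; do printf '(uid=%s)' "$u"; done))"

        # 3. export only those users
        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" "$FILTER" \
            objectClass uid uidNumber cn userPassword > groupmembers.ldif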

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of the more typical approach of listing all viruses whose names start with W.) Is there a common virus list somewhere, or is it as I suspect: every antivirus manufacturer is free to come up with their own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I still would like to know if it would be possible for me to verify this in any other way though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.

    Read the article

  • can't install mysql-devel on centos 6.5

    - by Latheesan Kanes
    I need mysql-devel package to be installed on my CentOS 6.5 running Percona 5.5 (already installed & running). When I try to install the devel package like this:

        yum --enablerepo=remi install mysql-devel

    I get the following error:

        Error: Package: mysql-devel-5.5.37-1.el6.remi.i686 (remi)
               Requires: real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
               Available: mysql-libs-5.5.36-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.36-1.el6.remi
               Available: mysql-libs-5.5.37-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
        Error: Package: mysql-5.5.37-1.el6.remi.i686 (remi)
               Requires: real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
               Available: mysql-libs-5.5.36-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.36-1.el6.remi
               Available: mysql-libs-5.5.37-1.el6.remi.i686 (remi)
                   real-mysql-libs(x86-32) = 5.5.37-1.el6.remi
        Error: mysql conflicts with Percona-Server-client-55-5.5.37-rel35.0.el6.i686

    Here's what's currently installed on my server:

        [root@server1 ~]# yum list installed | grep mysql
        php-mysqlnd.i686                5.4.29-1.el6.remi       @remi
        [root@server1 ~]# yum list installed | grep percona
        Percona-Server-client-55.i686   5.5.37-rel35.0.el6      @percona
        Percona-Server-server-55.i686   5.5.37-rel35.0.el6      @percona
        Percona-Server-shared-55.i686   5.5.37-rel35.0.el6      @percona
        [root@server1 ~]#

    Any ideas how to fix this dependency error?
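
    A hedged suggestion rather than a confirmed fix: since the server already runs Percona's build, the matching development package from the Percona repository avoids dragging in the conflicting stock mysql-libs (the package name below is the one Percona published for the 5.5 series):

        # headers and client libraries matching the installed Percona 5.5 server
        yum install Percona-Server-devel-55

        # confirm it provides mysql_config and the include files a build would need
        rpm -ql Percona-Server-devel-55 | grep -E 'mysql_config|include'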

    Read the article

  • Unsigned lenny packages with aptitude safe-upgrade

    - by Liam
    I have several Debian lenny computers. Two have nearly identical sources.list files. On both, I do regular update/safe-upgrades. On one it always goes smoothly. On the other, much of the time I get the following:

        sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages will be upgraded:
          krb5-clients krb5-ftpd krb5-rsh-server krb5-telnetd krb5-user libimlib2
          libkadm55 libkrb53 libpng12-0 libpulse0 xpdf xpdf-common xpdf-reader
        13 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 2906kB of archives. After unpacking 36.9kB will be used.
        Do you want to continue? [Y/n/?]
        WARNING: untrusted versions of the following packages will be installed!

        Untrusted packages could compromise your system's security.
        You should only proceed with the installation if you are certain that
        this is what you want to do.

          krb5-rsh-server krb5-user krb5-ftpd krb5-clients libkrb53 xpdf-reader
          libpng12-0 libkadm55 xpdf libpulse0 libimlib2 krb5-telnetd xpdf-common

        Do you want to ignore this warning and proceed anyway?
        To continue, enter "Yes"; to abort, enter "No": no
        Abort.

    Needless to say, I don't proceed. What is going on? How do I fix it? These are the non-comment lines in the sources.list for this computer:

        deb ftp://ftp.debian.org/debian/ lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian/ lenny main contrib
        deb http://security.debian.org/ lenny/updates main contrib non-free

    Thank you.
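
    A hedged first step (it assumes the archive keyring on the misbehaving box is stale or missing, which is the most common cause of blanket "untrusted" warnings): refresh the keyring package and the package lists, then compare the trusted keys with the healthy machine:

        # reinstall the archive keys, then re-fetch the signed Release files
        sudo aptitude install debian-archive-keyring
        sudo aptitude update

        # keys apt currently trusts, for comparison with the good box
        apt-key list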

    Read the article

  • Powershell Copy-Item fails silently

    - by R W
    I have a powershell 2.0 script running on Windows Server 2008 R2 64bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhds to copy, then iterates over that list to copy them using Copy-Item, writing some logging info to a file as well. The files are copied to another server (Windows Server 2003 SP2) into a directory compressed with NTFS compression. One of the files isn't copied. It's relatively big, ~68 GB; the others are 20 GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file was copied, judging by the difference in the times of the log entries. I see no error messages in the log file and nothing in the event log of either machine. Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | foreach-object {
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }
            $time = Get-Date -format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?
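
    A hedged variation of the loop that stops swallowing errors and double-checks the result (the size comparison is an added definition of "copied", not part of the original script); PowerShell 2.0 supports try/catch, so -ErrorAction Stop can replace the SilentlyContinue/ErrorVariable pattern:

        Get-ChildItem $VMSource *.vhd -Recurse | ForEach-Object {
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : Copying $($_.FullName) to $VMDestination"
            try {
                Copy-Item $_.FullName $VMDestination -Force -ErrorAction Stop
                # verify the destination file reached the expected length
                $dest = Join-Path $VMDestination $_.Name
                if ((Get-Item $dest).Length -ne $_.Length) {
                    Add-Content $logFileName "`tSIZE MISMATCH for $($_.Name)"
                }
            } catch {
                Add-Content $logFileName "`tERROR COPYING FILE : $($_.Exception.Message)"
            }
        }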

    Read the article

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA5. I'm a pretty clever dude and I planned on building the necessary tools. So, I built a local copy of gcc, binutils, and glibc. Everything a CUDA5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't do simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't do commands to fix the problem like rm or vim! Is there a way for me to ssh but also to ignore the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could have administrator support.
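
    A hedged sketch (whether ~/.bashrc gets sourced for the ssh command wrapper depends on how the distro built bash, so if this shell fails to start, editing the file over sftp is the fallback): ask for a remote shell that skips the startup files, then clean up from inside it:

        # start a remote shell without ~/.bashrc or profile files
        ssh -t ex@uid377 /bin/bash --norc --noprofile

        # inside that shell: drop the broken variable and neutralize the rc file
        unset LD_LIBRARY_PATH
        mv ~/.bashrc ~/.bashrc.broken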

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive that I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened. CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0. I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If it subsequently is successfully written or read later, it is removed from the list and assumed to be fine, but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped. I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, then is the drive, indeed, okay, or should I be concerned about an imminent failure?
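
    If the drive can be attached to a machine with smartmontools, the raw attribute values say more than the normalized figures a GUI tends to show; a hedged sketch (the device name is a placeholder):

        # raw counts for the three attributes that matter here
        sudo smartctl -A /dev/sdX | egrep -i 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'

        # vendor long self-test; re-check with "smartctl -a" once it finishes
        sudo smartctl -t long /dev/sdX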

    Read the article

  • IIS Manager - Connect to Another Server (Win7 to Win2008 server)

    - by Matt
    I am running Windows 7 Ultimate. If I open up IIS Manager, I see a list of "connections" on the left hand side. In previous versions, I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\\servername, and I even tried just servername), nothing happens (it just reverts back to my current local computer). The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps... but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed-out menu option. EDIT: Under the "File" menu, I see 2 options: Save Connections (grayed out) and Exit. Under the "Connections" pane, I see 1 button, grayed out. When I hover the mouse, it simply says "Up"; it appears to become usable if I browse into an element in my local computer's IIS settings. If I right-click inside the pane itself, I see: Refresh, Add website (to the current host), Start, Stop, Rename, and Switch to Content View. UPDATE: I downloaded and installed the Remote Server Administration Tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en, and I enabled everything listed under "Remote Server Administration Tools" under "Turn Windows Features On or Off". Still nothing.
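
    For what it's worth, the inbox IIS Manager on Windows 7 cannot connect to remote servers until the separate "IIS Manager for Remote Administration" client is installed (RSAT does not add it), and the server end needs the Management Service (WMSVC) running with remote connections allowed. A hedged sketch of the server-side part, from an elevated prompt:

        :: allow remote management, then start the service (standard value and service names)
        reg add HKLM\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
        net start WMSVC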

    Read the article

  • SSL stops working on IIS7 after a reboot

    - by Mark Seemann
    I have a Windows 2008 Server with IIS7. Every time the server reboots, SSL stops working. Normal HTTP requests work fine, but any request to an HTTPS address gives the typical error message in the browser: Cannot find server or DNS. I can temporarily fix it by opening IIS Manager and bringing up the Bindings… window for the website in question. Then I select "https", click on "Edit", then click "Ok" without making any changes to the settings. After doing this, browsing to https:// works again until the next reboot. This issue looks a lot like the one described here, but according to the Certificates MMC snap-in, the certificate in question does have a private key. I'm also pretty sure that I never installed the certificate in the personal store, but imported it straight into the machine store, but it's been a while... There's not a lot in the event log apart from the event ID 36870 also described in the post I linked to. Can anyone help me troubleshoot this issue so that SSL will work even after a server reboot?
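
    A hedged place to look before the next reboot: the HTTPS binding in IIS wraps an HTTP.SYS certificate registration, and re-saving the binding is what re-creates that registration, so comparing this state before and after the workaround should show whether the registration is what disappears:

        :: list current certificate bindings; note whether the site's IP:443 entry survives a reboot
        netsh http show sslcert

        :: if it is gone, re-add it by hand (certhash is the certificate thumbprint; for the appid,
        :: reuse the GUID the "show" command reported for the working binding)
        netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint> appid={4dc3e181-e14b-4a21-b022-59fc669b0914}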

    Read the article

  • ADO 2.8, VMWare and SQL Server - sudden bulk dropped connections

    - by CodeByMoonlight
    We have an overworked server currently running a single SQL Server 2000 instance on physical hardware, and about 40 different apps interact with it on a daily basis. Last year, the RAID controller failed and we had no spare, so IT Support hurriedly migrated it overnight to a copy running on a VMWare Server. While it was on that server everything ran much quicker due to it being a big improvement in spec. However, the biggest app using it had occasional serious errors which never occurred on physical hardware. Specifically, several times a week it would disconnect batches of users - anywhere from just ten to hundreds at once, and all at the same time. It didn't affect any particular users or PCs or offices - all were affected equally. The only common thing was the app, which is a VB6 app using ADO 2.8 to connect. The other apps connecting to that virtualised instance of SQL Server seemingly had no problems, although they were (and are) responsible for only a tiny fraction of the work involving this server. The upshot is that after about two weeks of loving the speed and hating the random mass disconnections (which we were never able to find a cause for), we sadly took the decision to return to physical hardware and the disconnections vanished. Now we've reached the point where the old server just can't handle all that's being asked of it, and we're intending to migrate everything to 2 or more other servers. The snag is that there's a good chance they'll have to be virtual ones again. Given what happened last time, I'm trying to find out what possible reasons there could be for these mass disconnections. We were running VMWare ESX, but the network is Novell-based. Also, the server had a linked server setup to connect to an Informix server using a known-to-be-buggy ODBC driver, and this is used throughout the day. Any ideas on the cause(s)?

    Read the article
