Search Results

Search found 18677 results on 748 pages for 'current'.


  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which has no package support anymore. If I want to update my package lists, I get the following errors:

        Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages 404 Not Found [IP: 141.30.13.10 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages 404 Not Found [IP: 141.30.13.10 80]
        ....

    I read on the official Ubuntu support page that there is an update-manager-core package for upgrading to a new release. Unfortunately I don't have this package installed, and I am unable to install it because of the missing package sources.

    EDIT: Installing the update-manager-core package from another release doesn't work because it depends on a higher version of python-apt (tried with 10.04):

        $ dpkg -i update-manager-core_0.134.7_amd64.deb
        Selecting previously deselected package update-manager-core.
        (Reading database ... 28743 files and directories currently installed.)
        Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ...
        dpkg: dependency problems prevent configuration of update-manager-core:
         update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however:
          Version of python-apt on system is 0.7.9~exp2ubuntu10.
         update-manager-core depends on python-gnupginterface; however:
          Package python-gnupginterface is not installed.
        dpkg: error processing update-manager-core (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         update-manager-core

    So, what's the best way to upgrade to the current release without reinstalling the complete (virtual) server?
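    Jaunty's repositories were moved off the regular mirrors when 9.04 reached end of life, which is why the mirrors above return 404; the archives are kept on old-releases.ubuntu.com. Below is a minimal sketch of repointing sources.list there before retrying apt-get update and the upgrade; it assumes the standard *.archive.ubuntu.com / security.ubuntu.com mirror names and should be run as root with a backup kept:

        # Sketch: repoint an end-of-life Ubuntu release at old-releases.ubuntu.com.
        # Assumes the usual *.archive.ubuntu.com / security.ubuntu.com mirror names.
        import re, shutil

        SOURCES = "/etc/apt/sources.list"
        shutil.copy(SOURCES, SOURCES + ".bak")   # keep a backup of the original

        with open(SOURCES) as f:
            text = f.read()

        text = re.sub(r"\b(?:[a-z]{2}\.)?archive\.ubuntu\.com|\bsecurity\.ubuntu\.com",
                      "old-releases.ubuntu.com", text)

        with open(SOURCES, "w") as f:
            f.write(text)
        print("sources.list now points at old-releases; run apt-get update next.")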


  • IPv6 - Public IPs, private IPs, IPs derived from the MAC address? Confused!

    - by sinni800
    I'm pretty excited about IPv6 because of the large address space and the prospect of owning more than one IP, or even tens of IPs (a /122 subnet?). One magazine has now confused me, though. In a current issue (no. 3) of "c't", a German computer magazine, I read that with IPv6 your IP address is built from your MAC address and various other things, and that this address will be publicly visible on the web no matter what access point / LAN you connect to.

    My understanding of IP(v6) contradicts this. I thought you would normally have a local network IP, NAT would take care of your Internet access, and your provider would give the NAT router an IP. I've heard of the 6to4 interface, but does that give you your own IP on the IPv6 Internet? Personally I hope private networks will still use a private address space (like 192.168.x.x, 172.16-31.x.x and 10.x.x.x in IPv4) with a NAT gateway to the Internet. I also hope that providers will offer subnets to private customers so they don't have to use NAT anymore. Yay for turning your LAN into part of the WAN, with proper security (so computers in the same subnet still get access rights as usual).
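    For reference, the MAC-derived addresses the magazine describes are SLAAC interface identifiers in modified EUI-64 format: the MAC is split in half, ff:fe is inserted in the middle, and the universal/local bit is flipped (modern systems often use privacy extensions instead, so the MAC is not necessarily exposed). A small sketch of that derivation, using a made-up MAC address:

        # Sketch: derive a modified EUI-64 interface identifier from a MAC address,
        # as used by IPv6 stateless address autoconfiguration (SLAAC).
        def eui64_interface_id(mac: str) -> str:
            octets = [int(b, 16) for b in mac.split(":")]
            octets[0] ^= 0x02                                   # flip the universal/local bit
            eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert ff:fe in the middle
            return ":".join("%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2))

        print(eui64_interface_id("00:1a:2b:3c:4d:5e"))          # -> 021a:2bff:fe3c:4d5e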


  • Virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts which indicate swap space usage in the order of 4 GB. This also causes the application to perform very slowly and to experience a number of stalled transactions.

    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers. The Apache servers themselves serve static content and route requests to the 4 Tomcat servers.

    Java runtime version:

        java version "1.6.0_30"
        Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
        Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

    Memory startup parameters:

        MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"

    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.

    Note: each of the servers also has two other separate Tomcat domains that run different applications.

    Investigated areas: There is no heap memory leak, and GC runs fine without any issues over any period of time. The current busy thread count corresponds directly to application usage - weekends and nights have fewer threads than business hours. ThreadLocal uses a WeakReference internally: if the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and it isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal - sort of a circular reference. In this case the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal, so there is no memory leak in that code.

    Can anyone please let me know what could be causing this and what should be monitored?


  • How to prevent remote hosts from delivering mail to Postfix with spoofed From header?

    - by Hongli Lai
    I have a host, let's call it foo.com, on which I'm running Postfix on Debian. Postfix is currently configured to do these things:

      - All mail with @foo.com as recipient is handled by this Postfix server. It forwards all such mail to my Gmail account. The firewall thus allows port 25.
      - All mail with another domain as recipient is rejected.
      - SPF records have been set up for the foo.com domain, saying that foo.com is the sole origin of all mail from @foo.com.
      - Applications running on foo.com can connect to localhost:25 to deliver mail, with [email protected] as sender.

    However, I recently noticed that some spammers are able to send spam to me while passing the SPF checks. Upon further inspection, it looks like they connect to my Postfix server and then say:

        HELO bar.com
        MAIL FROM:<[email protected]>            <---- this!
        RCPT TO:<[email protected]>
        DATA
        From: "Buy Viagra" <[email protected]>   <--- and this!
        ...

    How do I prevent this? I only want applications running on localhost to be able to say MAIL FROM:<[email protected]>. Here's my current config (main.cf): https://gist.github.com/1283647


  • Torrent upload ratio not updated on Synology DS212+

    - by user179271
    I have a Synology DS212+ NAS running DSM 4.2-3211 (the current version). I use it for several purposes, including torrent downloads with Download Station on a tracker that requires authentication. My problem is that my download/upload ratio isn't updated, so it keeps falling. The NAS is behind a router, and I configured NAT to forward ports 6890 to 6999 to the internal IP address of the NAS.

    The Download Station settings are: TCP port: 6990; sharing ratio: 900%; sharing time: infinite; max download speed: 0 (no limit); max upload speed: 0 (no limit); BT protocol encryption: checked; max number of peers allowed per torrent file: 4000; DHT: checked, with port 6889. When the DHT option is not checked, the NAS doesn't upload any files at all; I don't know what this option is for.

    Can someone help me solve this problem? Did I miss a step, or does it come from the NAT? And how is authentication handled by Download Station? (Sorry for my English.) Thanks.


  • Thunderbird 3: can't change column width?

    - by rumtscho
    I recently installed Thunderbird 3.0.3 and just noticed a suboptimal UI setting: in the upper pane, which lists the e-mails in the current folder, the Date column is about 200px wide. So when I keep the window at 480x600, all I see in a row is:

        | tree icon | favourites icon | attachment icon | read icon | junk icon | date and time, followed by 5cm of whitespace | ... | P

    where "P" is the first letter of the sender's name. The "..." is actually shown that way; I have no idea which column it is meant to be. But I see neither the sender nor the message subject, which makes scrolling a folder for a certain mail rather pointless. I do see them when I maximize the window - the columns are then not only bigger, they are also arranged in a different order. But holding a mail client permanently maximised at 1600x1200 feels like a waste of screen real estate.

    My naive attempt at a solution was to move the mouse cursor to the right edge of the Date column and try to shrink it by dragging left while holding down the left mouse button. Not only is this the default behaviour for every resizable column I've ever encountered in a GUI, the cursor actually turns into a horizontal double-headed arrow. But dragging has no effect at all: I cannot make the wide column narrow, and I cannot make the narrow columns wide. I didn't find anything in the preferences either. So can somebody please explain how to get the columns arranged sensibly?


  • Managing disks in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM.

    My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with these two points:

      1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough?
      2. I want to maximize the reliability of the setup. On which disks should I install the hypervisor and keep the VM system disks?

    For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs.

    BTW, this is for home use, so the load is not really that big. What I'm looking for is best practice for splitting up a server.


  • Some guest networking and VMware Tools functionality broken with Sprint SmartView on the host

    - by Mads
    I'm using VMware Workstation 6.5.3 on Vista 64-bit. I started having problems with VMware networking about 6 months ago after upgrades to Sprint SmartView. I did not have problems previously, but I don't know if that is because I was lucky. The main symptoms of the problem when SmartView is installed are:

      - I can no longer drag files from the host to copy them to the guest. When they are dragged, the disallowed cursor (the circle with a slash) shows in the guest.
      - If I try to enable shared folders in the guest while it is running, I will not be able to see the shared files and will be informed that networking is not working.

    I can still ping guests from the host, and I can still access network services via NAT most of the time when connected via my USB broadband adapter. When I configure shared folders so they are "always enabled" (with a mapped drive), I can access files via the mapped folders. I can also copy a file on the host and then paste it in the guest, as was suggested in some other threads concerning drag-and-drop problems that I found. The VMware Tools icon is showing in all cases, and I don't see any obvious errors in the host's event viewer.

    If I uninstall SmartView, the problems disappear. If SmartView (current version is 2.28.0082) is reinstalled, I experience the same problems again. I have tried uninstalling/reinstalling VMware and SmartView in various ways, but these problems appear consistently whenever SmartView is installed (not just when it is running or connected, but when it is present on the system). I'm wondering if this is a combination of software (WS 6.5.3, Vista64, and SmartView) that works for other people, which would indicate a problem peculiar to my configuration.


  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway's website). Now I'm trying to implement "seamless" payment gateway integration, and I need SSL for this. I have a dedicated server with 5 static IPs from HostGator (HG). My options:

      1. I take SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK).
      2. I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case:
         a) Would I need to re-register with the existing payment gateways (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me.
         b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved."

    Please advise.


  • Windows 7 fails to connect to the internet a few minutes after startup

    - by SageTheGreat
    Problem: Earlier today, when I turned on my desktop computer, the internet connection worked fine - cryptocurrency miners were connecting and hashing as usual, and I could browse websites. But after a few minutes my miner failed, indicating that something was wrong with the internet connection. I tried refreshing my browser; it got stuck at "resolving host" and then presented an error. After that I couldn't browse sites anymore. The weird thing is that the network icon in Windows 7 shows no sign of a problem.

    Solutions tried:

      - Restarted my computer without changing anything: problem persists.
      - Ran the Windows network troubleshooter: reported no problems.
      - Stopped Bonjour: still no progress.
      - Booted Windows using the last known good configuration: still no progress.
      - Restarted the modem: no change.

    Current status: I did a system restore to a point before installing the latest updates from Microsoft, because earlier today I installed some updates and the problem started appearing after that. (After the system restore: same problem.)

    Latest programs installed before the problem: MS Visual Studio 2013 (but the internet still worked fine after that install).

    I hope someone can provide answers on this problem; it is the first time I have encountered it.

    EDIT: Additional info: OS: Windows 7 SP1 64-bit. AV: Avast Free Antivirus. Internet connection type: Ethernet. It appears that my laptop can't even connect to the machine through Remote Desktop. My laptop and phone on WiFi work fine and can connect to the internet.

    EDIT 2: Whenever I boot into Safe Mode, my internet is fine.
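    Since the browser hangs at "resolving host" while the link itself looks up, one quick way to narrow it down from the affected machine is to test plain IP connectivity and name resolution separately; a small sketch (host names and addresses are only examples):

        # Sketch: separate "DNS is broken" from "the link is down".
        # Hostnames and addresses below are only examples.
        import socket

        def can_connect(host, port, timeout=5):
            try:
                socket.create_connection((host, port), timeout=timeout).close()
                return True
            except OSError:
                return False

        # Raw IP connectivity, no DNS involved:
        print("reach 8.8.8.8:53   ->", can_connect("8.8.8.8", 53))

        # Name resolution:
        try:
            print("resolve example.com ->", socket.gethostbyname("example.com"))
        except OSError as e:
            print("resolve example.com -> failed:", e)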


  • Can't write to samba share

    - by Tiddo
    I'm trying to set up a Samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local file server with 3 hard disks mounted at /mnt/share/disk<nr>. Two of these use the ext4 filesystem; the third one is NTFS. The file server runs Fedora 18 32-bit. The root folders of these hard disks are owned by superman:superman, and testparm outputs the following:

        [global]
            workgroup = WORKGROUP
            netbios name = FILE_SERVER
            server string = Samba Server Version %v
            interfaces = lo, eth0, 192.168.123.191/8
            log file = /var/log/samba/log.%m
            max log size = 50
            unix extensions = No
            load printers = No
            idmap config * : backend = tdb
            hosts allow = 192.168.123.
            cups options = raw
            wide links = Yes

        [share]
            comment = Home Directories
            path = /home/share/
            write list = superman, @users
            force user = superman
            read only = No
            create mask = 0777
            directory mask = 0777
            inherit permissions = Yes
            guest ok = Yes

    I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the output above I tried to make the smb config as permissive as I could, but I still cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try in order to be able to write to the shares?

    EDIT: The replies I got so far are mostly concerned with smb.conf. I have however tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.


  • repo sync "CyanogenMod/android_prebuilt" size and resume capability?

    - by james
    I'm downloading the CyanogenMod 10.1 source on a low-speed broadband connection. About 4GB of source has been downloaded so far. Within that 4GB there is one big project, "CyanogenMod/android_frameworks_base", which alone was a 1GB download completed without any interruption. Now, after those 4GB, my internet got disconnected and I had to stop (ctrl + z) repo sync while it was downloading the project "CyanogenMod/android_prebuilt". Before I stopped repo sync, android_prebuilt had downloaded up to 250MB, at 42 percent. I checked the working folder and there is a file "tmp_pack_df5CKb" of size 250MB in the path "$WORKING_DIR/.repo/projects/prebuilt.git/objects/pack/".

    Then I restarted repo sync and it began downloading the android_prebuilt project again, but I'm not sure whether it is downloading from the start or resuming from 250MB. While downloading this time, the previous "tmp_pack_df5CKb" isn't deleted and the content is being downloaded to a new file, "tmp_pack_HPfvFG". I've heard that repo sync cannot resume a partially fetched project, but since the previous file isn't deleted I want to ask whether android_prebuilt is resuming or downloading from the start again. Now that my high-speed quota is used up (current speed 256kbps), I'm not sure I can download the remaining ~4GB if a single project is 500 MB in size.


  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet.

    In this setup we're experiencing a clearly noticeable lag between double-clicking a channel in the client and the channel join actually happening. This happens with a lot of different clients between 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected. Most of the time the client's information dialog states a 16ms latency for both the voice and control channels, but the deviation for the control channel is usually much higher than that of the voice channel; in some situations the control channel is displayed with a 100ms ping and a deviation of about 1000. It seems TCP performance is the problem here.

    We had no such problems on an earlier setup which was in principle quite similar to the new one: a Debian Lenny based Xen hypervisor with a software-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1


  • Using WebDAV for automated downloads

    - by Geo Ego
    I currently manage a number of sites (at one point about a dozen, currently four, but soon growing into the dozens or hundreds) that serve a piece of software to clients at their remote locations. Our web server is Windows SBS Server 2003, and the remote servers are Windows Server 2003. When we have new versions of the software, I upload the new version to a specific directory and rename it; each time the clients boot, they pull their software from that directory. With just a few sites it's no problem for me to RDP in and copy the files over; as the number grows, this will quickly become unwieldy.

    So I'm thinking that WebDAV could be part of a solution: I simply push the newest version to our server (Windows SBS Server 2003) and make it available for the sites to grab. On the remote server side, though, what are some suggestions for automating the download? I only want the servers to download during downtime (between 3 AM and 9 AM), and only if a new version is available. I had thought of writing a program that checks the files on the WebDAV server at a regular interval, compares a hash of the current software to a hash of the software on the server, and only downloads if they differ, but I'm wondering if there is something I'm unaware of that could automate the process.
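    For what it's worth, the check-then-download idea is only a few lines of script. Below is a sketch along those lines: the URL and paths are placeholders, it uses the third-party requests package, and it compares the server's ETag/Last-Modified header instead of hashing both copies so nothing is transferred unless the file actually changed. Run from Task Scheduler inside the 3 AM to 9 AM window, it would behave roughly as described:

        # Sketch: pull a new build from the WebDAV share only when it has changed.
        # URL and paths are placeholders; uses the third-party "requests" package.
        import os
        import requests

        REMOTE_FILE = "https://webdav.example.com/software/latest.zip"   # hypothetical
        LOCAL_FILE  = r"C:\software\latest.zip"                          # hypothetical
        STAMP_FILE  = LOCAL_FILE + ".etag"                               # remembers the last version seen

        def update_if_changed():
            head = requests.head(REMOTE_FILE, timeout=30)
            head.raise_for_status()
            remote_tag = head.headers.get("ETag") or head.headers.get("Last-Modified", "")
            if os.path.exists(STAMP_FILE) and open(STAMP_FILE).read() == remote_tag:
                return False                                 # unchanged, skip the download
            data = requests.get(REMOTE_FILE, timeout=300)
            data.raise_for_status()
            with open(LOCAL_FILE, "wb") as f:
                f.write(data.content)
            with open(STAMP_FILE, "w") as f:
                f.write(remote_tag)
            return True

        if __name__ == "__main__":
            print("downloaded new version" if update_if_changed() else "already current")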


  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will be file services and IIS. To increase storage capacity, we'll be adding an MD1220 enclosure with another 24x900GB disks to the MD3220. Both DAS units will have 2 controllers. Our currently measured load is 1000 IOPS on average, with 7000 IOPS peaks (those happen maybe twice per hour).

    We are in the planning phase now and are looking at the proper layout of the disks. The intention is to set up one DAS with RAID 10 only and the other DAS with RAID 5, which will let us place each application on the DAS that best suits its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to that one disk group, or is it better to have 2 disk groups of 11 disks each, one assigned to each of the two controllers?

    Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.

    I assume there is no single right or wrong answer and that it depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
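    For rough comparison of the options, the usual back-of-the-envelope numbers are on the order of 150 random IOPS per 10k SAS spindle and a write penalty of 2 for RAID 10 versus 4 for RAID 5; a small sketch of that arithmetic (the per-disk IOPS and the read/write mix below are assumptions, not measurements from this array):

        # Sketch: back-of-the-envelope host-visible IOPS for a disk group.
        # Per-disk IOPS and the read/write mix are assumptions, not array measurements.
        def usable_iops(disks, per_disk_iops, write_penalty, read_fraction):
            raw = disks * per_disk_iops
            # Each write costs `write_penalty` back-end operations (2 for RAID 10, 4 for RAID 5).
            return raw / (read_fraction + (1 - read_fraction) * write_penalty)

        DISKS         = 22     # 24 bays minus 2 hot spares
        PER_DISK_IOPS = 150    # typical figure for a 10k SAS spindle
        READ_FRACTION = 0.7    # assumed 70/30 read/write mix

        print("RAID 10:", round(usable_iops(DISKS, PER_DISK_IOPS, 2, READ_FRACTION)))  # ~2500
        print("RAID 5 :", round(usable_iops(DISKS, PER_DISK_IOPS, 4, READ_FRACTION)))  # ~1700

    On these assumptions either layout covers the 1000 IOPS average comfortably, while the 7000 IOPS peaks would mostly have to be absorbed by controller cache regardless of how the groups are split.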


  • Should I partition a 1TB Hard Disk whose primary use is media storage?

    - by Senthil
    I am going to get a 1TB hard disk. I will be storing 1080p or 720p movies, high-bitrate music and pictures on it; I use my PC 90% of the time only to play, listen to or view those. I am running out of space on my current hard disk, so I am getting another one. My specs are a 2.7GHz dual-core CPU, a 512MB GeForce 9400GT, 2GB of DDR2 RAM and all the proper Matroska codecs/players. I guess that is enough to play 1080p movies without a glitch, given an ideal hard disk. I've read about proper partitioning giving performance improvements etc., and I don't want the hard disk to be the bottleneck. Can someone tell me whether I should partition my 1TB hard disk into several drives? If I should, what is the ideal size of each partition? Smooth playback of movies is very important to me. Once I start filling up the disk there is no turning back, so I want to get it right before I start. Thanks.


  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days to update my video driver. I have an Acer Aspire 5745PG with an "NVIDIA GeForce 310M" board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It turned out that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I have already lost a good three nights trying to sort this out, without success. :(

    The display driver I had before (and have again now after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which is shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 - running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver directly, of which the latest was version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe - that gave me a similar error message. :/

    After some research I found that other people had the same issue and had to change the configuration file so the installer would recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 - that topic said to look for the "Device Instance Id" property of the "NVIDIA GeForce 310M" display adapter, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the inf file, first for the Acer installer and then for NVIDIA's own driver. Both actually went ahead with the installation, but all I got was a black screen, while the computer still appeared to be running fine. I had to hard reset, and it would then come back with the generic VGA driver. I could only get my display back using the recovery function.

    I imagine thousands of this notebook model were sold; can it really not have its driver updated? Could someone help me with this? Thanks, Echo


  • Can I get a domain controller not to act as DNS for the members?

    - by rsw
    Hi, let me try to explain my current setup. I have one Linux machine acting as DHCP and DNS server (dhcpd3 and bind) on my network. This works fine: all computers I hook up to the network get an IP address and the proper DNS servers set. Let's call it 10.12.0.10. However, we also have a Windows Server 2003 domain controller on our network to which we add our Windows computers (running XP); let's call it 10.12.0.20. I noticed that when I run 'nslookup' on one of the Windows machines, it says the primary DNS is 10.12.0.20. This has not been much of a problem so far, since:

      - the Windows clients are stationary, and
      - the Windows server itself points to my real DHCP/DNS, so I can reach everything specified in it.

    However, this turns out to be a problem when we use laptops. They connect to the domain here and get a DNS server, but when a user travels or connects the computer from home, we hit a problem: they are connected to their internet, but their DNS is 10.12.0.20, which they can't reach since they're at home and not on the office network.

    I solved this by removing the registry value called "NameServer" (which contained 10.12.0.20), but it gets set again whenever they log on to the domain the next time (when they get back to the office). Can I somehow make the computers take whatever DNS server they are handed when connecting to the internet or a home network, instead of always trying to reach the domain controller?
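    As a stopgap, the manual registry edit mentioned above can at least be scripted, for example as a task that runs when a laptop leaves the office; a sketch (the DC address comes from the question, and the NameServer value sits under each interface's Tcpip parameters key):

        # Sketch: clear a statically written NameServer value (the manual workaround
        # described above) from any interface that points at the domain controller.
        # Run with administrative rights on the Windows laptop.
        import winreg

        DC_DNS = "10.12.0.20"
        IF_KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IF_KEY) as interfaces:
            for i in range(winreg.QueryInfoKey(interfaces)[0]):
                name = winreg.EnumKey(interfaces, i)
                with winreg.OpenKey(interfaces, name, 0,
                                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
                    try:
                        value, _ = winreg.QueryValueEx(key, "NameServer")
                    except FileNotFoundError:
                        continue                      # no static DNS on this interface
                    if DC_DNS in value:
                        winreg.SetValueEx(key, "NameServer", 0, winreg.REG_SZ, "")
                        print("cleared NameServer on interface", name)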


  • Can I trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. Is it possible to set up Carbonite to back up an external hard drive? I just want the external drive to be extra disk space. From their FAQ:

        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.
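    The workaround Carbonite's own FAQ suggests (copying files onto a backed-up local folder) can be automated; a small sketch that mirrors the external drive into a local folder before each backup run (drive letters and paths are placeholders, and with only 15 GB free internally this obviously only helps for a modest amount of data):

        # Sketch of the FAQ's workaround: mirror the external drive into a folder
        # on the permanent disk so Carbonite will pick the copies up.
        # Paths are examples; dirs_exist_ok requires Python 3.8+.
        import shutil
        from pathlib import Path

        SOURCE = Path("E:/")                   # external drive (example)
        DEST   = Path("C:/ExternalMirror")     # local folder that Carbonite backs up

        shutil.copytree(
            SOURCE, DEST,
            dirs_exist_ok=True,                # refresh an existing mirror in place
            ignore=shutil.ignore_patterns("System Volume Information", "$RECYCLE.BIN"),
        )
        print("mirrored", SOURCE, "->", DEST)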


  • Seizing naming master from child domain server

    - by meera
    When I try to seize the naming master role from my child domain server, I get the following error:

        fsmo maintenance: seize naming master
        Attempting safe transfer of domain naming FSMO before seizure.
        ldap_modify_sW error 0x34(52 (Unavailable).
        Ldap extended error message is 000020AF: SvcErr: DSID-03210380, problem 5002 (UNAVAILABLE), data 8438
        Win32 error returned is 0x20af(The requested FSMO operation failed. The current FSMO holder could not be contacted.)
        )
        Depending on the error code this may indicate a connection, ldap, or role transfer error.
        Transfer of domain naming FSMO failed, proceeding with seizure ...
        Server "win-fb20ixk90mu" knows about 5 roles
        Schema - CN=NTDS Settings,CN=WIN-3918XHC5STU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Naming Master - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        PDC - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        RID - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Infrastructure - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com


  • All commands stopped working in CentOS 6.5

    - by Michael
    I have made a big mistake while removing some duplicate packages, because yum appeared to be broken. From my shell history:

        1036  rpm -e --nodeps glibc-2.12-1.132.el6_5.2.x86_64
        1037  rpm -e --nodeps nscd-2.12-1.132.el6_5.2.x86_64
        1038  rpm -e --nodeps glibc-common-2.12-1.132.el6_5.2.x86_64
        1040  rpm -e --nodeps glibc-common-2.12-1.132.el6.x86_64 glibc-devel-2.12-1.132.el6.x86_64 glibc-headers-2.12-1.132.el6.x86_64
        1041  rpm -e glibc.x86_64
        1042  rpm -e --nodeps glibc.x86_64

    The issue happened after step 1042. Now none of the commands work (including yum, rpm, ls, cp etc.) and every one of them fails with:

        /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory

    I thought that installing glibc after removing all the current ones would help to resolve the duplicate package error :( Now I realise that glibc is used as the C library on the GNU system and on most systems with the Linux kernel; it provides the system calls and other basic facilities such as open, malloc, printf, exit, etc. Are there any possible solutions other than reinstalling? I have lost SSH access. Maybe something can be done using a rescue CD? Thanks


  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our e-commerce website www.tervis.com currently runs on two servers:

      - SQL server: SQL Server 2005 x86 on Windows Server 2003 Standard x86, with a single dual-core processor and 4 GB of memory
      - IIS server: Windows Server 2008 Web Edition x64, with dual quad-core hyper-threaded processors and 32 GB of memory

    Tervis.com's revenue has steadily grown to the point where we need redundant servers deployed with a failover mechanism so that we do not have any downtime. Because the SQL server is so underpowered compared to the web server, my thought was to purchase: 2 x SQL Server 2008 R2 Web Edition x64 single-processor licenses, 2 x Windows Server 2008 R2 Web Edition licenses, 1 x new physical dual quad-core 32 GB server, and 1 x F5 load balancer. I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box for both of these servers.

    The idea is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would be the device that monitors the two servers and, if the currently active one stops responding, fails over to the other server. To be clear, this is not Windows clustering, but simply using a load balancer to fail over between two computers so that you have a cluster in the general sense.

    Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL box as the device that funnels HTTP requests to the appropriate active server and fails over if a problem occurs? Is there any third-party clustering software that might help me accomplish this in a simpler fashion?
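    To illustrate the failover behaviour being described (the load balancer polls each node and sends traffic to the first healthy one), here is a toy health-probe sketch; the URLs are placeholders, and on a real F5 this is a configured monitor rather than a script:

        # Toy sketch of an HTTP health probe like the one a load balancer performs.
        # URLs are placeholders; on an F5 this is a configured monitor, not a script.
        import urllib.request

        NODES = ["http://10.0.0.11/health", "http://10.0.0.12/health"]   # hypothetical

        def is_up(url, timeout=5):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return 200 <= resp.status < 300
            except OSError:
                return False

        active = next((n for n in NODES if is_up(n)), None)
        print("route traffic to:", active or "no healthy node!")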


  • Windows: How to make programs think they're not running in a terminal server session?

    - by sinni800
    I am using the program "SoftXPand 2011 Duo" by Miniframe on my Windows 7 PC. It makes two workstations out of one computer, using the terminal services built into Windows to create the additional session. I use two screens, two keyboards and two mice to create this "illusion" of two computers. It works quite well and I can even play two different 3D games on the two screens attached to this single machine (using a Radeon HD5770 and a Core i5 2500K with 8 GB of RAM).

    There are a few downsides to this, and I just found one that is hidden at first glance: the sessions you are in (even on the first workstation) identify themselves as terminal server sessions! As a result some programs run with limited (graphical) effects, and some won't run at all; some games simply say "Cannot be run in a terminal server session" and exit. I have already established that top modern games (DirectX 10, 11) run just as well as on the same machine without SoftXPand, so this is a pretty artificial limitation!

    So, can I somehow hack my current session so it no longer looks like a terminal server session? I.e. so that

        #include <windows.h>
        #pragma comment(lib, "user32.lib")

        BOOL IsRemoteSession(void)
        {
            return GetSystemMetrics( SM_REMOTESESSION );
        }

    will return FALSE? (Not a programming question! Just an example of how programs detect whether they're in a terminal server session.)
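    For quick testing, the same SM_REMOTESESSION check can be reproduced from Python via ctypes; this is a sketch for checking what a session reports, not a way of hiding it (SM_REMOTESESSION is metric index 0x1000):

        # Sketch: the same SM_REMOTESESSION check as the C snippet above, via ctypes.
        # Prints True when the current session reports itself as a terminal-server session.
        import ctypes

        SM_REMOTESESSION = 0x1000
        print(bool(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION)))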


  • Can I take my ReadyNAS drive in Raid1 and plug it straight into new different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (a RAID 1 mirror) and plug it into another enclosure and have it work off the bat, but I'd like to make sure... Any ideas?

    Edit: My current setup is a Netgear ReadyNAS using (hardware) RAID 1. I'm hoping to replace this with a home-theatre-type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over the network between the two machines. Can anyone confirm the situation for the Netgear ReadyNAS?

    Edit: OK, after further reading it seems that the ReadyNAS Duo formats my drive as ext3 in 16k blocks. There are instructions for mounting such a drive in a Linux box here: "Mounting Sparc-based ReadyNAS Drives in x86-based Linux". There is also talk about a Linux image here: "ReadyNAS Data Recovery - VMware recovery tool". I'm not sure whether this means the ReadyNAS actually implements software RAID under the hood, or what. So it appears it IS doable, but do any of you Linux gurus know whether this is viable, and whether the fact that the disks are in RAID 1 affects matters?


  • VNC from Windows to OS X Lion: App stuck in fullscreen mode

    - by Jonny
    I'm connecting to a remote Mac through Windows... ahh, it gets more complicated than that. I'm sitting at my iMac. On it I use VirtualBox to run Windows 7. In that I have a VPN connection to a remote Windows network, which allows me to use Remote Desktop to one of the Windows (Vista!) boxes over there. From that Vista box I VNC into a Mac running OS X Lion. (Don't ask me why, but that Mac doesn't have a public IP, which prevents me from reaching it directly.) So: OS X Lion - (virtual) Windows 7 - Windows Vista - OS X Lion.

    That last Mac was recently upgraded from Snow Leopard. Now, with Lion, apps sometimes run in fullscreen mode, and somehow I can't get out of it. Normally you'd move the mouse pointer to the top of the screen, the menu bar would drop down, and you could reach the fullscreen button at the top right. In my current setup that menu bar never drops down on the remote Mac at the end of the chain. Any ideas?

