Search Results

Search found 27056 results on 1083 pages for 'build mode'.

Page 737/1083 | < Previous Page | 733 734 735 736 737 738 739 740 741 742 743 744  | Next Page >

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation: Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: username Account Domain: hostname Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc0000064 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: JESSE-PC Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.
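
    Since the rejection is an NTLM failure from a single client, one low-risk thing to compare is the LAN Manager authentication level on the working and failing Windows 7 machines; this is only a hedged diagnostic, not a confirmed fix, and should be run from an elevated command prompt.

      rem Show the client's effective LAN Manager authentication level (a missing value means the OS default)
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel

    If the value differs between the work PC and the machines that connect fine, the corresponding group policy is "Network security: LAN Manager authentication level".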

    Read the article

  • File is not compatible with the version of Windows you're running

    - by vaccano
    I have a really old installer (legacy app) that we are trying to get running on a Windows 7 64-bit OS. Previously it has only been installed on Windows XP 32-bit. I get the following error when I try to run it: The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher. Contacting the software publisher is not an option (the software is super old). Is there a way to get this to work? Some sort of compatibility mode? The only thing I have heard of that will work is Virtual XP on the Windows 7 box. The problem is that this software is part of a whole software set; I would have to put all of the pieces on the Virtual XP or none at all. Before I go down that road, I would like to know whether there is really no way to get it all running on the Windows 7 OS.

    Read the article

  • How to do 'search for keyword in files' in Emacs on Windows without Cygwin?

    - by Anthony Kong
    I want to search for a keyword, say 'action', in a bunch of files on my Windows PC with Emacs. It is partly because I want to learn more advanced features of Emacs. It is also because the Windows PC is locked down by company policy. I cannot install useful applications like Cygwin at will. So I tried this command: M-x rgrep It throws the following error message: *- mode: grep; default-directory: "c:/Users/me/Desktop/Project" -*- Grep started at Wed Oct 16 18:37:43 find . -type d "(" -path "*/SCCS" -o -path "*/RCS" -o -path "*/CVS" -o -path "*/MCVS" -o -path "*/.svn" -o -path "*/.git" -o -path "*/.hg" -o -path "*/.bzr" -o -path "*/_MTN" -o -path "*/_darcs" -o -path "*/{arch}" ")" -prune -o "(" -name ".#*" -o -name "*.o" -o -name "*~" -o -name "*.bin" -o -name "*.bak" -o -name "*.obj" -o -name "*.map" -o -name "*.ico" -o -name "*.pif" -o -name "*.lnk" -o -name "*.a" -o -name "*.ln" -o -name "*.blg" -o -name "*.bbl" -o -name "*.dll" -o -name "*.drv" -o -name "*.vxd" -o -name "*.386" -o -name "*.elc" -o -name "*.lof" -o -name "*.glo" -o -name "*.idx" -o -name "*.lot" -o -name "*.fmt" -o -name "*.tfm" -o -name "*.class" -o -name "*.fas" -o -name "*.lib" -o -name "*.mem" -o -name "*.x86f" -o -name "*.sparcf" -o -name "*.dfsl" -o -name "*.pfsl" -o -name "*.d64fsl" -o -name "*.p64fsl" -o -name "*.lx64fsl" -o -name "*.lx32fsl" -o -name "*.dx64fsl" -o -name "*.dx32fsl" -o -name "*.fx64fsl" -o -name "*.fx32fsl" -o -name "*.sx64fsl" -o -name "*.sx32fsl" -o -name "*.wx64fsl" -o -name "*.wx32fsl" -o -name "*.fasl" -o -name "*.ufsl" -o -name "*.fsl" -o -name "*.dxl" -o -name "*.lo" -o -name "*.la" -o -name "*.gmo" -o -name "*.mo" -o -name "*.toc" -o -name "*.aux" -o -name "*.cp" -o -name "*.fn" -o -name "*.ky" -o -name "*.pg" -o -name "*.tp" -o -name "*.vr" -o -name "*.cps" -o -name "*.fns" -o -name "*.kys" -o -name "*.pgs" -o -name "*.tps" -o -name "*.vrs" -o -name "*.pyc" -o -name "*.pyo" ")" -prune -o -type f "(" -iname "*.sh" ")" -exec grep -i -n "action" {} NUL ";" FIND: Parameter format not correct Grep exited abnormally with code 2 at Wed Oct 16 18:37:44 I believe rgrep tried to spawn a process and called 'FIND' with all the parameters. However, since this is Windows, the default FIND executable simply does not know how to handle them. What is a better way to search for a keyword in multiple files in Emacs on the Windows platform, without any dependency on external programs? Emacs version: 24.2.1
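
    If a tool that ships with Windows is acceptable (nothing extra to install), findstr can do the recursive search from a command prompt, or from M-x shell inside Emacs; a sketch, using the project path from the error output above and the same *.sh file mask.

      rem Recursively search *.sh files for "action": /s = subdirectories, /i = ignore case, /n = line numbers
      cd /d C:\Users\me\Desktop\Project
      findstr /s /i /n /c:"action" *.sh

    This does not make rgrep itself work, but it avoids Cygwin entirely.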

    Read the article

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM and have a strange issue. Before explaining it, here is my setup: I am trying to install a VM on my host, which is an Acer 5720 laptop with an Intel T7500 processor. The CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which comes with KVM. Now to the issue - I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager to create a VM on which I install Ubuntu from an *.iso file. When I start the VM it says it is running, with no signs of any trouble, and I can see the domain in "virsh list". But when I try to connect to the VM through VNC, all I get is a blank screen (no cursor), and there is no response to any key press. I changed the video mode etc. and tried all different combinations, but none work. Strangely, if I shut down the VM and virt-manager and then unload the module with "sudo modprobe -r kvm-intel", everything works fine, i.e. I can see the screen via VNC, I am able to install the OS and so on. So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? "dmesg | grep kvm" doesn't report anything. Can someone throw light on what exactly is happening?
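
    A few generic checks that may help narrow down whether the guest is really using hardware virtualization; the guest name below is a placeholder, and none of this is a confirmed fix.

      # Does the CPU expose VT-x, and is the kvm module actually loaded?
      egrep -c '(vmx|svm)' /proc/cpuinfo
      lsmod | grep kvm

      # Virtualization-related kernel messages (VT-x disabled in the BIOS often shows up here)
      dmesg | egrep -i 'kvm|vmx|virtual'

      # Is the domain defined to use KVM or plain QEMU emulation?
      virsh dumpxml MyGuest | grep 'domain type'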

    Read the article

  • Win7 Pro x64 task manager hangs when restarting explorer.exe after waking from sleep

    - by Brandon Dybala
    I have a desktop running Windows 7 x64 Pro, set for Hybrid Sleep on a wired network. Wakeup is only enabled from the keyboard (wake on mouse and Wake-On-LAN are both disabled). Sometimes when it wakes up, there is no network connectivity. The notification area icons for both network and volume don't respond to clicks. If I open the Network and Sharing Center, clicking the red X doesn't do anything. Restarting does fix the problem, but I'm looking for a solution that does not require restarting (if at all possible). Drivers are all up to date. I've tried opening Task Manager and restarting the explorer.exe process, but Task Manager freezes for a few minutes, the "New Task" dialog closes, and explorer.exe has not restarted. CPU and memory usage are both normal. One thread suggested making sure the BIOS was set for S3 sleep mode only (not S1 or S1 & S3), but I haven't checked this yet. Going back to sleep and waking back up does not help. So far only a reboot has fixed the issue. System specs: Windows 7 x64 Pro Asus P8Z68-V PRO/GEN3 128 GB Crucial m4 SSD (Firmware version 0309) Intel Core i7 2600 3.4 GHz 16 GB RAM Any ideas? Brandon
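
    As a stopgap when Task Manager itself hangs, the shell can be restarted from a command prompt (Win+R, cmd); this is only a workaround sketch, not a fix for the underlying wake-from-sleep problem.

      rem Kill and relaunch the shell without Task Manager
      taskkill /f /im explorer.exe
      start explorer.exe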

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled. Platform: CentOS release 5.3 (Final) Kernel: 2.6.18-92.el5 64bit Dell PowerEdge 1950 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface; below is the configuration description from /proc: Bonding Mode: fault-tolerance (active-backup) Primary Slave: eth0 Currently Active Slave: eth0 MII Status: up MII Polling Interval (ms): 30 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth0 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cd Slave Interface: eth1 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:22:19:56:b9:cf My IPMI device is as follows: IPMI Device Information Interface Type: KCS (Keyboard Control Style) Specification Version: 2.0 I2C Slave Address: 0x10 NV Storage Device: Not Present Base Address: 0x0000000000000CA8 (I/O) Register Spacing: 32-bit Boundaries I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled the command times out; below is the full run of the command with debug info. ipmi_lan_send_cmd:opened=[0], open=[4482848] IPMI LAN host 70.87.28.115 port 623 Sending IPMI/RMCP presence ping packet ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed ipmi_lan_send_cmd:opened=[1], open=[4482848] No response from remote controller Get Auth Capabilities command failed Error: Unable to establish LAN session Failed to open LAN interface Unable to get Chassis Power Status On the other hand, I configured IPMI on a box with the same specs as mentioned above but without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone could help circumvent this issue. Muhammed Sameer
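
    One hedged way to narrow this down is to query the BMC locally over the KCS interface (which bypasses the bonded network path entirely) and compare its LAN settings on the bonded and non-bonded boxes; ipmitool is assumed here, channel 1 and the credentials are placeholders, and the IP is the one from the output above.

      # Local, in-band query - no LAN involved
      ipmitool lan print 1

      # The remote query that times out, for comparison
      ipmitool -I lanplus -H 70.87.28.115 -U admin -P password chassis power status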

    Read the article

  • VBoxHeadless without a command prompt (VirtualBox)

    - by joe
    I'm trying to run VirtualBox VMs in the background from a service. I'm having trouble starting a process the way I desire. I'd like to start the VirtualBox guest in headless mode as a separate process and show nothing as far as GUI. Here's what I've tried: From command line: start vboxheadless -s "Ubuntu Server" In C#: ProcessStartInfo info = new ProcessStartInfo { UseShellExecute = false, RedirectStandardOutput = true, ErrorDialog = false, WindowStyle = ProcessWindowStyle.Hidden, CreateNoWindow = true, FileName = "C:/program files/sun/virtualbox/vboxheadless", Arguments = "-s \"Ubuntu Server\"" }; Process p = new Process(); p.StartInfo = info; p.Start(); String output = p.StandardOutput.ReadToEnd(); //BLOCKS! (output stream isn't closed) I want to be able to get the output to know if starting the server was a success. However, it seems as though the window that's spawned never closes its output stream. It's also worth mentioning that I've tried using vboxmanage startvm "Ubuntu Server" --type=vrdp. I can determine whether the server started properly using this. But it shows a new command prompt window for the newly started VirtualBox guest.
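
    One hedged alternative to parsing VBoxHeadless output is to start the guest with VBoxManage and then poll its state, which returns promptly instead of blocking; a sketch using the VM name from the question (the --type headless switch assumes a reasonably recent VirtualBox).

      rem Start the guest with no GUI window for the guest itself
      VBoxManage startvm "Ubuntu Server" --type headless

      rem Check whether it is actually running
      VBoxManage showvminfo "Ubuntu Server" --machinereadable | findstr VMState

    The same two calls could be made from the C# service instead of reading VBoxHeadless's stdout.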

    Read the article

  • How can I make my PCI-E graphics card visible to Ubuntu when the motherboard has integrated graphics?

    - by Norman Ramsey
    I have a Gigabyte GA-MA74GM-S2 motherboard with integrated graphics that shows up on lspci as an ATI Radeon 2100. I also bought a PCI-Express Nvidia graphics card so I could use the VDPAU feature on Linux (plays H.264 in hardware). The BIOS has three settings about which display to initialize first: Integrated graphics PCI graphics PCI-Express graphics (PEG) I set the BIOS on PEG, but I cannot get anything, not even a splash screen or POST messages, to emerge from the PCI-Express graphics card. (I'm using a DVI connector; the card also has an HDMI output.) I cannot get the kernel lspci to see the graphics card; the only VGA controller it acknowledges is the integrated one. Running dmidecode acknowledges the existence of an x16 PCI Express slot, and it says Current usage: Unknown There is an additional BIOS setting called "Internal Graphics Mode" which is normally set to "Auto" which means it is supposed to prefer a PCI Express VGA card. I set it to "Disabled" which now means I'm getting no output at all. I will soon be learning how to do a BIOS reset! Other information: The PCI-E card is a MSI N210-MD512H GeForce 210. This is a fanless card. Although there are no fans to see turning, the heat sink on the PCI-E card is definitely getting hot, so the card is getting some sort of power. It gets all its power from the PCI-E slot; there is no external power connector. The BIOS is an AMI Award BIOS. My question: how can I make the PCI Express graphics card visible to Ubuntu?
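
    If it is unclear whether the card is electrically detected at all, dumping the full PCI bus and the slot status from Linux may help; these are generic diagnostics, shown only as a sketch.

      # List every PCI device, not just the VGA class, in case the card enumerates oddly
      sudo lspci -nn

      # What the firmware reports about the x16 slot (compare with the "Current usage: Unknown" above)
      sudo dmidecode -t slot

      # Kernel messages about PCIe link training / resource allocation
      dmesg | grep -i pci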

    Read the article

  • HAProxy + NodeJS gets stuck on TCP Retransmission

    - by sled
    I have a HAProxy + NodeJS + Rails Setup, I use the NodeJS Server for file upload purposes. The problem I'm facing is that if I'm uploading through haproxy to nodejs and a "TCP (Fast) Retransmission" occurs because of a lost packet the TX rate on the client drops to zero for about 5-10 secs and gets flooded with TCP Retransmissions. This does not occur if I upload to NodeJS directly (TCP Retransmission happens too but it doesn't get stuck with dozens of retransmission attempts). My test setup is a simple HTML4 FORM (method POST) with a single file input field. The NodeJS Server only reads the incoming data and does nothing else. I've tested this on multiple machines, networks, browsers, always the same issue. Here's a TCP Traffic Dump from the client while uploading a file: ..... TCP 1506 [TCP segment of a reassembled PDU] >> everything is uploading fine until: TCP 1506 [TCP Fast Retransmission] [TCP segment of a reassembled PDU] TCP 66 [TCP Dup ACK 7392#1] 63265 > http [ACK] Seq=4844161 Ack=1 Win=524280 Len=0 TSval=657047088 TSecr=79373730 TCP 1506 [TCP Retransmission] [TCP segment of a reassembled PDU] >> the last message is repeated about 50 times for >>5-10 secs<< (TX drops to 0 on client, RX drops to 0 on server) TCP 1506 [TCP segment of a reassembled PDU] >> upload continues until the next TCP Fast Retransmission and the same thing happens again The haproxy.conf (haproxy v1.4.18 stable) is the following: global log 127.0.0.1 local1 debug maxconn 4096 # Total Max Connections. This is dependent on ulimit nbproc 2 defaults log global mode http option httplog option tcplog frontend http-in bind *:80 timeout client 6000 acl is_websocket path_beg /node/ use_backend node_backend if is_websocket default_backend app_backend # Rails Server (via nginx+passenger) backend app_backend option httpclose option forwardfor timeout server 30000 timeout connect 4000 server app1 127.0.0.1:3000 # node.js backend node_backend reqrep ^([^\ ]*)\ /node/(.*) \1\ /\2 option httpclose option forwardfor timeout queue 5000 timeout server 6000 timeout connect 5000 server node1 127.0.0.1:3200 weight 1 maxconn 4096 Thanks for reading! :) Simon
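
    To see whether the stall originates on the client leg or on the HAProxy-to-Node leg, it may help to capture on the proxy host itself during an upload and compare the two; a tcpdump sketch, where the interface name and CLIENT_IP are placeholders.

      # Client-facing leg (port 80) and the proxied leg to node.js (port 3200, via loopback)
      sudo tcpdump -i eth0 -s 0 -w haproxy-front.pcap 'port 80 and host CLIENT_IP'
      sudo tcpdump -i lo -s 0 -w haproxy-back.pcap 'port 3200'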

    Read the article

  • MediaWiki extension error

    - by vinylguitar
    I'm running the latest version of MediaWiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki http://www.mediawiki.org/wiki/Extension:MsUpload It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it, when I try to edit an article I get the following error: Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65 Also, here is what I put in the LocalSettings.php file (at the end of LocalSettings.php, if that makes a difference): Start --------------------------------------- MsUpload $wgMSU_ShowAutoKat = false; #autocategorisation $wgMSU_CheckedAutoKat = false; #checkbox for autocategorisation checked $wgMSU_debug = false; #debug mode $wgMSU_ImgParams = '400px'; #default max-size for inserted image $wgMSU_UseDragDrop = true; #show drag&drop area require_once "$IP/extensions/MsUpload/msupload.php"; End --------------------------------------- MsUpload require_once "$IP/extensions/msupload/msupload.php"; At line 65 in the LocalSettings.php file there is the following: line 64 ## Database settings line 65 $wgDBtype = "mysql"; line 66 $wgDBserver = "localhost"; line 67 $wgDBname = "mediawiki"; line 68 $wgDBuser = "root"; line 69 $wgDBpassword = ""; Any idea what I'm doing wrong?
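
    OutputPage::addModules() only exists in newer MediaWiki releases (it came with the ResourceLoader work, around 1.17, as far as I recall), so it is worth confirming which core version the MoWeS bundle actually shipped; a hedged check run from the mediawiki directory, with Special:Version in the wiki itself being the simplest alternative.

      rem Print the bundled core version from the MediaWiki directory
      findstr wgVersion includes\DefaultSettings.php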

    Read the article

  • How to diagnose disk errors when disk appears to be ok?

    - by Kylotan
    I have a six-month-old 1TB Seagate drive formatted into 2 NTFS partitions, and the disk appeared to be failing with Windows dropping down from UDMA to PIO mode, reporting Delayed Write Errors, and hanging Explorer when browsing directories. My initial suspicion was that the disk was dying. However, on further examination it appears that Ubuntu, which doesn't write to the volume frequently like Windows does, was able to read the disk properly and retrieve all the data intact, saving me from having to use an older backup. Finally, running the SeaTools DOS diagnostic reported that the disk has no problems, i.e. no SMART errors and no bad sectors, apparently. This, in combination with the relative youth of the disk, suggests that something else is broken. The cable? The PSU? The integrated disk controller? But what would be a good way to diagnose the problem without risking damaging the data? I intend to extract the disk and try it in an external eSATA enclosure and see if the write errors cease, but in the event of the disk appearing to be fine, I would like to be able to confirm what part of the hardware is actually broken here in order to know just what needs replacing. Are there any good ways to go about this?
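
    Beyond the SeaTools pass, smartmontools can show the drive's own error counters; a high UDMA_CRC_Error_Count in particular tends to implicate the cable or controller rather than the platters. A sketch, with the device name as a placeholder.

      # Full SMART report; look at UDMA_CRC_Error_Count and Reallocated_Sector_Ct
      sudo smartctl -a /dev/sda

      # Ask the drive to run its own long self-test, then re-check the report later
      sudo smartctl -t long /dev/sda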

    Read the article

  • How do I force Windows 7 to recognize my Projector?

    - by user63564
    I have a new Dell XPS with an NVIDIA GeForce GT 445M and an old (several years) Epson PowerLite Cinema 550 projector. Windows 7 refuses to recognize that the projector is connected under normal conditions (I'll get to the strange condition in a moment). Here are some things that I have already tried: Confirm that the projector continues to work well on my old Windows XP laptop. Confirm that the video cable (HDMI to HDMI) is connected Make sure the Dell laptop is plugged in to wall power at all times Reboot both the computer and the projector Click "Detect" under the "Connect to an External Display" Windows dialog (no reaction) Click "Rigorous Display Detection" under NVIDIA Control Panel (dialog: none found) Checked "Force Television Detection on startup" under "My display is not shown..." in NVIDIA Control Panel (no effect) Here's where it gets weird... My projector has three states: off, standby and on. Standby means the power switch on the back is on, but the projector is effectively off (no picture, no access to menu or controls). When I plug in the HDMI cable while the projector is in standby, Windows detects the projector! It lets me switch to Duplicate, Extend, or Project Only mode, and adjusts the resolution appropriately. A new Generic Plug-n-Play monitor shows up in my device manager. A "Seiko EPSON PJ" display shows up in my NVIDIA control panel. Then if I turn my projector on, Windows no longer recognizes the display. This is true whether I turn the projector on while the HDMI cable is plugged in, or if I unplug the HDMI cable while turning on the projector. Anyone have any ideas, because I'm completely stumped...?

    Read the article

  • How to tunnel a local port onto a remote server

    - by Trevor Rudolph
    I have a domain that I bought from DynDNS. I pointed the domain at my IP address so I can run servers. The problem I have is that I don't live near the server computer... Can I use an SSH tunnel? As I understand it, this would let me access my servers. I want the remote computer to direct traffic from port 8080 over the SSH tunnel to the SSH client, i.e. my laptop's port 80. Is this possible? EDIT: verbose output of the tunnel macbookpro:~ trevor$ ssh -R *:8080:localhost:80 -N [email protected] -v OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /Users/trevor/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: Connecting to site.com [remote ip address] port 22. debug1: Connection established. debug1: identity file /Users/trevor/.ssh/identity type -1 debug1: identity file /Users/trevor/.ssh/id_rsa type -1 debug1: identity file /Users/trevor/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'site.com' is known and matches the RSA host key. debug1: Found key in /Users/trevor/.ssh/known_hosts:9 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/trevor/.ssh/identity debug1: Trying private key: /Users/trevor/.ssh/id_rsa debug1: Offering public key: /Users/trevor/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: Remote connections from *:8080 forwarded to local address localhost:80 debug1: Requesting [email protected] debug1: Entering interactive session. debug1: remote forward success for: listen 8080, connect localhost:80 debug1: All remote forwarding requests processed
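
    For the server to accept outside connections on the remote port (the * bind), sshd normally also needs GatewayPorts enabled; a sketch of both ends under that assumption, with user@site.com as a placeholder for the redacted login.

      # On the server: /etc/ssh/sshd_config
      #   GatewayPorts yes
      # then reload sshd, e.g. on Ubuntu:
      sudo service ssh reload

      # On the laptop: publish server port 8080, forwarding it to local port 80
      ssh -N -R '*:8080:localhost:80' user@site.com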

    Read the article

  • signed applet automatically running as insecure

    - by Terje Dahl
    My application is deployed as a self-signed applet to several thousand users at more than 50 schools across the country (in Norway). The user is presented with the standard Java security warning asking if they will accept the signature. When they do, the applet runs perfectly. However, about half a year ago a group of 7 schools, all under a common IT department, stopped getting the security warning. Instead the applet loads and starts running in untrusted mode, without first giving the user an option to accept or reject the signature. The problem is on Windows machines, and only when the machine is connected to the school's network. If they take the same machine home with them, the program functions as it should, with security warnings and everything. I know little about Windows systems in general, but I would think it would be some sort of policy file or something that is loaded when a machine hooks up to/through the school's network. Furthermore, the problem only started occurring in these 7 schools after changes made after a security breach they had a while back. The IT department is stumped. I am stumped. Any thoughts, comments, suggestions?

    Read the article

  • Windows 8 unable to connect to WPA2 AES Wireless Network

    - by user170193
    I'm running Windows 8 and am unable to connect to my home wireless network. I've tried restarting the router, updating the drivers to a newer version, rolling the drivers back to an older version, running Windows Update and updating the chipset drivers to the latest version. So far nothing has worked. My computer can get on the internet via USB tethering on my phone or an open WiFi connection, but it is unable to connect to my home WPA2 AES secured wireless network. It sees the network, attempts to connect, gets a limited connection and then drops the connection. All the other wireless devices in my household have no problems. I have the new Dell XPS 12, running Windows 8 with an Intel Centrino Advanced-N 6235 wireless adapter. I've refreshed Windows twice now to try different driver configurations. I've tried uninstalling all the Dell software, and I've tried uninstalling all the Intel software and reinstalling just the drivers. I've tried toggling the power-management setting that lets the computer turn the wireless adapter off and on. I've tried setting up the connection manually from desktop mode. I've tried switching the adapter on and off using the wireless button on the keyboard and in the software. So far nothing has allowed me to connect to the secured network. It just keeps getting a limited connection, dropping the connection and retrying. It's driving me crazy, any ideas, anything I missed? Thanks.
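
    A couple of hedged netsh checks sometimes help with "limited" WPA2-AES connections: confirming the driver reports CCMP (AES) support and recreating the stored profile from scratch; the profile name below is a placeholder.

      rem Does the driver claim CCMP (AES) for infrastructure mode?
      netsh wlan show drivers

      rem Delete the stored profile and let Windows recreate it on the next connection attempt
      netsh wlan delete profile name="HomeNetwork"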

    Read the article

  • HP ProCurve & Cisco switches interoperability

    - by Kamil Z
    I have a couple of questions regarding Cisco and HP ProCurve interoperability. Here's a link to a PDF with my network topology. Can someone help me with basic VLAN configuration in such a topology? Below are some details of my configuration: # m_management_2 interface FastEthernet0/43 switchport access vlan 250 switchport mode access spanning-tree port-priority 32 spanning-tree cost 100 # MTA2-swmgmt1 vlan 1 name "DEFAULT_VLAN" untagged 1-48 ip address 10.10.249.190 255.255.255.128 exit # MTA2-swtr1 vlan 1 name "DEFAULT_VLAN" untagged 1-14,16-48 no ip address no untagged 15 exit vlan 100 name "MTA Mgmt" untagged 15 ip address 10.10.249.188 255.255.255.128 exit # MTA2-swtr2 vlan 1 name "DEFAULT_VLAN" untagged 1-14,16-48 no ip address no untagged 15 exit vlan 100 name "MTA Mgmt" untagged 15 ip address 10.10.249.189 255.255.255.128 exit I don't post the MTA2-bcsw[12] configuration, because I haven't been successful with that one yet. Every time I configure VLANs on MTA2-bcsw[12], the Fa0/24 interface on m_management_2 goes down because it receives tagged BPDUs on an access port (there are no VLANs configured on MTA2-swmgmt1 because only VLAN 250 is allowed on that switch - is that correct?). Can someone provide me with some basic configuration for this topology? The second thing I want to ask about is the connection from MTA2-swmgmt1 to the MTA2-swtr[12] HP switches for the sake of management. How should such ports be configured on the HP switches (managed switch and manager switch)? Is my current configuration correct?
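
    As a general rule both ends of an inter-switch link must agree: either both sides treat it as an 802.1Q trunk (Cisco "trunk" mode, ProCurve "tagged"), or both treat it as an access port in a single VLAN (Cisco "access" mode, ProCurve "untagged"). A hedged sketch of the trunk variant only - the port numbers, the VLAN list and any encapsulation command a given Cisco platform might require are assumptions, not taken from the topology.

      ! Cisco side of an uplink that must carry several VLANs
      interface FastEthernet0/24
       switchport mode trunk
       switchport trunk allowed vlan 1,100,250

      # ProCurve side of the same uplink: tag the same VLANs on that port
      vlan 250
         tagged 24
         exit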

    Read the article

  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand new, clean Windows 7 install. Here are some system specifics: disk - Western Digital VelociRaptor WD6000HLHX; motherboard - Gigabyte Z77X-UD3H; BIOS SATA mode set to AHCI [not RAID], with the disk connected to SATA0 [6 Gb/s SATA]; Windows 7 Enterprise SP1 x64. The disk is recognized by the BIOS and correctly identified [name & size OK]. The disk is also recognized by Windows on a hardware level, but it won't show up in Explorer. Windows reports the device is working correctly. Windows Disk Management shows the drive, but says it's uninitialized and has no partitions [which is incorrect]. If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified". [Which file???] Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it 2 partitions [MBR, not GPT] and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and putting it back in the Windows XP machine shows the disk and all data are intact and functional. I'd like to have Windows 7 recognize the disk without having to lose the data and start over. Is this possible? If so, how would I do that? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!
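
    Before letting anything initialize the disk (which would risk the data), it may be worth seeing what diskpart reports, since it only reads until told otherwise; a sketch, with the disk number as a placeholder, typed at the DISKPART> prompt from an elevated command prompt.

      diskpart
      list disk
      select disk 1
      detail disk
      list partition
      exit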

    Read the article

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home; it has been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due course I've tested everything above and found it all to be working as it should. The only factor remaining is my IPCop server. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light and it used to handle this traffic just fine 2 weeks ago. The memory usage was ~60%; I restarted it and tested again, and the memory usage fell to ~50% (it had 5 months of uptime). I'm thinking that one of my NICs is busted, but I'm sort of perplexed that this could be the outcome: slow download but full-speed upload. Anybody ever seen that happening before? Could it just be one of the NICs that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
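
    Asymmetric throughput (full speed one way, a trickle the other) is a classic symptom of a duplex mismatch on one NIC, so checking the negotiated link on the IPCop box may be worthwhile; ethtool or mii-tool may need to be installed, and the interface names are assumptions.

      # Negotiated speed/duplex on the red (WAN) and green (LAN) interfaces
      ethtool eth0
      ethtool eth1

      # Older installs without ethtool often have mii-tool
      mii-tool -v eth0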

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)

    Read the article

  • How do I fix the issue causing "incomplete startup packet" log messages when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recover state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so: 2013-08-26 13:01:38 CDT LOG: entering standby mode 2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive 2013-08-26 13:01:38 CDT LOG: incomplete startup packet 2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020 2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0 cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory 2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary 2013-08-26 13:01:39 CDT FATAL: the database system is starting up 2013-08-26 13:01:39 CDT FATAL: the database system is starting up 2013-08-26 13:01:40 CDT FATAL: the database system is starting up 2013-08-26 13:01:40 CDT FATAL: the database system is starting up 2013-08-26 13:01:41 CDT FATAL: the database system is starting up 2013-08-26 13:01:42 CDT FATAL: the database system is starting up 2013-08-26 13:01:42 CDT FATAL: the database system is starting up 2013-08-26 13:01:43 CDT FATAL: the database system is starting up 2013-08-26 13:01:43 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT FATAL: the database system is starting up 2013-08-26 13:01:44 CDT LOG: incomplete startup packet 2013-08-26 13:03:27 CDT FATAL: the database system is starting up 2013-08-26 13:03:27 CDT FATAL: the database system is starting up 2013-08-26 13:03:30 CDT FATAL: the database system is starting up 2013-08-26 13:03:30 CDT FATAL: the database system is starting up thanks! brad
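
    The repeated "the database system is starting up" on a streaming-replication standby is often just the standby refusing client connections because hot standby is not enabled, and the "incomplete startup packet" lines are usually harmless probes that open a connection without sending a startup message. A hedged sketch of the relevant 9.2 settings, under that assumption.

      # postgresql.conf on the master
      wal_level = hot_standby
      max_wal_senders = 3

      # postgresql.conf on the slave
      hot_standby = on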

    Read the article

  • Put one monitor of a dual-monitor Windows system into standby

    - by Psycogeek
    Standby, not disabled! When running 2 monitors on Windows 7 or Windows XP, I would like to be able to put one of the monitors at a time into standby. The method can be manual. When running 2 monitors, the second monitor is not always needed; shutting it off with the monitor's own power switch does work OK. The problems with that are the delay with the monitor logo at turn-on, the power switch not being very accessible, and the switch possibly not surviving being turned on and off so many times. Using disable methods like devcon, Win+P and Display causes all the windows to properly move to the other monitor. While that is what a person would normally want, so they can get hold of the windows, it is not what I want, and some things on the other monitor have to be re-arranged after a re-enable. By putting the monitor into standby mode, nothing changes other than the monitor going into standby. Disconnecting the DVI cable can still cause the system to (properly) shift all the windows over to the one monitor, just like any of the disable methods do. That makes a mess of the windows, and is so unacceptable that I would prefer to leave the monitor on, wasting power and hardware, when it could easily go into standby for some time. For both monitors I am using a "MonitorOff" program that puts both monitors into standby, but I cannot find a utility that will put only ONE monitor into standby on a Windows system. If someone comes along and suggests "UltraMon", you must know for a fact that it can put just one of the monitors into actual standby. And it does not really suit me to use UltraMon; I tested it (it was nice) and I did not feel that it was a program I wanted. The 2 monitors are running off an ATI 4890 card, both hooked up via DVI-I, and the OS is both Windows 7 (primary) and Windows XP. In addition it would also be interesting to have separate standby activity timers and follow-the-mouse kind of standby changes, but any manual method - shortcut, batch, tray, or gadget kind of operation - would be a good start.

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB. => ctrl all show config detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: 5001438006FD9A50 Cache Serial Number: PAAVP9VYFB8Y RAID 6 (ADG) Status: Disabled Controller Status: OK Chassis Slot: Hardware Revision: Rev C Firmware Version: 3.66 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 3 secs Surface Scan Mode: Idle Queue Depth: Automatic Monitor and Performance Delay: 60 min Elevator Sort: Enabled Degraded Performance Optimization: Disabled Inconsistency Repair Policy: Disabled Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 15 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Enabled Total Cache Size: 512 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 412476 MB Status: OK Logical Drive: 1 Size: 72.0 GB Fault Tolerance: RAID 1+0 Heads: 255 Sectors Per Track: 32 Cylinders: 18504 Strip Size: 256 KB Status: OK Array Accelerator: Enabled Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB OS Status: LOCKED Logical Drive Label: AE438D6A5001438006FD9A50BE0A Mirror Group 0: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 Device Number: 250 Firmware Version: RevC WWID: 5001438006FD9A5F Vendor ID: PMCSIERA Model: SRC 8x6G
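
    I am not aware of a supported in-place shrink for a Smart Array logical drive, so the usual approach is back up, delete, recreate smaller, restore; the hpacucli sketch below assumes that, uses the slot and array letter from the output above, takes the size in MB, and the lines starting with # are annotations rather than commands.

      # Double-check the current layout first
      => ctrl slot=0 logicaldrive 1 show detail

      # Destructive: only after a verified backup of the OS volume
      => ctrl slot=0 logicaldrive 1 delete

      # Recreate a 40 GB logical drive in the same array
      => ctrl slot=0 array A create type=ld size=40960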

    Read the article

  • How can I set my bootloader to load my primary (C:) partition?

    - by acidzombie24
    I created 4 partitions and want to use them to have separate Windows XP, Windows 7, (possibly) Windows Vista installations, and "WinDummy" (to test applications in Vista, XP or another OS). I used Norton Ghost to install an OS to the drive in about 3 minutes. My problem is that I installed the spare first on the 4th partition, then Windows 7 on the second. I tried to set the bootloader (with EasyBCD) to use the first partition - but it doesn't want to. Here's my debug screen from EasyBCD. As you can see, the device is set to H: and I can't figure out how to change it. I can make my bootloader use Windows 7 first, but I can't make it use my C: install of XP instead of my spare H:. How would I fix this? Windows Boot Manager -------------------- identifier {9dea862c-5cdd-4e70-acc1-f32b344d4795} device partition=H: description Windows Boot Manager locale en-US inherit {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e} default {bc2d8409-8640-11de-aa7e-a477d86453c4} resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} displayorder {bc2d8409-8640-11de-aa7e-a477d86453c4} {bc2d8406-8640-11de-aa7e-a477d86453c4} {bc2d8404-8640-11de-aa7e-a477d86453c4} {466f5a88-0af2-4f76-9038-095b170dc21c} toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d} timeout 3 Real-mode Boot Sector --------------------- identifier {bc2d8409-8640-11de-aa7e-a477d86453c4} device partition=C: path \NTLDR description Windows XP Windows Boot Loader ------------------- identifier {bc2d8406-8640-11de-aa7e-a477d86453c4} device partition=D: path \Windows\system32\winload.exe description Windows 7 locale en-US inherit {6efb52bf-1766-41db-a6b3-0ee5eff72bd7} recoverysequence {bc2d8407-8640-11de-aa7e-a477d86453c4} recoveryenabled Yes osdevice partition=D: systemroot \Windows resumeobject {bc2d8405-8640-11de-aa7e-a477d86453c4} nx OptIn Windows Boot Loader ------------------- identifier {bc2d8404-8640-11de-aa7e-a477d86453c4} device partition=E: path \Windows\system32\winload.exe description Blank osdevice partition=E: systemroot \Windows Windows Legacy OS Loader ------------------------ identifier {466f5a88-0af2-4f76-9038-095b170dc21c} device partition=H: path \ntldr description Windows XP Spare
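
    Since the working XP entry ({bc2d8409-...}) already points at C: while the legacy "Windows XP Spare" entry ({466f5a88-...}) points at H:, one hedged option is to adjust or drop the H: entry with bcdedit directly, using the identifiers from the dump above; run from an elevated prompt and export the store first.

      rem Back up the current BCD store
      bcdedit /export C:\bcd-backup

      rem Point the legacy loader entry at C: instead of H:
      bcdedit /set {466f5a88-0af2-4f76-9038-095b170dc21c} device partition=C:

      rem ...or simply remove the spare entry and keep the C: one
      bcdedit /delete {466f5a88-0af2-4f76-9038-095b170dc21c}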

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
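
    Since a LUKS1 header does not record the payload size, the usual order (a hedged sketch only, after a verified backup and with the filesystem unmounted) is: shrink ext4 to below the target, shrink the LV, let cryptsetup re-read the smaller device, then grow ext4 back to fill the mapping. The names and the 100 GiB target are taken from the output above.

      umount /srv/backup
      e2fsck -f /dev/mapper/backup
      resize2fs /dev/mapper/backup 95G      # shrink the fs below the final size, for a safety margin
      lvreduce -L 100G /dev/moab/backup     # shrink the LV to the target
      cryptsetup resize backup              # make the dm-crypt mapping match the new LV size
      resize2fs /dev/mapper/backup          # grow the fs to fill the 100 GiB mapping
      e2fsck -f /dev/mapper/backup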

    Read the article
