Search Results

Search found 13180 results on 528 pages for 'non interactive'.

Page 398/528

  • Overriding vhost.conf to always allow PHP include access to directory

    - by Jeremy Dentel
My predecessor developed a simplistic newsletter system for our school's newspaper using PEAR's Mail package. As I grow this system (and our site), we are constantly stuck with Plesk rewriting the vhost.conf file in which the PEAR include path has been manually entered. This has become an unwieldy thing to manage and keep running. There's been a "note" from both the previous developer and me to try to solve this problem, but we can't entirely figure it out. I'm attempting a move to cPanel through another host, so hopefully it'll go away there, but until then it is tediously difficult to keep the system running reliably without constant attention. I've searched around and haven't found a solution. I'm rather new to the server management scene (the command line was non-existent to me until around a year ago. =/), so I may be missing something obvious. Any help would be useful. "Similar Questions" popped up one candidate, but it still seems to rely on vhost.conf, so changes made within Plesk would still overwrite it.
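
    One Plesk-proof workaround is to set the include path from the application itself rather than in vhost.conf, so a regenerated config cannot wipe the setting. A minimal sketch, assuming PEAR lives in the stock /usr/share/pear location (adjust the path for your server):

        <?php
        // Prepend the PEAR directory to the include path at runtime;
        // this survives Plesk rewriting vhost.conf.
        set_include_path('/usr/share/pear' . PATH_SEPARATOR . get_include_path());
        require_once 'Mail.php'; // PEAR's Mail package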

    Read the article

  • Virtual machines with failover setup

    - by kimmmo
We have three servers, and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get some magical cloud setup running using StackOps and Crowbar, but it started to look like they were offering way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 server + KVM + Ganeti + DRBD, unless you can suggest a better solution that we have missed. Requirements: installation should be simple, or at least understandable without being on the dev team; a browser interface for creating and managing VMs is a nice bonus; a single node's hardware failure should cause minimal downtime for the VMs that were running on that node; and adding more nodes should be possible without shutting down the VMs.
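
    For a sense of what the Ganeti route looks like day to day, here is a minimal sketch, assuming a Ganeti 2.x cluster that is already initialized (node names, instance name and sizes are placeholders):

        # Create a VM whose disk is DRBD-mirrored from node1 to node2
        gnt-instance add -t drbd -n node1:node2 \
            --disk 0:size=20G -B memory=2048M \
            -o debootstrap+default vm1

        # After node1 dies, start the instance from its replica on node2
        gnt-instance failover vm1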

    Read the article

  • Error setting up Data Protection Manager 2010 Agents / Network "Unauthenticated" in network settings

    - by Bowsa
I'm not sure if the two are connected, but I suspect they are. Basically, I'm trying to set up Data Protection Manager 2010 on a fresh install of Server 2008 R2 in an SBS 2003 domain. Everything went fine until I tried to install agents across the network. Upon clicking add, I get the following error message: "Unable to connect to the Active Directory Domain Services Database. Make sure that the DPM server is a member of a domain and that the controller is running. Also verify that there is network connectivity between the DPM server and the domain controller. ID: 7". As usual (worryingly), the MSDN support for 2010 products is nearly non-existent; clicking the error ID simply gives a page-not-found error. So after two days of Googling and trying various fixes (DNS settings, adding permissions to AD objects, rejoining the domain and many more), I thought I'd ask here in the hope that someone out there has had this issue before. Any help greatly appreciated! Some further info: firewalls are disabled on the Server 2008, SBS, and client machines. Manually installing the agent and adding the client also fails, as the DPM server tries to contact the DC first. Edit: I tried creating a new protection group instead, and it gives a different error upon adding the machines: "Following machines are not found in AD: COMPUTERNAME.COMPANYNAME.LOCAL". Is there a certain directory structure it follows in AD?
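
    For the DC-connectivity check the error message asks about, the stock AD tooling can be run from the DPM box itself; a quick sketch, assuming the domain is COMPANYNAME.LOCAL as in the error above (substitute your own):

        rem Ask the locator which domain controller this machine can reach
        nltest /dsgetdc:COMPANYNAME.LOCAL

        rem Verify the secure channel between this machine and the domain
        nltest /sc_verify:COMPANYNAME.LOCAL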

    Read the article

  • How private is the Opera Turbo feature of Opera?

    - by Marcus V
If I'm using Opera with the Opera Turbo feature turned on (always, not set to "automatic"), can anyone see what sites I'm visiting (except Opera, of course)? Opera Turbo uses a proxy server, so it should work that way, but as a not very technical person I'm not sure. Why do I want this? Well, nowadays, at least in my country, more and more (legal) open Wi-Fi connections are available, and in those environments I'd like more privacy protection. I don't mind if they can see my IP address; I just want to hide as much as I can of what I'm doing. BTW: I don't care that they can see the data transferred; it doesn't have to be that secret. I only want to hide which sites I'm requesting. BTW: I know that Opera Turbo only works with non-secure websites (HTTP), but that's fine for me; I only want it to work with those sites. BTW: I don't need this for illegal purposes; I only want it for privacy reasons.

    Read the article

  • Why I cannot copy install.wim from Windows 7 ISO to USB (in linux env)

    - by fastreload
I need to make a bootable USB disk from a Windows 7 ISO. My USB stick is formatted as NTFS and the ISO is not corrupt. I can copy install.wim elsewhere, but I cannot copy it to the USB stick. I even tried rsync.

    rsync error for sources/install.wim:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: write failed on "/media/52E866F5450158A4/sources/install.wim": Input/output error (5)
        rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.8]

    stat for install.wim:

        File: `X15-65732 (2)/sources/install.wim'
        Size: 2188587580   Blocks: 4274600   IO Block: 4096   regular file
        Device: 801h/2049d  Inode: 671984  Links: 1
        Access: (0664/-rw-rw-r--)  Uid: ( 1000/ umur)  Gid: ( 1000/ umur)
        Access: 2011-10-17 22:59:54.754619736 +0300
        Modify: 2009-07-14 12:26:40.000000000 +0300
        Change: 2011-10-17 22:55:47.327358410 +0300

    fdisk -l:

        Disk /dev/sdd: 8103 MB, 8103395328 bytes
        196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1   *       32    15826943     7913456    7  HPFS/NTFS/exFAT

    hdparm -I /dev/sdd (the garbled identify data is reproduced exactly as printed):

        SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ATA device, with non-removable media
            Model Number:       UF?F?A????U]r???U u??tF?f?`~
            Serial Number:      ?@??~|
            Firmware Revision:  ????V?
            Media Serial Num:   $I?vnladip raititnot baelErrrol aoidgn
            Media Manufacturer: o eparitgns syetmiM
        Standards:
            Used: unknown (minor revision code 0x0c75)
            Supported: 12 8 6
            Likely used: 12
        Configuration:
            Logical      max    current
            cylinders    17218  0
            heads        0      0
            sectors/track 128   0
            --
            Logical/Physical Sector size: 512 bytes
            device size with M = 1024*1024: 0 MBytes
            device size with M = 1000*1000: 0 MBytes
            cache/buffer size = unknown
        Capabilities:
            IORDY(may be)(cannot be disabled)
            Queue depth: 11
            Standby timer values: spec'd by Vendor
            R/W multiple sector transfer: Max = 0 Current = ?
            Recommended acoustic management value: 254, current value: 62
            DMA: not supported
            PIO: unknown
            * reserved 69[0]  * reserved 69[1]  * reserved 69[3]  * reserved 69[4]  * reserved 69[7]
        Security:
            Master password revision code = 60253
            not supported
            not enabled
            not locked
            not frozen
            not expired: security count
            not supported: enhanced erase
            71112min for SECURITY ERASE UNIT. 172min for ENHANCED SECURITY ERASE UNIT.
        Integrity word not set (found 0xaa55, expected 0x80a5)

    Read the article

  • Dell 2970 - HP 1/8 G2 autoloader keeps falling off LSI 2032 SCSI chain

    - by middaparka
I've a somewhat irritating problem with a Dell 2970 that has an HP 1/8 G2 autoloader (the Ultrium LTO 2 model) attached to the Dell/LSI 2032 non-RAID SCSI card. In essence, sometimes the autoloader/drive completely fails to appear on the SCSI chain (i.e. neither a media changer nor a tape drive is present within the device manager), and sometimes it appears but then subsequently disappears at a seemingly random (yet always inconvenient) time, resulting in backup failures. On most occasions there are simply no errors logged in the system event log, but I did manage to capture a series of LSI_SCSI event ID 11 ("The driver detected a controller error on \Device\RaidPort0") errors followed by an event ID 129 ("Reset to device, \Device\RaidPort0, was issued") error during testing. I've tried two different cables, both with the same effect: sometimes the autoloader appears (for a while), sometimes it's completely absent. There's only one terminator I've been able to try, but as I've since successfully tested the autoloader on multiple occasions (albeit via an Adaptec U160 card on a different machine), my gut feeling is that the issue doesn't lie with the terminator, or indeed the autoloader itself. As such, I'm just wondering if anyone has any ideas? It's most likely not relevant, but this is all under Windows SBS 2008, running Backup Exec 12.5 SBS edition (the Dell version), both fully patched. Additionally, the autoloader is running the latest firmware. It's been a while since I've dealt with anything SCSI, so all suggestions will be gratefully, gratefully received.

    Read the article

  • Virtual Fileserver

    - by Sergei
Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as the SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level, SAN-to-SAN replication, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has its own replication option. Thank you very much in advance; please ask if I missed any details!

    Read the article

  • GitLab post-receive hook not firing

    - by Ben Graham
Apologies if this isn't the right Stack Exchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like:

        $ git push
        # ...
        error: cannot run hooks/post-receive: No such file or directory

    The hook exists. The post-receive hook/symlink exists and is executable:

        -rwxr-xr-x 1 git git 470 Oct  3  2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct  3  2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab. The gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        $ sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it. GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
        hooks_path: /home/git/.gitolite/hooks/

    and

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?

    Read the article

  • Force Juniper-network client to use split routing

    - by craibuc
I'm using the Juniper client for OS X ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split routing, and the client's VPN host is not willing to enable split routing. Is there a way for me to override this configuration, or to do something on my workstation so that non-client network traffic bypasses the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology. ** edit ** The Juniper client changes my system's resolv.conf file from:

        nameserver 192.168.0.1

    to:

        search XXX.com [redacted]
        nameserver 10.30.16.140
        nameserver 10.30.8.140

    I've attempted to restore my preferred DNS entry to the file:

        $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf

    but this results in the following error:

        -bash: /etc/resolv.conf: Permission denied

    How does the superuser account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?
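
    Incidentally, the "Permission denied" above comes from the shell, not from the file: the >> redirection is opened by the invoking (non-root) shell before sudo ever runs, so root never touches the file. A sketch of two equivalent ways to do the append with root privileges:

        # Run the whole command, including the redirect, inside a root shell
        sudo sh -c 'echo "nameserver 192.168.0.1" >> /etc/resolv.conf'

        # Or let tee (running as root) do the writing
        echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf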

    Read the article

  • My linux server "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
I have some strange behaviour on my server. It's an OpenVZ VPS (I think it's OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also, ifconfig returns venet0). When I do cat /proc/stat, I can see that each second about 50-100 processes are created and about 800k-1200k context switches happen! All that with the server completely idle: no traffic, no programs running. top shows 0 load average and 100% idle CPU. I've stopped all non-essential services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens. I also ran ps -ALf each second, and I don't see any changes: only a new ps process is created each time, and its PID is just the previous one + 1, so new processes are not actually accumulating. So I thought the growth in cat /proc/stat must be threads (yes, it seems the processes counter in /proc/stat counts thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1). I've changed to the /proc dir and done cat /proc/[PID]/status for all PIDs listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at the same speed as cat /proc/stat shows (just a few tens per second); Threads stays the same as well. I've done strace -p PID on every process too, so I can see if any process is creating threads or something, but the only process with a bit of movement is ssh, and that movement is the read/write operations caused by the data being sent to my terminal. After that, I did vmstat -s and saw that forks is growing at the same speed as processes in /proc/stat does. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but the PIDs on my server are not growing! The last thing I can think of is that the counters /proc/stat and vmstat -s show are shared with all the other VPSes stored on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
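
    For reference, the per-second fork rate being described can be read straight off the processes line of /proc/stat, which counts total forks (including thread creation) since boot; a minimal shell sketch:

        # Print the number of new processes/threads created each second
        prev=$(awk '/^processes/ {print $2}' /proc/stat)
        while sleep 1; do
          cur=$(awk '/^processes/ {print $2}' /proc/stat)
          echo "forks/sec: $((cur - prev))"
          prev=$cur
        done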

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
Ok, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated when the connection is set up. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically, though, as UDP is packet-based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes; at least this is how I do my own programming, as it is quite hard to pick a decent MTU size without testing yourself, so you fall back in programming to max 1500-byte packets. The network in question is a standard small-business network. We have now upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear, quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and because of local firewalls I really want larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte range) quite often for processing. The question is: can I expect problems if I just enable jumbo frames? Again, this is not total ignorance; I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
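
    One low-risk way to check whether a given path really carries jumbo frames end to end is a don't-fragment ping with a jumbo-sized payload; a sketch from a Windows host (8972 bytes of ICMP payload plus 28 bytes of ICMP/IP headers makes a 9000-byte packet; the host name is a placeholder):

        rem Show the MTU currently configured on each interface
        netsh interface ipv4 show subinterfaces

        rem Send a 9000-byte, don't-fragment ping; a reply means the path handles jumbo frames
        ping -f -l 8972 fileserver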

    Read the article

  • Conditionally Rewrite Email Headers (From & Reply-To) Exchange 2010

    - by NorthVandea
I have a client who maintains Company A (with email addresses %username%@companyA.com), and they own the domain companyB.com; however, there is no "infrastructure" (no Exchange server) set up specifically for companyB.com. My client needs its end users (on companyA.com) to be able to add a specific word or phrase to the subject (or body) of an outgoing email (they are only concerned with outgoing; incoming is a non-issue in this case) that triggers the Exchange 2010 servers to rewrite the From and Reply-To headers from %username%@companyA.com to %username%@companyB.com, but this rewrite should ONLY occur if the user places the keyword/phrase in the subject (or body). I have attempted using transport rules and the New-AddressRewriteEntry cmdlet; however, each seems to have a limitation. From what I can tell, transport rules cannot rewrite the From/Reply-To fields, and New-AddressRewriteEntry cannot be conditionally triggered based on message content. So to recap: user sends email outside the organization: From and Reply-To remain %username%@companyA.com. User sends email outside the organization WITH "KeyWord" in the subject or body: From and Reply-To change to %username%@companyB.com automatically. Does anyone know how this could be done WITHOUT coding a new transport agent? I don't have the programming knowledge to code a custom agent... I can use any function of the Exchange Management Shell or Console. Alternatively, if anyone knows of a simple add-on program that could do this, that would be good too. Any help would be greatly appreciated. Thank you!
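
    For context, the unconditional half of this is exactly what New-AddressRewriteEntry does on an Edge Transport server; a sketch of that piece, assuming the stock cmdlet parameters (it is the keyword condition that it cannot express):

        # Rewrites sender addresses domain-wide on every outbound message;
        # there is no supported way to gate this on subject or body content.
        New-AddressRewriteEntry -Name "companyA to companyB" `
            -InternalAddress companyA.com -ExternalAddress companyB.com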

    Read the article

  • Hyper-v on 2012R2 startup gen1 vm causes the host to freeze up

    - by sputnik
I've searched a lot to resolve the following issue, but nothing has helped me. My problem is that starting up a first-generation VM locks up the whole host; only a hard reset helps. A second-generation VM starts and runs perfectly. The freezes have happened with 3 different VMs: FreeBSD, Ubuntu and Windows Server 2008 R2, while Windows 8.1 in a second-gen configuration works perfectly. I'm using this PC mainly as a workstation. No event log errors or dumps are generated. My system: Windows Server 2012 R2; FX-8350, not overclocked; ASRock 870 Extreme R2 (crappy board, IMHO); 32GB DDR3 1866 @ 1600 (my motherboard, despite claimed "support" for 1866 RAM, won't run it at full speed); 120GB SSD; 4.5TB Storage Spaces device. I don't think it's due to my system, because VMware Workstation was running without problems. Did I forget to configure something? Any help is appreciated. P.S.: Even deactivating C1E, C6 and Cool'n'Quiet didn't work. P.P.S.: With no virtual network adapter set, the system still locks up. Creating a first-gen VM without any HDDs or network and launching it works; attaching a boot DVD causes the host to freeze. The host freezes as the gen-1 VM begins to boot, no matter whether from DVD or HDD.

    Read the article

  • Connect Chrome to TOR

    - by Jack M
I'm having difficulty connecting Chrome to Tor; I started trying yesterday. I started Vidalia and the Tor Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing, downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) in Chrome when I tried to load a webpage. So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of 8118 as everyone on the internet seems to be suggesting. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and just set localhost:9051 for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT). And that's when I came here for help. I typed up the above question, but then at the last minute decided to do a bit more reading and found someone here suggesting command-line arguments via a Windows shortcut:

        "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org

    And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives? EDIT: I realized I had a "non-Tor" instance of Chrome running, and possibly that was causing the command-line args to be ignored when I started the new instance. I closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message -- because I got another timed-out error instead. Vidalia's bandwidth graph didn't even blink.
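
    A note on the ports, since the question bounces between them: in a stock Tor/Vidalia install, 9051 is the control port and 9050 is the SOCKS port, and Chrome accepts a SOCKS proxy in URL form. A sketch of a shortcut target under those default-port assumptions (path abbreviated as in the question):

        "C:\snip\chrome.exe" --proxy-server="socks5://127.0.0.1:9050" --incognito check.torproject.org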

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using, simply by not joining the machines back into the domain. Over last summer, before the annual reinstall that readies the machines for the summer school, I toyed with the idea of installing Windows 7 over the network instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected Windows to be more friendly to PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX-based system as well. I've had some success by preparing an ISO using the Windows Deployment Kit and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe, nor could I get Samba to work properly; studying the stuff ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it outside. WDK didn't make things easier by mounting the disk image into RAM and writing it out in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting the Windows 7 setup? How can the Windows 7 setup best be customized to be completely unattended, including installation on a specific system partition without destroying the data partition, creation of a passworded admin and a default user, choice of a MAC-address-based hostname, and joining a domain? As many details as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
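
    On the domain-autojoin piece specifically, unattended setup reads it from the answer file; a minimal sketch of the relevant fragment, assuming an autounattend.xml with the standard component in the specialize pass (domain, account and password are placeholders):

        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <JoinDomain>EXAMPLE.LOCAL</JoinDomain>
            <Credentials>
              <Domain>EXAMPLE.LOCAL</Domain>
              <Username>joinaccount</Username>
              <Password>join-password</Password>
            </Credentials>
          </Identification>
        </component>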

    Read the article

  • How can I copy a VMware Fusion virtual machine to a FAT32 partition?

    - by Michael Prescott
I created the virtual machine on a host running OS X. I then moved the machine to a FAT32 partition on an external drive; it moved the first time without error. Then I moved it from the external drive to a host running Ubuntu 9.10. I had to move it to a FAT32 partition first because Ubuntu doesn't recognize the Mac OS Extended partitions on the drive. So, the virtual machine (VM) ran on the Ubuntu host for a while, and then I moved it back to the FAT32 partition and from there back to the OS X host. I worked on the VM for a while on the OS X host and then attempted to move it back to the FAT32 partition. I get the following system error: "The Finder can't complete the operation because some data in "my-virtual-machine" can't be read or written. (Error code -36)". Interestingly, I can move the file to another OS X partition, just not to FAT32. I also perused VMware's forums and found advice to set permissions on all files and folders to 777. I did this, but have had no success. I notice the files within the VM package are 777 now, but there is an extended-attributes symbol on their permission details: "rwxrwxrwx@". Since I can copy the VM between OS X partitions but not to non-OS X partitions, and all files and folders within the VM package (and the package itself) have permissions of 777, I speculate that the "@" is the problem. How can I remove the "@", or is there something else I need to modify to allow me to copy/move the VM to other hosts?
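
    On the "@" specifically: it marks HFS+ extended attributes, which can be listed and stripped from Terminal with OS X's xattr tool. A sketch, with the bundle name as a placeholder (whether the xattrs are really what blocks the Finder copy is a separate question):

        # List the extended attributes on everything inside the VM bundle
        find my-virtual-machine.vmwarevm -exec xattr -l {} +

        # Strip all extended attributes from each file
        find my-virtual-machine.vmwarevm -exec xattr -c {} +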

    Read the article

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or whether there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64-bit). The Boot Camp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is: why do applications behave differently depending on whether the read-only image file, which should be abstracted away behind the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

    Read the article

  • Annoying trackpad freeze on MacBook [solved]

    - by Hafthor
NOTE: Question marked answered because it was forced after being put up for bounty. The actual solution was to have Apple repair it. The trackpad usually works, but sometimes stops responding for around 5 seconds and then suddenly starts working again. It seems to happen when I switch between typing and moving + button-clicking, and also when I do a lot of double-clicking. I tried turning off "Ignore accidental trackpad input". Apple replaced the keyboard/mouse under warranty; the problem remains. Any ideas? Edit: White non-unibody Late-2008 13" MacBook, fully up-to-date OS. It doesn't seem to matter whether it is plugged in or not. Edit: Updated to Snow Leopard; this seems to have made it worse. Edit: Applying even a little pressure to the left palm rest triggers this condition. Apple replaced the top case again, and this time it seems to have fixed it, although it looks like they may have added a spacer on the left palm rest to "fix" it.

    Read the article

  • need advice on data center move, communication with both facilities during transition

    - by Brian Roden
We are beginning the process of moving to a new facility. Office and warehouse operations will both be moving, and we must get shipping operations up and running at the new location while continuing to ship from the old location. Our contract with some third-party warehouse tenants requires a two-business-day turnaround (only weekends and holidays excluded), so we can't have major downtime during the move. We would like to keep our 172.16.60/61.xxx internal address space in use throughout the move. Is it possible to keep using this same internal range, and have our existing WatchGuard Firebox 520 and whatever router we get for the other location (preferably the same model) just treat both locations as one network, leaving our host IPs the same throughout the move? Renumbering the servers when they move isn't a big deal, but our wireless terminals for order picking in the warehouse have fixed IPs (and a fixed-IP, non-DNS reference to the host they speak with) and would be a massive undertaking to reconfigure when the servers move: each device would have to be reconfigured at least twice (some when we start using them in the new building while the host is still here; all of them, in both locations, when the host moves to the new building; and the rest when they finally make the move to the new building). We're trying to avoid that if possible.

    Read the article

  • Which is more secure: Tomcat standalone or Tomcat behind Apache?

    - by NoozNooz42
This question is not about performance, nor about load balancing, etc. Which would be more secure: running Tomcat in standalone mode, or running Tomcat behind Apache? The thing is, Tomcat is written in Java and is hence pretty much immune to buffer overruns/overflows (unless a buffer overrun in a C-written lib used by Tomcat can be triggered, but those are rare [the last one I remember was in zlib, many many moons ago] and one heck of a hack to actually exploit), which gets rid of a lot of potential exploits. This page: http://wiki.apache.org/tomcat/FAQ/Security has this to say: "There have been no public cases of damage done to a company, organization, or individual due to a Tomcat security issue... there have been only theoretical vulnerabilities found. All of those were addressed even though there were no documented cases of actual exploitation of these vulnerabilities." This, combined with the fact that buffer overruns/overflows are pretty much non-existent in Java, makes me believe that Tomcat in standalone mode is pretty secure. In addition to that, I can install both Java and Tomcat on Linux without needing to be root. The only moment I need to be root is to set up transparent port 8080 to port 80 forwarding (and 8443 to 443). Two iptables lines as root; that's all root is needed for. (I don't know about Apache.) Apache is much more widely used than Tomcat and definitely does not have a security track record as good as Tomcat's. What would make Tomcat + Apache more secure? What would make Tomcat + Apache less secure? In short: which is more secure, Tomcat standalone or Tomcat with Apache? (Remembering that performance isn't an issue here.)
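
    For reference, the "two iptables lines" mentioned above typically look like this sketch (run once as root; ports as in the question):

        # Transparently redirect privileged ports to Tomcat's unprivileged ones
        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443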

    Read the article

  • How to avoid ugly dithering when running KDE over VNC?

    - by Chris Jester-Young
I'm currently setting up a new Xen paravirt domain running KDE (4.2.2, from Kubuntu 9.04). As I have been unable to get the virtual framebuffer working in it, I've decided to set up VNC (from the vnc4server package) and run KDE over Xvnc. This is all fine and good, and KDE starts up okay. However, all the colours look dithered, especially on the task bar and title bar, making them impossible to see. From my web searches, this appears to be because these items are drawn using Porter-Duff compositing. This is especially the case when using the Oxygen style and the Oxygen and Ozone window titlebars (selecting these styles generates messages about Porter-Duff being unavailable); not using those styles at least makes most of the UI widgets and window titles usable again. But this doesn't solve the problem for the task bar, nor for the desktop, where the only theme available to me is Oxygen (this is under the "Desktop Settings - Plasma Workspace" window, just for reference). So, unless there is a way to use a non-Porter-Duff theme for those, it seems that KDE would still be unusable under VNC. So if someone experienced with KDE can advise on how to work around, or even fix, these issues, I'd appreciate it very much. :-)
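
    Dithering like this often points at the VNC session running at a shallow colour depth rather than at KDE itself, since Porter-Duff blending wants a true-colour visual. A sketch of starting the server with a 24-bit visual, assuming vnc4server's standard Xvnc flags:

        # Start display :1 with true colour, so compositing gets real 24-bit pixels
        vnc4server :1 -depth 24 -geometry 1280x1024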

    Read the article

  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
We had an issue recently with an unhandled exception in an ASP.NET C# application bringing down IIS and all the application pools it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in the task manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy. Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault: IIS 7.5, .NET 4.0, Windows Server 2008 R2. It faulted on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands). The exception generated was SocketException. The faulting module according to Event Viewer was KERNELBASE.dll. The issue was resolved by wrapping the call in a try-catch, logging the exception and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.
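
    The fix described at the end looks roughly like this sketch, using the modern Dns.GetHostEntry in place of the deprecated Dns.Resolve (ShowGenericContent is a hypothetical helper standing in for the page's fallback rendering):

        using System;
        using System.Net;
        using System.Net.Sockets;

        static void ShowHost(string hostname)
        {
            try
            {
                // Dns.GetHostEntry throws SocketException for a non-existent domain
                IPHostEntry entry = Dns.GetHostEntry(hostname);
                Console.WriteLine(entry.AddressList.Length + " address(es) found");
            }
            catch (SocketException ex)
            {
                // Log and fall back instead of letting the exception escape the request
                Console.WriteLine("DNS lookup failed: " + ex.Message);
                ShowGenericContent();
            }
        }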

    Read the article

  • nginx config woes for multiple subdomains & domains

    - by Peter Hanneman
I'm finally moving away from Apache, and I've got the latest development version of nginx running on a fully updated Ubuntu 10.04 VPS. I've got a single dedicated IP for the box (1.2.3.4), but I've got two separate domains pointing to the server: www.example1.com and www.example2.net. I would like to map the following relationships between URLs and document roots in the config:

        www.example1.com / example1.com  ->  /var/www/pub/example1.com/
        subdomain.example1.com           ->  /var/www/dev/subdomain/example1.com/
        www.example2.net / example2.net  ->  /var/www/pub/example2.net/
        subdomain.example2.net           ->  /var/www/dev/subdomain/example2.net/

    where the name of the requested subdomain is a folder under /var/www/dev/. Ideally, a request for a non-existent subdomain (no matching folder found) would result in a rewrite to the public site (e.g. invalid.example1.com -> www.example1.com), though a mere "404 Not Found" wouldn't be the worst thing in the world. It would also be nice if I didn't need to modify the config every time I mkdir a new subdomain folder; even better if I don't need to edit it for a new domain either... but now I'm getting greedy. :p Although in my defense, Apache did all of this with a single directive. Does anyone know how I can efficiently mimic this behavior in nginx? Thanks in advance, Peter Hanneman
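
    For one domain, the usual shape of this in nginx is a regex server_name with a named capture plus a folder-existence check; a sketch, assuming an nginx built with PCRE (exact server names take precedence over the regex, so www stays on the public block; repeat the pair per domain):

        # Catch-all for subdomains: document root derived from the hostname
        server {
            listen 80;
            server_name ~^(?<sub>.+)\.example1\.com$;
            root /var/www/dev/$sub/example1.com;

            # No matching folder: bounce to the public site
            if (!-d /var/www/dev/$sub/example1.com) {
                rewrite ^ http://www.example1.com$request_uri? permanent;
            }
        }

        # The public site itself
        server {
            listen 80;
            server_name example1.com www.example1.com;
            root /var/www/pub/example1.com;
        }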

    Read the article

  • How to auto mark tape as free in DPM 2012?

    - by Massimo
I have a backup server running System Center Data Protection Manager 2012, connected to a couple of tape drives (no library). I also have, of course, some tapes; tape rotation is manual. The tapes have been used before, by DPM itself (but the server has since been completely rebuilt) and by other backup software, so they are not empty. But they contain no data that DPM knows about and/or wants to preserve, so they could be marked as free without having to run ForceFreeTape.ps1. When a tape is placed into a drive, I am required to perform an inventory, have it recognized as an imported tape, and then mark it as free; otherwise DPM will simply refuse to use it. How can I tell DPM to automatically treat those imported tapes as free? And, of course, I do not want to reuse real backup tapes if by chance they get put into the drives before their expiration date, so the solution should mark imported tapes as free, but should not do the same with real, non-expired tapes.

    Read the article

  • GNU screen cannot find terminfo entry on HP-UX

    - by Ency
I am trying to make screen work on HP-UX B.11.23 U ia64 0308561483 unlimited-user license. Please note that I do not have root access. I have already compiled screen successfully, configured with LIBS=-lcurses. When I try to start screen, it writes: Cannot find terminfo entry for 'xterm'. But there ARE terminfo entries for the terminal type:

        screen-4.0.3> ls -a /usr/share/lib/terminfo/x/
        .  ..  x-hpterm  x1700  x1720  x1750  xitex  xl83  xterm  xterms

    I think the problem may be that they are in a non-standard path, because according to the man page the standard path is /usr/lib/terminfo/?/*. What I tried: as I said, I do not have root access, so I can't make a symlink. I did try running screen with TERMINFO_DIRS filled in (TERMINFO_DIRS=/usr/share/lib/terminfo/x/ ./screen and TERMINFO_DIRS=/usr/share/lib/terminfo/ ./screen), but neither works: same error. I changed TERM to different values: same error, Cannot find terminfo entry for <WHATEVER WAS IN THE TERM VAR>. I put something into a screenrc and ran ./screen -c screenrc:

        screen-4.0.3> cat screenrc
        attrcolor b ".I"
        term xterm
        termcap xterm* LP:hs@
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
        defbce "on"

    But no luck so far. Have you got any suggestions? If you need additional information, let me know.

    Read the article
