Search Results

Search found 15651 results on 627 pages for 'setup'.


  • Cannot log in to Windows 7 in normal mode but can in safe mode

    - by Guy
    I have a Windows 7 Ultimate computer (a Shuttle) that I built myself, with a solid-state drive (SSD) in it. It's been working well for a number of months, but now there are problems when I start it. I have 2 users set up on the computer, and when I try to sign in with either user it claims that the password is incorrect. I could understand the odd typo, but I've had my wife try it as well and we've got the passwords correct. On top of that, it will remain at the login screen for 1 minute and 20 seconds and then spontaneously reboot without shutting down. So I'm trying to work out if this is a hard disk problem or something else. Any ideas? (I have a nightly backup to a WHS, so it will be easy to recover, but I don't want to do that unless I have to, and I don't want to waste time putting in a new HD just to discover it's something else.) More info: if I start in Safe Mode I am able to log in with the password, and all appears as normal as it can in Safe Mode. However, a normal boot continues with the same problem.

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse-proxy gateway to our internal network. Our setup: we have one external IP but three domains we would like to point to various webservers on our internal network. It's not so much a load-balancing or caching issue; we're merely routing client browsers to a port-80 webpage (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine. Static pages load, and everything is good, with the exception of a Flash/Flex-based web client for a digital asset management (DAM) program. The static page itself loads fine; it is just at the moment of entering credentials, be they correct or incorrect, and hitting login, that there is no response whatsoever, neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file, but the documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded? Is there another reverse-proxy program (nginx?) that allows this type of config? Am I looking at this the entire wrong way? Should Flash/Flex fundamentally not be allowed to have this access?
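
    One avenue worth trying, sketched here only as a guess: Flash's cross-domain policy check. If the Flex client posts its login to a host it considers a different domain, the Flash runtime silently drops the response, which matches the "no response whatsoever" symptom. A minimal crossdomain.xml served from the webroot of the internal server (the wildcard is for testing only; tighten it to your three public domains once the login round-trip works):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <!-- testing only: allow any origin -->
          <allow-access-from domain="*" />
          <allow-http-request-headers-from domain="*" headers="*" />
        </cross-domain-policy>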

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script in order to back up three VMs. I put it in a crontab, but I have an issue. In my backup folder I have 3 different folders, one for each VM. In each folder I have the following files:

        -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        http://...
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. Clone: 1% done. Clone: 2% done. Clone: 3% done. Clone: 4% done. Clone: 5% done. Clone: 6% done. Clone: 7% done. Clone: 8% done. Clone: 9% done.
        Clone Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. Clone: 8% done. Clone: 9% done. Clone: 10% done. Clone: 11% done. Clone: 12% done. Clone: 13% done. Clone: 14% done. Clone: 15% done. Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the ghettoVCB script is not able to handle rotation of the VM backups. Any ideas? Thanks!
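
    Even though deleting the snapshot by hand is the known workaround, for completeness here is a minimal sketch of doing it from the service console, assuming ESXi's vim-cmd is available on the host (the VM ID 42 is a placeholder; read the real ID off your own inventory first):

        # find the VM's inventory ID
        vim-cmd vmsvc/getallvms | grep vm1
        # inspect its snapshots, then remove them all so ghettoVCB can run again
        vim-cmd vmsvc/snapshot.get 42
        vim-cmd vmsvc/snapshot.removeall 42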

    Read the article

  • After update, suddenly lost ability to access Windows Server 2008 R2 shares from Windows XP clients

    - by Knute Knudsen
    Today I lost the ability to see my Windows Server 2008 R2 shares from any of my 3 Windows XP machines in my small office. The 5 Win7 machines haven't been affected (they are still able to browse/access the 2008 server), but none of my WinXP machines can access the 2008 R2 server anymore. Yesterday (and for the previous year) everything was working fine. I do not have a domain setup. I can still access Win7 shares from WinXP clients. Browsing the server logs, I see that the following updates were installed last night:

        Installation Ready: The following updates are downloaded and ready for installation.
        This computer is currently scheduled to install these updates on Thursday, November 15, 2012 at 3:00 AM:
        - Security Update for Windows Server 2008 R2 x64 Edition (KB2761226)
        - Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2729452)
        - Windows Malicious Software Removal Tool x64 - November 2012 (KB890830)
        - Cumulative Security Update for Internet Explorer 9 for Windows Server 2008 R2 x64 Edition (KB2761451)

    It seems likely that something was changed in last night's update, but so far I haven't seen anything on microsoft.com to prove it. I did hear that XP is reaching the end of the road soon. Any ideas?
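
    One hedged way to test the update theory is to uninstall the most likely candidate on the server and retry from an XP client. wusa is the stock Windows Update Standalone Installer, and the KB number below is taken from the log above, not a confirmed culprit:

        rem remove the 2008 R2 security update, reboot, then retest the shares
        wusa /uninstall /kb:2761226

    If the shares come back, that update can be hidden in Windows Update until a proper fix is found; if not, repeat with the other KBs from the list.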

    Read the article

  • Can I make two wireless routers communicate wirelessly?

    - by Dana Robinson
    I want to make a setup like this:

        cable modem <-cable- wireless router 1 <-wireless- wireless router 2 (in another room) <-cables- PCs

    Basically, I want to extend my network access across the house and then have a bunch of network jacks available for my office PCs. Right now I have a cable modem going to a wireless router in one room, and a PC with a wireless PCI card in the office on the other side of the house. I use Internet Connection Sharing (ICS) with the other PCs in the office. The problem is that ICS is flaky, especially when I switch to VPN on the Windows box to access files at work. I picked up a wireless USB adapter that I thought I could share among the PCs I work on, but I'm not very happy with it, so I'm going to return it (NDISwrapper support for it is poor). Is this possible? My wireless experience so far has been pretty straightforward, so I have no idea what kind of hardware is available. I've looked at network extenders, but those just look like repeaters for signal strength. I want wired network jacks in my office.

    Read the article

  • Correctly setting up UFW on Ubuntu Server 10 LTS which has Nginx, FastCGI and MySQL?

    - by littlejim84
    I want the firewall on my new webserver to be as secure as it needs to be. While researching iptables, I came across UFW (Uncomplicated FireWall). This looks like a better way for me to set up a firewall on Ubuntu Server 10 LTS, and seeing that it's part of the install, it seems to make sense. My server will have Nginx, FastCGI and MySQL on it. I also want to allow SSH access (obviously). So I'm curious to know exactly how I should set up UFW, and whether there is anything else I need to take into consideration. In my research I found an article that explains it this way:

        # turn on ufw
        ufw enable
        # log all activity (you'll be glad you have this later)
        ufw logging on
        # allow port 80 for tcp (web stuff)
        ufw allow 80/tcp
        # allow our ssh port
        ufw allow 5555
        # deny everything else
        ufw default deny
        # open the ssh config file and edit the port number from 22 to 5555, ctrl-x to exit
        nano /etc/ssh/sshd_config
        # restart ssh (don't forget to ssh with port 5555, not 22 from now on)
        /etc/init.d/ssh reload

    This all seems to make sense to me. But is it all correct? I want to back this up with any other opinions or advice to ensure I do this right on my server. Many thanks!
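
    For comparison, a minimal sketch of the same policy in the ordering most guides recommend, so the deny default and the allow rules are in place before the firewall comes up and an active SSH session is never cut off (the non-standard SSH port 5555 is carried over from the article above):

        # defaults and allows first, then enable
        ufw default deny
        ufw allow 80/tcp      # Nginx
        ufw limit 5555/tcp    # SSH on the custom port; 'limit' rate-limits brute-force attempts
        ufw logging on
        ufw enable

    MySQL and FastCGI normally listen only on localhost or a Unix socket, so they should need no rule at all unless another host has to reach them.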

    Read the article

  • Route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF; MPLS, including a default and backup path; RSVP for distributing labels within the telco; OSPF for distributing routes from the customer-edge (CE) routers to the VRFs in the adjacent PEs; and finally iBGP for distributing customer routes between VRFs in different PEs. The topology diagram is omitted here; the addressing scheme for the network is as follows:

        UOW-TAU:  ge-0/0/0 192.168.3.1
        TAU-PE1:  ge-0/0/0 10.0.1.0    ge-0/0/1 10.0.2.0    ge-0/0/2 192.168.3.2
        TAU-P1:   ge-0/0/0 172.16.1.0  ge-0/0/1 172.16.3.1  ge-0/0/2 10.0.2.2
        HAM-P1:   ge-0/0/0 172.16.3.2  ge-0/0/1 172.16.2.1  ge-0/0/3 10.0.3.2
        ACK-P1:   ge-0/0/0 172.16.1.2  ge-0/0/2 172.16.2.2  ge-0/0/3 10.0.1.2
        HAM-PE1:  ge-0/0/0 10.0.3.1    ge-0/0/2 192.168.4.2
        UOW-HAM:  ge-0/0/0 192.168.4.1

    I also set up a loopback address for each node. I want to set up OSPF so that the path to each internal subnet and each router loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? My understanding of the procedure is below; is it correct? I set up OSPF on UOW-TAU interfaces ge-0/0/0 and ge-0/0/1, and on UOW-HAM interfaces ge-0/0/0 and ge-0/0/1; let me call this Area 100. Once I have done this, I should be able to reach each node from the others using ping and traceroute. Any help is highly appreciated.
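
    A minimal Junos sketch of that idea for one of the P routers, assuming area 100 and the interface names from the table above (repeat the pattern on each node for every core-facing interface, and include lo0.0 so the loopbacks are advertised):

        # on TAU-P1, in configuration mode
        set protocols ospf area 0.0.0.100 interface ge-0/0/0.0
        set protocols ospf area 0.0.0.100 interface ge-0/0/1.0
        set protocols ospf area 0.0.0.100 interface ge-0/0/2.0
        set protocols ospf area 0.0.0.100 interface lo0.0 passive
        commit check

    Once every node carries the same area, show ospf neighbor and show route protocol ospf are the quickest checks before trying ping and traceroute.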

    Read the article

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have set up a web server on Amazon with 3 virtual hosts. For some reason I can't get any of the sites going on it; they all show a 404 error. /var/log/apache2/error.log shows "File does not exist: /etc/apache2/htdocs". I have checked:

        - a2ensite run for all my virtual hosts
        - softlinks actually present in sites-enabled
        - access rights in /var/www set to 777, in case the user is not www-data
        - grep -r htdocs /etc/apache2 (returns nothing)
        - ports.conf has a NameVirtualHost directive exactly matching the virtual hosts

    What else could this be? ports.conf:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost 107.20.169.163:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    sites-available/www.seleconlight.com:

        <VirtualHost 107.20.169.163:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
            CustomLog /var/log/apache2/www.seleconlight.com-access.log combined
            ErrorLog /var/log/apache2/www.seleconlight.com-error.log
        </VirtualHost>
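
    A quick way to see which vhost (if any) Apache actually parsed, and why requests fall through to the compiled-in default of /etc/apache2/htdocs:

        # dump the parsed virtual-host table and the default server
        apache2ctl -S
        # confirm which address Apache is actually bound to
        netstat -plnt | grep ':80'

    One EC2-specific caveat: the instance's interface normally carries only the private address (the public 107.20.169.163 is NATed), so a NameVirtualHost and VirtualHost pinned to the public IP may never match any incoming connection. If apache2ctl -S lists no matching vhost, that would be the first thing to check.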

    Read the article

  • Change DPI setting in Windows 8.1 for the Logon Screen

    - by jmc302005
    How can the DPI setting be changed for the logon screen in Windows 8.1? Microsoft has added per-user DPI settings, but this means that there is no adjustable DPI setting for the lock/logon screen. You can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen. For example: I use a 44" flat-screen TV as the monitor on my desktop, big enough for me to sit in my recliner and use my computer. I use the on-screen keyboard most of the time (I don't want to keep a keyboard next to me). The problem is that with the new DPI setup the on-screen keyboard takes up nearly half the screen, which is too big. I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named LogicalDPIOverride with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1 with no change in the result; instead I noticed that after logging out and back in, the -1 value was back in the registry. How can I change this default DPI? Can I use the LogPixels string that worked for DPI in Windows 7? (Two screenshots, of the lock screen and the logon screen, accompanied the original post.)
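
    A hedged sketch of the Windows 7-era approach mentioned above, untested on 8.1: set LogPixels in the default profile that the logon session reads (0x78 = 120 DPI, i.e. 125%; export the key first, since 8.1's per-user DPI handling may overwrite it the same way LogicalDPIOverride was reset):

        Windows Registry Editor Version 5.00

        ; 0x60 = 96 DPI (100%), 0x78 = 120 DPI (125%)
        [HKEY_USERS\.DEFAULT\Control Panel\Desktop]
        "LogPixels"=dword:00000078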

    Read the article

  • Can I run a mix of static addressing and NAT?

    - by aroth
    Let's say that an ISP offers a plan with 5 static IP addresses, but I have more than 5 devices, many of which (such as a networked printer, for instance) I do not want or need to have a static IP address. So the topology I'm planning goes something like:

        ISP -> Router -> Switch -> Computer (static address)
                               -> Computer (static address)
                               -> Printer (DHCP/NAT)
                               -> TV (DHCP/NAT)
                               -> (...)
                               -> Wireless devices (DHCP/NAT)

    Generally speaking, is it possible to run a network like that? If not, then what sort of setup do I need so that I can assign static addresses just to the things that need them, and use DHCP/NAT for everything else? Also, which internal networking devices will consume a static IP address? I'm pretty sure the router will, correct? Does the switch also?

    Read the article

  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi, I am setting up log shipping for the BizTalk 2009 database. Following the article http://msdn.microsoft.com/en-us/library/aa560961.aspx, I am doing the following to set up BizTalk log shipping on the destination server. First, enable ad-hoc queries:

        sp_configure 'show advanced options', 1
        go
        reconfigure
        go
        sp_configure 'Ad Hoc Distributed Queries', 1
        go
        reconfigure
        go
        sp_configure 'show advanced options', 0
        go
        reconfigure
        go

    Then execute LogShipping_Destination_Schema and LogShipping_Destination_Logic in master on the destination server, and run:

        exec bts_ConfigureBizTalkLogShipping
            @nvcDescription = '',
            @nvcMgmtDatabaseName = '',
            @nvcMgmtServerName = '',
            @SourceServerName = null, -- null indicates that this destination server restores all databases
            @fLinkServers = 1         -- 1 automatically links the server to the management database

    When I run this I receive the following error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    After some research I found that this error usually means the SQL Server Service Principal Name (SPN) was not configured and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts, so I asked the domain admin to create SPNs for the SQL service accounts on both the source and destination servers, using both the hostname and the FQDN, and to enable the computer names and service accounts for delegation. When I then run:

        select * from sys.dm_exec_connections

    I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?
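
    For reference, a hedged sketch of what the SPN registrations should look like, assuming default instances on port 1433; the hostnames (SRCSQL01, DSTSQL01), domain (CONTOSO) and service account (sqlsvc) are placeholders:

        :: register SPNs for the SQL Server service account, short name and FQDN
        setspn -A MSSQLSvc/SRCSQL01:1433 CONTOSO\sqlsvc
        setspn -A MSSQLSvc/SRCSQL01.contoso.local:1433 CONTOSO\sqlsvc
        setspn -A MSSQLSvc/DSTSQL01:1433 CONTOSO\sqlsvc
        setspn -A MSSQLSvc/DSTSQL01.contoso.local:1433 CONTOSO\sqlsvc
        :: list what is registered and look for duplicates, a common cause of NTLM fallback
        setspn -L CONTOSO\sqlsvc

    Whether a given session actually negotiated Kerberos can be confirmed with: select auth_scheme from sys.dm_exec_connections where session_id = @@spid.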

    Read the article

  • InterVLAN routing on a HP V1910 series switch

    - by tintix
    I recently bought an HP V1910-16G switch (former 3Com 29??) with IPv4 routing capabilities. After unpacking I upgraded the firmware to the latest 5.20 Release 1513P06. I set up additional VLANs (#2 and #3) and VLAN interfaces for them. The problem is that connected PCs on different VLANs can't ping each other; it looks like inter-VLAN routing doesn't work at all. So here's my setup:

        VLAN ID   VLAN interface
        1         10.0.0.21/24
        2         10.0.5.1/24
        3         10.0.6.1/24

    I have one PC connected to VLAN 2 (IP address 10.0.5.2, default gateway 10.0.5.1) and a second PC connected to VLAN 3 (IP address 10.0.6.2, default gateway 10.0.6.1). Routing table:

        Destination IP   Mask              Next Hop
        0.0.0.0          0.0.0.0           10.0.0.1
        10.0.0.0         255.255.255.0     10.0.0.21
        10.0.0.21        255.255.255.255   127.0.0.1
        10.0.5.0         255.255.255.0     10.0.5.1
        10.0.5.1         255.255.255.255   127.0.0.1
        10.0.6.0         255.255.255.0     10.0.6.1
        10.0.6.1         255.255.255.255   127.0.0.1
        127.0.0.0        255.0.0.0         127.0.0.1
        127.0.0.1        255.255.255.255   127.0.0.1

    The first PC can't ping the second PC and vice versa; each can only ping its own gateway. What am I doing wrong?

    Read the article

  • How can I erase the traces of Folder Redirection from the Default Domain Policy?

    - by bruor
    I've taken over from an IT outsourcer and have hit a snag now that we're starting a migration to Windows 7. Someone decided to set up folder redirection in the Default Domain Policy. I've since configured redirection in another policy at an OU level, but no matter what I do, the Windows 7 systems pick up the Default Domain Policy folder redirection settings only. I keep getting entries in the event log showing that the previously redirected folders "need to be redirected" with a status of 0x80000004; from what I can tell this just means that it's redirecting them locally. Is there a way I can wipe that section of the GPO clean so it's no longer there? I'm hesitant to try to reset the Default Domain Policy to complete defaults.

    UPDATE 6-26: I found that the following condition was causing the grief here. I had already implemented the new policies for clients, and for some reason XP was working great while 7 refused to process them. The DDP was enforced. Because of this, and because the folder redirection policies were set to redirect back to the local profile upon removal, it was forcing clients to pick up its "redirect to local" settings. Steps to recreate the issue:

        - Create a new test OU and policy.
        - Create some folder redirection settings, set to redirect to local upon removal.
        - Remove the settings from that GPO.
        - Refresh your view of the GPO and check the settings: you'll notice "not configured" entries for folder redirection.
        - Enforce this GPO.
        - Create a sub-OU beneath it.
        - Create a GPO linked to this sub-OU and configure some folder redirection settings.
        - Watch as the enforced GPO's "not configured" setting overrides the policy you just defined.

    I've had to relink the DDP to all OUs that have "block inheritance" enabled, and disable the "enforced" option on the DDP as a workaround. I'd love to re-enable enforcement of the DDP, but until I can erase the traces of folder redirection settings from it, I think I'm stuck.

    Read the article

  • Install Python setuptools on CentOS 6

    - by Ivan
    I'm trying to install setuptools, with no success so far. When I run python3.3 ez_setup.py I get the following error:

        Extracting in /tmp/tmp6nn4cz
        Traceback (most recent call last):
          File "ez_setup.py", line 370, in <module>
            sys.exit(main())
          File "ez_setup.py", line 367, in main
            return _install(tarball, _build_install_args(options))
          File "ez_setup.py", line 55, in _install
            tar = tarfile.open(tarball)
          File "/usr/local/lib/python3.3/tarfile.py", line 1571, in open
            raise ReadError("file could not be opened successfully")
        tarfile.ReadError: file could not be opened successfully

    I've been reading, and it seems this can happen when zlib-devel was not installed when Python was compiled. However, I did uncomment line 358 in Modules/Setup to enable zlib before compiling, and importing zlib in the python3.3 console works. Also, in case it helps, here is the output of ldd:

        # ldd `which python3.3`
            linux-vdso.so.1 =>  (0x00007fff79fda000)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b96092da000)
            libdl.so.2 => /lib64/libdl.so.2 (0x00002b96094f6000)
            libutil.so.1 => /lib64/libutil.so.1 (0x00002b96096fa000)
            libz.so.1 => /lib64/libz.so.1 (0x00002b96098fe000)
            libm.so.6 => /lib64/libm.so.6 (0x00002b9609b12000)
            libc.so.6 => /lib64/libc.so.6 (0x00002b9609d95000)
            /lib64/ld-linux-x86-64.so.2 (0x00002b96090bc000)

    What can I do?
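
    Since zlib clearly works here (libz.so.1 is linked and import zlib succeeds), one hedged guess is that the setuptools tarball ez_setup.py downloaded is corrupt, or is an HTML error page saved under a .tar.gz name. A quick check, assuming the tarball landed in the working directory (the glob is an assumption; adjust it to the actual filename):

        # a good download reports "gzip compressed data"
        file setuptools-*.tar.gz
        # verify the archive is readable end to end
        tar tzf setuptools-*.tar.gz > /dev/null && echo OK
        # if it is bad, remove it and rerun ez_setup.py to force a fresh download
        rm -f setuptools-*.tar.gz
        python3.3 ez_setup.py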

    Read the article

  • Cisco ASA and static IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as the tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create such a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead:

        - Set up a host with its own IP outside the firewall and have that function as the tunnel endpoint. The ASA can then firewall and route the v6 subnet to the LAN.
        - Set up a host inside the firewall that functions as the endpoint, separated via a VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as the endpoint.

    Any other way? What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
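
    For the first option, a minimal sketch of a Linux host acting as the 6in4 endpoint with iproute2. The PoP address (192.0.2.1), the local public address (198.51.100.2) and the 2001:db8:: prefixes are documentation placeholders to be replaced with the values from the SixXS tunnel details:

        # 6in4 (IP protocol 41) tunnel to the SixXS PoP
        ip tunnel add sixxs mode sit remote 192.0.2.1 local 198.51.100.2 ttl 64
        ip link set sixxs up
        ip addr add 2001:db8:0:1::2/64 dev sixxs   # your side of the tunnel
        ip route add ::/0 dev sixxs                # default IPv6 route via the tunnel
        # enable forwarding if this box routes the delegated /64 onward to the LAN
        sysctl -w net.ipv6.conf.all.forwarding=1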

    Read the article

  • Way to speed up load-balanced SSL using nginx?

    - by paulnsorensen
    So the setup for our website is 4 nodes running Rails 3 and nginx 1, all using the same GoDaddy certificate. Because we are a paid site we have to maintain PCI-DSS compliance, and thus have to use the more expensive SSL ciphers; we also force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HA cluster), and we're not getting the performance we'd ideally like. From what I've read, it looks like terminating SSL on the nodes using the high cipher is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about ways to terminate the SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions, and we certainly don't want anything experimental or anything that has a single point of failure. If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to serve certain pages on Rails via a secure subdomain and the rest via an insecure one. I've found a few guides that walk through that, but my resulting question is: in this situation, would it be better to have nginx handle forcing SSL on the secure subdomain instead of Rails? Thanks!
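
    One thing that can help without weakening the cipher suite is SSL session reuse, so returning visitors skip the full handshake. A hedged nginx sketch with stock directives (the sizes are illustrative, not tuned values):

        # inside the http {} block on each node
        ssl_session_cache   shared:SSL:10m;  # roughly 40k sessions per 10 MB, shared across workers
        ssl_session_timeout 10m;
        keepalive_timeout   65;              # let one handshake serve many requests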

    Read the article

  • Cannot set up dual monitors correctly in Fedora 15 with KDE

    - by adivasile
    I have 2 monitors:

        - a 24" LCD connected via DVI (primary)
        - a 19" LCD connected via VGA (secondary)

    Every time Fedora starts, the second display is set to clone the first and both run at 1280x1024, and I always have to disable the 19" monitor in order for the bigger one to run at 1920x1080. I want to set them up so that the secondary monitor extends the primary one. The problem is that no matter what kind of configuration I choose, it has no effect; my secondary monitor remains disabled. I've tried both the display manager from KDE and the ATI control panel, and the behaviour is always the same: the moment I click apply, the screen flickers and nothing changes. I've successfully used the extended setup in Fedora 15 with GNOME 3. I have a Radeon HD 4300 series video card and I'm using the drivers downloaded from the AMD site. This is the output of xrandr -q:

        Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 1920 x 1920
        VGA-0 connected (normal left inverted right x axis y axis)
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.0     70.1     66.0     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        75.0     72.8     66.7     59.9
           720x400        70.1
        DVI-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080      60.0*+   60.0
           1680x1050      59.9
           1600x900       60.0
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1280x720       60.0
           1152x720       60.0
           1024x768       75.0     60.0
           832x624        74.6
           800x600        75.0     60.3
           640x480        75.0     59.9
           720x400        70.1

    Later edit: the problem seems to come from the ATI drivers. I managed to set up the monitors like I wanted after I uninstalled the drivers. Unfortunately I'm working on an OpenCL project, so I had to reinstall them; the moment I did, all my previous settings were forgotten and I was back to square one.
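
    Note the first xrandr line: the screen maximum is 1920 x 1920, which cannot hold 1920 + 1280 pixels side by side, so any extended layout is rejected. With the fglrx/Catalyst driver the usual workaround (sketched here, untested on this exact setup) is to enlarge the virtual screen in xorg.conf and then place the outputs:

        # /etc/X11/xorg.conf, inside the active "Screen" section:
        #     SubSection "Display"
        #         Virtual 3200 1080
        #     EndSubSection
        # after restarting X, arrange the outputs:
        xrandr --output DVI-0 --mode 1920x1080 --primary \
               --output VGA-0 --mode 1280x1024 --right-of DVI-0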

    Read the article

  • Mouse click-focus wanders in vmPlayer 3.0 dual-monitor

    - by Gary M. Mugford
    Previously, a WinXP SP3 session running on a WinXP SP3 host computer ran perfectly fine in a dual-monitor setup, with no issues under vmPlayer 2.x. BEFORE updating to vmPlayer 3, the following problem cropped up: when clicking in a single monitor, you get exactly what you expect, but when the display is stretched across two monitors, the click registers to the left of the mouse cursor, and the farther RIGHT you are, the farther left the click occurs. In other words, if you click on the system menu of a window in the upper left of the left monitor, you get the system menu; move half a screen to your right, and the click lands on an item about a quarter of the way over rather than where you clicked. And by going all the way to the far right of the right monitor, you can bring up a right-click menu on the far right of the LEFT monitor. I hope I have described this properly; it's confusing, even in words. In single-monitor mode, everything works perfectly fine. If, instead of using either UltraMon or DisplayFusion, you run a single desktop across both monitors (3200x1600), there are no mousing issues. Unfortunately, with two 1600x1200 monitors, that depth of 1600 makes that hack less than useable, and my graphics card won't offer anything resembling 3200x1200. vmPlayer 3.0 did not alleviate the situation. The Microsoft mouse drivers are up to date and so are the nVidia card drivers. Any ideas?

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and to start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby is used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
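
    Monit starts its programs with a bare environment (minimal PATH, no login shell), so the system Ruby 1.8.7 gets found instead of the 1.9.3 an interactive shell picks up. A hedged monitrc sketch for Sidekiq, reusing the paths from the unicorn entry above and assuming Sidekiq writes its pidfile via -P; bash -lc gives the start command a login-shell environment:

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -lc 'cd /srv/www/myapp/current && RAILS_ENV=production /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid -d >> /srv/www/myapp/shared/log/sidekiq.log 2>&1'"
          stop program = "/bin/bash -lc 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"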

    Read the article

  • 4.00 GB (3.25 GB usable) in Windows 7 x64

    - by dotnetdev
    Hi, I have set up Windows 7 Ultimate x64 on my PC. I have 4 GB of RAM and my BIOS states the correct amount (4096 MB), but Windows (System Manager) says I have 4.00 GB (3.25 GB usable). This seems to be a common issue, and I have looked for an integrated video card (integrated with my chipset) to disable, but haven't found anything. What else can be preventing me from seeing all 4 GB? When I had Vista 32-bit, it would say 3.25 GB of RAM, not 4.00 GB (3.25 GB usable). I have an x64 CPU, and when I bought my RAM I used a compatibility tool from Crucial (the memory vendor) to test how much memory my PC can support; 4 GB was the answer (this was a Windows app, I think). The chipset is an Intel(R) G33/G31/P35/P31 Express Chipset PCI Express. In the BIOS I looked for an onboard (integrated) video card, and there was no such thing, only a couple of other onboard devices. There are also no "Resource Mappings" settings. Further details:

        Chipset North Bridge: Intel Bearlake G33
        South Bridge: Intel 82801IR ICH9R
        Maximum Memory Amount: 8 GB
        Graphics Controller Type: Intel GMA 3100 (Enabled)

    I guess the first thing is: how do I disable the graphics controller? EDIT: This thread (http://forums.legitreviews.com/about23417.html) indicates the issue is with memory-mapped devices, but someone on that thread says that does not apply to x64; the rest of the comments point to a motherboard issue for the guy who started that thread. Thanks

    Read the article

  • Nginx rule: all subdomains point to one subdomain (gitlab)

    - by Alkimake
    I have installed GitLab on my server and use nginx as the HTTP server. I simply used the GitLab recipe for nginx:

        # GITLAB
        # Maintainer: @randx
        # App Version: 3.0

        upstream gitlab {
          server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen 192.168.250.81:80;    # e.g., listen 192.168.1.1:80;
          server_name gitlab.xxx.com;  # e.g., server_name source.example.com;
          root /home/gitlab/gitlab/public;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then the proxy passes the request to the upstream (gitlab unicorn)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitlab;
          }
        }

    gitlab.xxx.com works fine and serves the GitLab pages. But if I browse to another subdomain I use for Jira, jira.xxx.com, on port 80 (Jira itself runs on port 8080), I get the GitLab site as well. How can I restrict this server block to gitlab only, or perhaps redirect jira.xxx.com to jira.xxx.com:8080?
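
    The reason jira.xxx.com lands on GitLab is that this is the only server block listening on 192.168.250.81:80, so nginx uses it as the default for every hostname. A hedged sketch of a second server block that proxies jira.xxx.com through to port 8080 (assuming Jira listens on the same host):

        server {
          listen 192.168.250.81:80;
          server_name jira.xxx.com;

          location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:8080;  # Jira's own port
          }
        }

    Alternatively, a catch-all block such as server { listen 192.168.250.81:80 default_server; return 444; } makes unknown hostnames fail outright instead of falling into the GitLab block.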

    Read the article

  • AjaxControlToolkit JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following to my Apache httpd.conf file to enable a proxy/reverse proxy:

        <VirtualHost *:80>
            ProxyPass / http://localhost:8080/myASPSite/
            ProxyPassReverse / http://localhost:8080/myASPSite/
            ServerName myaspsite.mydomain.com
        </VirtualHost>

    I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls to my ASPX website, because these generate JavaScript, and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows:

        <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script>
        <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script>

    These scripts attempt to resolve at http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/.... How can I solve this problem? The couple of websites I found suggested turning on ProxyHTMLExtended, but when I tried that, the server did not start; I'm guessing I didn't do it properly. Does anyone have a handy couple of config lines I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!
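
    One hedged workaround that sidesteps rewriting the generated JavaScript entirely: also map the /myASPSite prefix straight through, so the paths the toolkit emits resolve to the same location on IIS. A sketch for the same vhost; with mod_proxy the more specific ProxyPass must come first:

        <VirtualHost *:80>
            ServerName myaspsite.mydomain.com
            # pass the app's own prefix through unchanged, so /myASPSite/WebResource.axd
            # reaches /myASPSite/WebResource.axd on IIS instead of being doubled
            ProxyPass        /myASPSite/ http://localhost:8080/myASPSite/
            ProxyPassReverse /myASPSite/ http://localhost:8080/myASPSite/
            # everything else still lands on the application root
            ProxyPass        / http://localhost:8080/myASPSite/
            ProxyPassReverse / http://localhost:8080/myASPSite/
        </VirtualHost>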

    Read the article

  • Cisco SA520 to Adtran 1234 no DHCP transfer

    - by Grico
    I am trying to set up a Cisco SA520 to run DHCP on my network. I have a vendor-provided switch, an Adtran 1234, which provides DHCP for our phone system on VLAN 200. I do not have access to the Adtran, but the vendor gave me an IP on port 1 for the WAN side and said port 2 should be where the "trust" side goes. I set up a mini lab where Adtran port 1 went to the SA520 WAN port and SA520 trust port 1 went to my laptop. Everything worked fine; I could ping and get internet using the DHCP scope I put on the SA520. I then unplugged my computer from SA520 trust port 1 and plugged it into Adtran port 2, and plugged my computer into Adtran port 23, and I don't get DHCP or even a link light. If I restart my machine, I get a brief link and then it dies once the machine boots. I have tried several ports on the Adtran, and different cables as well, and none seem to work. However, when I plug a phone into the Adtran, the phone boots immediately and shows link. Thoughts?

    Read the article

  • Active Directory FRS problems: 13508 error and more

    - by ITPIP
    I have 3 domain controllers; call them DC1, DC2 and DC3. DC3 and DC2 show Event ID 13508 in their FRS logs with no follow-up event (13509, I think) to say the error has been fixed. DC1's FRS log, no matter what you do, never shows any events besides FRS service stopped and started. DC1 holds the SYSVOL that needs to be replicated to the other DCs; the other DCs' SYSVOL folders are empty. I have tried the BurFlags method of fixing this, but I haven't had any luck. My procedure was to stop the FRS service on all DCs, set the BurFlags value on DC1 to D4 and on the other two DCs to D2, then start FRS on DC1. Afterwards, the only events I see in DC1's FRS event log are the service stopped and service started messages. This fact leads me to believe that something is wrong with FRS on DC1: I believe there should be events 13553 and 13516 in the FRS event log after an authoritative SYSVOL restore. The other two DCs do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative SYSVOL.

        DC1 is MS Server 2003 Enterprise Edition SP2
        DC2 is MS Server 2003 Standard Edition SP1
        DC3 is MS Server 2003 R2 Standard Edition SP2

    I did not set up this domain originally, but I am now its administrator, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to fix these issues to get myself better prepared to decommission DC1 and add a DC running Server 2008 to the domain. Thanks.
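
    For reference, a sketch of where the BurFlags value lives, per Microsoft's standard D2/D4 procedure (set while the NTFRS service is stopped; D4 on the authoritative DC1 only, D2 on DC2 and DC3):

        Windows Registry Editor Version 5.00

        ; D4 = authoritative restore (DC1 only); use dword:000000d2 on DC2/DC3
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup]
        "BurFlags"=dword:000000d4

    If DC1's FRS log stays silent even after that, ntfrsutl sets (from the Support Tools) shows which replica sets FRS thinks it has, which helps confirm whether the flag was actually processed.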

    Read the article

  • Installed Paragon HFS+ for Windows 8, now my pc won't recognize the external firewire drive

    - by Steve
    I'm not incredibly knowledgeable about computers and I really need some help. I got a Seagate external FireWire drive this morning. I downloaded the necessary PC driver (Paragon HFS+ for Windows 8) through their website, per the instructions that came with the drive. After installation I restarted, and the PC recognized the FireWire drive just fine. About three hours into copying files from my PC to the drive, it gave me an error and told me the files couldn't be copied, and when I clicked to dismiss the message, the computer crashed. After an hour of trying to repair itself in safe mode, it restored me to an earlier version from before the crash. Here's my current dilemma: Paragon HFS+ still shows up as installed in my programs, but Device Manager is not recognizing the drive. When I try to uninstall and reinstall Paragon, it interrupts me with a message saying "The setup must update files or services that cannot be updated while the system is running" and basically gives me the finger. I have no idea what to do now, as it won't let me uninstall and reinstall Paragon, and I have no idea why it crashed my computer in the first place. Is there possibly another Mac-to-PC FireWire driver I can try downloading instead? I really don't know what I'm doing and any help would be greatly appreciated.

    Read the article
