Search Results

Search found 20281 results on 812 pages for 'software engineer'.

Page 714/812 | < Previous Page | 710 711 712 713 714 715 716 717 718 719 720 721  | Next Page >

  • Windows 8.1 guest's mouse goes missing in VMware Workstation 10, but only sometimes

    - by Rob Perkins
    I have VMware Workstation 10 running on a Windows 7 machine, hosting a Windows 8.1 guest OS. Before upgrading to Workstation 10 I was using version 9, and the Windows 8 guest ran without difficulty or error conditions. Since upgrading to version 10 and installing the most current VMware Tools inside the guest, there are circumstances where the mouse pointer is not visible; the mouse position appears stuck at a screen location which is not the center of the virtualized display, yet mouse click and scrolling events still get processed. Once this begins happening I have to reboot the host machine to get it to stop. (VMware Tools 9.6.1 build-1378637 is what the Workstation 10 software installed.) The problem seems to correlate with whether the mouse is captured during Windows 8.1's bootup process, before control is passed to the login screen. If I explicitly click the mouse into the guest OS and move it slowly around while the system is booting, then I see the mouse after clicking to lift the first screen and expose the password prompt, and there is never a problem within the guest. If I don't do this during bootup, there is no mouse pointer, with the symptoms listed above. I have tried removing and reinstalling VMware Tools, and the other steps published for "mouse problems" in VMware's chaotic troubleshooting database. The problem persists. Is there a setting in the virtual machine's configuration which could prevent this behavior?

    Read the article

  • Can malware use Internet data while the PC is off?

    - by Val
    I have noticed over the last month that my off-peak data has been used at a rate of approx 350 MB per hour - this has meant that I have gone over my quota and been slowed down by my ISP to 256k. There is no one in the house using the connection at that time (2am-8am is my ISP's off-peak window). My PC and other wireless devices (iPad and iPhone) are turned off. I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2am and 8am. It has been suggested by my ISP that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off? I did look at my modem's log and followed an IP address to a service called Amazon Simple Storage Service (S3). Could this company possibly be the culprit? I am not too tech savvy, so any assistance is appreciated. I have run a barrage of spyware cleaning software, e.g. Malwarebytes, Spybot, etc. Cheers, Val
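
    For what it's worth, code that is powered off cannot run, so a PC that is genuinely shut down (not just asleep) cannot be the source; the traffic has to come from the modem/router itself or from something still connected to it. To double-check who owns an address pulled from the modem log, a whois lookup is enough (the address below is a documentation placeholder, not the real one from the log):

        # Look up the registered owner of an IP address seen in the router log
        whois 203.0.113.45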

    Read the article

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up phpMyAdmin's setup script (version 2.11.10.1, of course) the installation errors out and I see these errors in my error log:
        [client 172.18.141.74] PHP Warning: session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
        [client 172.18.141.74] PHP Warning: Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [client 172.18.141.74] PHP Warning: Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0
    I've done some searching on ServerFault and on the Googles and I see that a common reason for this error is that the session.save_path isn't writable by the www user. I also found where this setting lives in /etc/php.ini; my session.save_path is set to:
        session.save_path = /var/lib/php/session
    I've since changed the owner and the group of /var/lib/php/session and still have the same error. Here's the result of ls -la for /var/lib/php:
        [root@localhost php]# ls -la
        total 24
        drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
        drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
        drwxrwx---   2 www  www  4096 Jun  1  2009 session
    ...but I'm still getting the same error. Is there another possibility for why I'm getting this error?
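
    One thing the ls -la output cannot show is an SELinux context: on CentOS a directory can be owned by the right user and still refuse writes if it is mislabelled. A minimal diagnostic sketch, assuming Apache really runs as the www user and that the stock httpd config path applies (both are assumptions):

        # Confirm the user/group Apache runs as (path assumes a stock httpd layout)
        grep -iE '^(User|Group)' /etc/httpd/conf/httpd.conf
        # Re-apply ownership and permissions on the session directory
        chown -R www:www /var/lib/php/session
        chmod 770 /var/lib/php/session
        # If SELinux is enforcing, a correct mode can still be denied; check the label and restore it
        getenforce
        ls -ldZ /var/lib/php/session
        restorecon -Rv /var/lib/php/session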

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs from the authentication connection. It will keep the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin one, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side, they checked the box again (at least it sounded that simple), and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little bit away from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped. Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time - would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi, I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for extracting the installer and .msi and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file and proceeded to deploy across the network. The software package is computer-assigned, installed prior to logon, to avoid user permissions issues. The package deploys correctly to computers and runs perfectly fine from a shortcut, however when trying to view a PDF from within a web browser it fails with the following message:
        "The Adobe Acrobat/Reader that is running can not be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again"
    I have found many pages on Google referring to this problem, but none appear to relate to the problem I'm seeing. The fixes at http://kb2.adobe.com/cps/405/kb405461.html recommend:
        Correcting a registry entry (which, I should mention, is missing after the deployed installation) - however this does not work.
        Switching off display in a browser - seems to defeat the object of fixing the problem.
        Removing old versions - there aren't any.
        Trying with a different user - this affects all users of all privilege levels on all computers.
    On my workstation I uninstalled Acrobat Reader 9.1 then reinstalled manually using the same installation source files and it works fine. Has anyone successfully deployed Adobe Reader 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed this in the same way, which works fine, but I would like to be using the up-to-date version. Thanks

    Read the article

  • Creating security permissions for a non-domain-member user in Windows Server 2008

    - by Overhed
    Hello everyone, I apologize in advance for incorrect use of terminology, as I'm not an IT person by trade. I'm doing some remote work via a VPN for a client and I need to add some DCOM service security permissions for my remote user. Even though I'm on the VPN, the request for access to the DCOM service is using my PC's native user (and since I'm running Vista Home Premium it looks something like PC-NAME\Username). The request for access comes back denied, and I cannot add this user to the security permissions, as it "is not from a domain listed in the Select Location dialog box, and is therefore not valid". I'm pretty stuck and have no clue what kind of steps I need to take here. Any help would be appreciated, thanks in advance. EDIT: I have no control over what credentials are being passed to the server by my computer. This scenario occurs in an installation wizard that has a section which asks you to point it to the machine running the "server" version of the software I'm installing (it then tries to invoke the relevant COM service, but my user does not have Remote Activation permissions on that service, so the request is denied).

    Read the article

  • How to repair a damaged working copy (which has a centralized .svn directory)?

    - by Heinrich Ulbricht
    I recently upgraded my TortoiseSVN installation to version 1.7.1. This forced me to upgrade my working copy as well. The upgrade removed all (but one) of the .svn directories from all subdirectories, leaving only one in the root. Now out of the blue (of course; I suspect my antivirus software) there is an error when I, for example, try to clean up the working copy. I am also not able to commit anything. The error message when cleaning up is:
        Cleanup failed to process the following paths: C:\svn
        Can't open file 'C:\svn\.svn\pristine\73\73bcc5fa7819f84f56b81dfa0236f0aac7b7d404.svn-base': The system cannot find the file specified.
    I traced the error to the presence of one directory within the working copy. If I rename it then everything works; when it is present I get the error. I also deleted it and checked it out again. No change, the error persists. With previous versions I could repair damage in the .svn easily: just delete the offending folder and check out again. I cannot do this anymore because now the .svn dir is centralized. What could I do to repair my working copy?
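
    With the single 1.7-style .svn directory there is no per-folder metadata left to delete, so a common recovery route is to take a fresh checkout and carry local modifications across, rather than trying to repair the pristine store by hand. A minimal sketch, with placeholder paths and a hypothetical repository URL:

        # Note any local modifications in the broken working copy
        svn status /path/to/old-wc
        # Check out a fresh working copy alongside it
        svn checkout https://svn.example.com/repo/trunk /path/to/new-wc
        # Carry the files across without touching the new metadata, then review before committing
        rsync -av --exclude='.svn' /path/to/old-wc/ /path/to/new-wc/
        svn status /path/to/new-wc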

    Read the article

  • FreeBSD jail for a small company - checklist - what shouldn't I forget?

    - by cajwine
    Looking for a checklist for a "small company FreeBSD/jail server". The starting point is pretty common: a FreeBSD jail (remote/headless) for the company: public web, email and ftp server, and a private (maybe in the future partially public) wiki (Foswiki); 4 physical people (6 email addresses) plus one admin (the others will never use ssh). I have already done the usual hardening on the host side (pf, sshguard, etc.). My major components are: dovecot, exim, apache22, proftpd, perl5.14. Looking for a checklist of what I shouldn't forget. My plan: OpenSSL self-signed certificates for exim, dovecot and proftpd (wildcard keys), and an OpenSSL self-signed certificate for apache (later I will go for a "trusted" signed key). My questions are: Is it good practice to have one pair of wildcard SSL certificates for many programs (exim, dovecot, proftpd), or should I generate one key for each service? Should I add all 4 people as standard (unix) users, or should I go with virtual users? Asking because we have only a small number of users, and it is simpler to configure everything (exim, dovecot) for local users ($HOME/Maildir), plus the ability to set $HOME/.forward/vacation and so on. Are there any (special) things I should consider? (E.g., maybe in the future we want to set up our own webmail - will that make any difference?) Any other recommendations? Thank you - hoping that this question fits into the http://serverfault.com/faq under "Server and Business Workstation operating systems, hardware, software" and "Operations, maintenance, and monitoring". Looking for a checklist, but please explain why you're recommending each item. See Good Subjective, Bad Subjective. Related: What's your suggested mail server configuration for a FreeBSD server?
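
    On the wildcard-certificate question: one self-signed key pair can certainly be pointed at by exim, dovecot and proftpd; the trade-off is that a single key compromise then affects every service. A minimal generation sketch, assuming example.com stands in for the real domain and the paths are placeholders:

        # Generate a ten-year self-signed wildcard certificate and key
        openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
            -keyout /etc/ssl/private/wildcard.example.com.key \
            -out /etc/ssl/certs/wildcard.example.com.crt \
            -subj '/O=Example Inc/CN=*.example.com'
        # Keep the private key readable only by root before pointing the daemons at it
        chmod 600 /etc/ssl/private/wildcard.example.com.key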

    Read the article

  • How do I set up an email server that automatically maintains a list of previous recipients?

    - by hsivonen
    I want to set up an email server with the following characteristics. What software (besides bogofilter and clamav, which I'm already naming) should I use, and what HOWTOs should I read? The server should run some flavor of Linux that's as low-maintenance as possible and self-updates for security patches in a timely fashion (Debian stable?). When email is sent, all the recipients are stored in a list of previous recipients maintained by the server. Incoming messages are scanned with clamav and treated as spam if they contain viruses. When email arrives (if it passed clamav), if the sender is on the list of previous recipients, it bypasses the spam filter. If the List-Id header names a mailing list on a manually maintained list of known-clean mailing lists, it bypasses the spam filter and is delivered into a mailbox depending on the mailing list name. Email that wasn't from previous recipients, manually whitelisted domains or mailing lists gets filtered by bogofilter. Spam goes into a spam mailbox. Email considered to be ham should automatically be fed to bogofilter training as ham. Email considered to be spam (incl. messages with viruses) should automatically be fed to bogofilter training as spam. There should be mailboxes for false ham and false spam that an IMAP client can move email into so that the server retrains bogofilter appropriately. Email sending requires SMTP over SSL. Email reading requires IMAPS. Should I also use SpamAssassin in addition to bogofilter?
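
    Whatever glue ends up moving mail out of the false-ham and false-spam folders (a cron job over Maildirs, a procmail rule, etc.), the retraining itself is just a pair of bogofilter invocations. A minimal sketch of those calls, assuming maildir-style one-message-per-file storage and hypothetical folder paths:

        # Bulk training from known-good and known-bad folders
        for f in /var/mail/train/ham/cur/*;  do bogofilter -n < "$f"; done
        for f in /var/mail/train/spam/cur/*; do bogofilter -s < "$f"; done
        # Corrections: -Sn = was registered as spam, is actually ham; -Ns = was ham, is actually spam
        bogofilter -Sn < /var/mail/corrections/false-spam/cur/msg1
        bogofilter -Ns < /var/mail/corrections/false-ham/cur/msg2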

    Read the article

  • How can I keep my web browser from redirecting to localhost using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try and work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:
        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host
        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1       localhost
        #    ::1             localhost
        127.0.0.1 magento.localhost.com www.localhost.com
    Thanks for looking :)

    Read the article

  • Windows 7 ignores F6/F8 and will not boot

    - by P.Brian.Mackey
    I have a work PC with Sophos SafeGuard encryption on it. Windows failed to start. When I boot up I receive an error saying a recent hardware or software change might be the cause:
        File: \Boot\BCD
        Status: 0xc0000098
        Info: The Windows Boot Configuration Data file does not contain a valid OS entry.
    This began after the PC forced me to run a system recovery. My machine had powered down improperly (power outage?) and simply would not respond to my keyboard input to cancel the option to scan my system. After the scan "repaired" a boot file, my system crashed. Now it tells me I can insert my Windows 7 disk and run recovery. I can't simply do this because of SafeGuard: the system recovery can't see my encrypted drive. I tried hitting F2 to manually log in to SafeGuard and then selected the option to boot from media. The computer prompts me to hit any key to boot from disk... which I do, but once again it is not reading my keyboard input. I can't get F8/F6 to bypass startup files and get me to a command prompt like the old days. If I could get to a command prompt I might be able to recover the file Windows jacked up from its backup location... though I may need to use the Windows recovery disk UI to do this. In the past I've been able to slap in a PS/2 keyboard when the USB keyboards stop responding like this, but I have no PS/2 keyboard available. Does anyone have any idea how I can undo the damage Windows system recovery has done with SafeGuard installed?

    Read the article

  • How to reproduce the behavior of Mac OS X's dead keys on Windows 7?

    - by Pascal Qyy
    I'm French, but I've chosen a QWERTY keyboard for my MacBook Pro for several reasons: first, the AZERTY keyboard is not at all ergonomic because it has no numeric keypad, and I must use Shift (MAJ) or Caps Lock to reach the numeric keys; second, I bought this Mac for development, and characters like {, } etc. are not directly accessible on the Apple AZERTY keyboard; last, diacritics are VERY easy to produce on an Apple keyboard under Mac OS X: Option + c for a ç, for example, plus many easy-to-use dead keys (e.g. Option + e, then e gives you an é). So I have no difficulty writing in my native language with this keyboard under Mac OS X. BUT, when I boot into Windows 7 on the Boot Camp partition, or when I use applications from it through VMware Unity, it is nowhere near as comfortable! Without a numeric keypad, it's impossible to use Alt codes to produce special characters (e.g. Alt + 0231 for ç). I've tried workarounds like AutoCorrect replacements in Microsoft Office (e.g. ,,c being replaced by ç), but for all my diacritics I must type a space and then a backspace before the replacement works. I've also tried third-party software such as Texter, but it is very buggy and doesn't work properly (or doesn't work at all) in many cases! So, is there a solution somewhere to get Mac OS X's nice and comfortable way of producing diacritics on Windows 7? Thanks in advance for your help and your time!

    Read the article

  • Failing to load rootfs: Ubuntu 10 + grub2 + rootfs ext4 w/ RAID1

    - by James
    I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows:
        Device     Boot    Start    End       Blocks        Id   System
        /dev/sda1  *       1        18        144553+       83   Linux                  <-- /BOOT
        /dev/sda2          19       182401    1464991447+   5    Extended
        /dev/sda5          19       2207      17583111      fd   Linux raid autodetect
        /dev/sda6          2208     11934     78132096      fd   Linux raid autodetect  <-- / (ROOTFS)
        /dev/sda7          11935    182401    1369276146    fd   Linux raid autodetect
    The rootfs is part of a RAID1 (software) array (currently degraded):
        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sda6[1]
              78132032 blocks [2/1] [_U]
    The UUIDs for the partitions are as follows:
        # blkid /dev/sda1
        /dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2"
        # blkid /dev/sda6
        /dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member"
        # blkid /dev/md2
        /dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4"
    Finally, I have my grub2 menuentry set up as follows:
        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os {
            insmod ext2
            insmod raid
            insmod mdraid
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c
            linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth
            initrd /initrd.img-2.6.32-25-server
        }
    When I attempt to boot, grub loads OK, however I eventually get the following error message:
        Gave up waiting for root device.
        ALERT /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell!
    If from the grub bootloader I open a grub command line, I can ls (hd0,) and it lists the correct partitions with the UUIDs as shown above - sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 filesystem (ext4), so it doesn't appear to be an issue accessing the raid device. Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
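
    One common cause of this exact symptom is an initramfs that never assembles md2, so the UUID named on the kernel line never appears under /dev/disk/by-uuid. A hedged sketch of the usual checks, run from the installed system or a rescue environment chrooted into it (device and file names follow the listing above; assumes the mdadm package is installed):

        # The array definition should be known to mdadm and present in its config
        mdadm --detail --scan                      # expect a line for /dev/md2
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        # Rebuild the initramfs and the grub config so md assembly happens before the root mount
        update-initramfs -u
        update-grub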

    Read the article

  • How do I configure two LAN cards in a Windows 7/8 PC, one for the Internet and one for the local network?

    - by Maharshi Raval
    I am about to install a dedicated VoIP server in our office. It is a 3CX PBX system on a Windows 7/8 machine. The environment currently is Windows SBS 2011 with 8 client machines. I want to use a dedicated broadband connection for the PBX (3CX) box, but the box also needs to be accessible on the local network, as we will be using IP phones and software IP phones. How do I configure two network cards on the PBX box so that one is always used to connect to our SIP host over the Internet and the other is connected to the local network, accessible from the other client PCs that connect to the PBX system? It must be noted that currently the Windows SBS 2011 box acts as the primary domain controller and gateway for all the client machines. I cannot use a load balancer, as it would conflict and cause issues with the current setup of our SBS 2011, which is also our Exchange server. Any input is much appreciated. Thanks in advance.

    Read the article

  • Adjust output Brightness/Gamma/Colors in Gnome

    - by Mikee
    We have a desktop system running Ubuntu 8.04.4, and it is connected to a standard desktop LCD monitor. Unfortunately, in 8.04.4, the brightness of the image is cranked way up. It appears to be a graphics driver issue, but installing a newer driver for this Intel GPU is very difficult to do, so I am looking for a software (or config file?) solution instead. Note: Ubuntu 9.10 and higher do not exhibit this issue, so this is not a hardware problem. Note: VNC-ing to this machine from another does not exhibit this issue either. Also, I installed "DisplayCalibrator.app", and it does not work very well (the app comes up, but the contents of the window are blank). Is there anything I can add to the xorg.conf file to correct this issue? Also, this solution did not resolve my issue: http://superuser.com/questions/96539/adjust-contrast-and-brightness-in-ubuntu. Thank you all for the help!
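
    Short of a working driver update, the output gamma can usually be pulled down from the X side, which often reads as "less washed out". A minimal sketch, assuming an X session on the affected display (the value and the output name are just starting points to tune by eye):

        # Lower the gamma for the whole display (xgamma ships with the X server utilities)
        xgamma -gamma 0.8
        # With a RandR-1.3-capable xrandr, the same can be done per output
        xrandr --output VGA --gamma 0.8:0.8:0.8
    A persistent equivalent is a Gamma entry in the Monitor section of xorg.conf (e.g. Gamma 0.8); whether 8.04's xrandr already supports --gamma is something to verify on the box itself.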

    Read the article

  • Mac, VNC and multiple monitors

    - by MarqueIV
    I asked a similar question here before, but judging by the responses I apparently wasn't as clear as I had expected. That said, I'll try again. I have a Mac Pro with quad monitors which I would like to access remotely. I've been using VNC for this (either via Screen Sharing or a dedicated VNC client), which works, but the VNC session matches the physical layout/resolutions of the attached monitors. One of the things I like about Microsoft's Remote Desktop (Terminal Server) client is that when you connect, it blanks out the local screens and sets the resolution to a client-specified setting. In other words, when natively running Windows, even though I'm running a physical 30" monitor flanked by 2 24" monitors as well as a 21" Cintiq monitor, I can set the Remote Desktop resolution to match my notebook's screen, giving me a native, single-monitor configuration. As soon as I disconnect (and log back in locally), the desktop un-blanks and the resolution resets back to the four physically attached monitors. Again, VNC works, and yes, I know I can use 5901, 5902...n to attach VNC to a specific monitor as opposed to the entire desktop, but I'm still at the mercy of trying to look at a 2560x1600 resolution on a 1280x800 screen. I'm left with either scaling (everything's too small) or panning/scrolling (it's like playing hide-and-seek with your documents!). So... does anyone know of any Mac-based remote software (client and server) that will let me connect to my Mac Pro and have the resolution reset by the client, just like you can in Windows, or am I SOL?

    Read the article

  • Synchronize Active Directory to Database

    - by Tommy Jakobsen
    We are in a situation where we would like to offer our customers the ability to manage their users themselves. It is around 300 customers with up to a total of 10,000 users. Besides creating, updating and removing users, they will very often read information about users for statistics and other useful purposes. All this functionality should be available from an intranet web page (.NET Framework 4) that the users will access through Citrix or similar. Now, the problem is that we would really like the users not to query AD directly for each request, but rather have them hit a database that is synchronized with AD. It would be sufficient to run this synchronization a few times each day (maybe every 5 hours). When they create a user, it should not be available right away, but reviewed and then created within two days (the next step would be to remove this manual review, but that's out of scope for this question). What do you think about this kind of synchronization of AD? Does anyone have experience with it, and is it something that is done in other organizations where you have lots of requests that are better handled by a database than by AD (I presume)? Are there any techniques out there for writing such a script that synchronizes AD with database tables? My primary concern is the groups/members relations, which can be rather complicated. Or is there software that synchronizes AD with a database? Any comments will be much appreciated. Thank you.
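
    For the pull side of such a sync job, the directory can be queried over LDAP on a schedule (from .NET the equivalent is DirectorySearcher) and the results written into staging tables, with each user's memberOf values normalised into a group-membership join table. A hedged sketch of the raw query only, with placeholder host, bind account and base DN:

        # Dump the attributes needed for the user and membership tables (paged, since AD caps result sets)
        ldapsearch -x -H ldap://dc.example.com \
            -D 'CN=syncsvc,CN=Users,DC=example,DC=com' -w 'secret' \
            -b 'DC=example,DC=com' -E pr=1000/noprompt \
            '(&(objectCategory=person)(objectClass=user))' \
            sAMAccountName displayName mail memberOf whenChanged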

    Read the article

  • No Microsoft Security Essentials for Windows 8. So, how to access similar Defender features/settings?

    - by Chris W. Rea
    I just installed Windows 8 Pro. One of the first things I went to do was install Microsoft Security Essentials, thinking I still needed add-on security software, but I've learned here that it isn't required for Windows 8. Witness: "Got Windows 8 or Windows RT? Windows Defender for Windows 8 and Windows RT provides the same level of protection against malware as Microsoft Security Essentials. You can't use Microsoft Security Essentials with Windows 8, but you don't need to — Windows Defender is already included and ready to go. [...]" All well and good. However, on Windows 7, once you installed Microsoft Security Essentials, you got a tray icon, and from there you could access the features of MSE, such as performing custom scans, turning off real-time protection (temporarily, of course), checking for updates, etc. However, Defender on Windows 8 doesn't display a tray icon – and yes, I've already made sure I'm displaying all icons in the notification area. So, how do I access the equivalent features of Windows Defender on Windows 8?

    Read the article

  • LDAP Account Locked Out Sporadically after Password change - Finding the source of invalid attempts

    - by CityView
    On a small network of machines (<1000) we have a user whose account is being locked out after an indeterminate interval following a password change. We are having severe difficulties finding the source of the invalid logon attempts, and I would appreciate it greatly if some of you could go through your thought process and the checks you would perform in order to fix the problem. All I know for sure is that the account is locked out several (5+) times a day; I can't even be sure it's due to failed login attempts, as there is no record of failure until the account is locked. So far I have tried:
        Logging the account out of everything we can think of and back in with the new password
        Scanning the user's box for any non-standard software which might perform an LDAP lookup
        Checking all installed services on our production boxes to check none are attempting to run under the account
        Changing the user back to their old password (the problem persists, so perhaps the password change is a red herring)
        Wireshark on a box where lots of LDAP authentication is performed - rejects only occur after the account is already locked out
        Clearing the credential cache in Control Panel - User Accounts - Advanced
        Looking at the local ...
    I'm at a loss for what to try. I am happy to try any suggestions you have in order to diagnose the issue. I think my question boils down to a simple request: I need a technique for deriving the source (application/host) of the invalid login attempts which are causing the account to be locked. I'm not sure if that's even possible, but I suspect there must be more I can try. Many thanks, CityView

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems: using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive; and updating the dataset is very time consuming and requires remounting between set A and B. The only reasonable handling is operating on the block device for backup, copying, etc. What I would like is a daemon that:
        speaks HTTP (GET, PUT, DELETE and perhaps UPDATE)
        stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
        is able to handle massive numbers of connections, with slow (if any) time needed to ramp up
        reads the index into memory at startup
        provides statistics (nice, but not mandatory)
    I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.
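
    Most of the candidates listed (riak, varnish with a persistent backend, etc.) end up being exercised the same way from the outside: plain HTTP verbs against a key. A sketch of the interface being described, using curl against a hypothetical host, port and path scheme (none of these are a real product's API):

        # Store, fetch and delete one image by key
        curl -X PUT -H 'Content-Type: image/jpeg' --data-binary @img.jpg http://store.example.com:8080/images/abc123
        curl -o out.jpg http://store.example.com:8080/images/abc123
        curl -X DELETE http://store.example.com:8080/images/abc123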

    Read the article

  • How to set up a Hyper-V domain with Internet access

    - by fynnbob
    First off, let me say that I'm not a network admin or server guy; I know very little about that stuff. What I'm trying to do is set up a virtualized domain using Hyper-V. Here is the configuration:
        Physical server: 4 GB RAM, Windows Server 2008 R2 running Hyper-V
        Virtual environment: one domain controller running Windows Server 2008 R2, one client running Windows Server 2008 R2
    I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller, but I'm stuck at trying to give the virtual environment Internet access. I can give the client VM Internet access if I remove it from the virtual domain, but once I add it back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc...) but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know their way around networking and servers, which I don't. I'd like to know more, but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to set up a virtual domain with Internet access for testing. Can anyone point me to any "for Dummies"-type information on how to set up this type of environment, or provide this kind of step-by-step help? Any help would be very much appreciated.

    Read the article

  • Nginx + phpBB3 reverse proxy image problem

    - by siberiano
    Hello all, I have a problem with my nginx frontend + Apache2 backend + phpBB3 setup. It doesn't load the CSS or the images either. I get constant errors like these:
        2010/04/14 16:57:25 [error] 13365#0: *69 open() "/var/www/foo/styles/styles/coffee_time/theme/large.css" failed (2: No such file or directory), client: 83.44.175.237, server: www.foo.com, request: "GET /styles/coffee_time/theme/large.css HTTP/1.1", host: "www.foo.com", referrer: "http://www.foo.com/viewforum.php?f=43"
    This is my config for the site:
        server {
            listen 80;
            server_name www.foo.com;
            access_log /var/log/nginx/foo.access.log;
            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
                root /var/www/trasteando/;
            }
            location / {
                root /var/www/foo/;
                index /var/www/foo/index.php;
            }
            # proxy the PHP scripts to the predefined upstream apache
            location ~ .php$ {
                proxy_pass http://apache;
            }
            location /styles/ {
                root /var/www/foo/styles/;
            }
        }
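
    The doubled styles/styles in the error path is the telltale sign: nginx appends the full request URI to root, so location /styles/ with root /var/www/foo/styles/ maps /styles/coffee_time/theme/large.css to /var/www/foo/styles/styles/coffee_time/theme/large.css. A hedged sketch of the usual fix, assuming the theme files really live under /var/www/foo/styles/ (and that /var/www/trasteando/ in the static-file block is a leftover that wants the same root):

        # Option 1: keep "root" but point it at the site root, since the URI already contains /styles/
        #     location /styles/ { root /var/www/foo; }
        # Option 2: use "alias", which substitutes the matched prefix instead of appending the URI
        #     location /styles/ { alias /var/www/foo/styles/; }
        # Then validate the configuration and reload
        sudo nginx -t && sudo nginx -s reload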

    Read the article

  • Mac updated just now, postgres now broken

    - by user52224
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app I get:
        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
    What happened? After browsing around a few questions I'm still confused, but here's some extra info:
        Running psql from the command line gives the same error
        I can run pgAdmin 3, connect via it and run SQL with no problems
        Running which psql shows the version as /usr/bin/psql
        I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
    I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.
    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:
        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL:  lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT:  Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
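
    The socket path in the error, /var/pgsql_socket, is the one Apple's bundled /usr/bin/psql and its libpq default to, not the one the PostgreSQL 9.1 installer's server usually listens on (typically /tmp); an OS update that reshuffles PATH or socket permissions produces exactly this failure. A short diagnostic sketch, assuming the installer lives under /Library/PostgreSQL/9.1:

        # Which psql binaries are on the PATH, and is a server actually listening on 5432?
        which -a psql
        sudo lsof -nP -iTCP:5432 -sTCP:LISTEN
        # Bypass the default socket path by connecting over TCP, or name the socket directory explicitly
        psql -h 127.0.0.1 -p 5432 -U postgres
        /Library/PostgreSQL/9.1/bin/psql -h /tmp -U postgres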

    Read the article

  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been set up this way for about a year. I'm using Windows XP Professional x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time - in fact all I had done was close a window (maybe a browser). But the thing is that the resolution is still partially preserved by the fact that the screen will scroll when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System Restore doesn't help either. I'm unsure of the exact ATI card I'm using; Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself... it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run System Restore again. Any ideas? Thanks. EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9... what could be causing it to think it needs to scroll?

    Read the article
