Search Results

Search found 5378 results on 216 pages for 'spell checking'.


  • Can I increase the link speed of the RAS Server on our MS Win2k3 box?

    - by Ducain
    We are running a Win2K3 Server box, and I'm a remote employee who connects via VPN. I've been frustrated for some time by the connection speed over the VPN (the office HQ has a decent speed and I have a biz-class connection here), and decided to do some checking today. This morning, while dialed in, I looked at the Networking tab of Task Manager and saw that the adapter for the RAS Server (the box has 4 Gigabit adapters) has a speed that seems far too low: the RAS Server link hovers between 300-600 Kbps, while the local connection (and others) all say 1 Gbps. Can I set this to a higher speed? Is this information accurate? Thanks for the input.

    Read the article

  • How can we monitor a HTTPS URL?

    - by Animesh
    A couple of our recent customers have had their applications configured for HTTPS only. Currently we are using a tool which does a good job of monitoring customers' app-server state. For the existing customers, HTTP URLs also work, so the tool can monitor their health; but the recent ones have only HTTPS enabled, so the tool simply fails. To this end, I am looking for a tool which would monitor the app-server state and send email to the group. Simple monitoring, like checking whether the app-server is up or not, is all I would need, but more features are definitely helpful. Thanks!
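
    A minimal sketch of what such a check could look like, assuming a plain Python script run from cron; the URL, addresses, and local SMTP relay are placeholders, not part of any particular tool:

        import smtplib
        import urllib.request
        from email.message import EmailMessage

        URL = "https://app.example.com/"        # hypothetical app-server URL
        RECIPIENT = "ops-group@example.com"     # hypothetical group address

        def is_up(url):
            # Treat any 2xx response within 10 seconds as "up".
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return 200 <= resp.status < 300
            except Exception:
                return False

        if not is_up(URL):
            msg = EmailMessage()
            msg["Subject"] = "ALERT: %s is down" % URL
            msg["From"] = "monitor@example.com"
            msg["To"] = RECIPIENT
            msg.set_content("HTTPS health check failed.")
            smtplib.SMTP("localhost").send_message(msg)  # assumes a local MTA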

    Read the article

  • Hiding "Syntax OK" from apache2ctl output

    - by Oscar Barrett
    I am checking whether a particular Apache module is installed using apache2ctl -M. When listing the modules, Apache runs a syntax check on the configuration files, which prints "Syntax OK" if everything is fine. However, this message doesn't seem to be coming from STDOUT or STDERR, as it shows even if all output is redirected to /dev/null, i.e.:

        $ sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         ...
        Syntax OK
        $ sudo apache2ctl -M >/dev/null
        Syntax OK

    How is this being output, and is it possible to hide it?
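
    A quick way to see which stream (if either) a message arrives on is to capture stdout and stderr separately; a sketch using Python's subprocess module:

        import subprocess

        # Run the module listing and capture the two streams independently.
        result = subprocess.run(
            ["apache2ctl", "-M"],
            capture_output=True,   # captures stdout and stderr separately
            text=True,
        )
        print("on stdout:", repr(result.stdout))
        print("on stderr:", repr(result.stderr))

        # A message that shows up in neither capture is being written
        # straight to the controlling terminal (/dev/tty).
        # Checking for a module then reduces to a substring test, e.g.:
        print("rewrite loaded:", "rewrite_module" in result.stdout)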

    Read the article

  • Server suddenly running out of entropy

    - by Creshal
    Since a reboot yesterday, one of our virtual servers (Debian Lenny, virtualized with Xen) has constantly been running out of entropy, leading to timeouts etc. when trying to connect over SSH / TLS-enabled protocols. Is there any way to check which process(es) is/are eating up all the entropy?
    Edit: what I tried:
      - Adding additional entropy sources: time_entropyd, rng-tools feeding urandom back into random, pseudorandom file accesses. Netted about 1 MiB additional entropy per second; the problems still persisted.
      - Checking for unusual activity via lsof, netstat and tcpdump: nothing. No noticeable load or anything.
      - Stopping daemons, restarting permanent sessions, rebooting the entire VM: no change in behaviour.
    What in the end worked: waiting. Since about yesterday noon, there have been no connection problems anymore. Entropy is still somewhat low (128 bytes peak), but TLS/SSH sessions no longer have a noticeable delay. I'm slowly switching our clients back to TLS (all five of them!), but I don't expect any change in behaviour now.
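
    For watching the pool drain in real time, the kernel exposes the current pool size via procfs; a small Linux-specific sketch:

        import time

        # /proc/sys/kernel/random/entropy_avail reports the pool size in bits.
        def entropy_avail():
            with open("/proc/sys/kernel/random/entropy_avail") as f:
                return int(f.read())

        # Sample once a second; a steep, sustained drop while a particular
        # daemon is active points at the consumer.
        while True:
            print(time.strftime("%H:%M:%S"), entropy_avail())
            time.sleep(1)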

    Read the article

  • How to automatically set default quota limits for users on XFS filesystem, when the new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that will automatically assign default quota values to every new account created (sort of how the skel scheme works, but for quotas)? To be clear, I am looking for a generic, clean solution, not some ugly cron-based script or a wrapper script for creating users. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.

    Read the article

  • Computer Locks Up Completely When Resuming From Standby

    - by kcoppock
    Okay, so I'm having a strange issue lately with my desktop, and off the top of my head I'm not sure what the problem is, so I'm hoping for some ideas on how to track down the cause. First, my PC:
      - AMD Phenom 9600 Quad-Core 2.4 GHz
      - 4 GB DDR2
      - 160 GB boot HDD, 1 TB & 120 GB secondary HDDs
      - Nvidia GeForce 8600 GTS
      - Windows 7 Professional 64-bit
    My power settings are: turn off the display after 10 minutes, turn off hard disks after 20 minutes, sleep: never. After a while, I'll come back to use my computer again, wiggle the mouse, and the screen comes back on like normal, but the desktop is unresponsive. I can move the cursor, but nothing responds to clicks or keyboard entry. If I click a few times with the mouse, the computer eventually locks up completely and the mouse cursor stops moving as well. After I restart, everything works just fine. Is there a log you would recommend checking? Really, any suggestions would be welcome at this point. Thanks!

    Read the article

  • Persistent Spot Instance Request with CloudFormation

    - by PapelPincel
    Is it possible to create a persistent spot instance request with AWS CloudFormation? I'm going through the AutoScaling and EC2 sections of the CloudFormation template reference, but there is no mention of a property that makes spot requests persistent. When the bid falls below the current spot price, AWS brings the instances down; I would like the instances to be started automatically once the spot price drops under the bid again. This can be set manually when creating a new spot instance request by checking the "Persistent Request" option in the Request Instances wizard.
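
    CloudFormation aside, the underlying EC2 API does expose the flag; a sketch of the same request via boto3, with the bid, AMI, and instance type as placeholders:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

        # Type="persistent" is the API equivalent of the wizard's
        # "Persistent Request" checkbox: the request re-launches an
        # instance whenever the spot price drops back under the bid.
        response = ec2.request_spot_instances(
            SpotPrice="0.05",                  # placeholder bid
            Type="persistent",                 # the default is "one-time"
            InstanceCount=1,
            LaunchSpecification={
                "ImageId": "ami-12345678",     # placeholder AMI
                "InstanceType": "m1.small",    # placeholder type
            },
        )
        print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])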

    Read the article

  • Laptop connects to other network but not to my home wireless

    - by Nilesh
    My home network's wireless SSID is, say, "XYZ"; I also have an ethernet wire from the same router. I have two laptops, A and B. Earlier, both A and B were able to connect to my home internet through ethernet and wireless. Suddenly, laptop B can no longer connect via XYZ or ethernet. When I plug in the wire, I get the connection icon all green, but when I try to access any web page it errors out (page not found). Strangely, laptop B connects to my neighbour's wireless SSID "ABC", and I have also tested laptop B with other networks and it connects fine. Laptop A and many other devices still connect fine to my home wireless "XYZ". The strange thing is that when laptop B connects wirelessly through XYZ, it gets an IP address, but then none of the browsers (Chrome, Firefox, IE) can show any web pages. What settings should I be checking on laptop B that might be preventing it from connecting to my home internet? Thank you

    Read the article

  • What are the advantages of DKIM?

    - by duff
    After making my server sign outgoing email, I started to wonder what the benefits are. This is the opposite of a previously asked question. Naively, I see two benefits:
      - We can throw away all emails which don't carry a valid signature. Wrong: mail forwarders (like Mailman) will produce emails from someone at «domain which signs» which are not signed correctly (in their forwarded shape).
      - We can skip spam checking on signed email. Wrong: a spammer can send a single email through e.g. gmail.com and then resend that email as-is (without changing headers) to a million people.
    So what are the selling points of DKIM?

    Read the article

  • Installing qt headers and libraries

    - by ant2009
    Hello,
    Ubuntu 9.10, 2.6.31-20-generic. I am trying to install some software. When I run ./configure, I get the following error:

        checking for Qt... configure: error: Qt (>= Qt 3.0.2) (headers and libraries) not found. Please check your installation!

    So I installed the following package, thinking this would solve the problem:

        sudo apt-get install libqt4-dev

    I was thinking that libqt4-dev would install the headers and libraries. However, I am still getting the same problem. Any ideas? Many thanks,

    Read the article

  • IE8/IE7/IE6/IE5 on WinXP Use The Wrong Certificate

    - by Marco Calì
    For some reason IE8/IE7/IE6/IE5 on Windows XP, instead of using the certificate listed in the nginx config for the website, use another certificate that belongs to other websites. Checking the nginx config file for the website, everything is fine. Confirming this, all the other browsers (Chrome/Firefox/Safari/IE9) use the correct certificate. This is the nginx configuration for the app:

        server {
            listen 80;
            listen 443 ssl;

            server_name mydomain.com;
            ssl_certificate     /root/certs/mydomain.com/mydomain.bundle.crt;
            ssl_certificate_key /root/certs/mydomain.com/mydomain.key;

            access_log /opt/webapps/cs_at/logs/access.log;

            location / {
                add_header P3P 'CP="CAO PSA OUR"';
                proxy_pass http://127.0.0.1:20004;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
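
    One thing worth ruling out: IE on Windows XP never sends SNI, so if several TLS sites share one IP address, nginx serves those clients the default server's certificate instead of the name-matched one. A sketch that compares the certificate presented with and without SNI (the hostname is a placeholder):

        import hashlib
        import socket
        import ssl

        HOST = "mydomain.com"  # placeholder

        # We only want to look at the served certificate, not validate it.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        def fingerprint(sni):
            # server_hostname=None omits SNI, mimicking IE on XP.
            with socket.create_connection((HOST, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                    return hashlib.sha1(tls.getpeercert(binary_form=True)).hexdigest()

        print("with SNI:   ", fingerprint(HOST))
        print("without SNI:", fingerprint(None))
        # Different fingerprints mean certificate selection depends on SNI.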

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40 GB of PST files from ~15 users, with a high level of duplication of attachments. I am running tests to see if I can get significant space savings by storing the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here had already done this and what level of deduplication I might expect (or, in other words, how sensitive are PST files to block alignment, and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here in case someone has more input. Otherwise I will probably be converting the PST files to IMAP.
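
    The block-alignment question can be tested offline before committing to ZFS: hash every fixed-size block across the PST set and count duplicates. A rough sketch; the 128K block size is just ZFS's default recordsize, so rerun it with smaller values to gauge alignment sensitivity:

        import hashlib
        import os
        import sys

        BLOCK = 128 * 1024  # e.g. the default ZFS recordsize

        def block_hashes(path, size):
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(size)
                    if not chunk:
                        break
                    yield hashlib.sha256(chunk).digest()

        seen, total = set(), 0
        for root, _, files in os.walk(sys.argv[1]):
            for name in files:
                if name.lower().endswith(".pst"):
                    for h in block_hashes(os.path.join(root, name), BLOCK):
                        total += 1
                        seen.add(h)

        # unique/total approximates the best block-level dedupe ratio
        # achievable at this block size.
        if total:
            print("blocks: %d  unique: %d  ratio: %.2fx"
                  % (total, len(seen), total / len(seen)))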

    Read the article

  • Is it better to check if file exists before deleting it

    - by Kevin Fegan
    Sometimes when I want to delete a file (from within a script), I will just delete it rather than checking whether it exists first. So I do this:

        $ rm "temp.txt" 2>/dev/null

    instead of this:

        [ -f "temp.txt" ] && rm "temp.txt"

    I just feel it's a waste of time to go and check whether the file exists and return an exit code. So perhaps it's quicker to do it the first way, especially if, most of the time, the file is likely to be present. Are there any other advantages (or downsides) to doing it one way or the other? Am I wrong to think it will ever be quicker?
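
    Beyond speed, the check-first version has a correctness downside: the file can disappear between the test and the rm (a time-of-check/time-of-use race). For comparison, the same two styles sketched in Python, where the try/except form is the idiomatic one:

        import os

        # Style 1: just try it, ignore "didn't exist" (like rm ... 2>/dev/null).
        try:
            os.remove("temp.txt")
        except FileNotFoundError:
            pass

        # Style 2: look before you leap (like [ -f ... ] && rm ...).
        # Racy: the file can vanish between the two calls, and then
        # os.remove() raises anyway.
        if os.path.exists("temp.txt"):
            os.remove("temp.txt")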

    Read the article

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('server B') that I can run an IRCd on in order to provide redundancy in case server A crashes. This is fine; I can set up round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to fail over in case of a server failure? E.g., server A starts off running the services but suddenly crashes; server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on server A). One solution that comes to mind is a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is in the channel: if it is, all is well; if not, fail over. I would prefer not to have to code this myself, though. We are currently using Unreal IRCd and Anope services on Linux.
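
    For reference, the watchdog doesn't have to live on IRC at all; a plain TCP heartbeat against the primary's port gives the same signal. A minimal sketch (host, port, and start command are placeholders, and it leaves out the harder problem of keeping the services database in sync):

        import socket
        import subprocess
        import time

        PRIMARY = ("server-a.example.net", 6667)        # placeholder host/port
        START_SERVICES = ["/usr/local/sbin/services"]   # placeholder command
        FAILURES_NEEDED = 3   # require consecutive failures before failing over

        def primary_alive():
            try:
                with socket.create_connection(PRIMARY, timeout=5):
                    return True
            except OSError:
                return False

        failures = 0
        while True:
            failures = 0 if primary_alive() else failures + 1
            if failures >= FAILURES_NEEDED:
                # Primary looks dead; bring up the local copy of services.
                subprocess.Popen(START_SERVICES)
                break
            time.sleep(30)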

    Read the article

  • Installing libssh2 on debian

    - by Ajay
    Hi, I'm trying to install libssh2 for 64-bit. I'm using the following commands for installation:

        cd libssh2-1.4.3/
        export CFLAGS="-mabi=64 -march=mips64r2"
        ./configure --with-libz --with-libgcrypt \
            --with-libz-prefix=`pwd`/../../support_libs/libs/zlib/ \
            --with-libgcrypt-prefix=`pwd`/../../support_libs/libs/extra \
            --without-libssl-prefix \
            --prefix=`pwd`/../../support_libs/libs/extra_2
        make && make install

    Without the export CFLAGS="-mabi=64 -march=mips64r2" line it compiles the binaries as 32-bit, and when I export the 64-bit flags, configure fails with this error:

        checking for libgcrypt... no
        configure: error: cannot find OpenSSL or Libgcrypt,
        try --with-libssl-prefix=PATH or --with-libgcrypt-prefix=PATH
        make: *** No targets specified and no makefile found.

    Read the article

  • Powershell and DfsrConfigurationFromAD - how to check all servers are updated?

    - by user57792
    I have two Windows Server 2012 R2 machines (SERVER1 and SERVER2) that use DFS Replication to keep a couple of folders synchronized. Using the following PowerShell script, run on SERVER1, I disable the connection between them:

        Set-DfsrConnection -GroupName "Group1" -SourceComputerName "SERVER1" -DestinationComputerName "SERVER2" -DisableConnection $True;

    And to update both servers with the configuration, I run:

        Update-DfsrConfigurationFromAD -ComputerName "SERVER1","SERVER2"

    Now the question is: how can I check from SERVER1 that SERVER2 has received the updated information from AD, so that I can be sure the connection has been disabled? Usually it takes around 10-30 seconds after Update-DfsrConfigurationFromAD has run before DFS Management on SERVER2 is updated, but I need some kind of check-and-sleep loop in my code. I've tried checking event logs and using Invoke-Command {Get-DfsrConnection} from PM-SERVER01, but nothing seems to work.

    Read the article

  • How to change subversion working copy UUID?

    - by Ioan
    I've recently upgraded Subversion repositories from an old 1.2.3 installation to 1.6.0 via svnadmin dump/load. The old repositories all shared the same UUID (they were created by copying a template repository). I've changed the UUID on a couple of the new repositories via svnadmin setuuid to make them unique. I can't just relocate my existing working copies of those repositories because the UUIDs are different. I know about exporting the working copy and checking out from the new repository, but I was wondering whether there is a way to just change the UUID of the working copy in place, like what svnadmin setuuid does for repositories.
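
    For what it's worth, in pre-1.7 working copies the repository UUID sits in plain text in every directory's .svn/entries file, so an in-place swap can be sketched as a tree-wide text replacement (unsupported, so back up the working copy first; the UUIDs are placeholders):

        import os

        OLD = "11111111-1111-1111-1111-111111111111"  # placeholder: old UUID
        NEW = "22222222-2222-2222-2222-222222222222"  # placeholder: new UUID

        for root, dirs, files in os.walk("."):
            if os.path.basename(root) == ".svn" and "entries" in files:
                path = os.path.join(root, "entries")
                with open(path, "r+") as f:
                    data = f.read()
                    if OLD in data:
                        f.seek(0)
                        f.write(data.replace(OLD, NEW))
                        f.truncate()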

    Read the article

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to stay logged in when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions; all settings are defaults. Nor is incognito mode being used. This occurs on all my computers running Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarily work fine). Of note: when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once will fail (does nothing; no errors), then it will work on the second attempt. If I refresh the window before logging in, it will work on the first attempt.

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian defaults to running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2 TB mirror. Doing this "just in case" is bizarre to me: discovering data out of sync between the two disks, without quorum, would be a failure anyway. This massive check could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?

        /etc/cron.d# tail -1 /etc/cron.d/mdadm
        57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight,

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256 MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation; that is, the failures do not start when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and then verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file; that is, new requests are logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
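
    One thing that can be checked while the problem is live is whether the lighttpd process still holds an open descriptor on access.log; on Linux the kernel exposes this under /proc. A sketch (run as root; PID discovery via pidof, assuming a single lighttpd process):

        import os
        import subprocess

        # Find the lighttpd PID and list where its file descriptors point.
        pid = int(subprocess.check_output(["pidof", "lighttpd"]).split()[0])
        fd_dir = "/proc/%d/fd" % pid
        for fd in sorted(os.listdir(fd_dir), key=int):
            print(fd, "->", os.readlink(os.path.join(fd_dir, fd)))

        # If access.log is missing here, or shows as "... (deleted)",
        # the daemon lost its log descriptor rather than the log path.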

    Read the article

  • Asterisk: forward if peer unreachable

    - by Cedric H.
    I would like to respond to incoming calls by checking whether a specific peer is reachable, and dial the appropriate number accordingly. Presently I do this:

        exten => 1200,1,Answer()
         same => n,Set(reachable=${SHELL(asterisk -rx "sip show peers" | grep ^cedrich-phone.*OK)})
         same => n,GotoIf($["${LEN(${reachable})}" = "0"]?extoffline)
         same => n,Dial(SIP/cedrich-phone,20)
         same => n(extoffline),Dial(SIP/another-phone,20,tr)
         same => n,Hangup()

    Could you tell me if this is acceptable, and whether it can be improved?

    Read the article

  • Ubuntu doesn't start and I can't login [migrated]

    - by Meph00
    My Ubuntu 13.04 doesn't boot anymore: eternal black screen. If I press ALT+CTRL+F1 I see that it's stuck on "Checking battery state [OK]." I'd like to try sudo apt-get install gdm, but I can't log in on terminals tty2, tty3, etc. They correctly ask for my username, then make me wait a long time, ask for the password, and make me wait again. After a lot of time (... a lot) the best I could achieve was seeing "Documentation https://help.ubuntu.com". I can never reach the point where I can enter commands. Plus, during the long pauses, every 2 minutes it prints a message like this: INFO: task XXX blocked for more than 120 seconds. Any suggestions? Sorry for my bad English, and thanks everyone for your attention.

    Read the article

  • How to fix 0x800CCC0E Error Codes?

    - by greenber
    I recently started receiving the above-mentioned error, which is apparently a Winsock error message. It is preventing me from checking my e-mail with Gmail, although there is no problem with my ATT e-mail or MSN mail. I found a number of supposed fix-it programs which found a great number of errors in my registry (although Wise and Glary did not find anything wrong with my registry?) and offered to fix them for a fee. I would much rather not pay! :-) Does anybody here know what is causing this error and how to fix it? Oh, I am using Windows 7 Ultimate and Live Mail as my e-mail reader. Thank you. Ross

    Read the article

  • Windows Server 2003 (as workstation) unable to write to Samba fileshares

    - by remyhorton
    The setup is a Samba fileserver under Linux, which I am trying to access from a Windows Server 2003 box that has been reconfigured as a workstation. I can log onto the fileshares and can copy/delete files, but opening a file and then writing to it fails. Renaming files also fails, with an error about requiring a filename. Drag-and-dropping files onto XEmacs gives me a message about copying from the network zone, and once open the file is read-only. Any ideas what is wrong? I suspect it is a miscommunication of security details, as the folder security options are all unchecked (checking them has no effect). I know it is not a problem with Samba itself, as Windows 2000, Windows XP, and Nautilus (under Linux) can all access/edit fileshare files fine using the same userid/password. I am not using domain logins.

    Read the article

  • How can I rewrite a URL and pass on the original URL as a parameter?

    - by Bobby Jack
    I'm building a site that needs to include a 'check' procedure, to do several initiation tasks for a user's session. Examples include checking whether they're accepting cookies, determining if their IP address grants them specific privileges, etc. Once the check is complete, I need to redirect the user back to the page they originally requested. The plan is to use RewriteCond and map all URLs to an 'initiator' if the user doesn't have a specific cookie set. Let's say I want to rewrite all URLs (ultimately, with some conditions, of course) to:

        /foo?original_url=...

    where the ... is the original URL requested, URL-encoded. The closest I've got is this:

        RewriteRule ^(.*)$ http://localhost/php/cookie.php$1 [R=301]

    I can then inspect the original URL, captured in the backreference, via PATH_INFO. However, this is pretty messy: I would much prefer to pass that value as a URL parameter.
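
    A sketch of the query-string variant, assuming mod_rewrite's B flag (which URL-escapes backreferences; Apache 2.2.7 or later) is available; the cookie name is a placeholder:

        RewriteEngine On
        # Skip the initiator itself, and anyone already checked.
        RewriteCond %{REQUEST_URI} !^/foo
        RewriteCond %{HTTP_COOKIE} !checked=1
        # B escapes the captured path so it survives as a query value.
        RewriteRule ^(.*)$ /foo?original_url=$1 [R=302,B,L]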

    Read the article
