Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • System Idle Process network traffic?-Updated

    - by Moab
    I was using NetBalancer and noticed network traffic on an unidentified service, but when I highlight it, go to the lower center pane, and click the parent process, it says it is the System Idle Process, and it shows incoming and outgoing traffic in the upper pane. Does anyone know why this Windows System Idle Process is talking on the network? Windows 7 HP 64-bit. Edit: after blocking the traffic for that unidentified service I checked my Event Viewer (Windows Logs\System) and found 3 new events that were never recorded before and matched the time I blocked the traffic. So is this part of the Windows local DNS cache? Event ID 1014, DNS Client Events: Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded. Name resolution for the name wpad.home timed out after none of the configured DNS servers responded. Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded. Then my web browser refused to work; I re-enabled the traffic and all returned to normal.

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (our tasks generated from workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12 hive, the virtual dirs that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups). We have custom event handlers and workflows built with Visual Studio, and deploy the dlls to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows - version 1 and version 2. During the deploy we use SharePoint's stsadm features to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active - perfect so far. When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these version 2 WF dlls), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as it reads - no stsadm here. All seems to work after our restore; it appears the restore was successful - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the dll (System.IO file not found error). We think we've backed up and restored all the moving pieces of SharePoint but we're missing something here; does anybody have any ideas why the version 2 WF dlls are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin

    Read the article

  • Setting up Squid -> VPN connection

    - by Nedlinin
    I recently purchased a VPS and am wanting to use it as a VPN server. However, it has bandwidth limitations. So, I figured since I already have a local Squid proxy caching things for me, I could have users connect to the proxy and the proxy connect to the VPN. Then when someone hits the web, Squid will serve it from cache if available and, if not, it will use the VPN to download it. My issue is, I have no idea how to set this up :p - Essentially I want Machine - Squid - VPN. My VPN is running on Ubuntu Server with pptpd. Squid is running on a local Arch Linux box. Squid and the VPN are both working perfectly independently. Any help on how to have Squid push traffic through the VPN would be greatly appreciated! Also: I don't actually want to use the VPN for all traffic. Otherwise, I'd just connect my router to the VPN and be happy. I only want to use it for web traffic from specific machines on the network.
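    A minimal policy-routing sketch for the "only web traffic from Squid goes out the tunnel" part (assumptions: Squid runs as the proxy user on the Arch box, and the PPTP tunnel comes up there as ppp0; names are illustrative, not a tested recipe):
      # Mark packets generated by the Squid user, then route only marked traffic via the VPN
      iptables -t mangle -A OUTPUT -m owner --uid-owner proxy -j MARK --set-mark 1
      ip rule add fwmark 1 table 100
      ip route add default dev ppp0 table 100
    That leaves everything else on the normal default route, which matches the "not all traffic" requirement; only clients pointed at the proxy ride the VPN.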

    Read the article

  • Reinserted a RAID disk. Defined as foreign. Is import or clear the correct choice?

    - by Petrus
    I have re-inserted a RAID disk on a Dell server with Windows Server 2008. The drive-status indicator was changing between a green and amber light, and the monitor gave the following message: "There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present. Press any key to enter the configuration utility." I pressed a key and the PERC 6/i Integrated BIOS Configuration Utility showed that the RAID status for that disk was Offline. After reinsertion of the disk the monitor is giving the following message: "Foreign configuration(s) found on adapter. Press any key to continue, press ‘C’ to load the configuration utility, or press ‘F’ to import foreign configuration(s) and continue." After checking around on the net I am uncertain whether I should choose import or clear. I cannot find out if an import means importing information from the array/system to the now-foreign disk or the other way around, i.e. importing information from the foreign disk to the array/system that was actually working fine. Also, is clear a necessary thing to do ahead of a rebuild of that disk, or does clear mean clearing the system to somehow make it ready to import the information from the foreign disk to the array/system, which is not what I want? I imagine that making the wrong choice here might be fatal. Please help clear this up by telling me what to choose and why.

    Read the article

  • Can I run a mix of static addressing and NAT?

    - by aroth
    Let's say that an ISP offers a plan with 5 static IP addresses, but I have more than 5 devices, many of which (such as a networked printer, for instance) I do not want or need to have a static IP address. So the topology I'm planning goes something like:
      ISP -> Router -> Switch -> Computer (static address)
                              -> Computer (static address)
                              -> Printer (DHCP/NAT)
                              -> TV (DHCP/NAT)
                              -> (...)
                              -> Wireless Devices (DHCP/NAT)
    Generally speaking, is it possible to run a network like that? If not, then what sort of setup do I need so that I can assign static addresses just to the things that need them, and use DHCP/NAT for everything else? Also, what internal networking devices will consume a static IP address? I'm pretty sure the router will, correct? Does the switch also?
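    Generally yes. One common shape is to give the public statics directly to the machines that need them and let the router masquerade one remaining static for everything else. A rough sketch of the NAT side on a Linux-based router, using documentation addresses rather than real ones:
      # LAN side: private range for the printer, TV, wireless devices, etc.
      ip addr add 192.168.1.1/24 dev eth1
      # NAT the private range out of the WAN interface that holds one of the statics
      iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
    As for address consumption: the router's WAN interface normally takes one of the statics, while an unmanaged switch takes none (a managed switch only needs an address if you want to reach its management interface).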

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this:
      - changing disks to new disks (configured them as RAID 1)
      - changing the PERC card + PERC cables
      - reinstalling the OS (of course, I had to because of the disk change), CentOS 5.5 x64
      - firmware update to everything
      - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled
    OpenManage doesn't alert about anything; I also ran Dell's diag tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and IO rates are still slow. I have several PE2950 servers, and I never had such a thing with any of those. All have similar or exact hardware as this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:
      [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s
      real 0m33.968s
      user 0m0.531s
      sys 0m26.000s
    A good PE2950 server (with the exact same hardware):
      [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
      200000+0 records in
      200000+0 records out
      1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s
      real 0m7.694s
      user 0m0.053s
      sys 0m4.057s
    Hopefully you will have an idea of what can cause the problem.
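    One extra data point worth collecting on both boxes (a sketch, same sizes as above): writing with O_DIRECT takes the Linux page cache out of the picture, so differences in the controller's write-back cache behaviour show up much more clearly, and OpenManage can confirm which cache policy the virtual disk is actually running with.
      # Direct I/O write test; run on both the bad and the good box and compare
      dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 oflag=direct
      # Report the virtual disk properties (write policy, read policy, disk cache)
      omreport storage vdisk controller=0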

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-Worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLS starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance configured to listen on an internal port, say 127.0.0.1:8080 and having my current Apache2-worker configured to proxy pass and reverse pass to it for the 'drupal6' URLs. However, how do I compile or install the apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, and so apache2ctl has a different name, say apache2pctl, and that apache2pctl invokes the /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (eg /etc/apache2p) so I can start and restart my two instances independently? As I understand it, no one executable of 'apache2' can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate versions of the apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
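    One way to keep the two MPMs completely apart without touching the packaged /usr/sbin/apache2 is a source build of the prefork instance under its own prefix, which gives it its own apachectl, config tree, logs, and pid file. A rough sketch (the prefix and port are assumptions, not a tested recipe):
      # Build a self-contained prefork Apache under /opt
      ./configure --prefix=/opt/apache2-prefork --with-mpm=prefork
      make && sudo make install
      # Have it listen only on the loopback port the worker instance will proxy to
      sudo sed -i 's/^Listen .*/Listen 127.0.0.1:8080/' /opt/apache2-prefork/conf/httpd.conf
      sudo /opt/apache2-prefork/bin/apachectl start
    The packaged MPM-worker instance then only needs ProxyPass/ProxyPassReverse rules for /drupal6/ pointing at 127.0.0.1:8080, and each side can be started and stopped independently.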

    Read the article

  • How can I download a file directly to a web server?

    - by matt
    I'd like to download files directly to a hosted server, whether it's one I set up myself or a hosted service like Dropbox. For example, when I download a podcast, instead of downloading it to my computer then uploading it to the server, how can I have it download directly to the cloud. My interest here is reducing the traffic I'm using over a metered data plan on my laptop, so I don't want my computer acting as a physical intermediary caching the file. Ideally, there would be some way for me to have a download link and tell it to go directly to my server. How can I accomplish this? I realize that this question is potentially involving a "webapp" and it is potentially involving "server administration" and since my goal is to cut my computer out of the loop I can see people saying this is off-topic and should be on another site. My issue is this: I don't know if this is going to be a webapp solution or a server solution but I do know regardless I'm going to be using a computer to get it done and I am replacing a function that's currently done on my computer so I figured I'd ask it here. If I was wrong and this definitely should be at webapps feel free to let me know or just migrate it.
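    If the destination is a server you can shell into, the simplest pattern is to run the download on the server itself, so the laptop only carries the command and none of the payload. A sketch (host and paths are placeholders):
      # Fetch the file straight onto the server; nothing transits the metered connection
      ssh user@myserver.example.com 'wget -P /srv/downloads "http://example.com/podcast-episode.mp3"'
      # Or detach it so the download survives closing the laptop
      ssh user@myserver.example.com 'nohup wget -P /srv/downloads "http://example.com/podcast-episode.mp3" >/dev/null 2>&1 &'
    Hosted services differ: Dropbox and similar generally need their own remote-upload or API route rather than a plain shell.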

    Read the article

  • Trying to use Nginx try_files to emulate Apache MultiViews

    - by Samuel Bierwagen
    I want a request to http://example.com/foobar to return http://example.com/foobar.jpg. (Or .gif, .html, .whatever) This is trivial to do with Apache MultiViews, and it seems like it would be equally easy in Nginx. This question seems to imply that it'd be easy as try_files $uri $uri/ index.php; in the location block, but that doesn't work. try_files $uri $uri/ =404; doesn't work, nor does try_files $uri =404; or try_files $uri.* =404; Moving it between my location / { block and the regexp which matches images has no effect. Crucially, try_files $uri.jpg =404; does work, but only for .jpg files, and it throws a configuration error if I use more than one try_files rule in a location block! The current server { block: server { listen 80; server_name example.org www.example.org; access_log /var/log/nginx/vhosts.access.log; root /srv/www/vhosts/example; location / { root /srv/www/vhosts/example; } location ~* \.(?:ico|css|js|gif|jpe?g|es|png)$ { expires max; add_header Cache-Control public; try_files $uri =404; } } Nginx version is 1.1.14.
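    For what it's worth, the usual catch is that nginx allows only one try_files directive per location, but that single directive can list several candidate extensions, which is what emulates MultiViews. A sketch of how the location / block from the config above might look (the extension list is illustrative):
      location / {
          root /srv/www/vhosts/example;
          try_files $uri $uri.html $uri.jpg $uri.gif $uri.png $uri/ =404;
      }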

    Read the article

  • How to configure FastCGI to work with lighttpd on Ubuntu

    - by michael
    I am able to run lighttpd on ubuntu 9.10. But when i tried to setup fastcgi with lighttpd by putting this in the ligttpd.conf file: #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => "9098", "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", "docroot" => "/" # remote server may use # it's own docroot )) ) This is what I get in the error.log in ligttpd: 2010-03-07 21:00:11: (log.c.166) server started 2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start: 2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi 2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down. I do have cgi-fcgi in /usr/local/bin: $ which cgi-fcgi /usr/local/bin/cgi-fcgi '/usr/local/bin/cgi-fcgi' is the executable after I download and compile fast-cgi. Here is my lighttpd conf file: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. 
server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you for your help.

    Read the article

  • Skype Connect as SIP/Trunk for Asterisk

    - by Kaurin
    First off: I'm not sure if this should be on Super User or here. I have recently built a few Asterisk boxes with OpenVox FXO/FXS ports with little or no trouble. My current project is building an Asterisk box with SIP trunks. My current employer insisted on getting Skype Business/Skype Connect for that purpose. After reviewing the Skype Connect plan, I agreed, because I thought it was going to be straightforward: purchase G.729 licences and set up the SIP trunk/trunks. Boy was I wrong :) Here is the setup, which is for calling US numbers only via Skype (we got Skype US minute bundles in Skype Connect):
      - AsteriskNOW - Asterisk 1.4 + asterisk-gui
      - Trunks: SIP trunk configured with Skype Connect - shows as registered
      - Users: 2 test extensions. Both work fine when calling each other; voicemail etc. works fine too
    The Asterisk box is behind a Mikrotik router which I configured to forward all relevant ports: 5060-5090 UDP, 10000-20000 UDP. When trying out an extension outside of my LAN, it worked; I could make calls to the other extension.
      Outgoing rule: _NXXXXXXXXX  Strip: 0  Prepend: +1  Use Skype trunk
      Inbound rule: Trunk: Skype  Pattern: s  Destination: Extension1 (6210)
    Here is the output of the Asterisk CLI (-rvvvvv) with outgoing calls: http://pastebin.com/eWVpL72e - you can see the circuit-busy response when using trunk1 (Skype). When calling my Skype Connect number from the outside, I get nothing in the logs. Can anyone with Skype Connect / Asterisk experience help out? :)
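    A couple of quick Asterisk CLI checks that often narrow down a circuit-busy on an outbound SIP trunk (a sketch; it assumes G.729 is the codec the Skype Connect trunk ends up negotiating):
      # Is a G.729 translator actually loaded and licensed?
      asterisk -rx "core show translation" | grep -i g729
      # Is the trunk registered, and does the peer look reachable?
      asterisk -rx "sip show registry"
      asterisk -rx "sip show peers"
    If G.729 is missing from the translation table, calls that must be transcoded to it will fail even though the trunk itself registers fine.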

    Read the article

  • time sync with ntpd

    - by guthrie
    I run Debian on several systems, and their times do not seem to stay in sync. I can run ntpdate manually, but I thought that I should have an ntpd running that would automate that. I did check with apt and apt-cache but can't find any ntpd (or the associated ntpq), nor any such names on my system (locate...), yet ntp-doc does still describe them. Looking around I see that there is an ntpdate-debian command, and it uses /etc/default/ntpdate for servers (instead of the standard /etc/ntp.conf), but even though that file is there and has "yes" set to use ntp.conf, it fails with "no servers can be used", although ntpdate works fine. Is this just a layer over ntpdate, and is there any reason to use it instead? So, why are they missing, do I need them, and how do I automate time updates? Related: two of my machines are virtualized on a Microsoft VM; how is it that their clocks drift, and both to different values? (The underlying Windows machine's clock seems stable.) I see a few old notes about time & NTP problems on VMware, but didn't find anything either current or relating to Microsoft VMs. Everything I did see says just to use ntpd, but as above, ...?!
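    For reference, on Debian the daemon lives in the package named ntp (the binary inside it is ntpd), which is why searching for a package literally called ntpd comes up empty. A minimal sketch:
      # Install and start the daemon, then watch it pick its servers from /etc/ntp.conf
      apt-get install ntp
      /etc/init.d/ntp start
      ntpq -p        # peer list; offset and jitter should settle after a few minutes
    ntpdate (and the ntpdate-debian wrapper) is only a one-shot tool; once ntpd is running it keeps disciplining the clock continuously, which usually also tames the drift seen inside VMs.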

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx; some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders, with a service restart, and all will be OK; however there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed - how do I check? - and secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia. For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that as I am on a provisioned VM from a central IT unit I cannot run the Symantec commands (smc -stop) from the Run prompt with the admin creds that get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection client and the server? Regards, Gus

    Read the article

  • IIS7 Session ID rotating with Classic ASP

    - by ManiacZX
    I am trying to migrate a Classic ASP app onto a Windows 2008 R2 server. The application features run fine, but I am having an issue with session state. The application keeps the logged-in user information in session and I am constantly getting knocked out as if the session had expired. While debugging I have discovered the sessions are not expiring, but instead I am getting 2-3 different Session IDs in use by one browser. I am outputting Response.Write(Session.SessionID) on various pages in the application and I can sit there and hit refresh over and over and watch the number change between these 2-3 Session IDs randomly. The sessions are still valid, because when I refresh and get the Session ID that I logged in under, the page is displayed (because the security check was successful), and when I get one of the other Session IDs I get the "you aren't logged in, you need to log in" message. If I close and re-open the browser, same story, just the set of IDs is new. This happens with IE8, Firefox and Chrome from multiple computers. Things I've tried:
      - AppPool set to No Managed Code and Classic
      - Output Caching set .asp to never cache
      - ASP Session Properties: enabled and disabled ASP session state and confirmed it affected the page (error trying to read Session.SessionID when disabled)
    Things I've tried just in case, but that shouldn't have anything to do with ASP session:
      - Disabled compression
      - Changed ASP.NET Session State properties (InProc, StateServer, SQLServer, Cookies, URI, etc.)

    Read the article

  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04). I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use. So I would like to know the best practice when it comes to websites and tasks that I have the server performing. I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have access to root on the server and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04 LTS), nginx, and PHP 5.4.3 (using the PDO SQLite 3 built-in driver for the database). Should PHP have its own user group or is it OK to place it in with nginx?
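    A per-site user sketch along those lines (the names are made up; on Ubuntu the nginx worker runs as www-data):
      # One system user per site; nginx gets read access through the group and owns nothing
      sudo adduser --system --group site1
      sudo mkdir -p /srv/site1/public
      sudo chown -R site1:site1 /srv/site1
      sudo chmod -R 750 /srv/site1
      sudo usermod -aG site1 www-data
    If PHP ends up running under php-fpm, each site can also get its own pool running as that site's user, so the PHP side is confined the same way; a single dedicated TeamSpeak user is likewise cheap insurance rather than overkill.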

    Read the article

  • Rack layout for future growth

    - by bleything
    We're getting ready to move to a new colo facility and I'm designing the rack layout. While we have a full rack, we only have 12U worth of hardware right now:
      1x 1U switch
      7x 1U servers
      1x 2U server
      1x 2U disk shelf
    The colo facility requires us to front-mount the switch and use a 1U brush strip, so we'll be using a total of 13U of space. Regarding growth, I'm reasonably sure we'll be adding another 4U in servers, 1-2U of network gear, and 2-4U of storage in the mid-term. Specific questions I'm hoping to get help with:
      - where should I mount the switch? the LEDs are on top...
      - should I group the servers by function with space for adding new machines?
      - as an alternative, should I group servers based on whether they are production or staging?
      - where in the rack should I start? in the middle? at the top? at the bottom? equally spaced?
    Here's a silly little ASCII diagram of what I'm thinking right now. Please feel free to tear my design apart, I've really no idea what I'm doing :) Any advice is very welcome. edit: to be clear, the colo is providing redundant power with UPS and generator, so that's why there's no power gear in the plan, except for the 0U PDU that I didn't diagram.
      42 | -- switch ----------------------
      41 | -- brush strip -----------------
      40 | ~~ reserved for second switch ~~
      39 | ~~ reserved for firewall ~~~~~~~
      38 |
      37 | -- admin01 ---------------------
      36 |
      35 | -- vm01 ------------------------
      34 | -- vm02 ------------------------
      33 | ~~ reserved for vm03 ~~~~~~~~~~~
      32 | ~~ reserved for vm04 ~~~~~~~~~~~
      31 | ~~ reserved for vm05 ~~~~~~~~~~~
      30 |
      29 | -- web01 -----------------------
      28 | -- web02 -----------------------
      27 | ~~ reserved for web03 ~~~~~~~~~~
      26 | ~~ reserved for web04 ~~~~~~~~~~
      25 |
      24 |
      23 |
      22 |
      21 |
      20 |
      19 |
      18 |
      17 |
      16 | -- db01 ------------------------
      15 | +- disks ----------------------+
      14 | +------------------------------+
      13 | ~~ reserved for more ~~~~~~~~~~~
      12 | ~~ db01 disks ~~~~~~~~~~~~~~~~~~
      11 |
      10 | +- db02 -----------------------+
       9 | +------------------------------+
       8 | ~~ reserved for db02 ~~~~~~~~~~~
       7 | ~~ disks ~~~~~~~~~~~~~~~~~~~~~~~
       6 | ~~ reserved for more ~~~~~~~~~~~
       5 | ~~ db02 disks ~~~~~~~~~~~~~~~~~~
       4 |
       3 |
       2 |
       1 |

    Read the article

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".
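    As a rough sanity check on that 43% break-even figure, in hours of a 1-year term:
      # 43% of 8,760 hours (one year)
      echo $(( 8760 * 43 / 100 ))   # => 3766 hours, roughly 157 days of uptime
    A 24/7 web server runs all 8,760 hours, so it clears that threshold easily; in AWS's wording, "utilization" refers to the hours the instance is running, not how hard the CPU, memory, or bandwidth is being pushed.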

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I had set up the file system for testing was, while installing the OS, to have just one of the HDDs connected, with btrfs on it mounted as /data, and later on add another HDD to this file system. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: if even one of the disks fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; when an access request comes in with the current RAID 0 setup, both disks will spin up, whereas in a JBOD only the disk that has the file needs to be spun up. This should hopefully reduce wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question I have is this: I understand that a full drive failure in a JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot?), and is btrfs capable of doing this?
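    What the question asks for maps onto btrfs profiles fairly directly: data as "single" (no striping, so only the disk that holds a file needs to spin up) with metadata as raid1. A sketch, with device names assumed:
      # New filesystem across both 2TB drives: unstriped data, mirrored metadata
      mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
      # Or convert the existing filesystem in place (needs a reasonably recent kernel/btrfs-progs)
      btrfs balance start -dconvert=single -mconvert=raid1 /data
      # Growing later:
      btrfs device add /dev/sdd /data
    On the bit-rot question: checksums still cover single-profile data, so btrfs can detect corruption there, but with only one copy it can repair mirrored metadata and merely report damaged data.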

    Read the article

  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years and are happy with them and are not looking to switch (and don't have the budget for it even if we wanted to). What is a viable way we can synchronize our calendars? Back when we had Palm Z22s we used AirSet. It worked great in synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it. (The third party vendor is apparently not interested in updating for the newer Palm desktop.) I would love to be able to get back to having each other's appointments appear on the other's calendar. What can we do? Some limitations: A data plan is not an option (so this precludes over-the-air synchronization options) No Microsoft Outlook Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB). We will have it mirrored between instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is: get 4 large instances that are compatible types with Placement Groups (as well as, ideally, Enhanced Networking), 2 for web, 2 for DB. We use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended We can set up ephemeral SSD instance storage as swap space. (But it is lost after even a reboot. I hear it's hard to add ephemeral storage if booting from EBS, but possible.) For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens to avoid corruption. (Any hints here, aside from shutting down the DB?) If the database gets too big, we need to create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid Static assets on web servers will be stored on S3. Is that correct? Or am I missing something?
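    On the snapshot-consistency point, the usual pattern short of shutting the database down is to flush and freeze the filesystem for the moment the snapshot is initiated (a sketch assuming XFS on the data volume and the current AWS CLI; the volume ID is a placeholder):
      sync
      xfs_freeze -f /data                      # block writes briefly
      aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "orientdb pre-change"
      xfs_freeze -u /data                      # thaw as soon as the snapshot call returns
    EBS snapshots are point-in-time once initiated, so the freeze only needs to cover the API call, not the whole copy.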

    Read the article

  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
    I have been troubleshooting this for a few days and cannot fix it anyhow. Computer specifications:
      Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
      CPU: Intel(R) Core(TM) i7 CPU 920 (Bloomfield) @ 2.67 (no OC)
      RAM: 6144MB RAM
      GPU: 2x NVIDIA GeForce GTS 250 1GB in SLI (SLI is not enabled at the moment anyway)
      Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC Internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive
    Click here for a log of a crash I just had. Click here for a log of a crash I had 30 minutes later; note that it's another driver. Some info on occurrence: it seems pretty random so far, and I haven't noticed any kind of pattern. I tried:
      - Windows memory diagnostic (went smoothly at 1066MHz)
      - As I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error; I believe it happened even when I had no drivers installed at that point (not 100% sure)
      - Changing the following registry value to 1 (true): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown
      - Lowering the RAM clock even more
      - Making sure the RAM timings were set to the manufacturer's recommendations
      - Verifying that the motherboard is in good physical condition (yes, and it's brand new)
    There is one thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then I removed the old motherboard drivers that I could remove from the control panel (pretty much the first things that had been installed). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of this, spending so much money and cannot get this to work correctly. If you need any other information let me know, thank you!

    Read the article

  • Bash wonkyness on Ubuntu versus RHEL

    - by d34dh0r53
    Fellow faulters, I'm playing around with a one liner that I've developed on a RHEL 5.4 box and I have it working perfectly: TOTAL_RAM=`free | grep Mem: | awk '{ print $2 }'`; \ ps axo rss,comm,pid | awk -v total_ram=$TOTAL_RAM \ '{ proc_list[$2] += $1; } END { for (proc in proc_list) \ { proc_pct = (proc_list[proc]/total_ram)*100; printf("%d\t%s\t%0.2f%\n", proc_list[proc],proc,proc_pct); }}' \ | sort -n | tail -n 10 Which outputs something like the following on my RHEL box: 3736 logmon 0.01% 4156 EvMgrC 0.01% 4692 hald 0.01% 5020 ntpd 0.02% 6252 sshd 0.02% 7784 cvd 0.02% 9224 snmpd 0.03% 13068 dsm_sa_datamgr3 0.04% 23320 dsm_om_connsvc3 0.07% 4249864 mysqld 12.90% However on my Ubuntu 9.04 slice I get this: awk: run time error: not enough arguments passed to printf("%d %s %0.2f% ") FILENAME="-" FNR=104 NR=104 33248 console-kit-dae 3.17 I think it has to be bash that is borking something, but I'm really not doing anything that should be that bash specific. The RHEL box is running: # yum info bash | grep -e Version -e Release Version : 3.2 Release : 24.el5 And the Ubuntu box: # apt-cache show bash | grep -e Version Version: 3.2-5ubuntu1 I haven't dug into this super deeply, and thought I'd ping my fellow johnnys to see if you've ever run across this before. /bow
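    The difference is the awk, not bash: Ubuntu's default awk is mawk, which treats the bare % before \n in the printf format as the start of another conversion and bails out with "not enough arguments", while RHEL's gawk prints it literally. Escaping the percent sign fixes it on both; installing gawk is the other route. A sketch:
      # corrected printf inside the awk program: the literal percent written as %%
      printf("%d\t%s\t%0.2f%%\n", proc_list[proc], proc, proc_pct);
      # alternatively, make gawk available on the Ubuntu box
      sudo apt-get install gawk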

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used. I'm using bind9 and here are my config files:
      /etc/bind/named.conf.local
        zone "company.int" {
            type master;
            file "/etc/bind/db.company.int";
        };
      /etc/bind/db.company.int
        ; $TTL 3600
        @       IN      SOA     company.int. company.localhost. (
                                20120617    ; Serial
                                604800      ; Refresh
                                86400       ; Retry
                                2419200     ; Expire
                                604800 )    ; Negative Cache TTL
        ;
        @       IN      NS      company.int.
        @       IN      A       127.0.0.1
        @       IN      AAAA    ::1
        ; CNAME
        mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.
    The dig command assures me my alias is working as expected:
      $ dig mysql.company.int
      ...
      ;; ANSWER SECTION:
      mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com.
      xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
      ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
      ...
    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout.
      $ mysql -uuser -ppassword -hmysql.company.int
      ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)
    Any ideas? Thanks in advance!
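    Since dig already walks the alias chain correctly, the timeout looks more like network filtering than DNS. A couple of quick checks (a sketch):
      # What the resolver hands back for the alias
      dig +short mysql.company.int
      # Can the instance reach the MySQL port at all? If this also times out,
      # check the RDS security group: it must allow the EC2 instance's security group or address.
      nc -zv -w5 xxxx.eu-west-1.rds.amazonaws.com 3306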

    Read the article

  • S5000XVN Intel Motherboard - Only sees 12 out of 16GB of RAM

    - by Richie086
    So I have an older Intel S5000XVN motherboard; here are the specs:
      CPU Arch : 2 CPU - 2 Cores - 2 Threads
      CPU PSN : Intel Xeon CPU 5160 @ 3.00GHz
      CPU EXT : MMX, SSE (1, 2, 3, 3S), EM64T, VT-x
      CPUID : 6.F.6 / Extended : 6.F
      CPU Cache : L1 : 2 x 32 / 2 x 32 KB - L2 : 4096 KB
      Core : Woodcrest (65 nm) / Stepping : B2
      Freq : 2985.21 MHz (331.69 * 9)
      MB Brand : Intel
      MB Model : S5000XVN
      NB : Intel 5000X rev 31
      SB : Intel 6321ESB rev 09
      RAM : 16384 MB FB-DDR2
      RAM Speed : 331.7 MHz (1:1) @ N/A
      Slot 1 : 2048MB (5300)
      Slot 1 Manufacturer : Kingston
      Slot 2 : 2048MB (5300)
      Slot 2 Manufacturer : Qimonda
      Slot 3 : 2048MB (5300)
      Slot 3 Manufacturer : Kingston
      Slot 4 : 2048MB (5300)
      Slot 4 Manufacturer : Qimonda
      Slot 5 : 2048MB (5300)
      Slot 5 Manufacturer : Kingston
      Slot 6 : 2048MB (5300)
      Slot 6 Manufacturer : Qimonda
    As you can clearly see, CPU-Z says I have 16GB of PC2-5300 RAM installed. For some reason, in both the BIOS and in Windows the maximum usable RAM is 12GB instead of 16GB. I have a dedicated video card connected, so it can't be stealing RAM to use for the GPU (the S5000XVN does not have any onboard video). I am running Windows Server 2008 R2 x64 as the primary OS, so there should not be a memory limitation imposed on me by the OS. Has anyone experienced this before? Any ideas on how I can actually use all 16GB of RAM? I plan on using this machine as a Hyper-V server and have two Quad Core Xeon CPUs on the way as I write this post.

    Read the article

  • How do I configure PHP5 and Apache2 on Ubuntu Server?

    - by rofls
    I'm trying to follow these instructions (under the Troubleshooting PHP 5 heading). I have PHP installed, and when I run a2enmod php5 it says "Module php5 already enabled". The problem is I created a file, test.php, that's just this: <?php phpinfo(); ?> and put it in /var/www, like the instructions tell me to, but running curl http://localhost/test.php produces an Apache-made 404 that says it can't find that file. I have: ServerName localhost DocumentRoot /var/www in one of the sites-available in the /etc/apache2 directory. I should probably figure this out on my own, but the instructions say for troubleshooting: "If the problem persists, check your PHP file authorisations (it should be readable at least by Ubuntu user "apache"), and check if the PHP code is correct. For instance, copy your PHP file, replace your whole PHP file content by "<?php phpinfo(); ?>" (without the quotation marks): if you get the PHP test page in your web browser, then the problem is in your PHP code, not in Apache or PHP configuration nor in file permissions. If this doesn't work, then it is a problem of file authorisation, Apache or PHP configuration, cache not emptied, or Apache not running or not restarted." And I don't know where the PHP file authorisations are or how to do that.
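    A few quick checks that usually show which layer is at fault (a sketch; the default site name varies between releases):
      # Which vhost and DocumentRoot actually answer for localhost?
      apache2ctl -S
      # Is the test file where that DocumentRoot points, and readable?
      ls -l /var/www/test.php
      # Make sure a site pointing at /var/www is enabled and the config has been re-read
      sudo a2ensite default
      sudo service apache2 reload
      curl http://localhost/test.php
    If apache2ctl -S shows a DocumentRoot other than /var/www, the 404 is coming from that mismatch rather than from PHP itself.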

    Read the article
