Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 537/1664

  • Terminal Server 2008 not issuing Volume Device CALs

    - by Pieter
    We have a lot of Volume Licenses left, but the License Server apparently doesn't use them. Instead it issues Temporary Per Device CALs, which is a bit odd, of course. There are two licensing servers installed on terminal servers, not on a domain controller (these are pushed via the SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\LicenseServers registry setting). EDIT: http://img180.imageshack.us/img180/8927/ss20090909135547.png
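    To confirm which license servers that policy is actually handing out, the registry value can be inspected on an affected terminal server (a minimal check; the key path is the one from the policy above):

        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\LicenseServers"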

    Read the article

  • Backup Exec 2010 Service hangs on "starting"

    - by Rob
    I'm trying to get Backup Exec 2010 running on a Windows 2003 server. The service just hangs on "starting" during the boot phase. This makes the server really slow to boot, and the Backup Exec service never finishes starting. I ran a hotfix that was supposed to fix it, but it hasn't. Has anyone else run into this issue, or know what I should do?
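    As a hedged first step, the service could be taken out of the automatic boot path and started by hand once Windows is up; the service name below is a guess, so check services.msc for the real one:

        sc query "BackupExecJobEngine"
        sc config "BackupExecJobEngine" start= demand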

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but haven't really found anything suitable. What I am looking for is a system to track security vulnerability remediation status -- something like "Bugzilla for IT". It should be pretty simple and allow the following:

        - batch entry of new vulnerabilities that need to be remediated
        - per-user assignment
        - AD/LDAP authentication
        - simple interface to track progress: research, change control status, remediated, etc.
        - historical search ability
        - ability to divide by division
        - ability to store proof of resolution for the Security Team to access
        - dependency tracking
        - Linux-based is best (that's my group :) )
        - free is good, but cost doesn't matter so much if the system is worth it

    The system doesn't have to have all of these features, but if it did that would be great. Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, and not being easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers and are honestly way overkill for what I am looking for. Server Fault's input is greatly appreciated as always!

    Read the article

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.

    I interpret the last part, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with Amazon RDS experience who can attest to what the situation actually is.
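    For what it's worth, the MySQL reading is easy to exercise directly, since additional databases on one instance are created with plain SQL (endpoint and names below are placeholders):

        mysql -h myinstance.abc123.us-east-1.rds.amazonaws.com -u admin -p
        mysql> CREATE DATABASE app_two;
        mysql> SHOW DATABASES;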

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a Cache Profile is set. In Web.config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets triggered the first time and not on subsequent requests. Now, if I throw away the Cache Profile and instead put this in my Web.config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer?
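    If an extension-based profile is the intent, one untested sketch is an .aspx entry; the kernel cache cannot vary by query string, so it is disabled here so that varyByQueryString can take effect (attribute names as documented for IIS7 caching profiles):

        <caching>
          <profiles>
            <!-- dynamic page: cache in user mode, keyed per query string -->
            <add extension=".aspx" policy="CacheForTimePeriod"
                 kernelCachePolicy="DontCache" duration="00:08:00"
                 varyByQueryString="*" />
          </profiles>
        </caching>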

    Read the article

  • Run Rails 3 app on a Rails 2 server/machine?

    - by chucknelson
    I'm trying to run a Rails 3 (3.0.10) app on a shared Joyent SmartMachine server (I don't have root access) which has Rails 2 (2.3.11) installed, and I'm not sure what to do after I freeze my Rails 3 app with bundle install --deployment. With the Rails 3 and Bundler gems not installed on the server, my app isn't even recognizing the local version of Rails I have frozen with my app. Has anyone gotten this to work, or have any advice? The server runs Apache, and I think I can get lighttpd installed too, but I'd rather stay with Apache if I can. Also, if it matters, Passenger is not an installed gem either, and I'm not sure I can freeze that with my app.

    Update 11/30/2011 12:30 PM EST: Bundler is not installed on this server either. Not sure if having that would enable the new Rails 3 "freeze" (bundle --deployment) to work or not...
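    One workaround that does not need root is installing Bundler into the user's own gem directory and vendoring everything, Rails included; a sketch, assuming the user gem bin directory can be added to PATH and the server runs Ruby 1.8:

        gem install bundler --user-install
        export PATH="$HOME/.gem/ruby/1.8/bin:$PATH"    # adjust to the server's ruby version
        cd ~/myapp && bundle install --path vendor/bundle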

    Read the article

  • Setting up Red Hat Enterprise Linux Server as a mail exchange server

    - by Syedur
    I am a Unix/Linux/Windows Server noob, so keep that in mind before you throw your stones at my glass house. :P

    I have a Windows Server 2008 R2 machine that's acting as the domain controller, Server A. It's also running a DNS server. I have a Red Hat Enterprise Linux Server 5.3 machine, Server B, that is intended to be the mail server. In order for mail delivery to happen, I understand that I have to set an MX record on Server A and point it to Server B. Well, I did: I manually added a host name on Server A, pointed it to Server B's IP address, then added an MX record and pointed it to that host name. That didn't do the trick. After taking the above steps, I used the dig command on Server B to look up the MX record coming back from Server A, and it wasn't what I was expecting. What am I doing wrong here?

    I have noticed that my Windows machines that are joined to the domain (Server A) are listed under the host names, while machines that are not joined to the domain are not listed. This is fine; I am not worried about this. What does concern me: do I have to join Server B to the domain in order for Server A to recognize it as a valid host and forward the MX properly? If so, some simple steps on how to join Server B to the domain would also help.
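    For reference, this is roughly how the records can be checked from Server B directly against Server A's DNS (IP and zone name are placeholders):

        dig @192.168.1.10 example.local MX      # should return the MX you added
        dig @192.168.1.10 mail.example.local A  # the host the MX points at must have an A record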

    Read the article

  • Sendmail: forwarding emails problem

    - by Chayan
    So here is my situation. I have two DNS names (say a.org and b.org), both with A records pointing to the same IP address on this box. We have Google mail handling our a.org email. I want b.org email to forward to users at their a.org addresses. My sendmail is getting confused and is delivering all b.org email locally instead of forwarding it to a.org (which is specified in the .forward). What do I do on the sendmail side to force external delivery? Thanks much.
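    One hedged approach is to take b.org out of local delivery (remove it from /etc/mail/local-host-names) and map its addresses with the virtusertable feature, assuming FEATURE(`virtusertable') is enabled in sendmail.mc:

        # /etc/mail/virtusertable
        @b.org    %1@a.org              # user@b.org -> user@a.org

        # rebuild the map, then restart sendmail
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        service sendmail restart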

    Read the article

  • Error 550 Mail Relay Message Could Not Be Sent

    - by mraliks
    I'm not sure if this is the right area to ask, but here we go. I have a client who has a couple of websites on a Windows server with MailEnable as the mail manager. Emails sent from the server work great, except that messages to some domains do not go through, due to the following error:

        12/13/11 01:15:56 ME-I0026: [86476200E5834227A819E6E63E0EFDA2.MAI] Sending message
        12/13/11 01:15:57 ME-IXXXX: [86476200E5834227A819E6E63E0EFDA2.MAI] Remote server returned a response indicating a permanent error. Server Response: (550 Relaying not allowed**)
        12/13/11 01:15:57 ME-E0036: [86476200E5834227A819E6E63E0EFDA2.MAI] MAIL FROM command Failed.

    Can anyone give me some leads on how to correct the settings to allow emails to go through properly? In particular, the emails are not going through to a Network Solutions email account, and Network Solutions has not been very helpful thus far. In addition, can a domain's DNS settings affect this error? Currently the domains are hosted by Network Solutions and use the Network Solutions email service to send/receive email. The server is located with a different company, and the domains' DNS points to it. Which mail-related DNS entries are allowed or not allowed in this scenario? Thanks
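    One DNS-side item worth checking (an assumption, not a confirmed cause): when mailboxes are hosted by a third party but a separate server sends mail for the domain, receivers sometimes reject mail unless the sending server is authorized in an SPF record, along the lines of:

        example.com.  3600  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 ~all"   ; placeholder web-server IP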

    Read the article

  • Web-based disk space visualizer

    - by Martijn
    I have a number of Linux webservers for which I'd like to track where disk space is going and keep usage to a minimum. Typically I log in over SSH and use du to find out where disk space is being wasted, but this is cumbersome and slow. A visualization tool like KDirStat would be ideal, but it requires installing an X server at the very least, which kind of defeats the purpose. Is there any web-based disk space visualizer? I'm open to alternative solutions.
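    In the meantime, a du one-liner gives a rough version of the overview a visualizer would (GNU coreutils assumed, for sort -h):

        du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -n 25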

    Read the article

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server crashes due to high server load, and even restarting Apache or MySQL can't solve the problem. I need to reboot the server, or it crashes again due to the high load. The system log records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the fix is to update the kernel or disk driver, things I don't know how to do. At http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to the kernel line in /etc/grub.conf, as in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
                initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are running in RAID; can this cause a problem for my server? Is there any other solution?
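    As a side note, the noop elevator can be tried at runtime, without editing grub.conf or rebooting (sda is a placeholder for the actual block device):

        cat /sys/block/sda/queue/scheduler           # current choice shown in brackets, e.g. [cfq]
        echo noop > /sys/block/sda/queue/scheduler   # switches immediately; reverts on reboot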

    Read the article

  • Debian minimum hard disk footprint

    - by user41072
    Hi, I found the question http://serverfault.com/questions/29071/red-hat-server-minimal-install while searching Google for "debian minimal install". User shylent wrote that he uses a Debian install so basic that the running processes can be counted on the fingers of one hand. :D So I'm searching for, and asking about, a starting point for creating a basic Linux distro -- not from scratch like LFS, but a distro based on Debian, for example. I used debootstrap, but the result is still 150 MB large.
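    For comparison, the smallest debootstrap gets out of the box is its minbase variant, which installs only essential packages plus apt (suite name and mirror are placeholders):

        debootstrap --variant=minbase lenny /mnt/debinst http://ftp.debian.org/debian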

    Read the article

  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to each user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage user passwords is not designed for key-based auth, and hence is not easy for non-technical users.

    The developers log in from the WAN using SSH; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralized authentication system, but that seems overblown when Webmin worked well for the simple case. It is easy to achieve synchronized users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, in the currently installed Webmin it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords (it's possible, but not easy -- some functionality is in the Usermin module, or would require some tedious steps).

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
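    Failing a ready-made tool, the operation such a web UI would wrap is small enough to sketch as a shell helper (a hypothetical script, not an existing utility; assumes user-private groups and root privileges):

        #!/bin/sh
        # add-key.sh USER "ssh-rsa AAAA... comment" -- append a public key for USER
        user=$1; key=$2
        home=$(getent passwd "$user" | cut -d: -f6)
        install -d -m 700 -o "$user" -g "$user" "$home/.ssh"
        printf '%s\n' "$key" >> "$home/.ssh/authorized_keys"
        chown "$user":"$user" "$home/.ssh/authorized_keys"
        chmod 600 "$home/.ssh/authorized_keys"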

    Read the article

  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    I get this error when I start the nginx server with the Passenger module configured and puppet master set to run through Rack:

        [anadi@bangda ~]# tail -f /var/log/nginx/error.log
        [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]: *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>):
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from config.ru:13
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
            from config.ru:1:in `new'
            from config.ru:1

    Here is the config.ru:

        [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
        # a config.ru, for use with every rack-compatible webserver.
        # SSL needs to be handled outside this, though.

        # if puppet is not in your RUBYLIB:
        #$:.unshift('/usr/share/puppet/lib')

        $0 = "master"

        # if you want debugging:
        # ARGV << "--debug"

        ARGV << "--rack"
        require 'puppet/application/master'

        # we're usually running inside a Rack::Builder.new {} block,
        # therefore we need to call run *here*.
        run Puppet::Application[:master].run

    And the nginx configuration for puppet master is as follows:

        [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
        server {
            listen 8140 ssl;
            server_name bangda.mycompany.com;

            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;

            access_log /var/log/nginx/puppet/master.access.log;
            error_log /var/log/nginx/puppet/master.error.log;

            root /etc/puppet/rack/public;

            ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    However, when I run puppet through the usual puppetmasterd daemon it works perfectly, with no errors. Somehow the nginx + Passenger + Rack setup fails to initialize, while the same setup works when running the native puppetmasterd daemon. Is there any configuration that I am missing?
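    The LoadError suggests Passenger's Ruby does not have Puppet's library directory on its load path; the commented-out hint in the config.ru itself is the first thing to try (path as given in that comment; adjust if Puppet lives elsewhere). It may also be worth confirming that nginx's passenger_ruby directive points at the same Ruby that puppetmasterd runs under.

        # near the top of /etc/puppet/rack/config.ru
        $:.unshift('/usr/share/puppet/lib')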

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn, nginx)

    - by Finglish
    I am attempting to use nginx to serve private files from Django. For the X-Accel-Redirect setup I followed the guide at http://www.chicagodjango.com/blog/permission-based-file-serving/. Here is my site config file (/etc/nginx/sites-available/sitename):

        server {
            listen 80;
            listen 443 default_server ssl;
            server_name localhost;
            client_max_body_size 50M;

            ssl_certificate /home/user/site.crt;
            ssl_certificate_key /home/user/site.key;

            access_log /home/user/nginx/access.log;
            error_log /home/user/nginx/error.log;

            location / {
                access_log /home/user/gunicorn/access.log;
                error_log /home/user/gunicorn/error.log;
                alias /path_to/app;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://127.0.0.1:8000;
                proxy_connect_timeout 100s;
                proxy_send_timeout 100s;
                proxy_read_timeout 100s;
            }

            location /protected/ {
                internal;
                alias /home/user/protected;
            }
        }

    I then tried the following in my Django view to test the download:

        response = HttpResponse()
        response['Content-Type'] = "application/zip"
        response['X-Accel-Redirect'] = '/protected/test.zip'
        return response

    but instead of the file download I get:

        403 Forbidden
        nginx/1.1.19

    Please note: I have removed all the personal data from the config file, so if there are any obvious mistakes not related to my error, that is probably why. My nginx error log gives me the following:

        2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"
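    A hedged guess from that last log line: nginx is resolving the redirect to the bare directory /home/user/protected/, and when a location prefix ends in a slash the alias should too, so that /protected/test.zip maps onto the file rather than the directory:

        location /protected/ {
            internal;
            alias /home/user/protected/;   # trailing slash added
        }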

    Read the article

  • Copy a hard drive from a failed desktop machine using a second working one. [closed]

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista, which is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave in PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*), I am told that I cannot access the folders due to user permissions, even though I am doing this under an administrator account on PC-B. So the question is: how can I "backup" the data? Preferably without making any changes to the drive contents. The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
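    The usual Vista-side answer is to take ownership of the old profile folders before copying; note that this rewrites ACLs on the old drive, so it is not strictly change-free (drive letter and folder are placeholders; run from an elevated prompt):

        takeown /f "E:\Documents and Settings\username" /r /d y
        icacls "E:\Documents and Settings\username" /grant Administrators:F /t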

    Read the article

  • Access rights issue on Mac

    - by user594738
    Hi all, I have a Mac machine running Mac OS X version 10.6.6. I added the machine to a domain controller (running a Windows server OS) and allowed users from the domain controller to log in. Because of a server migration, we removed the Mac machine from the old domain controller and added it to the new one; the users from the old domain controller were copied to the new one. When I log in to the Mac (now joined to the new domain controller) with domain user credentials, I am not able to access my desktop folders or any other folders in my user directory. It says: "The folder Desktop can't be opened because you don't have permission to see its contents." Any idea why I am not able to access my home directory, and how I can resolve this issue?
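    If the new domain handed the account a different UID, the files on disk still belong to the old one; a minimal sketch for checking and re-owning the home directory (short name is a placeholder):

        ls -ln /Users/username                    # shows the numeric UID that owns the files
        sudo chown -R username /Users/username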

    Read the article

  • Are there any problems migrating users from OS X 10.4.11 Server to 10.6.6 Server using WGM?

    - by Geoff Hardman
    I have just set up a new Mac server running 10.6.6. Installation went smoothly and all appears OK: I can create a new user through WGM and log on to the server from one of our local client systems. I then imported the old users from the 10.4 system using WGM, but the new system will not create home directories for these users and will not let them log on. I have read that there are issues using this method but cannot find any detail as to why. Not wishing to recreate over 350 users from scratch, I am looking for an easier solution. Can anyone help? The new server is 10.6.6; we use Open Directory and AFP, and the paths are the same as on the old server.
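    If the accounts themselves imported cleanly and only the home directories are missing, the bundled createhomedir tool may create them after the fact (a sketch for a single user; see man createhomedir for the directory-domain options):

        sudo createhomedir -c -u username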

    Read the article

  • multi-user rvm gem install failure when called from CloudFormation::Init

    - by Peter Mounce
    I've taken an Amazon Linux AMI (based on CentOS) and installed RVM (1.10.3) to it in multi-user fashion (see {1} below). I used that to install ruby 1.9.3-p125, rubygems 1.8.17, and bundler 1.1 as the baseline requirements for most things I'm going to be using the instances for. I've captured that instance to an AMI and am now launching it via CloudFormation with some CloudFormation::Init commands. One of them uses s3cmd to pull down a private gem from S3, and the next one, the one that fails, installs that gem. It fails with this error message:

        2012-03-15 16:53:20,201 [ERROR] Command 20_install_gems (/usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem install ./*.gem) failed
        2012-03-15 16:53:20,202 [DEBUG] Command 20_install_gems output: /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12:in `require': no such file to load -- rubygems (LoadError)
            from /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12

    Now, that happens during the cfn-init execution. I assume, but haven't checked yet, that cfn-init is being run with an environment different from that of ec2-user (there are no other users on the instance). If I run gem install mygem.gem in an interactive session then it works fine. So my question really is: what should I do to make this work for cfn-init? Have I correctly set up RVM as multi-user? I've confirmed that cfn-init is being run as the root user, with his restricted environment. How should I source /etc/profile.d/rvm.sh into root's sessions?

    {1} My semi-automated RVM installation steps (run in an interactive session as ec2-user):

        sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
        sudo gpasswd -a ec2-user rvm
        # iconv-devel is baked into centos' glibc
        sudo yum install -y autoconf automake bison bzip2 gcc-c++ git libffi-devel libtool libxml2-devel libxslt-devel libyaml-devel make openssl-devel patch readline readline-devel zlib zlib-devel
        source /etc/profile.d/rvm.sh
        rvm list known
        # in a new session:
        rvm install ruby-1.9.3-p125
        rvm use 1.9.3 --default
        gem update --system
        # gems required by public_web-awareness
        gem install aws-sdk bundler cocaine sinatra
        echo -e "gem: --no-ri --no-rdoc\n" > /home/ec2-user/.gemrc
        # delete unnecessary documentation files
        rm -rf `gem env gemdir`/doc
        sudo -s
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/skel/.gemrc
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/gemrc
        # ctrl + d out of the sudo session

    Some environment information:

        [ec2-user@ip ~]$ echo $PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125/bin:/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p125/bin:/usr/local/rvm/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/bin
        [ec2-user@ip ~]$ echo $GEM_HOME
        /usr/local/rvm/gems/ruby-1.9.3-p125
        [ec2-user@ip ~]$ echo $GEM_PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125:/usr/local/rvm/gems/ruby-1.9.3-p125@global
        [ec2-user@ip ~]$ echo $BUNDLE_PATH

        [ec2-user@ip ~]$ gem list

        *** LOCAL GEMS ***

        aws-sdk (1.3.6)
        bundler (1.1.0)
        cocaine (0.2.1)
        httparty (0.8.1)
        json (1.6.5)
        multi_json (1.1.0)
        multi_xml (0.4.1)
        nokogiri (1.5.1, 1.5.0)
        rack (1.4.1)
        rack-protection (1.2.0)
        rake (0.9.2)
        sinatra (1.3.2)
        tilt (1.3.3)
        uuidtools (2.1.2)
        yamler (0.1.0)
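    Since cfn-init runs its commands in a non-login root shell, /etc/profile.d/rvm.sh never gets sourced; a common workaround, sketched here against the command from the post, is to wrap it in a login shell so the RVM environment loads:

        "20_install_gems" : {
            "command" : "/bin/bash --login -c 'gem install ./*.gem'"
        }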

    Read the article

  • SBS 2003 stops responding often due to limited memory

    - by Sanoj
    I have a Windows SBS 2003 Std server that regularly stops responding (crashes), roughly every 20 days. The only thing I can see in the logs (the ones that are mailed to the administrator) is that used memory increases by about 30 MB/day. The process that uses more and more memory is sqlservr. We don't have much installed on the server: a point-of-sale system that uses Pervasive SQL as its database, and an accounting application. We have just 2 GB of RAM. I could upgrade to 4 GB, but I think that would just delay the problem. When the server stops responding, the screen saver cannot be deactivated, no DNS lookups work (so the clients can't access the Internet), and applications on the server do not reply; we have to press the power button to restart it. At the moment it has an uptime of 19 days, 2,345 MB of memory in use (idle), and sqlservr is using 819 MB, so I guess it will crash soon. Is there any solution to this problem? Could I limit sqlservr to a fixed amount of memory?
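    SQL Server's memory use can be capped server-side; a hedged sketch (512 MB is an example figure, run via osql or Query Analyzer against whichever instance the sqlservr process belongs to):

        sp_configure 'show advanced options', 1;
        RECONFIGURE;
        GO
        sp_configure 'max server memory', 512;
        RECONFIGURE;
        GO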

    Read the article

  • Continuous outbound connection from QNAP NAS

    - by user192702
    I notice on my firewall that my QNAP NAS is continuously sending UDP sessions out to the Internet. Every second I have 5-7 connections out to addresses like the following:

        2013-11-10 23:17:54 Deny 192.168.60.5 93.215.212.162 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 87.76.0.83 29872/udp 6881 29872
        2013-11-10 23:18:05 Deny 192.168.60.5 5.164.188.224 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 80.61.45.206 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 37.117.204.129 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 71.67.101.30 51413/udp 6881 51413
        2013-11-10 23:18:34 Deny 192.168.60.5 89.28.92.191 8621/udp 6881 8621
        2013-11-10 23:18:34 Deny 192.168.60.5 94.244.157.85 28221/udp 6881 28221
        2013-11-10 23:18:34 Deny 192.168.60.5 213.241.61.240 9089/udp 6881 9089
        2013-11-10 23:18:45 Deny 192.168.60.5 88.163.28.100 52721/udp 6881 52721
        2013-11-10 23:18:45 Deny 192.168.60.5 37.55.190.20 10027/udp 6881 10027
        2013-11-10 23:18:45 Deny 192.168.60.5 62.72.188.146 14306/udp 6881 14306
        2013-11-10 23:19:14 Deny 192.168.60.5 85.53.244.205 51413/udp 6881 51413
        2013-11-10 23:19:14 Deny 192.168.60.5 67.163.18.215 52130/udp 6881 52130
        2013-11-10 23:19:14 Deny 192.168.60.5 86.172.105.140 9089/udp 6881 9089
        2013-11-10 23:19:14 Deny 192.168.60.5 99.28.56.121 52383/udp 6881 52383
        2013-11-10 23:19:14 Deny 192.168.60.5 109.60.184.249 46217/udp 6881 46217
        2013-11-10 23:19:25 Deny 192.168.60.5 121.107.144.174 21135/udp 6881 21135
        2013-11-10 23:19:25 Deny 192.168.60.5 84.39.116.180 48446/udp 6881 48446
        2013-11-10 23:19:25 Deny 192.168.60.5 183.238.254.62 openvpn/udp 6881 1194
        .........

    This is frightening, as it seems like it has been hacked to send information out. Has anyone observed this behaviour from their QNAP NAS?

    Read the article

  • Apache mod_deflate not compressing javascript and css files?

    - by user34295
    From my deflate log:

        "GET /Symfony/web/app.php/app/dashboard HTTP/1.1" 4513/37979 (11%)
        "GET /Symfony/web/css/application.css HTTP/1.1" -/- (-%)
        "GET /Symfony/web/js/application.js HTTP/1.1" -/- (-%)
        "GET /Symfony/web/js/highcharts.js HTTP/1.1" -/- (-%)
        "GET /Symfony/app/Resources/public/img/logo.png HTTP/1.1" -/- (-%)

    I don't know if there is something wrong with my configuration, but the lack of compression for CSS and JS seems strange to me. Both the CSS and JS are already minified, however. Here is the relevant section in Apache's conf/httpd.conf:

        # Deflate
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        DeflateCompressionLevel 9

        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
        # IE5.x and IE6 get no gzip, but allow 7+
        BrowserMatch \bMSIE\s7 !no-gzip
        Header append Vary User-Agent env=!dont-vary

        DeflateFilterNote Input instream
        DeflateFilterNote Output outstream
        DeflateFilterNote Ratio ratio
        LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
        CustomLog logs/deflate.log deflate

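    A quick way to see what the server actually negotiates for one of those files, independent of the browser (hostname is a placeholder):

        curl -sI -H 'Accept-Encoding: gzip,deflate' http://example.com/Symfony/web/js/application.js | grep -i content-encoding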
    Read the article

  • opennms postgres connection slow

    - by krisdigitx
    I am running the OpenNMS application server on a physical server and the database on an ESXi VM. Recently the OpenNMS web console has been very slow to load, so I deleted most of the events from the database table. Now both servers have no load at all, and the psql connection from the application server to the database server is also very fast, but somehow the OpenNMS web console is still slow. This is the strace of the OpenNMS process:

        18629 futex(0x2aaac77d8a84, FUTEX_WAIT_PRIVATE, 453, NULL <unfinished ...>
        3015  futex(0x2aaabc4a2ee4, FUTEX_WAIT_PRIVATE, 323, NULL <unfinished ...>
        10863 futex(0x2aaabbebaa94, FUTEX_WAIT_PRIVATE, 395, NULL <unfinished ...>
        25260 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10859 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10982 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        3011  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        25260 futex(0x2aaae098fc28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10982 futex(0x2aaac0eaf928, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        3011  futex(0x2aaab0cb1728, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10859 futex(0x2aaac062c328, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        25260 <... futex resumed> ) = 0
        10982 <... futex resumed> ) = 0
        3011  <... futex resumed> ) = 0
        10859 <... futex resumed> ) = 0
        25260 futex(0x2aaabc38b6b4, FUTEX_WAIT_PRIVATE, 443, NULL <unfinished ...>
        10982 futex(0x2aaabc5d7b94, FUTEX_WAIT_PRIVATE, 99, NULL <unfinished ...>
        3011  futex(0x2aaac7c55334, FUTEX_WAIT_PRIVATE, 183, NULL <unfinished ...>
        10859 futex(0x2aaabbb8c9d4, FUTEX_WAIT_PRIVATE, 347, NULL <unfinished ...>
        10846 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10846 futex(0x2aaae9022428, FUTEX_WAKE_PRIVATE, 1) = 0
        10846 futex(0x2aaabe0030b4, FUTEX_WAIT_PRIVATE, 251, NULL <unfinished ...>
        20281 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        14100 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        2925  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10843 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        20281 futex(0x2aaac7e93628, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        14100 futex(0x2aaac04e8c28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        2925  futex(0x2aaaec085528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10843 futex(0x2aaab20b0528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>

    It shows lots of connection timeouts. I think it is the connection between the Java application and the database that is causing the issues. Any ideas how to troubleshoot this?
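    One way to separate the suspects is to time a query the console depends on from the application host, and to grab a JVM thread dump while the console is slow (database and user names below are OpenNMS defaults and may differ; the pid is a placeholder):

        time psql -h dbhost -U opennms -d opennms -c 'SELECT count(*) FROM events;'
        jstack <opennms-jvm-pid> > /tmp/opennms-threads.txt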

    Read the article

  • Sonicwall TZ210 - Set up public wifi on separate subnet & interface

    - by thomasjbarrett
    I want to set up a public wifi by connecting another router to the X6 interface, putting it on a separate subnet (192.168.10.0/24) and in the DMZ zone to keep it away from the regular LAN. I believe I have the network settings correct: the router has acquired its IP and DNS information from the TZ210, and the TZ210 shows it as an active DHCP lease. X6 is in the DMZ. I now have a routing/NAT/firewall problem, since I can't get any traffic to travel from the subnet to the Internet: I can't reach any external websites and can't ping the TZ210 from the subnet. X0 is the regular LAN and X1 is the WAN. Looking for any tips or tutorials on this. Here are my current relevant rules:

    Routing:

        Source: X6 Subnet  Destination: Any        Service: Any  Gateway: Default Gateway  Interface: X6
        Source: Any        Destination: X6 Subnet  Service: Any  Gateway: 0.0.0.0          Interface: X6

    NAT policies:

        Source Original: Any  Translated: WAN IP  Destination Original: Any  Translated: Original  Inbound: X6  Outbound: X1
        Source Original: Any  Translated: U0 IP   Destination Original: Any  Translated: Original  Inbound: X6  Outbound: U0

    Firewall:

        DMZ -> LAN : Deny All
        DMZ -> WAN : Allow All
        LAN -> DMZ : Allow All
        WAN -> DMZ : Allow All

    Read the article
