Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 368/1664 | < Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >

  • Win 2003 Terminal Server, Logoff Domain Users after 30 mins

    - by Kabir Rao
    We have a terminal server in our environment where members of Domain\Domain Users can log in through Remote Desktop. We want to force these users to log off after half an hour. However, Administrators (including Domain\Domain Admins) should not be affected by this; they should be able to connect to the terminal server without any interruption. Can somebody guide us on this? Thanks in advance, Kabir
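
    One way this is often handled (a sketch, not a confirmed fix: the policy location and value name are assumptions based on the user-level "Set time limit for active Terminal Services sessions" policy) is a user-scoped GPO that applies to Domain Users but is security-filtered to exclude the administrators group. That policy ultimately writes a per-user registry value along these lines:

        ; Sketch of the per-user value behind the session time limit policy
        ; (value is in milliseconds: 0x001b7740 = 1,800,000 ms = 30 minutes)
        [HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows NT\Terminal Services]
        "MaxConnectionTime"=dword:001b7740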

    Read the article

  • High RAM usage, not seen in Task Manager

    - by r4dk0
    Hi! I am using Windows 7 64-bit build 7600 with 4 GB of RAM. I have a serious problem: something uses a lot of RAM (3.94 GB) and I see "stairs" in Task Manager; usage rises to over 3 GB, drops to about 2 GB, rises slowly again, and then suddenly drops. I tried reinstalling this version and other, newer versions, but it had no effect. I've even tried disconnecting the other hard drives while installing, then installing NOD32 and updating it. How can I find out what is using that much RAM? P.S.: I suspected the Superfetch service, so I disabled it and restarted the PC, but that didn't help; memory usage is at its highest right when I log in with my password, which is really annoying since I need about a minute just to see my desktop, let alone try anything else. After logging in it slowly drops, and after a random amount of time it starts rising again. That doesn't happen immediately after a fresh Windows install. As for drivers, I have tried both older and the newest GPU drivers.

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes and stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed and restarted Domino the process count went down to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server is also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and the Domino log keeps telling me that "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Doubts about Cloud Infrastructure

    - by Pravin
    Maybe this repeats questions that others have asked, but I wanted to clear up my doubts. For some years I have run my hosting company (a reseller of esds) and I've done well so far, but I am determined to offer another level of quality and server technology. So far I understand there is a difference between cloud and cluster servers: a cluster works like a load balancer, distributing roles across different servers and using the least loaded ones, whereas a cloud is the union of multiple servers that is then virtualized, allowing the virtualized environment to draw on the pooled CPU and RAM of the underlying servers. My plan is to use 3 dedicated servers to create a cloud server. My doubts: Is this type of cloud setup reserved only for big companies (either because joining the servers is done with expensive hardware or software)? What characteristics should these servers meet? Which software should be used, and is it readily available? Thanks for your time, cheers!

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox, and xauth, exported the environment variable DISPLAY=localhost:1, and started Xvfb. After that I could start the Selenium tests using Maven. The tests are executed, but there's an issue with TeamCity. TeamCity starts behaving absolutely inadequately (it mixes up images on the page, sends XML or strange text with ampersands and numbers in responses, and is a bit slower), and the tests also run about 4 times slower on the server (1h 15m) than on the tester's Windows 7 machine (25m). It's worth noting that the tests launch two Jetty servers for the application under test (one for the REST services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4 GB of RAM, and during testing about 400 MB of RAM is free and 1.2 GB of swap is in use. TeamCity and Firefox use about 65% of the CPU during testing. There is no Firefox process left after testing ends. My knowledge of Selenium is weak; I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after the Selenium tests. I've tried to give you all the information I have, but feel free to ask for more.
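
    For reference, a minimal headless-display setup of the kind this build relies on (a sketch; the display number mirrors the DISPLAY=localhost:1 from the question, and the 24-bit screen depth is an assumption, since rendering artifacts like mixed-up images are often blamed on a too-shallow default depth):

        # Sketch: start a virtual framebuffer for the build agent (display :1, 24-bit depth)
        Xvfb :1 -screen 0 1280x1024x24 &
        export DISPLAY=localhost:1
        # Firefox instances launched by Selenium will now render into the virtual display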

    Read the article

  • Exchange 2007 - non-email users in the GAL

    - by prolix21
    We have a fairly new Exchange 2007 SP2 install with some GAL issues - basically when you browse the GAL from Outlook 2007 there are users listed that are not email users. If you look in Exchange under recipient configuration these users don't exist. They're AD users, but were never configured in Exchange, yet for some reason they show up in the GAL. The GAL seems to update correctly if new users are added or existing accounts are modified. I was wondering if anyone had any insight on this? I have other Exchange 2007 installs that are fine and don't have this issue. This install was completely clean, no migration or anything of that nature.

    Read the article

  • Strongswan and OpenVPN together

    - by cmorgia
    I have a host in Amazon EC2 which is configured with OpenVPN Access Server. The only client of this server acts as a gateway from a private network. I installed strongSwan 5 on the same host to allow Windows 7 and iOS clients to connect using IPsec. Both services work, but what I cannot figure out is how to configure strongSwan so that the OpenVPN tunnel endpoint is the only gateway available to clients. Basically, I want all the traffic that comes from the IPsec clients to be forwarded entirely into the OpenVPN tunnel. The remote OpenVPN client that exposes the private network has forwarding enabled and appropriate masquerading configured. The only missing piece is to have the OpenVPN tunnel act as the gateway for the IPsec clients.
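
    A minimal sketch of the general approach (the connection name, the 10.99.0.0/24 address pool and the tun0 device name are assumptions, not taken from the question): advertise 0.0.0.0/0 to the IPsec clients, then NAT their traffic out through the OpenVPN tunnel interface.

        # /etc/ipsec.conf (strongSwan 5) - sketch, not a complete connection definition
        conn roadwarrior
            leftsubnet=0.0.0.0/0        # offer "route everything through me" to clients
            rightsourceip=10.99.0.0/24  # virtual IPs handed out to the IPsec clients
            auto=add

        # push the clients' traffic into the OpenVPN tunnel
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 10.99.0.0/24 -o tun0 -j MASQUERADE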

    Read the article

  • Migrating to new dovecot server; Dovecot fails to authenticate using old password database

    - by Ironlenny
    I am migrating my company's intranet from an OS X server to an Ubuntu 12.04 server. We use a flat file to store user names and password hashes. This file is used by Apache and Dovecot to authenticate users. The Ubuntu server is running Dovecot 2.0 while the OS X server is running Dovecot 1.2. I have already migrated WebDAV, which uses Apache for authentication, and authentication works. I'm now in the process of migrating our Prosody server, which uses Dovecot for authentication. Dovecot is up and running, but when I test authentication using either telnet (a login username password) or doveadm (sudo doveadm auth username), I get dovecot: auth: passwd-file(username): unknown user and dovecot: auth: Debug: client out: FAIL#0111#011user=username in my log file. I can use sudo doveadm user username to perform a user lookup and it returns the user's info, and I can generate a password hash locally and Dovecot will authenticate the test password just fine.
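
    For comparison, a Dovecot 2.0 passwd-file definition typically looks like the sketch below (the file path and scheme are assumptions; a mismatch between username_format here and the usernames stored in the flat file is one common cause of "unknown user" after a 1.x to 2.x migration):

        # /etc/dovecot/conf.d/auth-passwdfile.conf.ext - sketch
        passdb {
          driver = passwd-file
          args = scheme=CRYPT username_format=%u /etc/dovecot/users
        }
        userdb {
          driver = passwd-file
          args = username_format=%u /etc/dovecot/users
        }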

    Read the article

  • Sharing information between nodes in Beowulf Cluster

    - by Alejandro Sazo
    I am setting up a Beowulf cluster and I've been reading that it might be necessary to share the cluster users' home directories between the nodes (assuming these users are local to each machine). The other option is to leave each user with their own local home and let the master node handle the communication. Another idea that came up was to use a single LDAP user logged on to every machine in the cluster, which keeps the idea of a home shared between nodes (but then it is only one home for one user). Which approach is better for this kind of cluster? Edit: The cluster is running Open MPI and it will support CUDA and OpenCL.
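
    If the shared-home route is taken, the usual mechanism is an NFS export from the master node; a minimal sketch (host names and mount options are assumptions) looks like this. A shared home also spares MPI users from copying binaries and SSH keys to every node by hand.

        # on the master node: /etc/exports - sketch
        /home  node*(rw,sync,no_subtree_check)

        # on each compute node: /etc/fstab - sketch
        master:/home  /home  nfs  defaults  0  0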

    Read the article

  • Windows 2008 Standard upgrade to Windows 2008 Enterprise failure

    - by Archit Baweja
    Side story: I was in the process of setting up a second Exchange 2010 server for DAG support when I realized that my box needed Windows 2008 Enterprise edition. The box currently has Windows 2008 Standard, Windows updates including SP2, Exchange 2010 with the CAS, HT, and Mailbox roles, the Domain Services role, and the File Services role. When I try to upgrade to Windows 2008 Enterprise, I initially got an error to the effect of "your current version of Windows is more recent than the installation media". My first guess was that it might be SP2-related, so I uninstalled SP2, restarted, and tried again. This time it gave me an error to the effect of "Windows could not configure one or more Windows components. Please restart and try the update again." This was at the last stage of the Windows 2008 Enterprise install, when it says "Completing installation". So I removed the Domain Services role (including demoting the box as a DC). However, I get the same error again. Has anyone seen something like this before and have any suggestions? Also, is there a log file the Windows upgrade program writes that I can consult to see exactly which component is interfering? Update 1: Based on some googling I finally found the setup log file, and it seems that Windows setup had an issue determining whether the .NET 3.0 "feature" was installed or uninstalled. So based off a Win7/Vista TechNet article I'm going to retry the upgrade after removing the .NET 3.0 feature.

    Read the article

  • Cloud hosting for Windows domain controller, possible?

    - by Preet Sangha
    We currently host our own domain controller (small company) locally on dedicated hardware. However, for disaster recovery we're considering the use of virtualisation and cloud hosting. One thought is a virtual primary domain controller hosted in the cloud plus a local (secondary) virtualised server running in the office as a cache. Is this possible, or should I consider something else? We're happy to pay for decent hosting and DR, but this is really outside my experience.

    Read the article

  • Apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, it was recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, as well as added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error, and I have no idea why this is happening. Things I've tried so far: 1) chown -R http .../apache, 2) chmod -R 777 .../apache, 3) using a simple Alias directive to host a static file from that directory. None of these have worked, and I'm at a loss for what I'm doing wrong. Below is the relevant excerpt from my httpd.conf:

        Alias / /usr/local/django/mysite/apache
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>

    So my question is: what am I doing wrong?
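
    For comparison, the mod_wsgi documentation of that era pairs the script alias with an explicit Directory grant; a sketch along those lines (paths taken from the question, everything else an assumption, and not a confirmed fix for the 403) would be:

        # httpd.conf excerpt - sketch
        WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi

        <Directory "/usr/local/django/mysite/apache">
            # Apache also needs execute/search permission on every parent directory
            Order allow,deny
            Allow from all
        </Directory>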

    Read the article

  • Canonical Redirect on Dynamic Mass Virtual Hosts on Apache

    - by Josh
    I have a web app on Apache that allows users to point their domain at the server. Right now I'm using Apache's dynamic mass virtual hosts with the entry VirtualDocumentRoot /www/hosts/%0/docs, so www.companydomain.com points to /www/hosts/www.companydomain.com/docs. The problem is that when the user goes to companydomain.com it points to /www/hosts/companydomain.com/docs. Is there an easy way to have Apache automatically check whether a directory exists for the virtual host and, if not, look for the host name with "www." in front of it? Other subdomains are fine (i.e. abc.domain.com should point to a different directory than def.domain.com), but the whole "www" issue is a mystery to me. I am using dynamic mass virtual hosts so the server does not have to restart after each registration in the application; a different approach is fine as long as Apache isn't restarted each time. How can I accomplish this? Worst case, if there were a way to redirect to a "default" location on the server when the directory is not found, I could always do a check via PHP or something, but that feels a bit hacked together and there has to be a more efficient way. Thanks in advance!
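
    One commonly suggested direction (a sketch only; the paths come from the question, the rule itself is an assumption) is a mod_rewrite guard ahead of the VirtualDocumentRoot lookup that redirects bare domains whose directory is missing to their www. counterpart:

        # sketch: if /www/hosts/<host>/docs does not exist, try the www. variant
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteCond /www/hosts/%{HTTP_HOST}/docs !-d
        RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]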

    Read the article

  • MySQL: creating a user that can connect from multiple hosts

    - by DrStalker
    I'm using MySQL and I need to create an account that can connect from either localhost or from another server, 10.1.1.1. So I am doing:

        CREATE USER 'bob'@'localhost' IDENTIFIED BY 'password123';
        CREATE USER 'bob'@'10.1.1.1' IDENTIFIED BY 'password123';
        GRANT SELECT, INSERT, UPDATE, DELETE ON MyDatabse.* TO 'bob'@'localhost', 'bob'@'10.1.1.1';

    This works fine, but is there a more elegant way to create a user account that is linked to multiple IPs, or does it need to be done this way? My main worry is that in the future permissions will be updated for one 'bob' account and not the other.
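
    MySQL accounts are (user, host) pairs, so there is no single account object spanning several hosts; a quick way to guard against the two entries drifting apart is to compare their database-level privileges directly (a sketch; it assumes these grants live in mysql.db, which is where database-scoped privileges are stored):

        -- sketch: both rows should show identical Y/N columns
        SELECT Host, Db, User, Select_priv, Insert_priv, Update_priv, Delete_priv
        FROM mysql.db
        WHERE User = 'bob';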

    Read the article

  • Getting the error "SMTP server cannot create a file in the queue directory C:\Inetpub\mailroot\Queue\"

    - by Glenn Slaven
    We're using the default SMTP server for our websites to send mail, but in the last day sending messages started failing with this error: "Insufficient system storage. The server response was: 4.3.1 Out of memory". Further digging found this message in the System event log: "SMTP server cannot create a file in the queue directory C:\Inetpub\mailroot\Queue\". I've since given the Everyone account full control of the mailroot folder, but it's still happening. There's enough space on the server, and to the best of my knowledge nothing on the server has been changed.

    Read the article

  • correct routing for multiple devices

    - by helmi
    I have a Debian Lenny machine with 3 interfaces enabled (eth0-2), and I have a problem as follows. eth1 is connected to a router, and this router has port forwarding for port 80. eth2 is connected directly to the internet. If I open a website hosted on my system via the router, it works fine. If I try to open the same site via the eth2 connection, it does not! tshark shows incoming traffic on eth2, but nothing goes out there. iptables accepts everything. My routing table:

        Ziel            Router           Genmask          Flags Metric Ref  Use Iface
        10.9.0.2        *                255.255.255.255  UH    0      0      0 tun0
        212.236.24.128  *                255.255.255.224  U     0      0      0 eth2
        192.168.1.0     *                255.255.255.0    U     0      0      0 eth1
        192.168.0.0     *                255.255.255.0    U     0      0      0 eth0
        10.9.0.0        10.9.0.2         255.255.255.0    UG    0      0      0 tun0
        default         192.168.1.1      0.0.0.0          UG    0      0      0 eth1
        default         212.236.024.129  0.0.0.0          UG    0      0      0 eth2
        default         192.168.0.1      0.0.0.0          UG    0      0      0 eth0
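
    With three default routes in the main table, replies to connections that arrived on eth2 can easily leave via eth1 instead; the usual remedy is policy routing, sketched below (the table name is made up, the gateway 212.236.24.129 is taken from the routing table above, and ETH2_IP stands for the box's own eth2 address, which the question does not give):

        # sketch: answer on eth2 for traffic that came in on eth2
        echo "200 inet2" >> /etc/iproute2/rt_tables
        ip route add default via 212.236.24.129 dev eth2 table inet2
        ip rule add from ETH2_IP/32 table inet2
        ip route flush cache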

    Read the article

  • How do I speed up load times on roaming profiles with Active Directory?

    - by user65712
    I've done some reading, and apparently the main obstacle to fast remote profile syncing is that Samba takes a long time to transfer lots of little files like cookies. After reading Roaming Profiles: Best Practices, I plan to use Folder Redirection, but I want my users to be able to log in as rapidly as possible, even if it means that their data is still coming in when they reach their desktop. Is there a way to do this with a GPO, or a third-party add-on that can load user profile data faster and speed up the login process for users?

    Read the article

  • mod_rewrite to capture subdomain name

    - by Ricky
    I want to write a rewrite scheme such that user1.example.net redirects to example.net/user/user1, user2.example.net redirects to example.net/user/user2, and so on. This is what I have in my .htaccess, but it always redirects to example.net:

        RewriteCond %{http_host} ^[^.]+.example.net [NC]
        RewriteRule ^([^.]+).example.net(.*) http://example.net/user/$1 [R=301,L]

    Can someone please tell me what I did wrong? Thank you.
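
    A common gotcha here is that RewriteRule only ever sees the URL path, never the host name, so the subdomain has to be captured in a RewriteCond and referenced as %1; a sketch of that pattern (the domain is taken from the question, the www exclusion is an assumption):

        # sketch: capture the subdomain in the condition, not in the rule
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.net$ [NC]
        RewriteCond %1 !^www$ [NC]
        RewriteRule ^ http://example.net/user/%1 [R=301,L]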

    Read the article

  • Which browser does my computer use to open a Web page? [on hold]

    - by msh210
    I know little about networking and the Internet, but from what I understand it works, very approximately, as follows: I, sitting at the computer example.com, send a message saying, roughly, "get http://s.tk" to my ISP, which passes the message along, eventually to the machine at s.tk. The s.tk machine sees "example.com has sent 'get http://s.tk'", so it sends some file to its ISP, which passes the file along, eventually back to the machine at example.com. When the file gets back to example.com, my computer, how does my computer know what to do with it? I'm sure the headers (or something else) indicate it's a Web page rather than, say, a Usenet post; that's not my question. My question is: how does it know whether to display the Web page in my open Opera window, my open Firefox window, my other open Firefox window, or, heck, to open a new browser instance?

    Read the article

  • rsync to multiple destinations using same filelist?

    - by Dylan B.
    I'm wondering if it's possible for rsync to copy one directory to multiple remote destinations all in one go, or even in parallel (not necessary, but would be useful). Normally, something like the following would work just fine:

        $ rsync -Pav /junk user@host1:/backup
        $ rsync -Pav /junk user@host2:/backup
        $ rsync -Pav /junk user@host3:/backup

    And if that's the only option, I'll use that. However, /junk is located on a slow drive with quite a few files, and rebuilding the filelist of some ~12,000 files each time is agonizingly slow (~5 minutes) compared to the actual transfer/updating. Is it possible to do something like this, to accomplish the same thing:

        $ rsync -Pav /junk user@host1:/backup user@host2:/backup user@host3:/backup

    Thanks for looking!
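
    One way to avoid rescanning the source for every destination (a sketch, not a confirmed recipe: it assumes the bottleneck really is the directory walk, and the host names are the ones from the question) is to build the file list once and feed it to each run with --files-from:

        # sketch: scan /junk once, reuse the list for every destination
        cd /junk && find . -type f > /tmp/junk.list
        for h in host1 host2 host3; do
            rsync -Pav --files-from=/tmp/junk.list /junk "user@${h}:/backup" &
        done
        wait   # the three transfers run in parallel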

    Read the article

  • Confusion over terminology SSH, Shell, Terminal, Command Prompt and Telnet

    - by byronyasgur
    I don't usually use SSH if I can avoid it, but when I have to I do, of course, and I've seemingly done this for years while still managing to remain slightly confused about these different terms. From my basic research, this is my understanding; could someone verify/correct it?

        Telnet ... before SSH, not secure
        SSH ... (secure shell) the general name of the system/protocol
        Shell ... short name for SSH
        Command Line/Command Prompt ... the Windows version
        Terminal ... the Unix version, also used by Apple

    Two further questions: What is the Linux version commonly called, is it just called SSH? And what is bash?

    Read the article

  • How can I connect to a Windows server using a Command Line Interface? (CLI)

    - by HopelessN00b
    Especially with the option to install Server Core in Server 2008 and above, connecting to Windows servers over a CLI is an increasingly useful ability, if not one that's very widespread amongst Windows administrators. Practically every Windows GUI management tool has an option to connect to a remote computer, but there is no such option in the built-in Windows CLI (cmd.exe), which gives the initial impression that this might not be possible. Is it possible to remotely manage or administer a Windows Server using a CLI? And if so, what options are there to achieve this?
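
    For illustration, two built-in possibilities (a sketch; both assume WinRM has been enabled on the target, and the server name is made up):

        # sketch: remote command shell over WinRM (Vista/Server 2008 and later)
        winrs -r:server01 cmd

        # sketch: interactive PowerShell remoting session (PowerShell 2.0 and later)
        Enter-PSSession -ComputerName server01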

    Read the article

  • How to bypass AllowTCPFowarding=no by installing own forwarder?

    - by Eric B.
    In the man page for sshd_config, the AllowTcpForwarding option states:

        AllowTcpForwarding
            Specifies whether TCP forwarding is permitted. The default is "yes".
            Note that disabling TCP forwarding does not improve security unless
            users are also denied shell access, as they can always install their
            own forwarders.

    How do I install my own forwarder? I have a remote server on which I disabled TCP forwarding a long while ago. I would like to "enable" it for myself only, by using my own forwarder, while keeping forwarding closed to the other users. I've looked around, but cannot seem to find the right packages to accomplish this. Can anyone please elaborate? Thanks! Eric
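
    As an illustration of what "install their own forwarder" can mean in practice (a heavily hedged sketch: it assumes socat is available on the client, nc on the server, and that shell access is allowed; host and port values are made up), one can bridge a local listening port to a remote target over the ssh session's stdin/stdout instead of using SSH's built-in -L forwarding. This is also why the man page warns that the option is not a security boundary on its own.

        # sketch: each connection to localhost:8080 is relayed through a fresh
        # ssh session to port 80 on "internalhost", without SSH port forwarding
        socat TCP-LISTEN:8080,fork,reuseaddr EXEC:"ssh user@remote nc internalhost 80"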

    Read the article

  • How do I remove Xen kernel and put normal kernel on RHEL 5

    - by yan bellavance
    I have 3 identical machines (hardware-wise) that all have RHEL 5.3 installed. 2 of those machines have the Xen kernel and one doesn't. I cannot install the NVIDIA drivers on the ones that have the Xen kernel, so I was wondering how I ended up with it and how to replace it with the normal kernel. Could this have happened at install time, for example when I was asked which components to install (development, virtualization, web server)?
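
    A sketch of the usual swap on RHEL 5 (package and file names are the stock ones; treat this as an outline, not verified steps): install the regular kernel alongside the Xen one, make it the default boot entry, then remove the Xen kernel.

        # sketch: replace kernel-xen with the regular kernel on RHEL 5
        yum install kernel                 # pulls in the non-Xen kernel
        vi /boot/grub/grub.conf            # set "default=" to the new kernel's entry
        # after rebooting into the new kernel:
        yum remove kernel-xen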

    Read the article

< Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >