Search Results

Search found 41497 results on 1660 pages for 'fault'.


  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives a reboot. My main motivation was differentiating between instances at the prompt using the \h escape in PS1.

    EDIT: I also changed permissions on my home directory, making it group-writable. END EDIT

    Now I can no longer SSH into the machine. The short of it is the error "Permission denied (publickey)". Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
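
    A group-writable home directory is exactly the kind of thing sshd's StrictModes check rejects, so the hostname change is probably a red herring. A minimal recovery sketch, assuming the root volume can be attached to a second, working instance (the device name, mount point, and ubuntu username are placeholders):

        # On a rescue instance, after attaching the broken instance's root volume as /dev/xvdf:
        sudo mount /dev/xvdf1 /mnt/rescue
        # Tighten the permissions sshd expects -- the home directory must not be group-writable
        sudo chmod 755 /mnt/rescue/home/ubuntu
        sudo chmod 700 /mnt/rescue/home/ubuntu/.ssh
        sudo chmod 600 /mnt/rescue/home/ubuntu/.ssh/authorized_keys
        sudo umount /mnt/rescue
        # Reattach the volume to the original instance and boot it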


  • Applying memory limits to screen sessions

    - by CollinJSimpson
    You can set memory usage limits for standard Linux applications in /etc/security/limits.conf. Unfortunately, I previously thought these limits apply only to user applications and not to system services. This means that users can bypass their limits by launching applications through a system service such as screen.

    I'd like to know if it's possible to let users use screen but still enforce application limits. Jeff had the great idea of using nohup, which obeys user limits (wonderful!), but I would still like to know if it's possible to mimic the useful windowing features of screen.

    EDIT: It seems my screen sessions are now obeying the hard address-space limits defined in /etc/security/limits.conf. I must have been making some mistake. I recently installed cpulimit, but I doubt that's the solution. Thanks for the nohup tip, Jeff! It's very useful. Link to CPU Limit package
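
    For reference, a hard address-space cap of the kind described here would look like this in /etc/security/limits.conf (the group name and the 1 GB figure are made-up values):

        # /etc/security/limits.conf -- 'as' is address space, value in KB
        @students        hard    as      1048576

        # Verify from a fresh login shell, and again from inside a screen window:
        #   ulimit -v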


  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure, I wanted a folder for each site in /public_html, so the structure looks like this:

        public_html/
            myportfolio.com/
                ... my files ...
            anothersite.com/
                ... my files ...
            thirdsite.com/
                ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain, e.g. myportfolio.com, expects its files to be at the root of /public_html rather than within the folder I created. I set up a redirect pointing requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially, except (at least in my WordPress installation) it still references its root folder as public_html.

    TL;DR: What is the best way to set up multiple-site hosting in a shared hosting environment (i.e. where I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.
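
    The usual workaround on shared hosts is an internal rewrite rather than a redirect, so the primary domain serves out of its subfolder without the URL ever changing. A hedged sketch for /public_html/.htaccess, using the domain and folder names from the layout above:

        RewriteEngine On
        # Only rewrite requests for the primary domain
        RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]
        # Avoid rewriting paths already inside the subfolder
        RewriteCond %{REQUEST_URI} !^/myportfolio\.com/
        RewriteRule ^(.*)$ /myportfolio.com/$1 [L]

    With WordPress you would typically also point the siteurl and home options at the bare domain so it stops referencing the subfolder in generated links.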


  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized: storage: 2TB; users: ~15; file sizes: < 100GB; should be easily reachable (mountable as a network drive, or at least with solid client software).

    My first question: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know of no hoster that allows 2TB on a small 2GHz dual-core, 2GB RAM machine (that would be enough by far; I just need the space), and connecting it via NFS or FTP over the Internet seems a bit strange and cripples performance. Do you have any advice on where to get such a storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question becomes irrelevant, but the second one is the more important one anyway, so don't do much more than recommend something based on experience; you don't have to crawl through hosting services for hours.) livedrive, for example, offers 5TB for 17 € / month; I'd be happy with 2TB for 20 €. The caveat: it doesn't allow multiple users, which leads me to my second question.

    Where are the security problems? Which protocol is sufficient (I want private and "public" folders, the usual "every user has their own space plus a public space" thing), secure and fast? I'd tend toward (S)FTP. The problem with FTP: most of those hosting services don't even allow FTP with multiple users, and a single user leads me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?
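
    If the storage service speaks SFTP, the "mount every subfolder" idea above can be done with sshfs on the main server. A rough sketch, assuming a single storage account; the host, account, key, and paths are placeholders:

        # Mount each user's slice of the one storage account as a separate directory
        sshfs storageuser@storage.example.com:/users/alice /srv/files/alice \
              -o allow_other,IdentityFile=/root/.ssh/storage_key

        # A shared area mounted once, read-only for everyone
        sshfs storageuser@storage.example.com:/public /srv/files/public \
              -o allow_other,ro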


  • Vmware Player 3.0 - cannot ping 32 bits guest from 64 bits (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using Windows 7 Ultimate 64-bit as the host and have CentOS 5.4 (64-bit) as one guest and Windows XP Professional SP3 (32-bit) as another guest.

    From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also on the host. The network is pretty basic: I'm using vmnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything else is working fine: I have Internet access from the host and from both guests, and port forwarding to the XP guest works too. The only problem is that I cannot reach the XP guest through vmnet8.

    I monitored the traffic using Wireshark (on the host and in the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host and being answered by the guest, but after that no echo request leaves the host. The same occurs if I ping XP from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest, and I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest).

    I want to maintain this setup (using NAT to share the host's Internet connection). Any suggestions?


  • A special user does not appear in Windows login screen

    - by shayan
    In the list of users (under Local Users and Groups in my Windows 7) I have an ASPNET user. Its description says "Account used for running the ASP.NET worker process (aspnet_wp.exe)" and its full name is "ASP.NET Machine Account". The thing is, this user does not appear on the Windows login screen.

    What I have found:

        - It is not a "Built-in security principal" user
        - It belongs only to the Users group
        - I don't have the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts key in my registry

    In every sense I can see, it is a normal user, yet it does not appear on the login screen. Now the questions are: What makes it special? How can I create a user like this?
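
    One documented mechanism that produces exactly this behaviour is the SpecialAccounts\UserList key the question mentions: a REG_DWORD value of 0 named after an account hides it from the welcome screen. A hedged sketch for hiding a hypothetical "svcaccount" user (run from an elevated prompt; the key is created if absent):

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v svcaccount /t REG_DWORD /d 0 /f

    Its absence here does not rule out other mechanisms for the ASPNET account, but the key above is the standard way to create a "normal but hidden" user yourself.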


  • The procedure entry php_mb_check_encoding_list could not be located

    - by BlackFire27
    I get this error:

        The procedure entry php_mb_check_encoding_list could not be located in the dynamic link library php_mbstring.dll

    The error is related to the php.ini settings, I think; I suspect it has difficulty locating the extension. I put an environment path to php.exe and its extensions folder, but when I open the command line and call php.exe, that error is thrown. I use Windows, and I have this set in my php.ini:

        ; On windows:
        extension_dir = "C:\xampp\php\ext"

    I installed XAMPP; the PHP version is the latest. Another weird thing: when I try to print phpinfo, nothing is printed out:

        <?php echo phpinfo(); ?>

    Nothing comes out?! The question is why, and how can I prevent the error from being thrown?
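
    This symptom usually means php.exe and php_mbstring.dll come from different PHP builds -- for example, a stray php.exe elsewhere on PATH loading XAMPP's ext folder. A quick hedged check from cmd:

        rem List every php.exe on PATH -- there should be exactly one
        where php.exe

        rem Confirm the version you expect
        php -v

        rem Run without php.ini; if the error disappears, the ini/extension mix is at fault
        php -n -v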


  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments I have used Nginx to serve static assets such as JS/CSS and to reverse proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSockets traffic directly to Node, bypassing Nginx. This is my current setup:

        Varnish         : port 80   - routing HTTP requests to Nginx and WebSockets directly to Node
        Nginx           : port 8080 - serving static assets like CSS/JS
        Node.js Express : port 3000 - serving the app, over HTTP + WebSockets

    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires stunnel or some other solution, and it's also not load-balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
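
    For reference, the Varnish-in-front arrangement described above usually hinges on piping Upgrade requests straight to Node. A minimal VCL 2/3-era sketch; the backend names and ports follow the layout above:

        backend node  { .host = "127.0.0.1"; .port = "3000"; }
        backend nginx { .host = "127.0.0.1"; .port = "8080"; }

        sub vcl_recv {
            if (req.http.Upgrade ~ "(?i)websocket") {
                set req.backend = node;   # hand the raw socket to Node
                return (pipe);
            }
            set req.backend = nginx;
        }

        sub vcl_pipe {
            # Carry the upgrade headers through on the piped connection
            if (req.http.Upgrade) {
                set bereq.http.Upgrade = req.http.Upgrade;
                set bereq.http.Connection = req.http.Connection;
            }
        }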


  • Copy File Contiguously to Disk from OSX/Unix/Linux to FAT32 FS?

    - by alharaka
    So the Sysinternals guys have that cool contig.exe utility that lets me ensure a file is contiguous. I need to copy ISO files over to a FAT32 USB flash key. Grub4DOS requires the files to be contiguous, but I do not have Windows access at the moment. Is there a way to copy a file so it is contiguous on the target drive, or a tool like the aforementioned that will make an existing file contiguous? Again, I need it on FAT32, and there lies the rub.
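
    One low-tech approach that tends to work, assuming the key can be wiped: on a freshly formatted FAT32 volume with no deletions yet, files copied one at a time land in consecutive clusters. A hedged sketch for Linux -- the device name is a placeholder, so double-check it before formatting:

        # WARNING: destroys everything on /dev/sdX1
        sudo mkfs.vfat -F 32 /dev/sdX1
        sudo mount /dev/sdX1 /mnt/usb
        # Copy the ISOs one at a time so each gets an unbroken run of clusters
        cp grldr menu.lst /mnt/usb/
        cp image.iso /mnt/usb/
        sudo umount /mnt/usb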


  • Invalidating unused ssh keys

    - by JH
    I am using one SSH account for all my Subversion users. They send me their public keys and I put them in the svn account's .ssh/authorized_keys, and then they can check out code from Subversion over an SSH tunnel. So far everything works fine. The problem, though, is that I want to invalidate keys that have not been used for some time (say, one month). Does anyone know a way to make sshd log the public key when a user signs in? Thanks.
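
    sshd logs the fingerprint of the key that authenticated once the log level is raised; matching those fingerprints against authorized_keys then shows which lines are dormant. A hedged sketch -- the log path assumes a Debian-style syslog layout:

        # /etc/ssh/sshd_config
        LogLevel VERBOSE

        # Fingerprints of keys that actually signed in:
        grep -E 'Accepted publickey|Found matching' /var/log/auth.log

        # Fingerprint of each authorized_keys line, for comparison:
        while read key; do
            echo "$key" | ssh-keygen -lf /dev/stdin
        done < /home/svn/.ssh/authorized_keys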


  • Mercurial SSH process blocks when run from Local System

    - by Liedman
    We are using Mercurial over SSH for our development. We use Hudson for continuous integration and have deployed it on Tomcat, running on a Windows 2003 Server under the Local System account. Mercurial is configured to use PuTTY's plink.exe as its ssh command in Mercurial.ini, together with a private key for SSH authentication.

    When Hudson attempts any Mercurial command over SSH, the operation just blocks. I can see the three processes being started: hg.exe, cmd.exe and plink.exe. On the remote machine, I can also see the SSH session being opened and the authentication key being accepted. After that, nothing appears to happen, and everything just blocks, seemingly forever. (As a side note, Subversion over SSH works from Hudson to the same server, using the same user and authentication key.)

    A solution would of course be best, but at least a hint on how to debug it further would be nice, since I'm stuck and haven't even got an error message right now.
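
    A classic cause of exactly this hang is plink waiting on its interactive "Store key in cache? (y/n)" prompt: PuTTY caches host keys per user in the registry, and Local System has never accepted this host. A hedged sketch -- psexec is from Sysinternals, and the key path and hostname are placeholders:

        rem Open a console as Local System and accept the host key once
        psexec -s -i cmd.exe
        plink -i C:\keys\hudson.ppk hg@repo.example.com exit

        rem Alternatively, fail fast instead of hanging, so errors reach the build log.
        rem In Mercurial.ini:
        rem     ssh = plink -batch -i C:\keys\hudson.ppk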


  • Configuring OpenLDAP as a Active Directory Proxy

    - by vadensumbra
    We are trying to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using an LDAP server as a proxy, so that only one server in the DMZ has to connect to the LAN where the AD server sits.

    With some googling it was no problem to configure slapd (see slapd.conf below), and it seemed to work when using the ldapsearch tool, so we tried to use it in an Apache htaccess file to authenticate users through the LDAP proxy. And here comes the problem: we found out the username in AD is stored in the attribute 'sAMAccountName', so we configured it in .htaccess (see below), but the login didn't work. In the syslog we found that the filter for the search was not '(&(objectClass=*)(sAMAccountName=authtest01))' like it should be, but '(&(objectClass=*)(?=undefined))', which we found out is slapd's way of saying the attribute does not exist or the value is syntactically wrong for that attribute. We thought of a missing schema and found microsoft.schema (and its .std / .ext variants) and tried to include them in slapd.conf, which does not work. We found no working schemata, so we just picked out the part about sAMAccountName and built a microsoft.minimal.schema (see below) that we included. Now we get a more precise log in the syslog:

        Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))"
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=

    Using our Apache htaccess directly against the AD via LDAP works, though. Anyone got a working setup? Thanks for any help in advance.

    slapd.conf:

        allow bind_v2
        include /etc/ldap/schema/core.schema
        ...
        include /etc/ldap/schema/microsoft.minimal.schema
        ...
        backend ldap
        database ldap
        suffix "ou=xxx,dc=int,dc=xxx,dc=de"
        uri "ldap://80.156.177.161:389"
        acl-bind bindmethod=simple
            binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de"
            credentials=xxxxx

    .htaccess:

        AuthBasicProvider ldap
        AuthType basic
        AuthName "AuthTest"
        AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub"
        AuthzLDAPAuthoritative On
        AuthLDAPGroupAttribute member
        AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de
        AuthLDAPBindPassword test123
        Require valid-user

    microsoft.minimal.schema:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )
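
    The "illegal value" error is consistent with the minimal schema lacking matching rules: without an EQUALITY rule, slapd cannot evaluate an equality filter on the attribute. A hedged version of the attributetype with the standard case-insensitive string matching rules added:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )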


  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran freeradius in debug mode, but it rejects all authentication.

    MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    radiusd debug-mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem and how should I solve it?
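
    The debug output itself points at the likely fix: the pap module found no "known good" password because current FreeRADIUS versions expect the attribute Cleartext-Password with the := operator rather than Password with ==, and the forced Auth-Type of Local gets in the way. A hedged SQL sketch:

        -- Replace the two radcheck rows with a single cleartext-password entry
        DELETE FROM radcheck WHERE username = 'test';
        INSERT INTO radcheck (username, attribute, op, value)
        VALUES ('test', 'Cleartext-Password', ':=', 'test123');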


  • saslauthd authentication error

    - by James
    My server has developed an unexpected problem where I am unable to connect from a mail client. I've looked at the server logs and the only thing that looks to identify a problem are events like the following:

        Nov 23 18:32:43 hig3 dovecot: imap-login: Login: user=, method=PLAIN, rip=xxxxxxxx, lip=xxxxxxx, TLS
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: connect from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: xxxxxxx.co.uk[xxxxxxxx]: SASL LOGIN authentication failed: generic failure
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: lost connection after AUTH from xxxxxxx.co.uk[xxxxxxx]
        Nov 23 18:32:56 hig3 postfix/smtpd[11653]: disconnect from xxxxxxx.co.uk[xxxxxxx]

    The problem is unusual, because just half an hour earlier at my office I was not being prompted for a username and password in my mail client. I haven't made any changes to the server, so I can't understand what would have happened to make this error occur. Searches for the error messages yield various results with 'fixes' I'm uncertain of (I obviously don't want to make it worse or fix something that isn't broken).

    When I run

        testsaslauthd -u xxxxx -p xxxxxx

    I get the following result:

        connect() : No such file or directory

    But when I run

        testsaslauthd -u xxxxx -p xxxxxx -f /var/spool/postfix/var/run/saslauthd/mux -s smtp

    I get:

        0: OK "Success."

    I found those commands on another forum and am not entirely sure what they mean, but I'm hoping they might give an indication of where the problem lies. If it makes any difference, I'm running Ubuntu 10.04.1, Postfix 2.7.0 and Webmin/Virtualmin.
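
    The two test results together say the daemon is alive but listening on the chrooted socket path, while the default client path points elsewhere. On Ubuntu the usual arrangement is to move saslauthd's socket into Postfix's chroot; a hedged sketch of the relevant pieces:

        # /etc/default/saslauthd
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        # Then restart the daemon and check the socket appears:
        service saslauthd restart
        ls /var/spool/postfix/var/run/saslauthd/mux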


  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. The primary role is an NFS file server for Mac OS X clients.

    Hardware:

        Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config:

        ifconfig
        eth0      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
                  TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
                  Interrupt:20 Memory:f7d00000-f7d20000

        eth1      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
                  TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
                  Interrupt:59

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:507 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

        nano /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.150
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

        # Second network interface
        auto eth1
        iface eth1 inet static
            address 192.168.0.100
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

    Currently, on the OS X clients, I am using nfs://192.168.0.100/Volumes/Storage to mount the NFS share. My problem is: why would all the data (and I have checked using various monitoring tools: bmon, iftop, glances, etc.) be going over the slower connection? Also, after configuring /etc/network/interfaces with the above setup, I always get an error message at boot about waiting for network configuration. Are these connected?
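
    Two interfaces on the same subnet is the usual culprit here: Linux will answer ARP for either address on whichever NIC it likes ("ARP flux"), and with two identical routes (and two default gateways, which also explains the boot warning) the kernel typically picks the first interface. A hedged sketch of the common mitigations:

        # Option 1 (cleanest): put each NIC in its own subnet,
        # e.g. move eth1 to 192.168.1.0/24 and mount nfs://192.168.1.100/...

        # Option 2: keep one subnet but pin ARP behaviour, in /etc/sysctl.conf:
        net.ipv4.conf.all.arp_ignore = 1
        net.ipv4.conf.all.arp_announce = 2

        # Either way, define only one default gateway:
        # remove the gateway and dns-nameservers lines from the eth1 stanza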


  • Unable to terminate extra EC2 instances

    - by Deborah Cole
    I'm just setting up my AWS server, and I'm trying to use the EC2 console to terminate some extra instances that I generated via the AWS Toolkit for Eclipse's "New AWS Java Web Project" utility. Unfortunately, every time I stop and then terminate such an instance via the EC2 console, it automatically recreates and reactivates itself! I really don't want to be paying for four dev systems when I only need one, so can somebody please clue me in? Please explain gently... I'm new to this environment.
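
    Self-resurrecting instances are the signature of an Auto Scaling group: the group notices the terminated instance and launches a replacement, and Eclipse-driven deployments commonly go through Elastic Beanstalk, which maintains one group per environment. The fix is to delete the environment (or group), not the instances. A hedged sketch with the AWS CLI; the environment name is a placeholder:

        # See whether an Auto Scaling group is replacing the instances
        aws autoscaling describe-auto-scaling-groups

        # Terminate the whole Beanstalk environment, which removes its group too
        aws elasticbeanstalk terminate-environment --environment-name my-dev-env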


  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I am currently using Munin to capture this and other data for the entire server; however, I can't seem to find a way to do it per vhost. This link describes using a module called mod_watch, which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ -- and the file listed there as compatible with Apache 2.x is reported to have problems with missing vhosts and with reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this, it should be easy enough to write a new Munin plugin.

    Edit: What I'd really like to see is something similar to the Apache server-status scoreboard page, with the number of connections/requests separated by virtual host. This would let me check which vhost may be experiencing a traffic spike in real time, and would also provide the data needed for a Munin module (or some alternative performance monitoring/analysis system).
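
    Absent mod_watch, one reliable approach is to have Apache tag every request with its vhost in a dedicated log and tally that. A hedged sketch -- the log path and format name are made up:

        # httpd.conf: one extra access log whose first field is the vhost name
        LogFormat "%v %B" vhostbytes
        CustomLog /var/log/apache2/vhost-traffic.log vhostbytes

        # Requests and bytes per vhost, e.g. from a Munin plugin that
        # remembers its last read offset between polls:
        awk '{ req[$1]++; bytes[$1] += $2 }
             END { for (v in req) print v, req[v], bytes[v] }' \
            /var/log/apache2/vhost-traffic.log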


  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start.

        [root@box ~]# uname -a
        Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux
        [root@box ~]# cat /etc/redhat-release
        Fedora release 17 (Beefy Miracle)
        [root@box ~]# systemctl enable httpd.service
        ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
        [root@box ~]# systemctl start httpd.service
        Job failed. See system journal and 'systemctl status' for details.
        [root@box ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
             Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
             Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago
            Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE)
             CGroup: name=systemd:/system/httpd.service

        Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory
        [root@box ~]# ls -al /usr/sbin/httpd
        -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd
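
    status=226/NAMESPACE means systemd failed while setting up the unit's namespace, before httpd ever ran; on Xen guests this is commonly the PrivateTmp=true line in httpd.service tripping over the /tmp setup. A hedged sketch to confirm (the copy into /etc/systemd/system overrides the stock unit without touching the packaged file):

        cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
        # In /etc/systemd/system/httpd.service, change:
        #     PrivateTmp=true
        # to:
        #     PrivateTmp=false
        systemctl daemon-reload
        systemctl start httpd.service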


  • Troubleshooting server performance with a hosted server?

    - by ProfessionalAmateur
    We are in a tough spot. We have a hosted server with the following specs:

        OS: Windows Server 2008 R2 Enterprise SP1 64-bit
        Processor: Intel Xeon X7550 @ 2GHz (8 processors)
        RAM: 16GB

    The file system is on a SAN or NAS (not sure). We are seeing very odd issues where a user will open a 25MB .xlsb file and it sometimes takes literally 60-120 seconds. The server is just dog slow for Excel. Resources are not being pegged: CPU never jumps up, there is plenty of RAM... it's just oddly slow. Our host has been looking at the issue for several weeks with not much to show for it.

    Is there a utility I can run myself that will help track down our issue? I have found Server Performance Advisor V1.0 -- any experience in using it? Our host is ultimately responsible for fixing this, but we are going on one month and our users are losing patience. Any tips would be helpful.
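
    One quick way to separate storage latency from Excel itself is to time a raw copy of the same file: if the copy crawls too, the SAN/NAS path is suspect rather than Office. A hedged PowerShell sketch; the paths are placeholders for the SAN-backed volume:

        # Time reading the file off the SAN-backed volume
        Measure-Command { Copy-Item 'D:\Data\big.xlsb' $env:TEMP }

        # Compare with a purely local copy to rule the array in or out
        Measure-Command { Copy-Item "$env:TEMP\big.xlsb" "$env:TEMP\big2.xlsb" }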


  • How to configure multiple VLAN's on a lacp link aggregation on openindiana(oi_151a7)?

    - by reco
    I have four links aggregated:

        $ dladm show-link
        LINK        CLASS    MTU    STATE    BRIDGE    OVER
        igb0        phys     1500   up       --        --
        igb1        phys     1500   up       --        --
        igb2        phys     1500   up       --        --
        igb3        phys     1500   up       --        --
        aggr0       aggr     1500   up       --        igb0 igb1 igb2 igb3

    I managed to create one VLAN on the aggr0 link:

        $ dladm show-vlan
        LINK        VID      OVER      FLAGS
        vlan1       9        aggr0     -----

    If I try to add more, I get the following error:

        $ dladm create-vlan -v 3 -l aggr0 vlan2
        dladm: create operation failed: invalid argument


  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they go down seemingly out of the blue. After going through the log files, I found out that apparently ntpd deletes all eth0 interfaces and that dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which causes several of our websites to be inaccessible.

    Can someone explain to me why ntpd deletes interfaces? Can/should that be turned off, or can/should I configure dhclient to bring the virtual interfaces back up automatically, too?

    EDIT // The log files that I should've posted:

        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled

    The eth0 config:

        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1

    And the virtual interfaces (I posted the first one only; they look the same for the most part):

        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
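
    The ntpd lines are a symptom, not the cause: dhclient bounced eth0 on lease renewal, ntpd merely logged the addresses vanishing, and the eth0:N aliases never return because dhclient only manages eth0 itself. One hedged workaround is a dhclient exit hook that re-ups the aliases after every renewal; the hook path varies by release (/etc/dhclient-exit-hooks is the older CentOS convention), and the alias numbers follow the configs above:

        #!/bin/sh
        # /etc/dhclient-exit-hooks (must be executable)
        case "$reason" in
            BOUND|RENEW|REBIND|REBOOT)
                for alias in 0 1 2 3 4 5 6; do
                    /sbin/ifup eth0:$alias
                done
                ;;
        esac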


  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API implemented with ServiceStack and hosted in IIS. While load testing the API we discovered that the response times are good, but they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers behind a load balancer, and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints.

    However, as soon as we increase the total number of concurrent users we see a significant increase in response times. Increasing the load to 5,000 concurrent users per server gives us an average response time per endpoint of around 7 seconds. Memory and CPU on the servers stay quite low, both while the response times are good and after they deteriorate: at peak, with 10,000 concurrent users, CPU averages just below 50% and RAM sits around 3-4GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere.

    The below screenshot shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second; to the right of the screenshot you can see the requests-per-second graph becoming really erratic. This is the main indicator for slow response times: as soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether it is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool
                maxConcurrentRequestsPerCPU="5000"
                maxConcurrentThreadsPerCPU="0"
                requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return the result as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. We have a 'cache runner' implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
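
    Flat CPU plus a sudden cliff is the classic ThreadPool-starvation signature: blocking waits on external calls consume worker threads faster than the pool's default injection rate replaces them. One hedged experiment is to raise the pool's floor and the outbound connection limit; the figures below are illustrative, not tuned:

        <!-- machine.config processModel: autoConfig must be off for the minimums to apply -->
        <system.web>
            <processModel autoConfig="false"
                          minWorkerThreads="200"
                          minIoThreads="200" />
        </system.web>

        <!-- web.config: allow more than two concurrent HTTP connections per external host -->
        <system.net>
            <connectionManagement>
                <add address="*" maxconnection="1000" />
            </connectionManagement>
        </system.net>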


  • Advice on networking career

    - by fmysky
    Hello! I need some insights on a networking career. I have a valid CCNA and a few months of experience as a junior network analyst. My focus is the type of job where I can administer networking/storage hardware and possibly manage servers/workstations too. I am new to this job area, and specifically new to a city like LA. I am currently unemployed, so my question is: should I continue with my Cisco certifications (routing/wireless), with other CompTIA certs, or with both, to get a reliable job? I am really interested in CCNA Wireless, but watching Craigslist (LA) and other job sites, it seems people need more Cisco voice people, while some others ask for Network+ certs. I have scarce financial resources, so it's better to make some good decisions now. I have not been applying yet due to some personal problems, but I will soon. Thank you!

    P.S. I don't know if I can ask questions like this here. Sorry about that.


  • Setting up Pure-FTPd with admin/user permissions for same directory

    - by modulaaron
    I need to set up two Pure-FTPd accounts, ftpuser and ftpadmin. Both will have access to a directory that contains two subdirectories, upload and download. The permissions criteria are as follows:

        - ftpuser can upload to /upload but cannot view the contents (blind drop).
        - ftpuser can download from /download but cannot write to it.
        - ftpadmin has full read/write permissions to both, including file deletion.

    Currently, the first two are not a problem: disabling read access on /upload and write access on /download for ftpuser did the job. The problem is that when a file is uploaded by ftpuser, its permissions are set to 644, meaning that user ftpadmin can only read it (note that all FTP directories are chown'd to ftpuser:ftpadmin). How can I give ftpadmin the power he so rightfully deserves?
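
    The 644 comes from Pure-FTPd's own umask setting, which pairs a file umask with a directory umask; a file umask of 113 yields group-writable 664 files. A hedged sketch, assuming a Debian-style package layout for the config file:

        # Config-file style: file umask and directory umask on one line
        echo "113 002" > /etc/pure-ftpd/conf/Umask
        service pure-ftpd restart

        # Or when launching by hand:
        pure-ftpd -U 113:002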


  • cPanel Virtfs won't umount

    - by JPerkSter
    Anyone have any experience with virtfs on cPanel servers? I can't seem to get the mounts to unmount, as umount claims they are not mounted even though they appear in /proc/mounts:

        [root@Server ~]# cat /proc/mounts | grep user
        /dev/root /home/virtfs/user/lib ext3 rw,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/opt ext3 rw,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/sbin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/man ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/X11R6 ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/kerberos ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/libexec ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/Zend ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/IonCube ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/include ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/spool ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/lib ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/cpanel ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/run ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/log ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda6 /home/virtfs/user/tmp ext3 rw,nosuid,nodev,noexec,noatime,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/bin ext3 rw,errors=continue,data=ordered 0 0

        [root@Server ~]# for i in `cat /proc/mounts | grep virtfs | grep user | awk '{print $2}'`; do umount $i; done
        umount: /home/virtfs/user/lib: not mounted
        umount: /home/virtfs/user/opt: not mounted
        umount: /home/virtfs/user/usr/lib: not mounted
        umount: /home/virtfs/user/usr/sbin: not mounted
        umount: /home/virtfs/user/usr/share: not mounted
        umount: /home/virtfs/user/usr/bin: not mounted
        umount: /home/virtfs/user/usr/man: not mounted
        umount: /home/virtfs/user/usr/X11R6: not mounted
        umount: /home/virtfs/user/usr/kerberos: not mounted
        umount: /home/virtfs/user/usr/libexec: not mounted
        umount: /home/virtfs/user/usr/local/bin: not mounted
        umount: /home/virtfs/user/usr/local/share: not mounted
        umount: /home/virtfs/user/usr/local/Zend: not mounted
        umount: /home/virtfs/user/usr/local/IonCube: not mounted
        umount: /home/virtfs/user/usr/include: not mounted
        umount: /home/virtfs/user/usr/local/lib: not mounted
        umount: /home/virtfs/user/var/spool: not mounted
        umount: /home/virtfs/user/var/lib: not mounted
        umount: /home/virtfs/user/var/cpanel: not mounted
        umount: /home/virtfs/user/var/run: not mounted
        umount: /home/virtfs/user/var/log: not mounted
        umount: /home/virtfs/user/tmp: not mounted
        umount: /home/virtfs/user/bin: not mounted
        umount: /home/virtfs/user/dev: not mounted
        umount: /home/virtfs/user/proc: not mounted
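
    When /proc/mounts and umount disagree like this, the mounts often live in another mount namespace or have already been lazily detached. Two hedged things to try -- lazy unmounts, and cPanel's own cleanup script, which ships with recent versions:

        # Lazy-detach each virtfs mount for the user
        for i in $(awk '/virtfs\/user\// { print $2 }' /proc/mounts); do
            umount -l "$i"
        done

        # cPanel's dedicated cleanup script (path on current versions)
        /usr/local/cpanel/scripts/clear_orphaned_virtfs_mounts --clearall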

