Search Results

Search found 12484 results on 500 pages for 'host'.


  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate the warning message) / run two commands / stay logged in.
    EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted:
      # .bashrc
      # Run two commands and stay logged in to the new server.
      alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'
    Now, after a successful login, verifying the user shows: root pts/0 2011-01-30 22:09. Trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access without being logged into a login shell? (The "/bin/bash -i" was added to stay logged in, but doesn't work quite as expected.) FYI: the question is "How to get this SSH login working" and it is mostly solved; sorry I made a mess.
    Original question:
      # .bashrc
      # Run two commands and stay logged in to the new server.
      alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'
      # (hack) Hide the "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
      alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'
    Both examples 'work' as shown. When I try to add the '2> /dev/null' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work without the warning message? Thank you.
    PS: If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit the hosts file" advice isn't working for me).
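    One way to get both behaviours at once, sketched here as an untested ~/.ssh/config entry instead of an alias (host, port, and user below are placeholders for the redacted values):

      # ~/.ssh/config
      Host gr
          HostName your.server.example
          Port 5xx4x
          User root
          LogLevel ERROR   # assumption: silences client-side warnings such as "POSSIBLE BREAK-IN ATTEMPT!"

    Then connect with: ssh -t gr 'cd /var; ls; exec /bin/bash -i'. Unlike 2> /dev/null, LogLevel ERROR only suppresses the ssh client's own warnings, so stderr from the remote commands still comes through; and exec replaces the command shell, so a plain 'exit' ends the session.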

    Read the article

  • How to disable all bounce back email in exim 4.69

    - by liame
    I have set up an email server to send out solicited newsletters. There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the recipient, especially since I am tracking bounces myself by parsing the log files periodically. What I want is to unconditionally prevent Exim from ever sending a bounce notification email back to a sender. How can I do this? Thank you! (I accidentally posted this to Super User before posting it here; disregard that if you come across it.) What I want is an email server that will accept all incoming email, deliver it accordingly (that is, remotely or locally), and not send a bounce notification to the sender when delivery fails. I log bounces myself, in a database; the only function bounce messages have in my setting is to waste resources and bandwidth. I need to send email fast, and using exiwhat during a run I see a significant number of deliveries to [email protected]. I could potentially increase my email throughput by 10-20% if all bounce emails are eliminated.
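    A common recipe for this in Exim 4 (hedged: test on a staging config first, and adapt to your router order) is a redirect router placed at the top of the routers section that catches messages with the empty envelope sender, which is what every bounce uses, and black-holes them:

      # first router in the routers section of the Exim configuration
      discard_bounces:
        driver = redirect
        senders = :            # matches only the null sender, i.e. bounce messages
        data = :blackhole:

    Because locally generated bounces are routed like any other message, this stops them from ever being delivered, while log-based bounce tracking keeps working.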

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a FirePod, which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but some of the folks who run the recorder shouldn't have root access. However, I can't figure out which lines set up the permissions. I've tried this:
      /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666
    And I have this setup (default install):
      /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
      /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
      /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
      /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
      /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
      /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
      /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
      /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
      /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"
    And I find these lines in /var/log/syslog:
      Apr 30 09:11:30 record kernel: [ 3.284010] ieee1394: Node added: ID:BUS[0-00:1023] GUID[000a9200c7062266]
      Apr 30 09:11:30 record kernel: [ 3.284195] ieee1394: Host added: ID:BUS[0-01:1023] GUID[00d0035600a97b9f]
      Apr 30 09:11:30 record kernel: [ 18.372791] ieee1394: raw1394: /dev/raw1394 device initialized
    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
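    The device node itself is created by the raw1394 kernel module when it loads (that is what the "raw1394: /dev/raw1394 device initialized" syslog line is); udev only applies ownership and permissions afterwards, and the old permissions.d syntax is ignored on Karmic. A minimal sketch of the modern rule form (untested on this exact setup):

      # /etc/udev/rules.d/99-raw1394.rules
      KERNEL=="raw1394", GROUP="audio", MODE="0666"

    Reload rules with "sudo udevadm control --reload-rules", then "sudo modprobe -r raw1394 && sudo modprobe raw1394" (or reboot) and check ls -l /dev/raw1394.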

    Read the article

  • Enable PasswordAuthentication on OpenSuse 10

    - by Riduidel
    Hi, I have a virtual instance of SuSE 10 running in VMware Player, and I'm fighting to get it to allow ssh password authentication. How can I make it work, given that I have already tuned /etc/ssh/ssh_config like this:
      # $OpenBSD: ssh_config,v 1.20 2005/01/28 09:45:53 dtucker Exp $
      Host *
      #   ForwardAgent no
          ForwardX11 yes
          ForwardX11Trusted yes
          PubkeyAuthentication no
          RhostsRSAAuthentication no
          RSAAuthentication no
          PasswordAuthentication yes
          HostbasedAuthentication no
          Protocol 2
          SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
          SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
          SendEnv LC_IDENTIFICATION LC_ALL
    The ssh connection sends me the following logs:
      Incoming packet #0x5, type 51 / 0x33 (SSH2_MSG_USERAUTH_FAILURE)
        00000000  00 00 00 1e 70 75 62 6c 69 63 6b 65 79 2c 6b 65  ....publickey,ke
        00000010  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000020  76 65 00                                          ve.
      Outgoing packet #0x6, type 50 / 0x32 (SSH2_MSG_USERAUTH_REQUEST)
        00000000  00 00 00 04 72 6f 6f 74 00 00 00 0e 73 73 68 2d  ....root....ssh-
        00000010  63 6f 6e 6e 65 63 74 69 6f 6e 00 00 00 14 6b 65  connection....ke
        00000020  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000030  76 65 00 00 00 00 00 00 00 00                    ve........
    This tells me the server expects publickey and keyboard-interactive authentication, which I don't want to use.
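    Worth noting: /etc/ssh/ssh_config configures the outgoing ssh client on that machine; the daemon you are connecting to reads /etc/ssh/sshd_config instead, which is why the server still offers only publickey and keyboard-interactive. A hedged sketch of the server-side change (standard OpenSSH options; restart command per SuSE convention):

      # /etc/ssh/sshd_config
      PasswordAuthentication yes
      UsePAM yes

      # then restart the daemon:
      rcsshd restart        # or: /etc/init.d/sshd restart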

    Read the article

  • get-eventlog issue

    - by Jim B
    I wanted a quick report of some log entries I saw on a server, so I ran:
      Get-EventLog -LogName system -Newest 10 -ComputerName fs1 | fl
    I got events back, but the descriptions were all wrong. Here's an example:
      Index              : 1260055
      EntryType          : Warning
      InstanceId         : 2186936367
      Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
      Category           : (0)
      CategoryNumber     : 0
      ReplacementStrings : {time.windows.com,0x1}
      Source             : W32Time
      TimeGenerated      : 1/25/2010 10:43:31 AM
      TimeWritten        : 1/25/2010 10:43:31 AM
      UserName           :
    Note that if I pull the EventID property, it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell host info:
      Name             : ConsoleHost
      Version          : 2.0
      InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
      UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
      CurrentCulture   : en-US
      CurrentUICulture : en-US
      PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
      IsRunspacePushed : False
      Runspace         : System.Management.Automation.Runspaces.LocalRunspace
    Here is the revised version info:
      Name                       Value
      ----                       -----
      CLRVersion                 2.0.50727.3603
      BuildVersion               6.0.6002.18111
      PSVersion                  2.0
      WSManStackVersion          2.0
      PSCompatibleVersions       {1.0, 2.0}
      SerializationVersion       1.1.0.1
      PSRemotingProtocolVersion  2.1
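    A hedged workaround while the Get-EventLog behaviour is unexplained: Get-WinEvent queries the newer event log API and may resolve descriptions where Get-EventLog's remote message lookup fails (the parameters below are standard in PowerShell 2.0; the remote box needs the Remote Event Log Management firewall exception either way):

      Get-WinEvent -ComputerName fs1 -LogName System -MaxEvents 10 |
          Format-List TimeCreated, Id, ProviderName, Message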

    Read the article

  • Cassandra Remote Connection

    - by Lyuben Todorov
    I'm not managing to connect to Cassandra from outside machines. The database is hosted on a Windows machine and I'm trying to connect from a Mac (but this shouldn't cause problems). A local connection works:
      C:\cassandra\bin>cassandra-cli
      Starting Cassandra Client
      Connected to: "Test Cluster" on 127.0.0.1/9160
      Welcome to Cassandra CLI version 1.1.6
    But it fails from other machines on the same network:
      bin/cassandra-cli --host 192.168.0.10 --port 9160
      org.apache.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out
        at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
        at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
        at org.apache.cassandra.cli.CliMain.connect(CliMain.java:70)
        at org.apache.cassandra.cli.CliMain.main(CliMain.java:246)
      Exception connecting to 192.168.0.10/9160. Reason: Operation timed out.
      Welcome to Cassandra CLI version 1.2.0-beta3
      Type 'help;' or '?' for help. Type 'quit;' or 'exit;' to quit.
    There is a router on the network, but these ports have been triggered: 1024, 7000, 7001, 7199, 9160. The same ports were forwarded to 192.168.0.10 (where Cassandra is hosted). The Cassandra version is 1.0.7. These are the settings I think I need to change in cassandra.yaml:
      listen_address: 192.168.0.10
      rpc_address:
    I'm not really sure if I've missed any steps. Any help would be appreciated.
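    For reference, listen_address is the node-to-node address, while rpc_address is what Thrift clients on port 9160 connect to; leaving rpc_address blank makes Cassandra bind whatever the hostname resolves to, which can easily be the wrong interface. A hedged cassandra.yaml sketch:

      listen_address: 192.168.0.10
      rpc_address: 0.0.0.0      # or 192.168.0.10 to bind just that interface
      rpc_port: 9160

    Restart the Cassandra service and re-test with "bin/cassandra-cli --host 192.168.0.10 --port 9160" from the Mac; "Operation timed out" generally means nothing was listening on that interface (or a firewall dropped the connection), not an authentication problem.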

    Read the article

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a third-party provider API for a great deal of our subsystems (can't get into who it is or what they do). The third party, however, is quite stringent about network access and has no notion of a development sandbox: access is restricted to two or three IP addresses and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team, which is still problematic as people's home IPs change, people travel, we have more than two devs, etc. Wide IP blocks are not permitted by the third party, nor will they allow dynamic-DNS-type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road). As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as third-party-hosted VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion would be a third-party VPN that we'd all connect to, and we'd register it as an IP origin with our API provider. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs, however (I believe 5?), so this would only be a stopgap solution until our team size outstrips our IP count at Amazon. Those were the only viable thoughts that I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.
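    The "third-party VPN as a fixed egress point" idea is workable with one small always-on instance holding a single static IP (e.g. an EC2 Elastic IP): register that one address with the provider and have the whole team route through it. A hedged sketch of the two pieces on the gateway box (subnet and interface names are illustrative):

      # /etc/openvpn/server.conf (excerpts)
      server 10.8.0.0 255.255.255.0
      push "redirect-gateway def1"      # send all client traffic through the tunnel

      # NAT the VPN subnet out of the public interface
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    Every developer then appears to the API provider as the gateway's static IP, regardless of home ISP changes or travel, and the Elastic IP limit stops mattering because only one is needed.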

    Read the article

  • Localhost problems on Mac OS X 10.7

    - by Maya
    Sorry for the duplicate post (http://stackoverflow.com/questions/9720871/localhost-problems-on-mac-os-x-10-7), but I was advised that this is a better place to ask my question. I want to access a MySQL server remotely over ssh, so I used port forwarding to expose the remote port 3306 as port 8383 on my localhost. The ssh connection is established successfully, but when I try to telnet to port 8383 on localhost I get the following error:
      ~: telnet 127.0.0.1 8383
      Trying 127.0.0.1...
      telnet: connect to address 127.0.0.1: Connection refused
      telnet: Unable to connect to remote host
    I tried the same on a friend's laptop (also Mac OS X 10.7) and it worked fine, so it is very unlikely that the ssh connection is the problem. I assume it has something to do with my local network configuration. I turned off IPv6, just in case. My /etc/hosts looks like this:
      127.0.0.1       localhost
      255.255.255.255 broadcasthost
      ::1             localhost
      fe80::1%lo0     localhost
    I would greatly appreciate any help. Please point me in the right direction if this is not the right place to ask this question.
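    For comparison, a minimal sketch of the forwarding setup being described (hostnames are placeholders):

      ssh -N -L 8383:127.0.0.1:3306 user@remote-host
      # then, in another terminal:
      telnet 127.0.0.1 8383

    "Connection refused" on the local end means nothing is listening on 8383 at all, e.g. the tunnel process exited or the forward was rejected, so check "ssh -v" output and "netstat -an | grep 8383" before suspecting /etc/hosts; a hosts-file problem would typically show up as a resolution error, not a refusal on 127.0.0.1.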

    Read the article

  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything works fine except that I can't access it via a Windows share, although I got the link from Alfresco Explorer:
      file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG
    I tried to access it via \\ECSA, but that failed too, so I pinged the server (192.168.0.70 is the server IP):
      C:\Users\user>ping 192.168.0.70
      Pinging 192.168.0.70 with 32 bytes of data:
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Ping statistics for 192.168.0.70:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

      C:\Users\user>ping ECSA
      Ping request could not find host ECSA. Please check the name and try again.
    Some logs of what's going on:
      C:\Users\user>net view ECSA
      System error 1707 has occurred.
      The network address is invalid.

      C:\Users\user>nbtstat -a 192.168.0.70
      Local Area Connection:
      Node IpAddress: [192.168.0.84] Scope Id: []
          NetBIOS Remote Machine Name Table
          Name        Type          Status
          ---------------------------------------------
          ECSA        <20>  UNIQUE  Registered
          ECSA        <00>  UNIQUE  Registered
          WORKGROUP   <00>  GROUP   Registered
          MAC Address = 00-00-00-00-00-00
    CIFS server configuration in file-servers.properties:
      ### CIFS Server Configuration - file-servers.properties ###
      cifs.enabled=true
      cifs.serverName=${localname}A
      cifs.domain=
      cifs.broadcast=255.255.255.255
      cifs.bindto=192.168.0.70
      cifs.ipv6.enabled=false
      cifs.hostannounce=true
      cifs.disableNIO=false
      cifs.disableNativeCode=false
      cifs.sessionTimeout=900
      cifs.maximumVirtualCircuitsPerSession=16
      cifs.tcpipSMB.port=445
      cifs.netBIOSSMB.sessionPort=139
      cifs.netBIOSSMB.namePort=137
      cifs.netBIOSSMB.datagramPort=138
      cifs.WINS.autoDetectEnabled=true
      cifs.WINS.primary=192.168.0.70
      cifs.WINS.secondary=192.168.0.1
      cifs.sessionDebug=
      cifs.pseudoFiles.enabled=true
      cifs.pseudoFiles.explorerURL.enabled=true
      cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
      cifs.pseudoFiles.shareURL.enabled=false
      cifs.pseudoFiles.shareURL.fileName=__Share.url
    How can I fix this issue?
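    Since the IP answers but the NetBIOS name doesn't resolve, a quick hedged test is to pin the name on the Windows client and retry the share by name:

      # add to C:\Windows\System32\drivers\etc\hosts
      192.168.0.70    ECSA

    Then try "net view \\ECSA" and "\\ECSA\Alfresco" again. If that works, the underlying issue is name resolution (WINS/broadcast across the subnet) rather than the Alfresco CIFS server itself.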

    Read the article

  • Cannot join additional domain controllers

    - by Hosm
    Hi all, I had a dead PDC and another not-so-synced domain controller for my domain. Using the comments here (link), the so-called secondary domain controller has now seized the domain roles, and I can verify from dsa.msc that it is a domain controller. I set up another machine (Windows Server 2003) and am about to promote it as an additional domain controller for my domain. When I try to join the new domain controller to the domain, I face a DNS problem. Here is some more detail:
      DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DOMNAME.A.B:
      The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMNAME.A.B
      The following domain controllers were identified by the query:
      update.DOMNAME.A.B
      Common causes of this error include:
      - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
      - Domain controllers registered in DNS are not connected to the network or are not running.
      For information about correcting this problem, click Help.
    It is worth noting that update.DOMNAME.A.B is the current domain controller, to which I'd like to add another controller named pdc.DOMNAME.A.B. The IP address of update.DOMNAME.A.B is 192.168.200.1, and pdc.DOMNAME.A.B is 192.168.200.100. Querying DNS on both machines returns correct results. Any idea?
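    Before promoting again, it may help to make sure the new box resolves only through the surviving DC and that the DC's records are freshly registered; a hedged sequence using standard Windows 2003-era commands:

      rem on pdc (the new box): use the existing DC as the only DNS server
      netsh interface ip set dns "Local Area Connection" static 192.168.200.1

      rem on update (the existing DC): re-register its DNS records
      ipconfig /registerdns
      net stop netlogon && net start netlogon

      rem verify from pdc:
      nslookup update.DOMNAME.A.B
      nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMNAME.A.B

    The error text suggests the SRV record resolves but the A record for update.DOMNAME.A.B is missing or wrong, which the netlogon re-registration usually repairs.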

    Read the article

  • EasyPHP Web Setup

    - by Dominique
    I've tried to set up EasyPHP locally and make it visible from the web via DynDNS, which I've already done successfully many times before, but now it just doesn't work; maybe I've forgotten something. (The "server" is a common workstation.) Here is what I have done:
      1) Installed EasyPHP (with an index.php/html file in the WWW folder)
      2) Changed the port in the config to port 80
      3) Forwarded port 80 to the server IP in my router configuration
      4) Added the server to the router DMZ
      (Also tried removing antivirus/firewall.)
    I've installed PortListener, pointed it at port 80, and when I access "myname.dyndns.com" it says:
      Client connected
      GET / HTTP/1.1
      Host: xyz.dyndns-remote.com
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
      Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 115
      Connection: keep-alive
    So the server is accessible from the web and receives the connection successfully, but my browser says the connection failed and shows nothing.
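    One common culprit with this symptom, hedged since the Apache config isn't shown: EasyPHP's Apache bound only to the loopback, so the machine accepts the connection (as the PortListener test proves the routing does) but Apache never answers requests arriving on the LAN interface. Check the Listen directive in httpd.conf:

      # httpd.conf - bind all interfaces, not just loopback
      Listen 0.0.0.0:80
      # instead of:
      # Listen 127.0.0.1:80

    Restart Apache, test from another machine on the LAN first, then re-test through DynDNS.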

    Read the article

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when:
      - a backup is performed (it hangs on a specific table),
      - selecting * or count(1) from that specific table, or
      - viewing data that is related to the table (FKs, etc.).
    We can browse the table to a certain point (using IBExpert), but after about 2,900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:
      Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   {which is apparently just a warning}
      Unable to complete network request to host "MHPLZA1". Error reading data from the connection.
      INET/inet_error: read errno = 10054
      SERVER/process_packet: broken port, server exiting
      Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   {if we leave the backup running while it hangs, it eventually logs this}
    The setup: the table in question has about 7,000 records; Firebird 2.5 Classic Server x64; Windows Server 2008; a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers, though they are not virtual machines. Is there any way to pinpoint the issue and/or the cause?
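    The usual salvage path for orphan backversions, hedged because every corruption differs (take a file-level copy of the database first, with the server stopped; paths and credentials below are placeholders), is a mend followed by a backup/restore cycle:

      gfix -v -full -user SYSDBA -password masterkey C:\data\mydb.fdb
      gfix -mend -full -ignore -user SYSDBA -password masterkey C:\data\mydb.fdb
      gbak -b -g -ig -user SYSDBA -password masterkey C:\data\mydb.fdb C:\data\mydb.fbk
      gbak -c -user SYSDBA -password masterkey C:\data\mydb.fbk C:\data\mydb_new.fdb

    The -g switch skips garbage collection during backup and -ig ignores checksum errors, which together are often what let a backup get past a table that hangs.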

    Read the article

  • snort with barnyard2 not working on Fedora 12

    - by aHunter
    Has anyone come across this error with barnyard2 and snort?
      --== Initializing Barnyard2 ==--
      Initializing Input Plugins!
      Initializing Output Plugins!
      Parsing config file "/etc/snort/barnyard2.conf"
      Log directory = /var/log/barnyard2
      database: compiled support for (mysql)
      database: configured to use mysql
      database: schema version = 107
      database: host = localhost
      database: user = test
      database: database name = snort
      database: sensor name = localhost:eth0
      database: sensor id = 1
      database: data encoding = hex
      database: detail level = full
      database: ignore_bpf = no
      database: using the "log" facility
      --== Initialization Complete ==--
      -*> Barnyard2 <*- Version 2.1.8 (Build 251)
      By the SecurixLive.com Team: http://www.securixlive.com/about.php
      (C) Copyright 2008-2010 SecurixLive.
      Snort by Martin Roesch & The Snort Team: http://www.snort.org/team.html
      (C) Copyright 1998-2007 Sourcefire Inc., et al.
      WARNING: Ignoring corrupt/truncated waldofile '/var/log/snort/barnyard.waldo'
      Opened spool file '/var/log/snort/snort.log.1282004944'
      ERROR: Unknown record type read: 104
      Fatal Error, Quitting..
    Snort seems to be working correctly, as I have managed to get logs via syslog, but when I try to use barnyard via unified2 it is not working, presumably because of the above error. Thanks in advance.
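    "Unknown record type read: 104" usually indicates a spool-format mismatch: barnyard2 expects unified2 records, and a spool written by the older unified output (or a stale/corrupt file, as the waldo warning hints) produces unknown record types. A hedged pairing to check:

      # snort.conf - log unified2, not the old unified format
      output unified2: filename snort.u2, limit 128

      # run barnyard2 against the same filename prefix
      barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard.waldo

    Clearing out the old snort.log.* spool files and the corrupt waldo file before restarting both processes avoids replaying the bad data.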

    Read the article

  • Mounting Replicated Gluster Multi-AZ Storage

    - by Roman Newaza
    I have replicated Gluster storage which is used by auto-scaling servers. Both the auto-scaling servers and the storage are allocated across two availability zones. Gluster:
      Number of Bricks: 4 x 2 = 8
      Transport-type: tcp
      Bricks:
        Brick1: gluster01:/storage/1a   # Zone A
        Brick2: gluster02:/storage/1b   # Zone B
        Brick3: gluster03:/storage/2a   # Zone A
        Brick4: gluster04:/storage/2b   # Zone B
        Brick5: gluster01:/storage/3a   # Zone A
        Brick6: gluster02:/storage/3b   # Zone B
        Brick7: gluster03:/storage/4a   # Zone A
        Brick8: gluster04:/storage/4b   # Zone B
    I use round-robin DNS for the Gluster entry point, so the DNS name resolves to all of the storage server addresses, which are returned in a different order each time:
      # host storage.domain.com
      storage.domain.com has address xx.xx.xx.x1
      storage.domain.com has address xx.xx.xx.x2
      storage.domain.com has address xx.xx.xx.x3
      storage.domain.com has address xx.xx.xx.x4
    The storage is mounted with the native Gluster client:
      # grep storage /etc/fstab
      storage.domain.com:/storage /storage glusterfs defaults,log-level=WARNING,log-file=/var/log/gluster.log 0 0
    I have heard Gluster might be mounted with the first server's IP, after which it fetches its configuration from the rest of the servers. Personally, I have never tested a single-server mount setup and I don't know how Gluster handles it. On EC2, traffic within a single availability zone is free, and traffic between different zones is not. When a client in zone A writes to storage and the IP of a storage server in zone B is returned, it costs me twice as much for data transfer: Client (Zone A) -> Storage Server (Zone B) -> replication to Storage Server (Zone A). Question: would it be better to mount a storage server in the same zone, so that data transfer charges apply only to replication (A -> A -> B)?
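    Two hedged observations. First, with the native client the fstab host is only used to fetch the volume file; after that the client talks to all bricks directly, and on a replicated volume it writes to both replicas itself, so the cross-zone replica write (A -> B) happens regardless of which server you mount from. Second, you can still pin the volfile fetch to a same-zone server and keep a fallback (the option name varies by GlusterFS version: backupvolfile-server in older releases, backup-volfile-servers in newer ones):

      # /etc/fstab on a zone-A client
      gluster01:/storage /storage glusterfs defaults,backupvolfile-server=gluster03,log-level=WARNING,log-file=/var/log/gluster.log 0 0

    So mounting in-zone saves only the volfile fetch, not the data path; the per-write cross-zone transfer is inherent to replicating across zones.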

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:
      Berkeley to NYC server:       215 ms latency, 46 ms transfer time, 261 ms total
      Berkeley to Corvallis server: 114 ms latency, 41 ms transfer time, 155 ms total
    Some URLs if you want to try yourself:
      http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY)
      http://serverfault.com/cache/logo.png (Corvallis, OR)
    It makes sense that Corvallis, OR is geographically closer to Berkeley, CA, so I expect the connection to be a bit faster... but I'm seeing an increase in latency of +100 ms when I perform the same test to the NYC server. That seems excessive to me, particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links here that were helpful (through Google, no less!):
      http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly
      http://serverfault.com/questions/61719/how-does-geography-affect-network-latency
      http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa
    ...but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <-> west coast of the USA?
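    As a rough sanity check on what distance alone allows: light in fiber travels at about two-thirds of c, roughly 200 km per millisecond, and Berkeley to NYC is about 4,100 km great-circle, so:

      one-way:    4100 km / 200 km/ms ~= 21 ms
      round trip: 2 x 21 ms           ~= 41 ms minimum

    Real paths are longer than great circles and add router hops, so coast-to-coast RTTs somewhere in the 70-90 ms range are commonly quoted. A 215 ms "latency" figure therefore includes more than propagation: TCP connection setup, server response time, or poor routing is likely being counted in the measurement.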

    Read the article

  • How to setup DNS server behind a VPN

    - by Brian
    I want to host some websites behind a VPN and I need some help with the finer points of the configuration. Thus far I've settled on OpenVPN + BIND9, and I want to configure the domains like this (external DNS):
      mail.example.com
      www.example.com
      vpn.example.com
    I want to be able to connect to the VPN using 'vpn.example.com'. Once connected, I then want to be able to resolve anything matching '*.vpn.example.com' with the DNS server sitting behind the VPN. I know that OpenVPN can push DNS servers to clients when they connect. I am having trouble, though, with the DNS config, both internal and external. I've gone through a few tutorials etc. and tried to reason about it myself, but I'm not getting anywhere. So my main question would be: does the above configuration make sense? If so, any general pointers or examples would be greatly appreciated. Here's what I've tried so far, based on this tutorial (I've redacted my domain with example.com). When I try the tests with dig at the end to check that resolution is working, it fails.
    db.vpn.example.com:
      $TTL 15m
      vpn.example.com.  IN  SOA  ns.vpn.example.com. [email protected]. (
                2009010910   ;serial
                900          ;refresh
                900          ;retry
                900          ;expire
                900 )        ;minimum TTL
      vpn.example.com.  IN  NS  ns.vpn.example.com.
      ns                IN  A   192.168.0.2
      test              IN  A   192.168.0.2
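    For completeness, the zone also has to be declared in the BIND config; a hedged sketch matching the file name above:

      // /etc/bind/named.conf.local
      zone "vpn.example.com" {
          type master;
          file "/etc/bind/db.vpn.example.com";
      };

    After "rndc reload", test the server directly with "dig @192.168.0.2 test.vpn.example.com". If that answers but clients on the VPN can't resolve, the missing piece is likely the DNS push in the OpenVPN server config, e.g. push "dhcp-option DNS 192.168.0.2". Also note the SOA's mail field belongs in dot form (e.g. hostmaster.vpn.example.com.), not with an @ sign.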

    Read the article

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel gigabit server NIC, and a SAS controller with three 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, the performance is very good:
      [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
      1000+0 records in
      1000+0 records out
      1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s
    So I created a LUN, attached it to another box running ESXi 5.1 (Core i7 2600K, 16GB RAM, Intel gigabit server NIC), and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:
      [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
      1000+0 records in
      1000+0 records out
      1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s
    Both servers have brand-new Intel server NICs, and I have jumbo frames enabled on the switch, the Openfiler box, and the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:
      ~ # vmkping 10.0.0.1 -s 9000
      PING 10.0.0.1 (10.0.0.1): 9000 data bytes
      9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
      9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
      9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms
    The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100 MB/s. Any ideas?
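    One thing worth ruling out before blaming the network: dd without sync flags returns as soon as the data is in the page cache, so the local 226 MB/s figure is probably cache-assisted. A hedged re-test on both ends:

      # bypass the page cache entirely
      dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct
      # or include the final flush in the timing
      dd if=/dev/zero of=tmpfile bs=1M count=1000 conv=fdatasync

    If the Openfiler box drops substantially with oflag=direct, the gap between it and the VM's 39 MB/s is smaller than it first appears, and the remaining difference is what iSCSI tuning can realistically address.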

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS with a popular host and are experiencing a regular forward time drift of several minutes a day (approx. 7).
      Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux
      Distro: CentOS release 5.4 (Final)
    We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in."
    Because I am wary of manipulating GRUB (mostly, I'm terrified that our server may fail to restart), I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?
      # line from 1&1 for time syncing issue (Case 5163)
      noapic nolapic divider=10 nolapic_timer
    Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep, dark treasure trove of kernel hacking or something, well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst:
      # grub.conf generated by anaconda
      #
      # Note that you do not have to rerun grub after making changes to this file
      #boot=/dev/sda
      # serial --unit=0 --speed=57600
      terminal --timeout=5 serial console
      timeout=5
      title CentOS (2.6.18-164.11.1.el5)
              root (hd0,0)
              kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
              initrd /boot/initrd-2.6.18-164.11.1.el5.img
    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after manipulating the GRUB config.
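    To answer the placement question directly: kernel parameters go at the end of the existing "kernel" line of the stanza you boot, not on a line of their own, and the indentation under a "title" line is purely cosmetic (GRUB legacy treats every line up to the next "title" as part of that entry). A hedged version of the stanza with the provider's flags appended:

      title CentOS (2.6.18-164.11.1.el5)
              root (hd0,0)
              kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
              initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The order of the four added flags does not matter to the kernel. A cautious approach is to duplicate the whole stanza, add the flags only to the copy, and keep the original as a second menu entry to fall back to.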

    Read the article

  • Which Apache/MySQL/PHP package is best for Windows?

    - by crosenblum
    I have tried AppServNetwork, which was the best so far, but I haven't seen them do an update in ages, and EasyPHP is just slow to load, always. WAMP and XAMPP both state in their descriptions that they are not for production. I do not plan to host publicly the site or sites I am working on, but I do want a fast-loading Apache/MySQL/PHP server for development purposes. I used to really like WLMP, which is Lighttpd for Windows, but that project seems unupdated or abandoned. I refuse to use IIS, but I have no desire to get into any wars over it. I run Windows XP SP3 on my home PC. I will need a web server set up for professional work, as well as for some fun websites I am working on. I just want it fast enough that I can run it via localhost and not have it take forever to load in the browser. Thank you... I plan to do mostly PHP programming, and perhaps ColdFusion, with this.

    Read the article

  • Internet Explorer / Windows 7 does not want to show HTML file from local network drive

    - by Jaanus
    Setup: I have Windows 7 running inside VirtualBox on a Mac OS X host. I have a shared drive with some HTML files that I am mounting as a local drive W: in Windows, from the VirtualBox server \\VBOXSVR. I want to look at them with a browser in Windows. Chrome in Windows 7 opens and shows those HTML files just fine (file:///W:/welcome.html). But Internet Explorer does not, and shows this error instead of the files:
      Internet Explorer cannot display the webpage
      What you can try: [button: Diagnose Connection Problems]
      More information
      This problem can be caused by a variety of issues, including:
      - Internet connectivity has been lost.
      - The website is temporarily unavailable.
      - The Domain Name Server (DNS) is not reachable.
      - The Domain Name Server (DNS) does not have a listing for the website's domain.
      - If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section.
    For the internet zone, the status bar shows: Internet | Protected Mode: On. IE settings are a mystery to me, and I could possibly get it to work by tweaking IE settings, but I don't know which ones. How do I make IE show the same files that Chrome is happy to show? (Chrome showing them means the files themselves are fine; there is something about the setup that just makes IE be a diva.)

    Read the article

  • Task Scheduler not able to execute .vbs successfully

    - by Django Reinhardt
    Hi there. I've got this weird problem, which will hopefully have an obvious solution for some enlightened soul: we have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped executing successfully... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:
      CScript "C:\location\script.vbs" > Log.txt
    But the Task Scheduler fails with the following error:
      0x1: An incorrect function was called or an unknown function was called.
    The Log.txt file says:
      CScript Error: Initialization of the Windows Script Host failed.
      (Not enough storage is available to process this command.)
    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64), and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
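    A hedged mitigation that often helps with "Not enough storage ... Windows Script Host" from scheduled (non-interactive) sessions: invoke the script host explicitly in the task's action and fill in the working directory, rather than relying on file-association launching:

      Program/script:  C:\Windows\System32\cscript.exe
      Add arguments:   //B //Nologo "C:\location\script.vbs"
      Start in:        C:\location

    //B runs in batch (non-interactive) mode and //Nologo suppresses the banner; both are standard cscript switches. If it still fails only under Task Scheduler, compare the task's "Configure for" setting and remember that double-click runs happen in the interactive session while scheduled runs do not.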

    Read the article

  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    OK, so I've got a box named websrv1.mydomain.com. It's a web server running Ubuntu, Apache 2, sendmail, etc. My email is outsourced to a third party, so in my DNS I've set the MX to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the web server itself (via cron or the console). From my web server, if I send an email to [email protected], it just disappears: no errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected] it's delivered locally, which is fine.
      1) Doing an nslookup shows the MX record is correct.
      2) Running /mx mydomain.com from sendmail -bt returns the correct result.
      3) Running sendmail -bv [email protected] returns:
        [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected]
      4) Running 3,0 [email protected] returns:
        canonify           input: me @ mydomain . com
        Canonify2          input: me
        Canonify2        returns: me
        canonify         returns: me
        parse              input: me
        Parse0             input: me
        Parse0           returns: me
        Parse1             input: me
        MailerToTriple     input: me
        MailerToTriple   returns: me
        Parse1           returns: $# esmtp $@ mydomain . com . $: me
        parse            returns: $# esmtp $@ mydomain . com . $: me
    So I'm at a loss. Sendmail seems to see the MX record, but it's not using it.
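    One hedged way to force the behaviour wanted here ("never resolve mydomain.com locally; always hand it to the MX host") is a mailertable entry, which requires FEATURE(`mailertable') in sendmail.mc:

      # /etc/mail/mailertable  (brackets suppress a further MX lookup on the right-hand side)
      mydomain.com    esmtp:[mx.thirdparty.net]

      # rebuild the map and restart sendmail
      makemap hash /etc/mail/mailertable < /etc/mail/mailertable
      service sendmail restart

    It's also worth grepping /var/log/mail.log for the disappearing message's queue ID; "just disappears" with no dead.letter usually means sendmail did accept the message, and the log records where it actually went.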

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 midline (7,200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system: access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the command:
      zfs destroy -r vol1/filesystem
    Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server.
      http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
      http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824
    In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Workaround: do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled."
    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.

    Read the article

  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to set up a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin and added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:
      OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
      debug1: Connecting to {myserver} [{myserver}] port 22.
      debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
      ssh: connect to host {myserver} port 22: Bad file number
    Google tells me that this usually means I'm being blocked by a firewall. So I double-checked the firewall settings on the server: the rule is there allowing port 22 traffic. I even tried turning off the firewall briefly; no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears there are no other firewalls blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket in with them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it could be. Thanks, Paul
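    A hedged sanity check from the server itself, to separate "sshd not listening" from "blocked on the path":

      rem is anything listening on 22, and on which address?
      netstat -an | findstr :22

    If that shows 0.0.0.0:22 LISTENING, the Cygwin sshd side is fine and the block is on the network path (host firewall, provider edge, or an ACL in between); 127.0.0.1:22 would instead point at sshd_config's ListenAddress. Note also that the client error is a timeout rather than "connection refused", which by itself leans toward a silently dropping firewall rather than a daemon problem.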

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux:
      Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux
    And I run the Selenium stand-alone server on my box with this command:
      java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &
    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but got the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else; all I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive load, so something has changed, but I'm not sure what. My server is a managed VPS, so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem. (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.) Any suggestions for what is going on, why it's started doing this, and/or, most importantly, how I can make Selenium not kill the server when it starts up would be GREATLY appreciated!
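    Given the vintage (Selenium 2.24.1, a mid-2012 RHEL6-family kernel), one hedged possibility worth ruling out is the June 30, 2012 leap-second bug, which made many Java processes spin the CPU on exactly this kind of kernel. The widely circulated workaround clears the kernel's leap-second state without a reboot:

      /etc/init.d/ntpd stop      # if ntpd is running; path may differ
      date -s "$(date)"
      /etc/init.d/ntpd start

    If the load still climbs with a freshly started JVM after that, the leap second isn't the culprit, and strace-ing the Java process (looking for a futex storm) would be the next step.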

    Read the article
