Search Results

Search found 17097 results on 684 pages for 'entry level'.


  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    Hi all, I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know that this DataSource will be available for all the webapps in Tomcat, but it's not a problem here), I've set this Resource: <GlobalNamingResources> <Resource name="hibernate/HibernateDS" auth="Container" type="javax.sql.DataSource" url="jdbc:oracle:thin:@myserver:1542:foo" username="foo" password="bar" driverClassName="oracle.jdbc.OracleDriver" maxActive="50" maxIdle="10" validationQuery="select 1 from dual"/> Then, in the web.xml of my application, I set a resource-ref element: <resource-ref> <description>Hibernate Datasource</description> <res-ref-name>hibernate/HibernateDS</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource: <hibernate-configuration> <session-factory> <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property> ... However, when I start my Tomcat server, I get an error saying that the JDBC driver cannot be created. Here is the log: INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{} INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended) WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null' at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150) at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880) at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59) at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84) at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172) ... Caused by: java.lang.NullPointerException at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507) at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476) at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307) at java.sql.DriverManager.getDriver(DriverManager.java:253) at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143) ... 11 more Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong with my configuration?

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible, that is why nothing is posted about it. Okay, so there will be a lot here, so please bear with me. I've been working on this now for a few hours (reading manuals and such) so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where there are several other existing plugins and checks running and working. Now I want to add another server there to check so I made the following modifications: First and foremost, I added a file to /usr/local/nagios/libexec named: check_equallogic.sh. The permissions are 755, the same as all others. I have chowned it to nagios:nagios and in the listing it shows the Owner as Nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects that shows the following: # 'check_equallogic' command definition define command{ command_name check_equallogic command_line $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$ } Following this, I created a file named equallogic.cfg in the objects directory and it contains (more or less): define host{ use linux-server ; Inherit default values from a template host_name 172.16.50.11 ; The name we're giving to this device alias EqualLogic ; A longer name associated with the device address 172.16.50.11 ; IP address of the device contact_groups admins } Check Equallogic Information define service{ use generic-service host_name 172.16.50.11 service_description General Information check_command check_equallogic!public!info } After ensuring that permissions are okay for all files, I restart the nagios service, no errors. When I go into the WebGUI, I get the following errors AFTER the check runs: (Return code of 127 is out of bounds - plugin may be missing) Extra, probably unrelated problem: furthermore, when I log into the EqualLogic array, under Audit logs I get the following error: Level: AUDIT Time: 26/05/2014 3:59:13 PM Member: ps4100-1 Subsystem: agent Event ID: 22.7.1 SNMP packet validation failed, request received from 172.16.10.11 An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason why I am mentioning it is because I want to make sure that it is only a MIB issue for the SNMP. If it is, then ignore this area. I am entirely unsure of what to do here.
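
    A return code of 127 means the shell could not find something to execute — either the path built from $USER1$/check_equallogic does not match the file on disk (note the file is named check_equallogic.sh while the command definition calls check_equallogic without the extension), or the script itself calls a helper such as snmpget that is not installed. A hedged way to narrow it down (the IP, community string and paths are taken from the question; everything else is an assumption about the local setup):

        # Run the check exactly as Nagios would, as the nagios user
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
        echo "exit code: $?"      # 127 here points to a missing command, not a failed check

        # Confirm the interpreter and the SNMP tools the script depends on exist
        head -1 /usr/local/nagios/libexec/check_equallogic.sh    # shebang must point to a real shell
        which snmpget snmpwalk || echo "net-snmp utilities not installed"

        # Test SNMP directly, bypassing Nagios entirely (a raw OID avoids any MIB dependency)
        snmpwalk -v2c -c public 172.16.50.11 1.3.6.1.2.1.1

    If the manual run works but the web GUI still reports 127, the mismatch between check_equallogic and check_equallogic.sh in commands.cfg is the most likely culprit.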

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse, Tomcat to develop a large data-intensive application and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. I think I have narrowed down the performance issue to the network, after comparing and equalising all the Java VM settings with my colleague. Is there a ping test I can do or some other network diagnostic test to flag up any problems? To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, and 90 Mb/s from the win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test, and got 10Mb/s download speed on both, but the host has an upload speed of 10Mb/s but the guest only uploads at 3Mb/s. I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem but it's progressing very slow because I don't have much experience in this area. From what I read on other people's networking issues with VB and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is: Adapter 1: Intel PRO/1000 MT Desktop (NAT) Adapter 2: ditto (host-only adapter) eth0 Link encap:Ethernet HWaddr 08:00:27:0b:76:bf inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:86236 errors:0 dropped:0 overruns:0 frame:0 TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:69163946 (69.1 MB) TX bytes:3530535 (3.5 MB) eth2 Link encap:Ethernet HWaddr 08:00:27:a3:26:b8 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:59 errors:0 dropped:0 overruns:0 frame:0 TX packets:57 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:9148 (9.1 KB) TX bytes:7648 (7.6 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:701 errors:0 dropped:0 overruns:0 frame:0 TX packets:701 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:66321 (66.3 KB) TX bytes:66321 (66.3 KB)
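
    As a starting point for the requested diagnostics, the sketch below compares raw throughput and path MTU from the guest; the colleague's address 192.168.1.50 is a placeholder and the NIC name comes from the ifconfig output above. A common, though not guaranteed, remedy is switching the VirtualBox adapter type from Intel PRO/1000 to the paravirtualised (virtio-net) adapter.

        # Path MTU check: 1472 bytes of ICMP payload + 28 bytes of headers = 1500.
        # If this fails while smaller sizes work, fragmentation is being mishandled.
        ping -M do -s 1472 -c 4 192.168.1.50

        # Throughput test (run "iperf -s" on the other machine first,
        # then swap roles to measure the reverse direction)
        iperf -c 192.168.1.50 -t 20

        # Confirm which virtual NIC the guest sees and its negotiated speed
        lspci | grep -i ethernet
        sudo ethtool eth0 | grep -i -e speed -e duplex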

    Read the article

  • What info is really useful in my iptables log and how do I disable the useless bits?

    - by anthony01
    In my iptables rules files, I entered this at the end: -A INPUT -j LOG --log-level 4 --log-ip-options --log-prefix "iptables: " I DROP everything besides INPUT for SSH (port 22) I have a web server and when I try to connect to it through my browser, through a forbidden port number (on purpose), I get something like that in my iptables.log Sep 24 14:05:57 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=64 TOS=0x00 PREC=0x00 TTL=54 ID=59351 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0 Sep 24 14:06:01 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC= yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=63377 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0 Sep 24 14:06:09 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=55025 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0 Sep 24 14:06:25 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=54521 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0 Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=100 TOS=0x00 PREC=0x00 TTL=54 ID=35050 PROTO=TCP SPT=63088 DPT=22 WINDOW=33304 RES=0x00 ACK PSH URGP=0 Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=52 TOS=0x00 PREC=0x00 TTL=54 ID=14076 PROTO=TCP SPT=63088 DPT=22 WINDOW=33264 RES=0x00 ACK URGP=0 Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=52 TOS=0x00 PREC=0x00 TTL=54 ID=5277 PROTO=TCP SPT=63088 DPT=22 WINDOW=33248 RES=0x00 ACK URGP=0 Sep 24 14:06:56 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=100 TOS=0x00 PREC=0x00 TTL=54 ID=25501 PROTO=TCP SPT=63088 DPT=22 WINDOW=33304 RES=0x00 ACK PSH URGP=0 As you can see, I typed xx.xx.xx.xx:1999 in my browser, and it tried to connect until it timed out. 1) There are many similar lines for just one event. Do you think I need all of them? How would I avoid duplicates? 2) The last 4 lines are for my port 22. But since I allow port 22 INPUT for my web server, why are they here? 3) Do I need info like LEN,TOS,PREC and others? I'm trying to find a page that explains them one by one, by I can't find anything.
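
    On the three questions: the near-identical lines are simply TCP retransmitting the same blocked SYN, the port-22 lines are most likely packets of an already-established SSH session hitting the LOG rule because nothing accepts ESTABLISHED traffic before it, and LEN/TOS/PREC and friends are plain IP-header fields that the kernel logger always prints — the only way to drop them is to log less. A hedged sketch of a quieter ruleset (rule order is an assumption about the rest of the file):

        # Accept established flows and new SSH connections first
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT

        # Log only NEW attempts, rate-limited, so one probe produces a line or two
        iptables -A INPUT -m conntrack --ctstate NEW \
                 -m limit --limit 3/min --limit-burst 5 \
                 -j LOG --log-prefix "iptables denied: " --log-level 4

        # Everything that reaches this point is dropped
        iptables -A INPUT -j DROP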

    Read the article

  • New Exchange 2010 CAS cannot find domain controllers

    - by NorbyTheGeek
    I am experiencing problems migrating from Exchange 2003 to Exchange 2010. I am on the first step: installing a new 2010 Client Access Server role. The Active Directory domain functional level is 2003. All domain controllers are 2003 R2. The only existing Exchange 2003 server happens to be housed on one of the domain controllers. It is running Exchange 2003 Standard w/ SP2. IPv6 is enabled and working on all domain controllers, servers, and routers, including this new Exchange server. After installing the CAS role on a new 2008 R2 server (Hyper-V VM) I am receiving 2114 Events: Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Topology discovery failed, error 0x80040a02 (DSC_E_NO_SUITABLE_CDC). Look up the Lightweight Directory Access Protocol (LDAP) error code specified in the event description. To do this, use Microsoft Knowledge Base article 218185, "Microsoft LDAP Error Codes." Use the information in that article to learn more about the cause and resolution to this error. Use the Ping or PathPing command-line tools to test network connectivity to local domain controllers. Prior to each of these, I receive the following 2080 Event: Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1600). Exchange Active Directory Provider has discovered the following servers with the following characteristics: (Server name | Roles | Enabled | Reachability | Synchronized | GC capable | PDC | SACL right | Critical Data | Netlogon | OS Version) In-site: b.company.intranet CDG 1 0 0 1 0 0 0 0 0 s.company.intranet CDG 1 0 0 1 0 0 0 0 0 Out-of-site: a.company.intranet CD- 1 0 0 0 0 0 0 0 0 o.company.intranet CD- 1 0 0 0 0 0 0 0 0 g.company.intranet CD- 1 0 0 0 0 0 0 0 0 Connectivity between the new Exchange server and all domain controllers via IPv4 and IPv6 is working. I have verified that the new Exchange server is a member of the following groups: Exchange Servers Exchange Domain Servers Exchange Install Domain Servers Exchange Trusted Subsystem Heck, I even put the new Exchange server into Domain Admins just to see if it would help. It didn't. I can't find any evidence of Active Directory replication problems, and all pre-setup tasks (/PrepareLegacyExchangePermissions, /PrepareSchema, /PrepareAD, /PrepareDomain) completed successfully. The only problem so far that I haven't been able to resolve with my Active Directory is that I am unable to get my IPv6 subnets into Sites and Services. Where should I go from here?

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites are met before extracting and running the software. For example: Checking that certain users exist, and that they are associated with one or more user groups. If not, then create them and their group associations. Checking that target application folders exist and if not, then create them with preconfigured path values defined when the package was assembled. Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them. Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system. Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another. 1) Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience with them. These would be my last resort. 2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P. 3) Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting. It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
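
    For comparison with options 2 and 3, option 1 is smaller than it may look: most of the prerequisites reduce to idempotent "check, then create/set if missing" one-liners. A minimal Bourne-shell sketch (user, group, path and variable names are invented; the commands are stock Solaris 10 ones):

        #!/usr/bin/sh
        APP_USER=appuser
        APP_GROUP=appgrp
        APP_HOME=/opt/myapp

        # Group and user: create only if missing
        getent group "$APP_GROUP" >/dev/null 2>&1 || /usr/sbin/groupadd "$APP_GROUP"
        id "$APP_USER" >/dev/null 2>&1 || /usr/sbin/useradd -g "$APP_GROUP" -d "$APP_HOME" -m "$APP_USER"

        # Target folder with the expected owner and mode
        [ -d "$APP_HOME" ] || mkdir -p "$APP_HOME"
        chown "$APP_USER":"$APP_GROUP" "$APP_HOME"
        chmod 750 "$APP_HOME"

        # Environment entry in /etc/profile, added exactly once
        grep "MYAPP_HOME=" /etc/profile >/dev/null 2>&1 || \
            echo "MYAPP_HOME=$APP_HOME; export MYAPP_HOME" >> /etc/profile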

    Read the article

  • Samba doesn't require a password on XBMC but does on Ubuntu

    - by Chris
    I have samba setup on a fedora 13 machine, and I use it to share with my xbmc client in the family room. When I set this up there no password or anything was required I merely entered in paths such as: smb://<host>/<share> and all worked. Now on my ubuntu 10.04 machine when I try to access the same hosts, for example through smbmount though I receive an error. smbmount //media/Music ~/Music/ # media is in my /etc/hosts and resolves to # correct IP address for the machine I receive error: operation not permitted after pressing enter when it prompts for password. Here is my entry from /etc/samba/smb.conf: [global] workgroup = WORKGROUP server string = Samba Server Version %v # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 security = user passdb backend = tdbsam ; security = domain ; passdb backend = tdbsam ; realm = MY_REALM ; password server = <NT-Server-Name> ; security = user ; passdb backend = tdbsam ; domain master = yes ; domain logons = yes ; logon script = %m.bat ; logon script = %u.bat ; logon path = \\%L\Profiles\%u ; logon path = ; add user script = /usr/sbin/useradd "%u" -n -g users ; add group script = /usr/sbin/groupadd "%g" ; add machine script = /usr/sbin/useradd -n -c "Workstation (%u)" -M -d /nohome -s /bin/false "%u" ; delete user script = /usr/sbin/userdel "%u" ; delete user from group script = /usr/sbin/userdel "%u" "%g" ; delete group script = /usr/sbin/groupdel "%g" ; local master = no ; os level = 33 ; preferred master = yes ; wins support = yes ; wins server = w.x.y.z ; wins proxy = yes ; dns proxy = yes load printers = yes cups options = raw ; printcap name = /etc/printcap # obtain a list of printers automatically on UNIX System V systems: ; printcap name = lpstat ; printing = cups ; map archive = no ; map hidden = no ; map read only = no ; map system = no ; store dos attributes = yes #============================ Share Definitions ============================== [homes] comment = Home Directories browseable = no writable = yes ; valid users = %S ; valid users = MYDOMAIN\%S # Un-comment the following and create the netlogon directory for Domain Logons: ; [netlogon] ; comment = Network Logon Service ; path = /var/lib/samba/netlogon ; guest ok = yes ; writable = no ; share modes = no # Un-comment the following to provide a specific roving profile share. # The default is to use the user's home directory: ; [Profiles] ; path = /var/lib/samba/profiles ; browseable = no ; guest ok = yes # A publicly accessible directory that is read only, except for users in the # "staff" group (which have write permissions): ; [public] ; comment = Public Stuff ; path = /home/samba ; public = yes ; writable = yes ; printable = no ; write list = +staff [tv] comment = TV path = /media/Isos/tv public = yes writable = yes printable = no write list = +media [music] comment = Music path = /media/Storage/music/ public = yes writable = yes printable = no write list = +media [pictures] comment = Pictures path = /media/Storage/pictures public = yes writable = yes printable = no write list = +media
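
    A hedged way to separate authentication problems from mount permission problems is to test the same share with smbclient the way XBMC effectively does (as guest) and again as a real Samba user; the username below is a placeholder and must already exist in the tdbsam backend (smbpasswd -a <user> on the Fedora box):

        # List shares and try a guest (no password) connection, as XBMC does
        smbclient -L //media -N
        smbclient //media/Music -N -c 'ls'

        # The same share as an authenticated Samba user
        smbclient //media/Music -U chris -c 'ls'

        # Mounting a CIFS share normally requires root (or an fstab entry with the
        # "user" option) -- "operation not permitted" from smbmount as a plain user
        # is often just that, not a rejected password
        sudo mount -t cifs //media/Music /home/chris/Music -o username=chris,uid=chris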

    Read the article

  • How to install GIT on an offline RHEL?

    - by Stijn Vanpoucke
    I'm using the following commands from the manual to install GIT $ tar -zxf git-1.7.2.2.tar.gz $ cd git-1.7.2.2 $ make prefix=/usr/local all $ sudo make prefix=/usr/local install but I'm receiving the following errors ... cache.h: At top level: cache.h:746: error: expected declaration specifiers or ‘...’ before ‘time_t’ cache.h:889: warning: ‘struct timeval’ declared inside parameter list cache.h:895: warning: ‘struct timeval’ declared inside parameter list cache.h:970: error: expected specifier-qualifier-list before ‘off_t’ cache.h:979: error: expected specifier-qualifier-list before ‘off_t’ cache.h:997: error: expected specifier-qualifier-list before ‘off_t’ cache.h:1057: error: expected declaration specifiers or ‘...’ before ‘off_t’ cache.h:1063: error: expected declaration specifiers or ‘...’ before ‘uint32_t’ cache.h:1064: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘nth_packed_object_offset’ cache.h:1065: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘find_pack_entry_one’ cache.h:1067: error: expected declaration specifiers or ‘...’ before ‘off_t’ cache.h:1069: error: expected declaration specifiers or ‘...’ before ‘off_t’ cache.h:1070: error: expected declaration specifiers or ‘...’ before ‘off_t’ cache.h:1094: error: expected specifier-qualifier-list before ‘off_t’ cache.h:1168: error: expected ‘)’ before ‘*’ token cache.h:1177: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘read_in_full’ cache.h:1178: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘write_in_full’ cache.h:1179: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘write_str_in_full’ cache.h:1252: error: expected declaration specifiers or ‘...’ before ‘FILE’ In file included from credential-store.c:2: credential.h:28: error: expected declaration specifiers or ‘...’ before ‘FILE’ credential.h:29: error: expected declaration specifiers or ‘...’ before ‘FILE’ In file included from credential-store.c:4: parse-options.h:115: error: expected specifier-qualifier-list before ‘intptr_t’ credential-store.c: In function ‘parse_credential_file’: credential-store.c:13: error: ‘FILE’ undeclared (first use in this function) credential-store.c:13: error: ‘fh’ undeclared (first use in this function) credential-store.c:17: warning: implicit declaration of function ‘fopen’ credential-store.c:19: error: ‘errno’ undeclared (first use in this function) credential-store.c:19: error: ‘ENOENT’ undeclared (first use in this function) credential-store.c:24: error: too many arguments to function ‘strbuf_getline’ credential-store.c:24: error: ‘EOF’ undeclared (first use in this function) credential-store.c:39: warning: implicit declaration of function ‘fclose’ credential-store.c: In function ‘print_entry’: credential-store.c:44: warning: implicit declaration of function ‘printf’ credential-store.c:44: warning: incompatible implicit declaration of built-in function ‘printf’ credential-store.c: In function ‘main’: credential-store.c:132: warning: implicit declaration of function ‘umask’ credential-store.c:144: error: ‘stdin’ undeclared (first use in this function) credential-store.c:144: error: too many arguments to function ‘credential_read’ credential-store.c:147: warning: implicit declaration of function ‘strcmp’ Is this because I didn't install the dependencies? apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev How do I install them offline?
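
    The errors above (unknown time_t, off_t and FILE) point at an incomplete C toolchain rather than at Git itself, and the apt-get line will not work on RHEL — the equivalent RPMs have different names. A common offline pattern, sketched below, is to fetch the packages with yumdownloader on an internet-connected machine of the same RHEL release and architecture and install them locally; the exact package names vary by release (for example curl-devel vs libcurl-devel) and should be checked:

        # On a connected machine with the same RHEL version/arch:
        mkdir /tmp/git-deps && cd /tmp/git-deps
        yumdownloader --resolve gcc make glibc-devel glibc-headers zlib-devel \
            openssl-devel curl-devel expat-devel gettext perl-devel

        # Copy the directory to the offline box (USB stick, scp, ...), then:
        cd /path/to/git-deps
        rpm -Uvh *.rpm

        # Rebuild Git once the headers are in place
        cd /path/to/git-1.7.2.2
        make clean
        make prefix=/usr/local all && sudo make prefix=/usr/local install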

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I am helping a charity that has two adjacent medium-sized modern detached houses (in the UK): the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and want it to work across both buildings. Being a charity they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skype calls. My initial thoughts were to connect the buildings with fibre. So: Option 1 Use fibre between the buildings. Sufficient cable and two TP-LINK MC100CM Fast Ethernet Media Converters. Cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying underground. Never having fitted fibre I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building. Option 2 Use two TP-Link TL-WA7510N High Powered Outdoor 5GHz 15dBi Wireless antennas to connect the buildings. There is a clear line of sight at first floor level. Cost ~£100. And much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear stuff, e.g. two DGN2200, one in each house and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job? Option 3 Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close this shouldn't be an issue) and protection from lightning strikes. Is fitting lightning arresters expensive? And what can be done to mitigate the risk of shock? This all falls outside my area of expertise so I would really appreciate some advice.

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFSv4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: Each user has a home, where he sets a top-level, inherited ACL. He can later on refine permissions for the contained files/folders iteratively. Over time, sometimes permissions need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states A- Removes all ACEs for current ACL on file and replaces current ACL with new ACL that represents only the current mode of the file. I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get chmod A0- <file> chmod: ERROR: Can't remove all ACL entries from a file Which by the way makes me think: and why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently nfs4_setacl -x 1 <file> Failed setxattr operation: Unknown error 524 So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?
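
    Working from the manpage text quoted above, one approximation of "revert to a clean state" on the Solaris side is to reduce everything under a directory to its trivial ACL and then re-apply the wanted inheritable ACEs once at the top — keeping in mind that NFSv4/ZFS inheritance only fires when files are created, so existing children never pick up a changed parent ACL by themselves. A hedged sketch (dataset path and the ACEs are examples only):

        # Strip every explicit ACE below the home, leaving only the trivial
        # owner@/group@/everyone@ entries derived from the mode bits
        find /tank/home/keiichi -exec chmod A- {} +

        # Verify: only the trivial entries should remain per file
        ls -V /tank/home/keiichi | head

        # Re-apply the desired inheritable ACL on the top-level directory;
        # it will be inherited by files and directories created from now on
        chmod A=owner@:full_set:file_inherit/dir_inherit:allow,group@:read_set:file_inherit/dir_inherit:allow /tank/home/keiichi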

    Read the article

  • Software RAID 10 corrupted superblock after dual disk failure, how do I recover it?

    - by Shoshomiga
    I have a software raid 10 with 6 x 2tb hard drives (raid 1 for /boot), ubuntu 10.04 is the os. I had a raid controller failure that put 2 drives out of sync, crashed the system and initially the os didnt boot up and went into initramfs instead, saying that drives were busy but I eventually managed to bring the raid up by stopping and assembling the drives. The os booted up and said that there were filesystem errors, I chose to ignore because it would remount the fs in read-only mode if there was a problem. Everything seemed to be working fine and the 2 drives started to rebuild, I was sure that it was a sata controller failure because I had dma errors in my log files. The os crashed soon after that with ext errors. Now its not bringing up the raid, it says that there is no superblock on /dev/sda2, even if I assemble manually with all the device names. I also did a memtest and changed the motherboard in addition to everything else. EDIT: This is my partition layout Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0009c34a Device Boot Start End Blocks Id System /dev/sdb1 * 2048 511999 254976 83 Linux /dev/sdb2 512000 3904980991 1952234496 83 Linux /dev/sdb3 3904980992 3907028991 1024000 82 Linux swap / Solaris All 6 disks have the same layout, partition #1 is for raid 1 /boot, partition #2 is for raid 10 far plan, partition #3 is swap, but sda did not have swap enabled EDIT2: This is the output of mdadm --detail /dev/md1 Layout : near=1, far=2 Chunk Size : 64k UUID : a0feff55:2018f8ff:e368bf24:bd0fce41 Events : 0.3112126 Number Major Minor RaidDevice State 0 8 34 0 spare rebuilding /dev/sdc2 1 0 0 1 removed 2 8 18 2 active sync /dev/sdb2 3 8 50 3 active sync /dev/sdd2 4 0 0 4 removed 5 8 82 5 active sync /dev/sdf2 6 8 66 - spare /dev/sde2 EDIT3: I ran ddrescue and it has copied everything from sda except a single 4096 byte sector that I suspect is the raid superblock EDIT4: Here is some more info too long to fit here lshw: http://pastebin.com/2eKrh7nF mdadm --detail /dev/sd[abcdef]1 (raid1): http://pastebin.com/cgMQWerS mdadm --detail /dev/sd[abcdef]2 (raid10): http://pastebin.com/V5dtcGPF dumpe2fs of /dev/sda2 (from the ddrescue cloned drive): http://pastebin.com/sp0GYcJG I tried to recreate md1 based on this info with the command mdadm --create /dev/md1 -v --assume-clean --level=10 --raid-devices=6 --chunk=64K --layout=f2 /dev/sda2 missing /dev/sdc2 /dev/sdd2 missing /dev/sdf2 But I can't mount it, I also tried to recreate it based on my initial mdadm --detail /dev/md1 but it still doesn't mount It also warns me that /dev/sda2 is an ext2fs file system but I guess its because of ddrescue
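
    Before any further --create attempts it is worth capturing what every member records about the array, because re-creating with the wrong order, layout or metadata version silently overwrites the information needed for a real recovery. A cautious, read-only-first sketch (device names are the ones from the question and may shift between boots):

        # Fact gathering: what does each partition's superblock say?
        for d in /dev/sd[abcdef]2; do
            echo "== $d =="
            mdadm --examine "$d"
        done > /root/md1-examine.txt

        # Prefer a forced assemble from the members with the highest event counts
        # over another --create
        mdadm --stop /dev/md1
        mdadm --assemble --force /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sdf2

        # If it assembles, check the filesystem without touching it before mounting
        fsck -n /dev/md1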

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server right now has at least 2 IP addresses, one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers, that are configured similarly. Private Network The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering using that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid) dedicated database servers LDAP centralized configuration (like puppet) centralized logging We don't have any private addresses in our DNS records (only public addresses). For our servers to utilize the correct IP address for the right interface (and not hard code the IP address) probably requires setting up a private DNS server (So now we add 2 different dns entries to 2 different systems). Public Network Our public range hosts a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now). IPv6 I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now. Adding the public IPv6 address would make it three. Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, utilize the newly free interfaces to create a trunk. That has the advantage of allowing either the public or the private traffic to exceed 1Gbps if it ever needs to. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks: utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible, our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope with fewer entries to duplicate. Summary I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network, and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed): Are there any technical disadvantages (limitations, buffers, scalability)?
    Are there any other security considerations (aside from the firewalls mentioned above)? Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet? Is there typical software for setting up a Linux network that doesn't have IPv6 support yet? (logging, ldap, puppet) Is there something else I haven't considered?

    Read the article

  • Getting an error while installing mod_wsgi on CentOS

    - by user825904
    I have reinstalled python with enable shared [root@master mod_wsgi-3.4]# make clean rm -rf .libs rm -f mod_wsgi.o mod_wsgi.la mod_wsgi.lo mod_wsgi.slo mod_wsgi.loT rm -f config.log config.status rm -rf autom4te.cache [root@master mod_wsgi-3.4]# LD_RUN_PATH=/usr/local/lib make apxs -c -I/usr/local/include/python2.7 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm /usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wformat-security -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.7 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1161:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:162:1: warning: this is the location of the previous definition In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1183:1: warning: "_XOPEN_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:164:1: warning: this is the location of the previous definition mod_wsgi.c: In function ‘wsgi_server_group’: mod_wsgi.c:991: warning: unused variable ‘value’ mod_wsgi.c: In function ‘Log_isatty’: mod_wsgi.c:1665: warning: unused variable ‘result’ mod_wsgi.c: In function ‘Log_writelines’: mod_wsgi.c:1802: warning: unused variable ‘msg’ mod_wsgi.c: In function ‘Adapter_output’: mod_wsgi.c:3087: warning: unused variable ‘n’ mod_wsgi.c: In function ‘Adapter_file_wrapper’: mod_wsgi.c:4138: warning: unused variable ‘result’ mod_wsgi.c: In function ‘wsgi_python_term’: mod_wsgi.c:5850: warning: unused variable ‘tstate’ mod_wsgi.c:5849: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_python_child_init’: mod_wsgi.c:7050: warning: unused variable ‘l’ mod_wsgi.c:6948: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_add_import_script’: mod_wsgi.c:7701: warning: unused variable ‘error’ mod_wsgi.c: In function ‘wsgi_add_handler_script’: mod_wsgi.c:8179: warning: unused variable ‘dconfig’ mod_wsgi.c:8178: warning: unused variable ‘sconfig’ mod_wsgi.c: In function ‘wsgi_hook_handler’: mod_wsgi.c:9375: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9377: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9379: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9383: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9403: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9405: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9408: warning: suggest parentheses around assignment used as truth value mod_wsgi.c: In function ‘wsgi_daemon_worker’: mod_wsgi.c:10819: warning: unused variable 
‘duration’ mod_wsgi.c:10818: warning: unused variable ‘start’ mod_wsgi.c: In function ‘wsgi_hook_daemon_handler’: mod_wsgi.c:13172: warning: unused variable ‘i’ mod_wsgi.c:13170: warning: unused variable ‘elts’ mod_wsgi.c:13169: warning: unused variable ‘head’ mod_wsgi.c: At top level: mod_wsgi.c:8142: warning: ‘wsgi_set_user_authoritative’ defined but not used mod_wsgi.c:15251: warning: ‘wsgi_hook_check_user_id’ defined but not used /usr/lib64/apr-1/build/libtool --silent --mode=link gcc -o mod_wsgi.la -rpath /usr/lib64/httpd/modules -module -avoid-version mod_wsgi.lo -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm
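
    The _POSIX_C_SOURCE/_XOPEN_SOURCE redefinition warnings followed by unknown time_t/off_t/FILE types usually mean the build is mixing headers from different Python installations or including them in the wrong order, so a reasonable next step — sketched here with paths assumed from the question — is to pin mod_wsgi's configure to the freshly built /usr/local interpreter and make its shared libpython visible to the dynamic linker:

        # Make the --enable-shared libpython findable
        echo /usr/local/lib > /etc/ld.so.conf.d/python27.conf
        ldconfig
        ldconfig -p | grep libpython2.7

        # Rebuild mod_wsgi against that exact interpreter
        cd mod_wsgi-3.4
        make clean
        ./configure --with-python=/usr/local/bin/python2.7
        LD_RUN_PATH=/usr/local/lib make
        make install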

    Read the article

  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network I want to create a number of non-interactive user accounts on a Windows 8 Pro machine in addition to the existing set of interactive accounts. The users that corresponds to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on welcome screen. In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of "Users" group, and then including them into "Deny logon locally" list in Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it. The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from "User" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open "Sign out/Lock" menu). The command list that drops out contains "Sign out" and "Lock" commands as well as the names of other users (for "switch user" functionality). For some reason that list includes the extra users from "Deny logon locally" list. It is interesting to note that this happens when the current user belongs to "Users" group, but it does not happen when the current user is from "Administrators". For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can logon locally), "A" (from "Users", can logon locally), "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, in the above example trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to welcome screen that lists only "Administrator" and "A". Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through "Deny logon locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?

    Read the article

  • Problem with several USB devices on Windows XP. How do I diagnose a USB device?

    - by Lukasz Baran
    Hello All, Recently I've bought some electronic devices (like e.g. Sony Ebook reader PRS-600 or HP PDA IPaq) which are connected to PC using USB ports. These devices have own batteries and they are recharged using USB. However, I am experiencing a problem with Ebook Reader and the similiar thing happens with my PDA (but in this case it's more rare). Sometimes, I am unable to make it start recharging, when it's connected to USB port. The LCD diode on my Reader lights up and the icon in the system tray appears. They both indicate that the device has been connected. However, the device should start recharching and it should appear on Windows XP as a 'explorable' device (just like pendrive it offers some HDDs). None of these happens:( This problem appears on two PCs - desktop and laptop. Both have Windows XP SP3 installed. And what's interesting and surprising to me is the fact that these devices sometimes work without any problems. I mean, I just plug them in and they work properly. Besides that, there are some computers (I have tested a variety of possible conditions) on which the problem does not appear! I am confused and, on the other hand, determined to find out what is the real cause of such strange behavior. Maybe I'm wrong but I always assumed that I'm using deterministic machines, but the described situation makes me feel like I was dreaming:) And that's why I'm asking you here. Are there any tools that could help me diagnose USB ports? Or maybe anyone knows the solution? I have a lot experience with low level programming (mostly from the times of MS-DOS applications), but I'm not an expert when it comes to WinXP technology. If this was a problem with network connection, I would know what to do - I know how to track network traffic, but with USB ports I feel powerless. I have a feeling that the problem may lie in a way WinXP handles USB devices. Personally, I hate auto-discovery and P'n'P features available in Windows. I know that some of may suggest to move to Linux or other *Nix system. Unfortunately, I have to use WinXP in my professional life, so it's rather impossible. Besides that, I don't consider XP as a total disaster. Any help from you will be appreciated!

    Read the article

  • I have an Nginx server configured to work with node.js, but a 1.03 MB JS file often fails to load in various browsers on various PCs

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server uses the node.js server to serve static files, so it must pass through node.js to download the files, but that is not a problem when I'm not using nginx. In Chrome with the debugger on I can see that the status is: 206 - partial content and it only has downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status failed. Waiting time: 6ms Receiving: 1.1 min The headers in Google Chrome: Request URL:http://192.168.1.16/production/assembly/script/production.js Request Method:GET Status Code:206 Partial Content Request Headers: Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE Host:192.168.1.16 If-Range:"1081715-1350053827000" Range:bytes=16090-16090 Referer:http://192.168.1.16/production/assembly/ User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Response Headers: Accept-Ranges:bytes Cache-Control:public, max-age=0 Connection:keep-alive Content-Length:1 Content-Range:bytes 16090-16090/1081715 Content-Type:application/javascript Date:Mon, 15 Oct 2012 09:18:50 GMT ETag:"1081715-1350053827000" Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT Server:nginx/1.1.19 X-Powered-By:Express My nginx configurations: File 1: user totty; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt; error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## autoindex on; include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*; } File that is included by the previous file: server { # custom location for entry # using only "/" instead of "/production/assembly" it # would allow you to go to "thatip/". In this way # we are limiting to "thatip/production/assembly/" location /production/assembly/ { # ip and port used in node.js proxy_pass http://127.0.0.1:3000/; } location /production/assembly.mongo/ { proxy_pass http://127.0.0.1:9000/; proxy_redirect off; } location /production/assembly.logs/ { autoindex on; alias /home/totty/web/production01_server/node_modules/production/_logs/; } }
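
    The 206 with Range: bytes=16090-16090 and an If-Range header suggests the browser is trying to resume from a cached partial copy. A quick way to see which hop is cutting the transfer short is to fetch the file directly from the node.js port and again through nginx, then replay the exact failing range request; the upstream path below is an assumption based on the proxy_pass mapping shown in the config:

        # Full fetch straight from node.js, bypassing nginx
        curl -s -o /dev/null -w 'node  : %{http_code} %{size_download} bytes in %{time_total}s\n' \
             http://127.0.0.1:3000/script/production.js

        # Same file through nginx
        curl -s -o /dev/null -w 'nginx : %{http_code} %{size_download} bytes in %{time_total}s\n' \
             http://192.168.1.16/production/assembly/script/production.js

        # Replay the failing request, including the Range header the browser sent
        curl -v -o /dev/null -H 'Range: bytes=16090-16090' \
             http://192.168.1.16/production/assembly/script/production.js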

    Read the article

  • SSH Connection Refused - Debug using Recovery Console

    - by olrehm
    Hey everyone, I have found a ton of questions answered about debugging why one cannot connect via SSH, but they all seem to require that you can still access the system - or say that without that nothing can be done. In my case, I cannot access the system directly, but I do have access to the filesystem using a recovery console. So this is the situation: My provider made some kernel update today and in the process also rebooted my server. For some reason, I cannot connect via SSH anymore, but instead get a ssh: connect to host mydomain.de port 22: Connection refused I do not know whether sshd is just not running, or whether something (e.g. iptables) blocks my ssh connection attempts. I looked at the logfiles, none of the files in /var/log contain any mentioning on ssh, and /var/log/auth.log is empty. Before the kernel update, I could log in just fine and used certificates so that I would not need a password everytime I connect from my local machine. What I tried so far: I looked in /etc/rc*.d/ for a link to the /etc/init.d/ssh script and found none. So I am expecting that sshd is not started properly on boot. Since I cannot run any programs in my system, I cannot use update-rc to change this. I tried to make a link manually using ln -s /etc/init.d/ssh /etc/rc6.d/K09sshd and restarted the server - this did not fix the problem. I do not know wether it is at all possible to do it like this and whether it is correct to create it in rc6.d and whether the K09 is correct. I just copied that from apache. I also tried to change my /etc/iptables.rules file to allow everything: # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *mangle :PREROUTING ACCEPT [7468813:1758703692] :INPUT ACCEPT [7468810:1758703548] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] :POSTROUTING ACCEPT [7935933:3682829570] COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *filter :INPUT ACCEPT [7339662:1665166559] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 -A INPUT -j ACCEPT -A FORWARD -j ACCEPT -A OUTPUT -j ACCEPT COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *nat :PREROUTING ACCEPT [101662:5379853] :POSTROUTING ACCEPT [393275:25394346] :OUTPUT ACCEPT [393273:25394250] COMMIT # Completed on Thu Dec 10 18:05:32 2009 I am not sure this is done correctly or has any effect at all. I also did not find any mentioning of iptables in any file in /var/log. So what else can I do? Thank you for your help.
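
    If the recovery console allows mounting the installed system, a chroot makes it possible to check and repair the boot wiring with the distribution's own tools instead of hand-made rc links (rc6.d is the reboot runlevel, so a K09sshd link there has no effect). The device and mount point below are assumptions:

        # From the rescue environment: mount the root filesystem and chroot into it
        mount /dev/sda1 /mnt            # adjust to the real root device
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt /bin/bash

        # Inside the chroot: is sshd wired into the default runlevel, and is its config valid?
        ls -l /etc/rc2.d/ | grep -i ssh       # Debian/Ubuntu boot into runlevel 2
        /usr/sbin/sshd -t                      # prints nothing if sshd_config parses cleanly
        update-rc.d ssh defaults               # recreate the start/stop links properly
        exit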

    Read the article

  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide. So my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact it seems that IIS itself does not work. Problem: Service Unavailable when attempting to browse from the server itself to the URL for a local Web Site called test which I setup in IIS to serve a single static index.htm file. This I did to isolate the problem, and eliminate the client's application from the equation. The site is setup on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it. When doing an iisreset, I get: Attempting start... Restart attempt failed. IIS Admin Service or a service dependent on IIS Admin is not active. It most likely failed to start, which may mean that it's disabled. Despite this message, the services all stay in the Started state. The only relevant System event logs I found are: Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1002 Date: 11/4/2012 Time: 11:04:47 PM User: N/A Computer: ALPHA1 Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool. Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1039 Date: 11/4/2012 Time: 11:13:12 PM User: N/A Computer: ALPHA1 Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number. Data: 0000: 7e 00 07 80 ~.. And one Application event log: Event Type: Error Event Source: Windows SharePoint Services 2.0 Event Category: None Event ID: 1000 Date: 11/4/2012 Time: 11:34:04 PM User: N/A Computer: ALPHA1 Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running. That last log tells me that the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. And perhaps SharePoint grabbed onto that change, and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an Application pool (I clicked the Remove button.) What I have tried/eliminated: No relevant services seem to be disabled: IIS Admin, WWW Publishing, Sharepoint Timer Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site. I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT it tells me The SharePoint admininstration port does not exist. Please use stsadm.exe to create it. And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database. 
    I didn't want to go further down the SharePoint path as it may be completely unrelated to my IIS issue, and I don't even know yet if SharePoint is required for this application to work. The app itself is ASP.Net/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).

    Read the article

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 vm's(and soon to grow to up to 20). Currently I am running a two node proxmox cluster(which is a debian base using kvm for virtualization with a custom web front end to administer). I have two nearly identical boxes with amd phenom II x4's and asus motherboards. Each has 4 500 GB sata2 hdd's, 1 for the os and other data for the proxmox install, and 3 using mdadm+drbd+lvm to share the 1.5 TB's of storage between the two machines. I mount lvm images to kvm for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds(it takes about 2 minutes on the largest vm running win2008 with m$ sql server). I am using proxmox's built-in vzdump utility to take snapshots of the vm's and store those on an external harddrive on the network. I then have jungledisk service (using rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With jungledisk's block level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least a half an hour. The much better solution would of course be something that allows me to instantly take the difference of two time points (say what was written from 6am to 7am), zip it, then send that difference file to the backup server which would instantly transfer to the remote storage on rackspace. I have looked a little into zfs and it's ability to do send/receive. That coupled with a pipe of the data in bzip or something would seem perfect. However, it seems that implementing a nexenta server with zfs would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvol's???) to the proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, zfs, zumastor or other?
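
    As a rough illustration of the zfs send/receive idea mentioned above — which would require the VM images to live on a ZFS pool at both ends, unlike the current mdadm+drbd+lvm stack — incremental snapshots can be shipped compressed over ssh; pool, dataset and host names are invented:

        # One-time full copy of the dataset holding the VM images
        zfs snapshot tank/vmstore@base
        zfs send tank/vmstore@base | bzip2 | \
            ssh backup@offsite "bzcat | zfs receive backuppool/vmstore"

        # Nightly: ship only the blocks changed since the previous snapshot
        # (in practice the increment is taken against the last snapshot received offsite)
        TODAY=$(date +%Y%m%d)
        zfs snapshot tank/vmstore@$TODAY
        zfs send -i @base tank/vmstore@$TODAY | bzip2 | \
            ssh backup@offsite "bzcat | zfs receive backuppool/vmstore"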

    Read the article

  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network, but accesses the Internet just fine. The set-up is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue.

    The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the log-in credentials), but when it tries to open any of the folders, Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available."

    It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs; this is the same with IE, Firefox and Chrome. (There is no proxy.) I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs.

    I set up a website on the MacBook. The Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss. Images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed).

    I tried all of this with the Windows Firewall turned off, and with AVG turned off. That made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.
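
    To test the "TCP level or below" theory, here is a minimal sketch of what I could run from a Vista command prompt; the NAS address 192.168.1.64 is the one from the error above, and the payload sizes are just illustrative starting points:

        rem Does a full-size, non-fragmented packet reach the NAS? (1472 bytes payload + 28 bytes of headers = a standard 1500-byte MTU)
        ping -f -l 1472 192.168.1.64
        rem If that fails, try smaller payloads to find where it starts working
        ping -f -l 1200 192.168.1.64
        rem Show the current TCP auto-tuning setting, which on Vista is known to upset some LAN transfers
        netsh interface tcp show global

    If small pings get through but large ones don't, that would point at an MTU/fragmentation problem between the wireless side and the wired NAS rather than at permissions.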

    Read the article

  • Group traffic shaping with traffic control?

    - by mmcbro
    I'm trying to limit the output bandwidth generated by an application with Linux tc. The application sends me the source port of each request, which I use as a filter to limit each user to a given download speed. I feel my setup could be managed much better if I had a better knowledge of Linux tc.

    At the application level users are categorized as members of a group, and each group has a limited bandwidth. Example:

    Members of group A: 512kbit/s
    Members of group B: 1Mbit/s
    Members of group C: 2Mbit/s

    When a user connects to the application, it retrieves the source port of the user's request and sends me that source port together with the bandwidth at which the user must be limited, depending on the group to which he belongs. With this information I must add the appropriate rules so that the user (the source port, in reality) is limited to the right bandwidth. If the user that connects isn't a member of any group, he should be limited to a default bandwidth.

    I'm currently managing this with a self-made daemon that adds or removes rules when it receives a request from the application. With my limited knowledge of tc I'm not able to limit all the other users (the ones that aren't in any group) to a default speed, and my configuration seems awful to me. Here is the base of my tc qdisc and classes:

        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbps ceil 125mbps

    To classify a user at a given speed I have to add one subclass and then associate one filter with it:

        # a member of group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:11

        # another member of group A
        tc class add dev eth0 parent 1:1 classid 1:12 htb rate 512kbps ceil 512kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:12

        # a member of group B
        tc class add dev eth0 parent 1:1 classid 1:13 htb rate 1000kbps ceil 1000kbps
        # its associated filter to match his source port
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 57200 0xffff flowid 1:13

    I already know that a source port could repeat if it comes from a different IP address; but the application is behind a proxy, so I don't have to manage IP addresses in this situation.

    I would like to know how to handle all the other users (request/source port, whatever you name it) so that each is limited to a given speed: each connection should be able to use at most 100kbit/s, for example, not a shared 100kbit/s. I also would like to know if there is a way to simplify my rules. I don't know if it is possible to use only one class per group and associate multiple filters with the same class, so that users are handled by one class per group rather than one class per user (a sketch of what I mean is below). I appreciate any advice, thanks.
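
    The sketch of the simplification I'm imagining, not a tested config: one HTB class per group instead of per user, several filters pointing at the same classid, and a catch-all default class for everyone else. The rates, port numbers and class ids below are placeholders:

        # root qdisc: anything not matched by a filter falls into class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 125mbit

        # one class per group, shared by all of its members
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 512kbit   # group A
        tc class add dev eth0 parent 1:1 classid 1:11 htb rate 1mbit ceil 1mbit       # group B

        # default class for users that belong to no group
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100kbit ceil 100kbit

        # several source ports can point at the same group class
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 50001 0xffff flowid 1:10
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 61524 0xffff flowid 1:10

    Note that with this layout a group's members share the group's rate, and all unmatched users share the 100kbit/s default class; a true per-connection cap inside the default class would need something finer (hashed filters or an sfq leaf, for instance), which is exactly the part I'm unsure about.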

    Read the article

  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? A D-Link 2640R wifi ADSL modem-router is installed in the living room at about 1.8m height. It works fine, synchronising and getting/serving a stable Internet connection.

    First situation:
    - Laptop 01 is at the other end of the house, in room01, about 15m south of the living room. It gets a stable signal of good to very good quality. No disconnections.
    - Laptop 02 is in room02, opposite room01 (5m to the west), which puts it at almost the same distance and direction from the router, 15m to the north. It gets a stable signal of good to very good quality. No disconnections.

    Second situation:
    - Laptop 01 is moved to room03, north of the living room (actually just 3m behind the wall where the router sits). It gets a stable signal of excellent quality. No disconnections.
    - Laptop 02 is still in room02, but now experiences frequent disconnections. In fact it is almost impossible to get on the Internet, even though the signal level is still very good: either there is no Internet while the wifi icon shows it connected to the access point, or no connection is established at all. This happens every 2 minutes or so, which means virtually no Internet at all, as I get a window of about a minute to load any website or even reach the router's web-based control panel.

    If Laptop 01 is completely shut down, or its wifi adapter is shut down, or it is still running but its wifi MAC address is forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved nearer to the router, into the living room for instance, then no connection problem occurs, even if Laptop 01 is also connected. And if Laptop 01 is moved back to its original location (room01), there is no problem either.

    I'm completely lost and don't know how to address this issue. I tried changing the wifi channel and even tried the auto channel scan, but that didn't solve it. I know the problem probably comes from Laptop 01 being in its new location, or from some sort of interference, as the problem occurs only under the described conditions, but I have no idea how to solve it! I also scanned the neighbourhood for wifi congestion using inSSIDer; there are a few other access points, but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use?

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question; apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc.

    We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's hardware or software RAID). Most only have 100MB used. The servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc.; that is, they do not produce a result which can simply be re-imaged onto a new HD to get the server back from the dead.

    We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licences (the company has negative amounts of money). We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). A rough sketch of the sort of remote dd we have in mind is below.

    Googling returns some applications to do this, e.g. Clonezilla, which looks difficult to install and invasive; Mondo, which only seems to support backup if you are local to the machine; and Amanda, which might be an option but looks like days or weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go down the route of installing, learning and configuring a set of backup packages? Any ideas? This must be a pretty standard problem to which googling doesn't give an obvious answer.
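
    The rough sketch of the remotely initiated, compressed dd image we were imagining, assuming SSH access from a backup host with the USB drive mounted at /mnt/usb; the hostname server01 and the device name /dev/sda are placeholders and would need checking on each box:

        # pull a raw image of the whole first disk over SSH, compressed on the source side
        ssh root@server01 "dd if=/dev/sda bs=1M | gzip -c" > /mnt/usb/server01-sda.img.gz

        # restore later by booting the target into a rescue/live environment with sshd and reversing the pipe
        gzip -dc /mnt/usb/server01-sda.img.gz | ssh root@server01 "dd of=/dev/sda bs=1M"

    We realise that imaging a mounted, running filesystem this way is not crash-consistent, which is part of what we are asking about.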

    Read the article

  • seaudit report detail

    - by user1014130
    I've just started using SELinux in the last 6 months and am getting to grips with it. However, using sealert on a new CentOS 6 server, I'm not getting the level of detail I did with CentOS 5. To illustrate, running

        sealert -a /var/log/audit/audit.log

    on CentOS 5 I get:

        Summary:
        SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t).

        Detailed Description:
        SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.

        Allowing Access:
        Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log: restorecon -v '/var/log/httpd/error_log'. If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385). Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.

        Additional Information:
        Source Context         root:system_r:postfix_postdrop_t
        Target Context         system_u:object_r:httpd_log_t
        Target Objects         /var/log/httpd/error_log [ file ]
        Source                 postdrop
        Source Path            /usr/sbin/postdrop
        Port
        Host
        Source RPM Packages    postfix-2.3.3-2.1.el5_2
        Target RPM Packages
        Policy RPM             selinux-policy-2.4.6-279.el5_5.1
        Selinux Enabled        True
        Policy Type            targeted
        MLS Enabled            True
        Enforcing Mode         Enforcing
        Plugin Name            catchall_file
        Host Name              server109-228-26-144.live-servers.net
        Platform               Linux server109-228-26-144.live-servers.net 2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48 EDT 2010 x86_64 x86_64
        Alert Count            1
        First Seen             Wed Jun 13 11:43:55 2012
        Last Seen              Wed Jun 13 11:43:55 2012

    but on CentOS 6 I just get:

        Summary:
        SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t).

        Detailed Description:
        SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.

        Allowing Access:
        Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log: restorecon -v '/var/log/httpd/error_log'. If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385). Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.

    I'm running exactly the same command. Does anyone have any idea why I'm not getting the "Additional Information" that I do with CentOS 5? Thanks in advance. Dylan
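
    In the meantime, a hedged way to recover the same context/path detail on the CentOS 6 box straight from the audit log, assuming the standard audit and policycoreutils-python tools are installed:

        # show the raw AVC records, which contain the source/target contexts and paths
        ausearch -m avc -ts recent
        # or explain the denials involving a particular command
        grep postdrop /var/log/audit/audit.log | audit2why

    This doesn't explain why sealert's "Additional Information" block disappeared, but it does recover the fields I'm missing.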

    Read the article

< Previous Page | 606 607 608 609 610 611 612 613 614 615 616 617  | Next Page >