Search Results

Search found 18993 results on 760 pages for 'context info'.

  • red5 Install on Ubuntu 10.04? Problems with libslf4j

    - by mrgordon
    I've been trying for many days to get Red5 to install on Ubuntu 10.04. I finally managed to get red5.sh to stop hanging a few seconds in, but now I'm getting the following error:

        Setting default logging context: default
        Exception in thread "main" java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.red5.server.Bootstrap.bootStrap(Bootstrap.java:135)
            at org.red5.server.Bootstrap.main(Bootstrap.java:50)
        Caused by: java.lang.NoSuchMethodError: org.slf4j.impl.StaticLoggerBinder.getContextSelector()Lch/qos/logback/classic/selector/ContextSelector;
            at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:121)
            at org.red5.logging.Red5LoggerFactory.getLogger(Red5LoggerFactory.java:108)
            at org.red5.server.Launcher.launch(Launcher.java:51)
            ... 6 more

    I suspected that this had to do with slf4j not being installed or not being on my classpath. I installed logback and libslf4j-java from aptitude and I see the related files in my red5 lib directories, for example:

        /usr/share/red5/lib/slf4j-api-1.6.1.jar
        /usr/share/red5/lib/log4j-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/logback-classic-0.9.26.jar
        /usr/share/red5/lib/logback-core-0.9.26.jar
        /usr/share/red5/lib/jcl-over-slf4j-1.6.1.jar
        /usr/share/red5/lib/jul-to-slf4j-1.6.1.jar

    And I set my classpath to /usr/share/red5/lib/. Any ideas on where to proceed from here? There seem to be a lot of people having trouble getting 10.04 and Red5 0.9 working together. I've tried red5-0.9.1.tar.gz and red5_0.9.0-RC1_all.deb. The libraries above should be all that are needed according to Red5's documentation, and I got the latest version of each.
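
    A NoSuchMethodError on StaticLoggerBinder.getContextSelector() usually means the SLF4J binding that won the classpath race is not the logback build Red5 expects; a second binding jar can silently shadow logback-classic. A hedged way to list every jar in the Red5 lib directory that bundles a binding (more than one match, or a match outside logback-classic, would explain the crash):

        # scan for competing SLF4J bindings (paths taken from the question)
        for jar in /usr/share/red5/lib/*.jar; do
          if unzip -l "$jar" 2>/dev/null | grep -q 'org/slf4j/impl/StaticLoggerBinder.class'; then
            echo "$jar"
          fi
        done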

    Read the article

  • IPMI not functioning with Network Bonding

    - by muhammed sameer
    Hey, I am having problems with running IPMI on my servers that have network bonding enabled.

        Platform: CentOS release 5.3 (Final)
        Kernel: 2.6.18-92.el5 64bit
        Dell PowerEdge 1950
        Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

    I have bonded the interfaces eth0 and eth1 as active-passive, with eth0 as the active interface; below is the conf description from /proc:

        Bonding Mode: fault-tolerance (active-backup)
        Primary Slave: eth0
        Currently Active Slave: eth0
        MII Status: up
        MII Polling Interval (ms): 30
        Up Delay (ms): 0
        Down Delay (ms): 0

        Slave Interface: eth0
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cd

        Slave Interface: eth1
        MII Status: up
        Link Failure Count: 0
        Permanent HW addr: 00:22:19:56:b9:cf

    My IPMI device is as follows:

        IPMI Device Information
        Interface Type: KCS (Keyboard Control Style)
        Specification Version: 2.0
        I2C Slave Address: 0x10
        NV Storage Device: Not Present
        Base Address: 0x0000000000000CA8 (I/O)
        Register Spacing: 32-bit Boundaries

    I have used OpenIPMI as well as FreeIPMI to control the chassis via the IPMI card, but on servers which have bonding enabled, the command times out. Below is the full run of the command with debug info:

        ipmi_lan_send_cmd:opened=[0], open=[4482848]
        IPMI LAN host 70.87.28.115 port 623
        Sending IPMI/RMCP presence ping packet
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        ipmi_lan_send_cmd:opened=[1], open=[4482848]
        No response from remote controller
        Get Auth Capabilities command failed
        Error: Unable to establish LAN session
        Failed to open LAN interface
        Unable to get Chassis Power Status

    On the other hand, I configured IPMI on a box with the same specs as mentioned above without bonding, and IPMI works perfectly. Has anyone faced this problem with IPMI + bonding? I would be thankful if someone helps circumvent this issue. Muhammed Sameer
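
    On the PowerEdge 1950 the BMC typically shares the first NIC with the host, and an active-backup bond that rewrites slave MAC addresses can stop the shared BMC from seeing its own traffic; that is a hedged hypothesis, not a confirmed diagnosis. A first diagnostic step (assuming ipmitool and the OpenIPMI kernel modules are present on the affected host) is to compare the BMC's LAN settings, read locally, against what the bond currently exposes:

        # dump channel 1 LAN settings straight from the BMC, bypassing the network;
        # check whether the MAC/IP the BMC answers on still matches the bond's
        ipmitool lan print 1
        # the bond's current MAC, for comparison
        ip link show bond0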

    Read the article

  • Need help with automating a CMD Java tool which queries Alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from the "Alexa Web Information Service". I downloaded the Java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process. The Java tool runs from the CMD with the following:

        C:\>java UrlInfo (key1) (key2) (URL) (Response Group)

    UrlInfo is constant; key1 is constant; key2 is constant; URL is variable (I guess I need to use the "<" sign to read from a file); Response Group is variable (14 total, and I need to run each Response Group on each of the URLs once). The app returns data in clear text formatted as XML after each query. Here is an example:

        C:\>java UrlInfo (key1) (key2) www.url.com Rank
        Response:
        <?xml version="1.0"?>
        <aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
          <aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11">
            <aws:OperationRequest>
              <aws:RequestId>ec2b6-e8ae-b392</aws:RequestId>
            </aws:OperationRequest>
            <aws:UrlInfoResult>
              <aws:Alexa>
                <aws:TrafficData>
                  <aws:DataUrl type="canonical">url.com/</aws:DataUrl>
                  <aws:Rank>472906</aws:Rank>
                </aws:TrafficData>
              </aws:Alexa>
            </aws:UrlInfoResult>
            <aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
              <aws:StatusCode>Success</aws:StatusCode>
            </aws:ResponseStatus>
          </aws:Response>
        </aws:UrlInfoResponse>

    Any help would be really appreciated. Thanks and regards, Eli.C
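
    A hedged sketch of the requested automation as a Windows batch file: it assumes the 600 URLs sit one per line in urls.txt, that key1/key2 are pasted in literally, and that the group list is replaced with the real 14 AWIS response-group names ("Rank" comes from the question; the other names below are placeholders):

        @echo off
        rem urls.txt = one URL per line; results are appended to results.xml
        for /f "usebackq delims=" %%u in ("urls.txt") do (
            for %%g in (Rank Group2 Group3 Group14) do (
                echo === %%u %%g === >> results.xml
                java UrlInfo key1 key2 %%u %%g >> results.xml
            )
        )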

    Read the article

  • Install MatroskaProp on Windows 7 x64

    - by Neophytos
    To see more information in Windows Explorer property pages and menus about Matroska Video (.mkv) files, similar to what one can see when selecting native Windows media (.avi, .asf, .wmv or even just plain old .mpg) files, Matroska links (from http://www.matroska.org/downloads/windows.html) to a download of the MatroskaProp shell extension (http://www.jory.info/serendipity/archives/14-MatroskaProp-2.8-Released.html). It used to work for me under Windows XP 32-bit. Now I have Windows 7 x64, and I downloaded, installed and ran it. The configuration and settings page is fine, but it does not seem to actually register any shell extension: nothing is added to Explorer windows, menus or property pages when selecting .mkv or .mks files. I tried calling the register hook manually using regsvr32; that again invoked the configuration window and let me set all options, and when confirming it even said the registration succeeded, but it seems to have had no effect. In the registry I cannot find any traces of the shell extension being installed. Can this extension be made to work under Windows 7 or x64 systems? Are there known problems with installing this or other old shell extensions on x64, or on Windows 7?
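
    One likely culprit, stated as an assumption: MatroskaProp 2.8 predates x64, and 64-bit Explorer cannot load 32-bit in-process shell extensions at all. Registering with the 32-bit regsvr32 can report success and still change nothing visible, because the registration lands in the WOW6432Node half of the registry that 64-bit Explorer never reads:

        :: the 64-bit regsvr32 refuses a 32-bit DLL outright; the 32-bit one
        :: (under SysWOW64, confusingly) "succeeds" but only registers the
        :: extension for 32-bit hosts, which Explorer on x64 is not
        C:\Windows\SysWOW64\regsvr32.exe MatroskaProp.dll

        :: a hedged way to confirm where the registration went:
        reg query "HKCR\Wow6432Node\CLSID" /f MatroskaProp /s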

    Read the article

  • CoreStorage Encryption Error on Mac Lion

    - by Michael
    I am trying to encrypt an external drive using diskutil CoreStorage on Mac Lion 10.7.4. I thought the only requirements were that the drive have a GUID partition scheme and a Journaled HFS+ file system. I think my drive is configured accordingly, but when I type the following command I get an error message back:

        Michaels-MacBook-Pro:~ Michael$ diskutil cs convert disk2 -passphrase TestPassword
        Error converting disk to CoreStorage: The given file system is not supported on Core Storage (-69756)

    Here are the details reported for the drive in question:

        Michaels-MacBook-Pro:~ Michael$ diskutil list disk2
        /dev/disk2
           #:                   TYPE NAME    SIZE       IDENTIFIER
           0:  GUID_partition_scheme         *500.1 GB  disk2
           1:                    EFI         209.7 MB   disk2s1
           2:              Apple_HFS Test1   499.8 GB   disk2s2

        Michaels-MacBook-Pro:~ Michael$ diskutil info disk2s2
           Device Identifier:        disk2s2
           Device Node:              /dev/disk2s2
           Part of Whole:            disk2
           Device / Media Name:      Test1
           Volume Name:              Test1
           Escaped with Unicode:     Test1
           Mounted:                  Yes
           Mount Point:              /Volumes/Test1
           Escaped with Unicode:     /Volumes/Test1
           File System Personality:  Journaled HFS+
           Type (Bundle):            hfs
           Name (User Visible):      Mac OS Extended (Journaled)
           Journal:                  Journal size 40960 KB at offset 0xe8e000
           Owners:                   Disabled
           Partition Type:           Apple_HFS
           OS Can Be Installed:      Yes
           Media Type:               Generic
           Protocol:                 FireWire
           SMART Status:             Not Supported
           Volume UUID:              1024D0B8-1C45-3057-B040-AE5C3841DABF
           Total Size:               499.8 GB (499763888128 Bytes) (exactly 976101344 512-Byte-Blocks)
           Volume Free Space:        499.3 GB (499315826688 Bytes) (exactly 975226224 512-Byte-Blocks)
           Device Block Size:        512 Bytes
           Read-Only Media:          No
           Read-Only Volume:         No
           Ejectable:                Yes
           Whole:                    No
           Internal:                 No

    I'm a little concerned that the "Partition Type: Apple_HFS" entry is causing the problem, but I don't know how to change that. I only seem to be able to control the "File System Personality: Journaled HFS+" setting in Disk Utility. Can anyone shed some light on this for me?
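
    A hedged observation from the pasted output: the convert command targets disk2, the whole GPT disk, while the Journaled HFS+ file system actually lives on the slice disk2s2, and "cs convert" wants a mountable HFS+ volume rather than a partition-scheme device. Worth trying before anything else:

        # point the conversion at the HFS+ volume, not the whole disk
        diskutil cs convert disk2s2 -passphrase TestPassword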

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance; I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The set-up is: Lubuntu Linux; two PostgreSQL databases, test_master and test_slave; each DB has a table named test, containing 2 columns: id (PK) and test (int); I use pgAdmin3. I have already added them to Bucardo's list of databases and added all tables:

        Table: public.test  DB: test_slave   PK: id (int4)
        Table: public.test  DB: test_master  PK: id (int4)

    As you see, because the DBs are identical, even the schema names are identical. So I do:

        bucardo_ctl add herd sample_herd public.test

    OK, so it got added to the herd, but this command gets confused about which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
        CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as source, how do I differentiate?
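
    A hedged guess at what happened: the herd ended up holding the copy of public.test that lives in test_slave, so the sync's source and target collapse into the same database. Bucardo's documentation describes a db= qualifier when adding a table, which pins the table to one database before it goes into a herd; whether this exact spelling is accepted by the bucardo_ctl version at hand is an assumption to verify:

        # pin the source-side table to test_master explicitly
        # (db= and herd= as documented for 'add table'; check your version)
        bucardo_ctl add table public.test db=test_master herd=sample_herd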

    Read the article

  • [Zend Framework - Ubuntu 10.04 - LAMP - First Project] I get a 500 error on http://localhost/zftutorial/public

    - by meyosef
    Hi, I'm new to Zend and LAMP. My software: Zend Framework, Ubuntu 10.04, LAMP. I made my first Zend project with Zend tool (according to this tutorial: http://akrabat.com/wp-content/uploads/Getting-Started-with-Zend-Framework.pdf). But when I go to http://localhost/zftutorial/public I get a 500 error. My "dir -l" of zftutorial:

        drwxr-xr-x 6 root root 4096 2010-06-01 23:54 application
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 docs
        drwxr-xr-x 3 root root 4096 2010-06-02 00:23 library
        drwxr-xr-x 3 root root 4096 2010-06-02 00:00 nbproject
        drwxr-xr-x 2 root root 4096 2010-06-01 23:54 public
        drwxr-xr-x 4 root root 4096 2010-06-01 23:54 tests

    My /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Thanks,
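
    With AllowOverride All already in place, a 500 on a fresh Zend Framework skeleton is very often the .htaccess in public/ using mod_rewrite directives while the module is disabled; Apache's error log settles it either way. Hedged first checks on Ubuntu:

        # the skeleton's public/.htaccess needs mod_rewrite; enable and reload
        sudo a2enmod rewrite
        sudo /etc/init.d/apache2 restart
        # then read the actual reason for the 500 instead of guessing
        tail -n 20 /var/log/apache2/error.log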

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph (not reproduced here). As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks. Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
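
    A hedged experiment to separate "reclaimable but not yet reclaimed" memory from "pinned" memory: drop the clean caches and see how much of Inactive survives. One caveat, stated as an assumption worth testing: tmpfs files live in the page cache and are never freed this way; only deleting the files in /tmp releases them, which could explain why /tmp's growth appears related to the inactive number.

        # flush dirty pages first, then drop clean page cache, dentries, inodes
        sync
        echo 3 > /proc/sys/vm/drop_caches
        grep -E 'MemFree|Cached|Inactive' /proc/meminfo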

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
          perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
            and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
            and print "$1 $dns\n"' | \
          perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
          xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using the Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
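
    Two hedged observations about the script as pasted (they may be transcription slips, but either would make the pipeline silently produce nothing): /user/bin/ec2-describe-instances is presumably meant to be /usr/bin/ec2-describe-instances, and the stray $ in front of /etc/cron.route53/cli53/cli53.py would make tcsh mangle the xargs command. Also, Route53 changes go over HTTPS to the API endpoint, not over port 53, so the security group needs outbound 443 rather than 53. The corrected head and tail of the pipeline would read:

        # hedged corrections, assuming the originals were typos:
        /usr/bin/ec2-describe-instances | \
          ... \
          xargs -n 4 /etc/cron.route53/cli53/cli53.py \
            rrcreate -x 60 mydomain.com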

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database, and a -services.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area. Is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
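
    On the final question: in JBoss 4.2 a lookup against node-local JNDI (port 1099) only sees what that node bound, while an HA singleton binds the factory on one node at a time. The cluster-wide view is HA-JNDI, which by default listens on port 1100 and federates all nodes' trees. A hedged sketch of client code; the hostnames and the jnp defaults are assumptions to check against the actual server config:

        import java.util.Properties;
        import javax.jms.QueueConnectionFactory;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class HaJndiLookup {
            public static QueueConnectionFactory lookupFactory() throws NamingException {
                // HA-JNDI (default port 1100) federates the JNDI trees of all
                // nodes, so the lookup finds the factory wherever the singleton
                // currently lives; node-local JNDI (1099) only sees local binds
                Properties env = new Properties();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
                env.put(Context.PROVIDER_URL, "node1:1100,node2:1100"); // hostnames assumed
                Context ctx = new InitialContext(env);
                return (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory");
            }
        }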

    Read the article

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide in the Ubuntu docs: https://help.ubuntu.com/community/AWStats. I set it up according to the instructions, and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to run http://server-ip-address:8080/awstats/awstats.pl, I get:

        Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats.
        Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong.
        Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/saad/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/saad/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            Alias /awstatsclasses "/usr/share/awstats/lib/"
            Alias /awstats-icon "/usr/share/awstats/icon/"
            Alias /awstatscss "/usr/share/doc/awstats/examples/css"
            ScriptAlias /awstats/ /usr/lib/cgi-bin/
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

        LogFile="/var/log/apache2/access.log"
        SiteDomain="server-name.noip.org"
        HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
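
    A hedged pointer: when awstats.pl is called without a config= parameter, it first looks for a config named after the requested hostname (awstats.<host>.conf), so browsing by IP address can leave the edited /etc/awstats/awstats.conf out of the picture. Two things worth trying:

        # 1) give the site its own config file, named the way AWStats searches
        sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.server-name.noip.org.conf

        # 2) or tell the CGI explicitly which config to load:
        #    http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org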

    Read the article

  • WebLogic plug-in for Apache HTTP Server: Location directive question

    - by user39510
    We are using WebLogic Portal and Apache 2.x HTTP Server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to the one managed server, and I can't get it to work. The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username>, where <username> is a legitimate user, for example /MyWebApp?portalusername=joesmith. The plug-in is load balancing all the other applications as expected; in fact, every now and then a request for this particular application gets sent to the second managed server, where it's not deployed. I tried various things in the Apache httpd.conf like the following, but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf; it's a standard out-of-the-box httpd.conf file with the WebLogic plug-in configuration.

        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

        <Location /?>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011, myserver2:7012
        </Location>
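
    Two hedged points about that snippet: <Location> matches only the URL path, never the query string, so /MyWebApp?portalusername=... is matched by <Location /MyWebApp> alone; and inside <Location>, ? is a wildcard matching a single character, so <Location /?> is not a catch-all. Because sections appearing later in the config merge over earlier ones, a sketch of the intended setup would put the catch-all first:

        # catch-all first: everything load-balances across both managed servers
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

        # more specific section last, so it wins for this app: pin /MyWebApp
        # (the query string is irrelevant to the match) to its one server
        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>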

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) through DHCP; these get written into /etc/resolv.conf. I access the internet using the work network, and these 2 nameservers end up answering any public domain name lookups. I also have a private VPN that I end up connecting to. The nameserver (10.111.1.1) and domain (private.site) rarely change for this network, but currently they're pushed by the OpenVPN client into NetworkManager and also get merged with the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the 2nd nameserver from my work network was pushed out because of the 3-entry limit. It is fine for now, but would be a problem if that nameserver went down for maintenance or something. So I found out that dnsmasq could help me here, and hence I set up dnsmasq just as a local DNS resolver, without any DHCP support. So right now this is my /etc/dnsmasq.conf:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf since NetworkManager seems to be updating this list correctly (for a max of 3 nameservers). I'm able to resolve the host names in both networks correctly. So these are the questions I have: Is there a way I can make either NetworkManager or dhclient write out the list of nameservers somewhere else, which I can make dnsmasq use as its resolv-file? How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers, the one on work.site as well as the one on private.site. It would be good if I could limit this to work.site only.
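
    A hedged sketch answering the second question with plain dnsmasq options: drop resolv-file entirely (no-resolv) and declare the upstreams statically, so the private.site server is consulted only for its own domain and everything else defaults to the two work servers. The trade-off is that DHCP changes to the work nameservers then have to be re-entered by hand:

        # /etc/dnsmasq.conf -- static upstreams instead of tracking resolv.conf
        no-resolv
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        server=10.100.1.1          # default upstreams: all other queries go here
        server=10.100.1.2
        listen-address=127.0.0.1
        bind-interfaces
        log-queries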

    Read the article

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads off my website. Users sometimes need to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using RAID 10 (I'm currently using RAID 50) or if this is a problem with my nginx configuration. I am also on a 1 Gbit line, but downloads sometimes go as low as 10 kB/s. My server: dual Xeon 5620 CPUs, 12x2 TB drives, 8 GB RAM. This is my nginx.conf:

        #user nobody;
        worker_processes 12;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            #sendfile on;
            #tcp_nopush on;
            reset_timedout_connection on;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
            limit_zone one $binary_remote_addr 10m;
            perl_modules perl;
            perl_require download.pm;
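
    Hedged observations on the config as shown (it is cut off above, so this only covers what is visible): sendfile is commented out, so each worker copies file data through userspace buffers, and for files far larger than the 8 GB of RAM, direct I/O with larger output buffers is a common alternative. A tuning sketch to benchmark one change at a time, not a guaranteed fix:

        # inside http {}
        sendfile        on;      # kernel-side copy; currently commented out
        tcp_nopush      on;      # fill packets when used together with sendfile
        # for very large files that blow out the page cache, the opposite
        # approach can win instead of sendfile:
        #   directio       4m;
        #   output_buffers 1 512k;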

    Read the article

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary:

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally. Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug 7 22:57:09 DBClientCursor::init call() failed
        Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became a secondary, and host-b was marked as dead, but actually it is still alive:

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                {
                    "_id" : 0,
                    "name" : "host-a:27017",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY",
                    "optime" : { "t" : 1312775799000, "i" : 1 },
                    "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                    "self" : true
                },
                {
                    "_id" : 1,
                    "name" : "host-b",
                    "health" : 0,
                    "state" : 6,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : { "t" : 0, "i" : 0 },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                    "errmsg" : "still initializing"
                }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfig on the secondary node, but the problem is that there is no primary node:

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?
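
    A hedged reading of the failure: rs.initiate() with no argument names each member by whatever hostname the node picks for itself, and every member, host-b and host-c included, must be able to resolve and reach every name in the config; when one can't, the set can lose its primary exactly as shown. Re-initiating with an explicit config, using names all three machines can resolve, removes the guesswork:

        // run once on a clean host-a (mongo shell) -- explicit member list
        rs.initiate({
            _id: "myset",
            members: [
                { _id: 0, host: "host-a:27017" },
                { _id: 1, host: "host-b:27017" },
                { _id: 2, host: "host-c:27017", arbiterOnly: true }
            ]
        })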

    Read the article

  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found". But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK. EDIT: VPN seems to be the cause. It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc.) or the fully qualified Active Directory name (e.g. \\server.domainname.local), but not if I use the server name on its own or a mapped network drive originally created from the "short" name. Oddly though, my machine has no issue resolving the server's DNS name: I can ping the machine name OK and it immediately comes back with the IP; however, nslookup fails. It seems to be a problem with how Windows looks up machine names when connected to VPNs. When I'm connected to a VPN, Windows seems to use the DNS associated with the VPN and not the one on the domain controller. This behavior seems incorrect to me, as surely it would mean connecting to any VPN breaks the ability to look up local machine names for servers, printers, etc. So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.
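
    A hedged workaround, worth verifying on one machine before rolling out: short names are resolved by appending the DNS suffix search list, and that list can be pinned machine-wide so it keeps containing the AD domain even while the VPN adapter's DNS settings are preferred. One way is the SearchList value under the TCP/IP parameters key; domainname.local below stands in for the real AD domain:

        rem machine-wide DNS suffix search list (run from an elevated prompt);
        rem Windows appends these suffixes when resolving bare names like SERVER
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
            /v SearchList /t REG_SZ /d "domainname.local" /f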

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup: I run a Debian Squeeze host that I use to build a multi-language project (Python, Java, PHP...) and generate custom packages (Debian and RPM) automatically (through Jenkins). The problem: the target distributions of those Debian packages are Etch, Lenny and Squeeze, but our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e. Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We, for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info: some packages we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need a s**tload of additional dependencies. However, we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze, so we need to rebuild it on Squeeze and then upload it to our repo. The question: I need a simple way of knowing if there are any updates to those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, gets some parameters from a file, and generates an update report. Let's say the file looks like this:

        jetty:squeeze
        cyrus-imapd-2.4:wheezy

    The script should run as a normal user, so as not to mess up the system apt configuration, and issue the appropriate commands to generate that report. Does Debian have some built-in apt-* commands/options dedicated to that kind of problem that I could use to write this script? If not, can someone think of another clean solution to achieve what I need?
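
    Not apt itself, but Debian's devscripts package ships rmadison, which asks the archive for the current version of a package per suite without touching local apt state, which fits the run-as-normal-user requirement. A hedged sketch of the weekly report built around the file format above:

        #!/bin/sh
        # read "package:suite" pairs and report the archive's current versions;
        # comparing against the versions carried in our own repo is left out
        while IFS=: read -r pkg suite; do
            echo "== $pkg ($suite) =="
            rmadison -s "$suite" "$pkg"   # queries the Debian archive remotely
        done < watched-packages.txt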

    Read the article

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with timezones within MySQL. Long story short, my application is worldwide, and each database has its own timezone set within the application (not the server) in the form of "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously MySQL won't accept these per connection out of the box, and I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger, but UNIX_TIMESTAMP() follows the global timezone for the server, which obviously bothers me a lot. So I went searching for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. Although to my amusement, when I ran the command I got the following:

        bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to "map" the timezone names to UTC offsets just so I can use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue! P.S.: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf.
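
    Two hedged pointers: the tool ships with the MySQL server itself (dpkg can confirm which installed package provides it and where), and all it does is print INSERT statements that fill the mysql.time_zone* system tables from /usr/share/zoneinfo. Once those tables are loaded, named zones like 'Europe/Berlin' work per session and inside triggers via CONVERT_TZ, with no manual offset mapping:

        # find which installed package ships the tool and at what path
        dpkg -S mysql_tzinfo_to_sql

        # generate the SQL (plain INSERTs into mysql.time_zone,
        # time_zone_name, time_zone_transition, ...) and load it:
        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

        # afterwards, e.g. inside a trigger:
        #   CONVERT_TZ(NOW(), @@session.time_zone, 'Europe/Berlin')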

    Read the article

  • ./rtnet start rteth0-mac: unknown interface: No such device ioctl: No such device

    - by Anisha Kaul
    I have installed RTnet over Xenomai. RTnet compiled well, and I also tested loopback on a single machine and was able to ping. However, I noticed ./rtnet start showing the following output. What should I make of it when all it says is "no such device"? What more info should I provide here for you to help me get rid of this error?

        linux-y3pi:/usr/local/rtnet/sbin # ./rtnet start
        rteth0: unknown interface: No such device
        rteth0-mac: unknown interface: No such device
        ioctl: No such device
        ioctl: No such device
        ioctl: No such device
        ioctl: No such device
        ioctl (add): No such device
        vnic0: unknown interface: No such device
        SIOCSIFADDR: No such device
        vnic0: unknown interface: No such device
        SIOCSIFNETMASK: No such device
        Waiting for all slaves...ioctl: No such device
        ioctl: No such device

    lsmod output:

        linux-y3pi:/usr/local/rtnet/sbin # lsmod
        Module                  Size  Used by
        tdma                   18281  0
        rtmac                   9274  1 tdma
        rtcfg                  49485  0
        rtcap                   7216  0
        rt_loopback             1563  2
        rtpacket                5517  0
        rtudp                  10655  0
        rt_8139too             11374  0
        rtipv4                 22842  2 rtcfg,rtudp
        rtnet                  42130  9 tdma,rtmac,rtcfg,rtcap,rt_loopback,rtpacket,rtudp,rt_8139too,rtipv4
        ip6t_LOG                8480  6
        xt_tcpudp               3540  2
        xt_pkttype              1176  3
        ipt_LOG                 8201  6
        xt_limit                2159  12
        snd_pcm_oss            44878  0
        snd_mixer_oss          15151  1 snd_pcm_oss
        snd_seq                55731  0
        snd_seq_device          6698  1 snd_seq
        edd                     8407  0
        ip6t_REJECT             4306  3
        nf_conntrack_ipv6       8186  4
        nf_defrag_ipv6         10128  1 nf_conntrack_ipv6
        ip6table_raw            1451  1
        xt_NOTRACK              1112  4
        ipt_REJECT              2397  3
        xt_state                1314  8
        iptable_raw             1478  1
        iptable_filter          1706  1
        ip6table_mangle         1756  0
        nf_conntrack_netbios_ns 1678  0
        nf_conntrack_ipv4       8957  4
        nf_conntrack           80411  5 nf_conntrack_ipv6,xt_NOTRACK,xt_state,nf_conntrack_netbios_ns,nf_conntrack_ipv4
        nf_defrag_ipv4          1561  1 nf_conntrack_ipv4
        ip_tables              18872  2 iptable_raw,iptable_filter
        ip6table_filter         1679  1
        ip6_tables             19066  4 ip6t_LOG,ip6table_raw,ip6table_mangle,ip6table_filter
        x_tables               24094  16 ip6t_LOG,xt_tcpudp,xt_pkttype,ipt_LOG,xt_limit,ip6t_REJECT,ip6table_raw,xt_NOTRACK,ipt_REJECT,xt_state,iptable_raw,iptable_filter,ip6table_mangle,ip_tables,ip6table_filter,ip6_tables
        fuse                   69279  3
        loop                   17417  0
        dm_mod                 71671  0
        snd_hda_codec_via      57768  1
        snd_hda_intel          24871  2
        snd_hda_codec          95006  2 snd_hda_codec_via,snd_hda_intel
        snd_hwdep               6540  1 snd_hda_codec
        snd_pcm                90716  3 snd_pcm_oss,snd_hda_intel,snd_hda_codec
        snd_timer              22050  2 snd_seq,snd_pcm
        snd                    71410  14 snd_pcm_oss,snd_mixer_oss,snd_seq,snd_seq_device,snd_hda_codec_via,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_timer
        soundcore               7854  1 snd
        iTCO_wdt               11716  0
        iTCO_vendor_support     2942  1 iTCO_wdt
        snd_page_alloc          8324  2 snd_hda_intel,snd_pcm
        sr_mod                 13186  0
        cdrom                  37628  1 sr_mod
        i2c_i801                9677  0
        pcspkr                  1950  0
        sg                     28847  0
        serio_raw               4534  0
        ext4                  361361  2
        jbd2                   82943  1 ext4
        crc16                   1699  1 ext4
        i915                  500199  2
        drm_kms_helper         33537  1 i915
        drm                   211193  3 i915,drm_kms_helper
        sd_mod                 33977  5
        i2c_algo_bit            5625  1 i915
        intel_agp              11529  1 i915
        intel_gtt              16397  3 i915,intel_agp
        ata_generic             3787  0
        ata_piix               22875  4
        ahci                   20097  0
        libahci                22089  1 ahci
        libata                194812  4 ata_generic,ata_piix,ahci,libahci
        scsi_mod              204709  4 sr_mod,sg,sd_mod,libata
        linux-y3pi:/usr/local/rtnet/sbin #
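
    A hedged reading of the output: the rt_8139too driver is loaded but rteth0 was never created, which usually means the real-time driver found no NIC it could claim, either because the hardware isn't an RTL8139-family chip or because the stock Linux driver is still bound to it. Quick checks:

        # is the NIC actually an RTL8139-family device that rt_8139too can claim?
        lspci | grep -i ethernet

        # the non-real-time driver must not own the card; rmmod/blacklist if loaded
        lsmod | grep -w 8139too

        # probe messages from rt_8139too say why no device was registered
        dmesg | tail -n 40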

    Read the article

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following: 1) create a local Windows group on the database server and add the named individuals to it; 2) create a Windows login in SQL Server mapped to the local Windows group; 3) map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name. When I try to do step 3, I get the following error:

        Msg 15353, Level 16, State 1, Line 1
        An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys.

    I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me: why does this restriction exist? It seems very arbitrary. More importantly, can I accomplish my requirement some other way? Other info that might be pertinent: the server is fully up to date with service packs and hotfixes; all objects in the database are owned by the "dbo" schema, and it's not feasible to change that; the database is running in compatibility level 80, and it's not feasible to change that to 90 yet. I am free to make any other changes (within reason, depending on what they are).
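
    A hedged alternative that may satisfy the actual requirement: instead of making the group own the database or the dbo schema (which the error forbids), create a database user for the group login and grant rights through a role. Unqualified object names should still resolve, because SQL Server falls back to the dbo schema when a name is not found in the user's default schema, and everything here is owned by dbo anyway:

        USE MyDatabase;   -- hypothetical names throughout
        CREATE USER [SERVERNAME\DbUsersGroup] FOR LOGIN [SERVERNAME\DbUsersGroup];
        -- db_owner if they really need everything; db_datareader/db_datawriter
        -- are the narrower choice
        EXEC sp_addrolemember 'db_owner', 'SERVERNAME\DbUsersGroup';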

    Read the article

  • Xbox 360 formatting for MP4 h.264?

    - by Kayle
    For the life of me, I can't get the Xbox 360 to recognize anything other than a WMV. It's fully updated with Xbox Live and I even added all the video apps (Zune, Netflix, etc.) just to see if it would force some sort of codec update. I'm using HandBrake at the moment to try to convert an MKV with subtitles. HandBrake burns the subtitles into the video for me, which is important. I can't figure out what is missing, though, as the Xbox refuses to acknowledge my MP4s. Here's the VLC codec info output (screenshot not reproduced here). As far as I can tell, this is EXACTLY what the Xbox supports. I don't want to convert every video twice by throwing it through Windows Movie Maker just to get it to WMV. None of the converters mentioned above output to WMV, and I don't like WMV since it's not as universal as MP4. I've tried changing the container name manually to M4V, MOV, and AVI to see if I can trick the Xbox... no beans. The video displays perfectly in WMP, of course. If I convert it to WMV, suddenly it works fine. But I don't find this acceptable, as I know the Xbox is supposed to support other file types. If anyone knows why my Xbox won't play anything but WMVs, I would be very appreciative to know why! It's driving me nuts. The only other information I can think to include is that it's about 3 years old and of the cheapest variety (Xbox Arcade). I've added a hard drive and Xbox Live, and have thoroughly updated it. Maybe there's a specific video update I'm unaware of?
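
    For reference, Microsoft's published MP4 limits for the 360 are roughly H.264 up to High Profile level 4.1 with AAC-LC stereo audio; files past those limits (5.1 AAC, 10-bit video, a too-high level) tend to fail silently. A hedged re-encode sketch that burns the subtitles in and stays inside those limits (an ffmpeg build with libass support is assumed):

        # burn subs from the MKV and target 360-safe H.264/AAC in an .mp4
        ffmpeg -i input.mkv -vf "subtitles=input.mkv" \
               -c:v libx264 -profile:v high -level 4.1 -pix_fmt yuv420p \
               -c:a aac -ac 2 -b:a 160k \
               output.mp4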

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, differently, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game-over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them, which works, does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
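
    Since the error is specifically about extended attributes, a hedged pair of experiments: inspect what attributes (and ACLs) the restore left on one of the failing objects, and try a copy that skips xattrs entirely; the BSD cp shipped with OS X has -X for exactly that:

        # what's attached to a failing file? '@' lines = xattrs, '+' = ACLs
        ls -le@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca

        # copy without extended attributes / resource forks
        sudo cp -RX .git /eraseme/blah/.git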

    Read the article

  • getting "No LoginModules configured" for JAAS login under WebSphere security domain

    - by user1739040
    I have a JAX-RPC web service running on WebSphere V7. It requires a UsernameToken for security. I have a custom login module (MyLoginModule) which extracts the username and password, and that module is defined as a JAAS application login in the WebSphere admin console. Using IBM RAD 8.0, I have bound the token consumer to the login module using the JAAS config name of the module. This all works fine and happy on my development server. Now I realize that, for deployment to another server, I am required to move the JAAS login from global security to a security domain. When I do that, it breaks my web service. I get this SOAP fault message:

        com.ibm.wsspi.wssecurity.SoapSecurityException: WSEC6520E: Construction of the login context failed. The exception is : javax.security.auth.login.LoginException: No LoginModules configured for MyLoginModule

    According to the IBM docs: "The JAAS application logins, the JAAS system logins, and the JAAS J2C authentication data aliases can all be configured at the domain level. By default, all of the applications in the system have access to the JAAS logins configured at the global level. The security runtime first checks for the JAAS logins at the domain level. If it does not find them, it then checks for them in the global security configuration. Configure any of these JAAS logins at a domain only when you need to specify a login that is used exclusively by the applications in the security domain." So I am looking to make sure my application is in the domain, and I have tried everything I can think of. (I have assigned the domain to "all scopes", to the entire cell, etc.) No luck; I keep getting the same error response to my web service client. Any help or hints are appreciated.

    Read the article
