Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • Do I need VMWare vSphere?

    - by Gk
    I'm planning to use VMware to virtualize some very aged servers instead of replacing them with a whole new batch of hardware. VMware vSphere sounds great, but on a low budget I can't afford both the licenses and a SAN. Without a SAN, is vSphere worth the price? As far as I know, without a SAN the VMware HA, vMotion, and FT features are unavailable. So do I need vSphere, or just the free ESXi version, given that I only need to back up the VMs daily? Does anyone know a complete backup solution for ESXi 4? TIA, -Gk
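
    One low-budget option often mentioned for daily VM backups on free ESXi is the ghettoVCB script, which snapshots each VM and copies its disks to a second datastore. A minimal sketch, assuming the script is unpacked in /vghetto on the host and the destination datastore has been set via the VM_BACKUP_VOLUME variable inside the script (paths and VM names are placeholders):

        # list the VMs to back up, one display name per line (hypothetical names)
        echo "webserver01"  > /vghetto/vms_to_backup
        echo "dbserver01"  >> /vghetto/vms_to_backup
        # run the backup against that list (could be wired to cron for the daily run)
        /vghetto/ghettoVCB.sh -f /vghetto/vms_to_backup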

    Read the article

  • Apache Jmeter + Random Double

    - by Filipe Batista
    Is it possible to generate random double numbers in JMeter? I tried to use the Random Variable config element, where I defined Minimum value: 47.9999 (RND1) and Maximum value: 30.9999 (RND2). Then in the selected Prepared Statement I supplied the values Parameter values: ${RND1},${RND1},${RND2} and Parameter types: DOUBLE,DOUBLE,DOUBLE. But it doesn't seem to work, because I receive an error: Response message: java.sql.SQLException: Cannot convert class java.lang.String to SQL type requested due to java.lang.NumberFormatException - For input string: "${RND1}"
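
    The Random Variable config element only produces integers, so a common workaround is to compute the doubles in a BeanShell (or JSR223) PreProcessor attached to the sampler. A minimal sketch, using the bounds from the question (note that the stated minimum and maximum appear swapped) and the same variable names:

        // BeanShell PreProcessor: put random doubles into JMeter variables
        double min = 30.9999;
        double max = 47.9999;
        vars.put("RND1", String.valueOf(min + Math.random() * (max - min)));
        vars.put("RND2", String.valueOf(min + Math.random() * (max - min)));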

    Read the article

  • haproxy and tomcat intermittent hangs

    - by Lorin
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, a request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client, and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app only shows activity on one thread, serving up the manager app itself. Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false" maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy
        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000

    Read the article

  • Basic 301 Redirection Help

    - by Marc
    I am trying to learn redirection for a WordPress site of my own. I am testing the concept of redirecting a single WordPress post by using a dummy site; however, it doesn't seem to be working for me. I am trying to redirect www.perfectmatchmaker[dot]org/finding-the-right-matchmaker to www.perfectmatchmaker[dot]org/finding-the-perfect-matchmaker. I read that the following is how to do this:

        Redirect 301 /old.html http://www.you[dot]com/new.html

    So this is what my .htaccess file currently looks like:

        # Use PHP5 as default
        AddHandler application/x-httpd-php5 .php

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        Redirect 301 /finding-the-right-matchmaker.html http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker.html

    I've also tried removing the ".html". The redirection of the URL is finally working, but the URL shows no posts available. If I try to redirect the other post on the site by adding the following on the next line of the .htaccess file, I get an error that a "redirect loop" is occurring:

        redirect 301 /find-love-and-your-perfect-match-through-the-use-of-a-match-maker http://www.perfectmatchmaker.org/find-love-and-your-perfect-match

    Any help you can provide me would be much appreciated. Thanks! Marc
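
    Since the WordPress rewrite block catches the request and routes it to index.php, one commonly suggested arrangement is to do the redirect with mod_rewrite itself, placed above the WordPress block. A hedged sketch using the slugs from the question:

        # place above the "# BEGIN WordPress" section
        RewriteEngine On
        RewriteRule ^finding-the-right-matchmaker/?$ http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker [R=301,L]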

    Read the article

  • Can't connect to server using lftp

    - by Roland
    I have an lftp script file that I want to execute using the following command:

        lftp -f /usr/scripts/fileS.lftp

    If I run this file I get: Delaying before reconnect: Now, within this file (fileS.lftp) I have the following code:

        open -u username,password server
        mput -E *
        close

    If I run open -u username,password server I get the following error: Couldnt get a file descriptor referring to the console I assume I need to allow a connection on the server I'm trying to connect to; how can I do this? Any help would be highly appreciated.
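
    One thing worth noting, as a guess: typed at a bash prompt, open is not an lftp command at all but the system's own open binary (on Linux it tries to open a virtual console, which produces exactly that file-descriptor error), so that test says nothing about the server. A minimal sketch of running the same commands through lftp itself:

        # execute the commands inside lftp rather than the shell
        lftp -e 'open -u username,password server; mput -E *; bye'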

    Read the article

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too; I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks! EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive. EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."

    Read the article

  • How much RAM required by Varnish?

    - by Gobind Singh Deo
    Hi, I'm using Apache for serving static files, and Apache2 requires too much RAM. I want to reduce the RAM usage. I don't have experience with Varnish; it's said to be faster, but I don't know how Varnish works. So: how much RAM is needed to run Apache2 + Varnish? Will Apache2 + Varnish have higher RAM usage than Apache2 without Varnish? Thanks.
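
    For what it's worth, Varnish's main memory consumer is its cache storage, and that is capped explicitly on the command line, so its footprint is largely whatever you give it. A minimal sketch, assuming Varnish listens on port 80 in front of Apache moved to port 8080 (ports and size are placeholders):

        # cache in RAM, hard-capped at 256 MB; backend is Apache on localhost:8080
        varnishd -a :80 -b 127.0.0.1:8080 -s malloc,256m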

    Read the article

  • How to set shmall, shmmax, shmni, etc ... in general and for postgresql

    - by jpic
    I've used the PostgreSQL documentation to set these, ending up with this configuration:

        >>> cat /proc/meminfo
        MemTotal:       16345480 kB
        MemFree:         1770128 kB
        Buffers:          382184 kB
        Cached:         10432632 kB
        SwapCached:            0 kB
        Active:          9228324 kB
        Inactive:        4621264 kB
        Active(anon):    7019996 kB
        Inactive(anon):   548528 kB
        Active(file):    2208328 kB
        Inactive(file):  4072736 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:              3432 kB
        Writeback:             0 kB
        AnonPages:       3034588 kB
        Mapped:          4243720 kB
        Shmem:           4533752 kB
        Slab:             481728 kB
        SReclaimable:     440712 kB
        SUnreclaim:        41016 kB
        KernelStack:        1776 kB
        PageTables:        39208 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     8172740 kB
        Committed_AS:   14935216 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      399340 kB
        VmallocChunk:   34359334908 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:    456704 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       12288 kB
        DirectMap2M:    16680960 kB

        >>> ipcs -l
        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1
        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767
        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                # (change requires restart)
        shared_buffers = 4GB                # min 128kB
        work_mem = 4MB                      # min 64kB
        wal_sync_method = open_sync         # the default is the first option
        checkpoint_segments = 16            # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which case would it be appropriate? We did note nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
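
    A common rule of thumb from the PostgreSQL docs is to size shmmax to cover shared_buffers plus some headroom, and to set shmall to the same amount expressed in pages (shmmax is in bytes, shmall in pages on Linux). A rough sketch of deriving both, assuming a 4 GB shared_buffers target and 512 MB of headroom (both placeholders):

        PAGE_SIZE=$(getconf PAGE_SIZE)              # usually 4096
        SHMMAX=$(( 4 * 1024**3 + 512 * 1024**2 ))   # shared_buffers + headroom, in bytes
        SHMALL=$(( SHMMAX / PAGE_SIZE ))            # same amount, in pages
        sysctl -w kernel.shmmax=$SHMMAX
        sysctl -w kernel.shmall=$SHMALL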

    Read the article

  • Can't connect to computer via SBS2011 RWA

    - by sbrattla
    I've got an SBS 2011 Essentials server. Users are able to log on to Remote Web Access using their username and password. However, the trouble starts when a user attempts to log on remotely to his/her computer from the Remote Web Access website. When the user clicks on his/her computer (in the RWA website), the user is first presented with a window listing Publisher, Type, Remote Computer name and Gateway Server. Everything seems fine here, and the user clicks Connect. The user credentials are provided, and a connection is attempted. However, the logon attempt always fails with the message "The logon attempt failed". The logon attempt always generates three log events in the server log, all with the same timestamp:

        EventId: 4672 - Special Logon
        EventId: 4624 - Logon
        EventId: 4634 - Logoff

    No events are logged on the client machine which the user attempts to log on to. Others have solved this by going to their IIS server and enabling "Windows Authentication" for Rpc and RpcWithCert (in Default Web Site); however, this is already in place on the server. I've also got RD CAPs and RD RAPs in place. As a side note: if I try to connect to any of the machines using Remote Desktop Connection's "Connect from anywhere" functionality, then things work flawlessly! In other words, the error only occurs when attempting to log in to a computer via the Remote Web Access website. I've run out of ideas for how I can solve this (too many hours spent). Any ideas highly appreciated!

    Read the article

  • Reconfigure RAID on Dell PowerEdge T710

    - by Stefano Borini
    I have a Dell PowerEdge T710 under my feet at this very moment, running Red Hat Enterprise Server 5.3. I have six 1 TB disks and two 500 GB disks. parted reports two devices, one of 500 GB and the other of 4 TB. So I assume the RAID has been set up as a mirror for the two 500 GB disks, and I assume the remaining ones are in RAID 5. I say "I assume" because it does not make sense: with 6 disks in RAID 5, I should obtain a total space of 5 TB, not 4 TB. It's not RAID 10 either: that would end up as a 3 TB unit. How can I check, and eventually modify, the RAID array definition? On the Fujitsu Siemens I played with some time ago, at boot I had the chance to enter the controller BIOS, but here I don't see a clear way to perform this operation.
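
    If Dell's OpenManage Server Administrator (OMSA) happens to be installed on the Red Hat side, the controller and virtual-disk layout can be inspected without rebooting into the controller BIOS. A hedged sketch; the controller index is a placeholder:

        # list PERC controllers, then the virtual disks on controller 0
        omreport storage controller
        omreport storage vdisk controller=0
        # physical disks too, e.g. to see whether one is assigned as a hot spare
        omreport storage pdisk controller=0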

    Read the article

  • Multiple IP addresses on one NIC register twice in DNS server

    - by Brad B.
    Hi, we've got a build server (Windows Server 2008 SP2, 64-bit) which has one NIC and two IP addresses registered to that NIC (192.168.1.30 and 192.168.1.31). The build server is registering two identical Host (A) records for itself in our DNS server:

        buildserver.example.com = 192.168.1.30
        buildserver.example.com = 192.168.1.31

    I know that in the "Advanced TCP/IP Settings" window for the build server's NIC, under the "DNS" tab, there is a check box labeled "Register this connection's addresses in DNS". I only want ONE of the IP addresses (ending in .30) to be registered in DNS, not both of them. Can that be done? My best guess is to disable "Register this connection's addresses in DNS" and manually add the Host (A) record to our DNS server. Thanks for any help!
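
    That guess matches the usual approach; for reference, the manual record can also be created from the command line on the DNS server with dnscmd. A hedged sketch, with the zone and host names taken from the question:

        REM after unchecking "Register this connection's addresses in DNS" on the NIC
        dnscmd /recordadd example.com buildserver A 192.168.1.30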

    Read the article

  • Squid causing websocket issues

    - by Kvad
    I am running Squid 2.x. When trying to use websockets in my web application I get the following in my Squid logs:

        13/Jun/2012:10:05:08 +1000 558 192.168.19.76 TCP_MISS/100 199 POST http://api.pusherapp.com/apps/21932/channels/2830b5dd-e75b-4788-ae4a-6da903460d22/events? - DIRECT/107.22.252.43 -

    TCP_MISS/100 indicates that the service is returning the wrong thing, from what I can see. What can I do to fix this?

    Read the article

  • SBS 08 Backup fail

    - by Bastien974
    I'm trying to back up my SBS 08 (only C:) with Windows Server Backup. It fails a few minutes after it starts:

        Backup started at '08/12/2009 1:27:23 PM' failed as Volume Shadow copy operation failed for backup volumes with following error code '2155348022'. Please rerun backup once issue is resolved.

    In the Event Viewer I have lots of errors:

        VSS: 12289
        SQLVDI: 1
        MSSQL$MICROSOFT##SSEE: 18210
        MSSQL$MICROSOFT##SSEE: 3041
        SQLWRITER: 24583

    All VSS and SQL services are started. I have WSUS 3.0 and Exchange 07. I don't have any third-party backup software running at the same time. Thanks for your help!
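
    A first diagnostic step often suggested for VSS failures like this is to check the state of the writers; a SqlServerWriter in a failed state here would line up with the SSEE/SQL errors above. A minimal sketch, run from an elevated prompt:

        REM list all VSS writers and their last error state
        vssadmin list writers
        REM list the providers too, in case a third-party provider is registered
        vssadmin list providers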

    Read the article

  • How easy is it to migrate a Linux VM image from one VM env to another?

    - by T.J. Crowder
    If I stick to one of the standard, well-supported VM disk images (like a raw image, or VDI, VMDK, ...), are Linux VMs typically easy to move between VM environments? E.g., between (say) VirtualBox and KVM, or VMware and Xen? I'm talking here of fully virtualized environments, not paravirtualization requiring support within the guest OS. It seems to me that the kernels in most Linux distributions these days are configured to... keep an open mind and detect things at boot time, so you don't have the issue that you sometimes have moving a Windows VM from one virtualization system to another (I'm thinking particularly of the HAL issues that Windows has, like ACPI vs. non-ACPI; I've also just had Windows VMs generally acting strangely when moved from VMware to VirtualBox, for instance). I'm looking for a general answer, but if it helps, specifically I'm mostly going to be doing this with Ubuntu 8.04 LTS and 10.04 LTS guests. But that could change.
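
    In practice the disk image itself is usually the only thing that needs translating, and qemu-img can convert between most of the formats named above. A hedged sketch (file names are placeholders):

        # VMDK -> raw (usable by KVM/Xen), and VMDK -> VDI (VirtualBox)
        qemu-img convert -f vmdk -O raw guest.vmdk guest.img
        qemu-img convert -f vmdk -O vdi guest.vmdk guest.vdi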

    Read the article

  • PHP upgrade to 5.3 from 5.2, sessions no longer get stored

    - by Damo
    background link: http://stackoverflow.com/questions/7014945/php-upgrade-5-2-to-5-3-session-issue
    I have upgraded PHP on my 2008 std server from PHP 5.2 to PHP 5.3. Following the upgrade, sessions no longer work correctly. I have copied over the applicable settings from my php.ini files and configured new settings in line with the server's or PHP's recommendations. PHP executes correctly; however, session data does not get saved. I have session data stored in c:\temp. For each session created, I can see the session file in this folder, but no information gets written into the session file. Permissions-wise, IUSR and EVERYONE have write access to this folder. If I downgrade to PHP 5.2, sessions are saved correctly and the site functions correctly. I have followed advice to ensure my code is optimised, closing session files correctly and forcing a session reset. I'm stumped.

    session
        Session Support: enabled
        Registered save handlers: files user sqlite
        Registered serializer handlers: php php_binary wddx

        Directive                        Local Value     Master Value
        session.auto_start               Off             Off
        session.bug_compat_42            On              On
        session.bug_compat_warn          On              On
        session.cache_expire             180             180
        session.cache_limiter            nocache         nocache
        session.cookie_domain            no value        no value
        session.cookie_httponly          Off             Off
        session.cookie_lifetime          0               0
        session.cookie_path              /               /
        session.cookie_secure            Off             Off
        session.entropy_file             no value        no value
        session.entropy_length           0               0
        session.gc_divisor               100             100
        session.gc_maxlifetime           1440            1440
        session.gc_probability           1               1
        session.hash_bits_per_character  4               4
        session.hash_function            0               0
        session.name                     PHPSESSID53     PHPSESSID53
        session.referer_check            no value        no value
        session.save_handler             files           files
        session.save_path                /temp           /temp
        session.serialize_handler        php             php
        session.use_cookies              On              On
        session.use_only_cookies         On              On
        session.use_trans_sid            0               0
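
    Since the session files get created but stay empty, one way to narrow this down is a minimal test script that bypasses all application code; if the counter below doesn't survive a refresh, the problem is in the PHP 5.3 setup rather than the site. A hedged sketch:

        <?php
        // minimal session write test (PHP 5.2/5.3 compatible)
        session_start();
        $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;
        echo "counter = " . $_SESSION['counter'];
        session_write_close();  // force the data to be flushed to the session file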

    Read the article

  • Chroot with CentOS 5.3 + openssh 4.3p2

    - by Scud
    OS: CentOS 5.3, with OpenSSH 4.3p2. I'm trying to set up a chroot in the SSH shell, but OpenSSH versions prior to 4.8 don't accept the settings below. yum update openssh only goes up to version 4.3, which is quite old. Doesn't CentOS support OpenSSH 4.8 or later? If that's the case, how do I set up a chroot with OpenSSH 4.3, or is it better to just use FTP? My purpose is to limit SFTP or FTP access to a certain folder, not the root folder. Thanks!

        Match group sftponly
            ChrootDirectory /home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    Read the article

  • BackupExec errors when starting services

    - by blade
    Hi, I've installed backupexec 2010 trial on my server, with an appropriately-privileged AD account, but get errors when starting the required services from the login page:

        Processing services
        Start services on server: WIN-HQ7JSCRTTSQ
        Starting Enterprise Vault Admin Service on WIN-HQ7JSCRTTSQ.
        The service Enterprise Vault Admin Service is already running on WIN-HQ7JSCRTTSQ.
        Starting Backup Exec Remote Agent for Windows Systems on WIN-HQ7JSCRTTSQ.
        The service Backup Exec Remote Agent for Windows Systems is already running on WIN-HQ7JSCRTTSQ.
        Starting Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ.
        Error starting the service Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ.
        Service-specific error code returned: 0x2000e2d3 (536928979)
        Starting Backup Exec Server on WIN-HQ7JSCRTTSQ.
        Error starting the service Backup Exec Server on WIN-HQ7JSCRTTSQ.
        The dependency service or group failed to start.
        Starting Backup Exec Job Engine on WIN-HQ7JSCRTTSQ.
        Error starting the service Backup Exec Job Engine on WIN-HQ7JSCRTTSQ.
        The dependency service or group failed to start.
        Starting Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ.
        Error starting the service Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ.
        The dependency service or group failed to start.
        Starting Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ.
        Error starting the service Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ.
        Error code returned:
        Starting Backup Exec DLO Maintenance Service on WIN-HQ7JSCRTTSQ.
        The service Backup Exec DLO Maintenance Service is already running on WIN-HQ7JSCRTTSQ.
        Starting Backup Exec Web Service on WIN-HQ7JSCRTTSQ.
        The service Backup Exec Web Service is already running on WIN-HQ7JSCRTTSQ.
        Start services on server WIN-HQ7JSCRTTSQ completed.
        Processing services completed!

    How can I resolve this?

    Read the article

  • Does SOLARIS have similar file to Linux's /etc/security/limits.conf?

    - by SQL Warrior
    I'm doing a compliance check on the Solaris 10 OS. I need to verify the following parameter settings:

        core file size (blocks, -c)   unlimited
        data seg size (kbytes, -d)    unlimited
        file size (blocks, -f)        unlimited
        open files (-n)               65536
        stack size (kbytes, -s)       unlimited
        cpu time (seconds, -t)        unlimited
        virtual memory (kbytes, -v)   unlimited

    Sure, I could use ulimit -cH to get the display above, but I also need to find where those settings are defined. I come from Linux; in Linux we have the /etc/security/limits.conf file to hold this kind of information. Do we have such a file in Solaris? TIA!
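
    Not a literal equivalent as far as I know, but as a sketch of where Solaris 10 usually keeps these: process limits live in resource controls attached to projects (plus a few legacy /etc/system tunables) rather than a single limits.conf. A hedged example of inspecting and setting them; the project name and value are placeholders:

        # show the resource controls of the current shell
        prctl $$
        # raise the per-process file descriptor limit for the default project
        projmod -s -K "process.max-file-descriptor=(basic,65536,deny)" default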

    Read the article

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install Magento locally using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When I try to call the magento folder, I get a 404 Not Found error. I am able to access other PHP files set up in the html folder and have PHP installed. Here is my config file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            server {
                listen 8080;
                server_name localhost;

                #charset koi8-r;
                #access_log logs/host.access.log main;

                location / {
                    root html;
                    index index.html index.htm index.php;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                    allow all;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    root html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name;
                    include fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                #location ~ /\.ht {
                #    deny all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #
            #server {
            #    listen 8000;
            #    listen somename:8080;
            #    server_name somename alias another.alias;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}

            # HTTPS server
            #
            #server {
            #    listen 443;
            #    server_name localhost;
            #    ssl on;
            #    ssl_certificate cert.pem;
            #    ssl_certificate_key cert.key;
            #    ssl_session_timeout 5m;
            #    ssl_protocols SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            #    ssl_prefer_server_ciphers on;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}
        }

    How do I fix this? This is what I found in the error.log file:

        2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
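
    The error.log line suggests the real problem, as a guess: Magento uses URLs of the form index.php/install/, and the location ~ \.php$ block above never matches them, so nginx treats the whole path as a file on disk. A hedged sketch of PATH_INFO handling (fastcgi_split_path_info is available since nginx 0.7.31) that could replace that block:

        # match index.php with a trailing path, split it, and hand PATH_INFO to PHP
        location ~ ^(.+\.php)(/.*)?$ {
            root html;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME c:/nginx/html$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }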

    Read the article

  • Hive metadata permission issue

    - by Chandramohan
    We are getting this error on Hive, while creating a DB / table:

        hive> CREATE TABLE pokes (foo INT, bar STRING);
        FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        NestedThrowables:
        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

    Hive log:

        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
            at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:234)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:261)
            at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:196)
            at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:171)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:354)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:306)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:451)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:232)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:197)
            at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:108)
            at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1868)
            at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1878)
            at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:470)
            ... 15 more
        Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
            at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
            at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
            at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
            at org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
            ... 42 more
        Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1191)
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
            ... 52 more
        2011-08-11 18:02:36,964 ERROR ql.Driver (SessionState.java:printError(343)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
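
    The "read-only database" wording usually points at the embedded Derby metastore rather than Hive itself, as a guess: Derby falls back to read-only when its files aren't writable by the user running Hive, or when a stale lock file is left behind. A hedged sketch of things to check, assuming the default metastore_db directory in Hive's working directory and a hypothetical hive user:

        # the metastore_db directory must be writable by whoever runs hive
        ls -ld metastore_db
        sudo chown -R hive:hive metastore_db
        # a leftover Derby lock file from a killed session also triggers this
        ls metastore_db/*.lck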

    Read the article

  • wuinstall doesn't work with winrs

    - by wizard
    I've been having issues with psexec, so I've been migrating to winrs (part of the WinRM system). It's a very nice remoting tool which is proving to be more reliable than psexec. wuinstall is used to install available Windows updates; the two, however, don't play well together. I'm working on a variety of Windows servers: 2003, 2008 and 2008 R2. wuinstall behaves the same across all hosts, and behaves as expected if executed locally by the same user. The command:

        winrs -r:server wuinstall /download

    produces:

        WUInstall.exe Version 1.1
        Copyright by hs2n Informationstechnologie GmbH 2009
        Visit: http://www.xeox.com, http://www.hs2n.at for new versions
        Searching for updates ...
        Criteria: IsInstalled=0 and Type='Software'
        Result Code: Succeeded
        7 Updates found, listing all:
        Security Update for Windows Server 2008 R2 x64 Edition (KB2544893)
        Security Update for .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2518869)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2539635)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2572077)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2588516)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2620704)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2617657)
        Downloading updates ...
        Error occured: CreateUpdateDownloader failed!
        Result CODE: 0x80070005
        Return code: 1

    Googling "0x80070005" finds "unspecified error", which isn't helpful. Thoughts? Is there a better way?
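
    For what it's worth, 0x80070005 is the generic Windows access-denied HRESULT, and the Windows Update Agent is known to refuse work under the network logon token that remote shells like winrs produce. One commonly suggested workaround, sketched here, is to schedule the command so it runs under a full local token instead (task name and path are placeholders):

        REM create and fire a one-shot task that runs wuinstall as SYSTEM on the target
        schtasks /create /s server /tn wuinstall-run /tr "C:\tools\wuinstall.exe /install" /sc once /st 23:59 /ru SYSTEM
        schtasks /run /s server /tn wuinstall-run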

    Read the article

  • Warning flagged by the 'rkhunter'

    - by gkt.pro
    When I scanned my Ubuntu 10.04 with rkhunter, a rootkit hunter toolkit, it gave the following warnings. Is this something I have to worry about?

        [23:06:19] /usr/sbin/adduser [ Warning ]
        [23:06:19] Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        [23:06:20] /usr/sbin/rsyslogd [ Warning ]
        [23:06:20] Warning: The file properties have changed:
        [23:06:22] /usr/bin/dpkg [ Warning ]
        [23:06:22] Warning: The file properties have changed:
        [23:06:22] /usr/bin/dpkg-query [ Warning ]
        [23:06:22] Warning: The file properties have changed:
        [23:06:24] /usr/bin/ldd [ Warning ]
        [23:06:24] Warning: The file properties have changed:
        [23:06:24] Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        [23:06:24] /usr/bin/logger [ Warning ]
        [23:06:24] Warning: The file properties have changed:
        [23:06:25] /usr/bin/mail [ Warning ]
        [23:06:25] Warning: The file '/usr/bin/mail' exists on the system, but it is not present in the rkhunter.dat file.
        [23:06:27] /usr/bin/sudo [ Warning ]
        [23:06:27] Warning: The file properties have changed:
        [23:06:29] /usr/bin/whereis [ Warning ]
        [23:06:29] Warning: The file properties have changed:
        [23:06:29] /usr/bin/lwp-request [ Warning ]
        [23:06:29] Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: a /usr/bin/perl -w script text executable
        [23:06:29] /usr/bin/bsd-mailx [ Warning ]
        [23:06:29] Warning: The file '/usr/bin/bsd-mailx' exists on the system, but it is not present in the rkhunter.dat file.
        [23:06:30] /sbin/fsck [ Warning ]
        [23:06:30] Warning: The file properties have changed:
        [23:06:30] /sbin/ifdown [ Warning ]
        [23:06:30] Warning: The file properties have changed:
        [23:06:31] /sbin/ifup [ Warning ]
        [23:06:31] Warning: The file properties have changed:
        [23:06:34] /bin/dmesg [ Warning ]
        [23:06:34] Warning: The file properties have changed:
        [23:06:35] /bin/more [ Warning ]
        [23:06:35] Warning: The file properties have changed:
        [23:06:36] /bin/mount [ Warning ]
        [23:06:36] Warning: The file properties have changed:
        [23:06:37] /bin/which [ Warning ]
        [23:06:37] Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        [23:08:58] Checking /dev for suspicious file types [ Warning ]
        [23:08:58] Warning: Suspicious file types found in /dev:
        [23:08:58] Checking for hidden files and directories [ Warning ]
        [23:08:58] Warning: Hidden directory found: /etc/.java
        [23:08:58] Warning: Hidden directory found: /dev/.udev
        [23:08:58] Warning: Hidden directory found: /dev/.initramfs
        [23:09:01] Checking version of Exim MTA [ Warning ]
        [23:09:01] Warning: Application 'exim', version '4.71', is out of date, and possibly a security risk.
        [23:09:01] Warning: Application 'gpg', version '1.4.10', is out of date, and possibly a security risk.
        [23:09:01] Checking version of GnuPG [ Warning ]
        [23:09:01] Checking version of OpenSSL [ Warning ]
        [23:09:01] Warning: Application 'openssl', version '0.9.8k', is out of date, and possibly a security risk.
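
    These particular warnings follow the classic false-positive pattern after package updates: the files changed because apt replaced them, not because of a rootkit (adduser, ldd and which really are scripts on Debian-based systems). A hedged sketch of verifying that and then re-baselining rkhunter (debsums is an assumption; it isn't installed by default):

        # verify the flagged binaries against the packages' own checksums
        sudo apt-get install debsums
        sudo debsums -c
        # if everything checks out, refresh rkhunter's stored file properties
        sudo rkhunter --update
        sudo rkhunter --propupd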

    Read the article

  • Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?

    - by Daryl Spitzer
    I enabled HTTPS on the Apache server built in to Mac OS X 10.6 (on my MacBook Pro) by uncommenting:

        Include /private/etc/apache2/extra/httpd-ssl.conf

    ...in /etc/apache2/httpd.conf and modifying /etc/apache2/extra/httpd-ssl.conf to include:

        DocumentRoot "/Users/dspitzer/foo/bar"
        ServerName dot.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "/private/etc/apache2/siab_cert.pem"
        SSLCertificateKeyFile "/private/etc/apache2/siab_key.pem"

    Then I restart Apache (with sudo apachectl restart) and go to https://localhost/ in Safari, where I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>403 Forbidden</title>
        </head><body>
        <h1>Forbidden</h1>
        <p>You don't have permission to access / on this server.</p>
        </body></html>

    I've tried changing 443 in /etc/apache2/extra/httpd-ssl.conf to 8443 and going to https://localhost:8443/, and I get the same error. I read http://serverfault.com/questions/88037/why-am-i-getting-this-403-forbidden-error and confirmed that execute permission is given for all parent directories of the vhost dir /Users/dspitzer/foo/bar. Is there a log file somewhere that might give me a clue?
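
    One more thing worth ruling out, as a guess: in Apache 2.2 each DocumentRoot outside the default needs its own <Directory> grant, or requests are denied regardless of Unix permissions (and on 10.6 the error log normally lives at /var/log/apache2/error_log). A hedged sketch for httpd-ssl.conf:

        <Directory "/Users/dspitzer/foo/bar">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>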

    Read the article

  • Vagrant synced folders aren't case sensitive

    - by lvmisooners
    For our web stack, we are moving from a Windows server to CentOS. To facilitate development, we're utilizing Vagrant to run CentOS VMs locally. We're using Vagrant's synced folders feature to allow devs to use their favorite IDEs on their host machines, but we're finding that one key feature is missing from this setup: file-system case sensitivity. The synced folder inside the VM apparently takes on the properties of the host's file system, so if I'm developing from a Windows machine, or even OS X, the file system isn't case sensitive. This is a big issue, as our production servers will be pure CentOS, and their file system will be case sensitive. Case sensitivity is one of the main reasons we wanted to have a local VM; we want to prevent "It works on my machine!" Some workarounds we've considered or tried:

        - Use lsyncd to sync from the vagrant share to a location within the VM that is case sensitive
          (updating files on the host doesn't seem to generate the events in the VM that lsyncd listens to)
        - Make a case-sensitive partition on the host (doesn't work for Windows)
        - Use samba (this may be an option, but we haven't vetted it yet)

    Is there a better way? Note that we have developers using Windows, OS X, and Ubuntu, and the solution needs to work everywhere.
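
    One direction to vet alongside samba is Vagrant's rsync synced-folder type (Vagrant 1.5+), which copies files into the guest's own case-sensitive filesystem instead of mounting the host's. A hedged Vagrantfile sketch; the box name is a placeholder:

        # Vagrantfile: one-way rsync from the host into the guest's ext filesystem
        Vagrant.configure("2") do |config|
          config.vm.box = "centos-6"   # hypothetical box name
          config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"
        end
        # keep it syncing while editing on the host:
        #   vagrant rsync-auto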

    Read the article
