Search Results

Search found 10610 results on 425 pages for 'apache hadoop'.

Page 8/425

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (version 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10 GB of data. After all mappers finished and around 80% of the reducers had been processed, one or more datanodes randomly disconnected from the network, and then the other datanodes started to disappear from the network as well, even though I killed the MapReduce job as soon as I found a datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls of all compute nodes, disabling SELinux and increasing the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from the MapReduce job:
        12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
        java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)
    And the following are logs from the datanode:
        2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
        2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
        2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
        at java.lang.Thread.run(Thread.java:722)
        2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
        2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write.
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>
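    One knob the excerpt does not mention, offered purely as a hedged sketch rather than a confirmed fix: the datanode errors above are socket timeouts during block transfer, and on clusters of this era the HDFS socket read/write timeouts are often raised alongside dfs.datanode.max.xcievers. The property names below are the 0.20/1.x-era spellings (the 480000 ms default of the write timeout is exactly the figure in the SocketTimeoutException above); verify them against the 0.21 documentation before relying on them.

        <!-- hdfs-site.xml sketch: assumed property names, check your release -->
        <property>
          <name>dfs.socket.timeout</name>
          <value>600000</value> <!-- read timeout in ms; default 60000 -->
        </property>
        <property>
          <name>dfs.datanode.socket.write.timeout</name>
          <value>600000</value> <!-- write timeout in ms; default 480000, the value seen in the log -->
        </property>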

    Read the article

  • How do I control output files name and content of an Hadoop streaming job?

    - by Eran Kampf
    Is there a way to control the output filenames of a Hadoop Streaming job? Specifically, I would like my job's output file contents and names to be organized by the key the reducer outputs: each file would only contain values for one key, and its name would be the key. Update: Just found the answer. Using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html I haven't seen any samples for this out there... Can anyone point me to a Hadoop Streaming sample that makes use of a custom output format Java class?
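    No such sample appears in the excerpt, so here is a minimal sketch of the approach the update describes, using the old org.apache.hadoop.mapred API that Streaming is built on. MultipleTextOutputFormat and generateFileNameForKeyValue are the real class and hook; the class name KeyBasedOutputFormat and the way it is wired into a Streaming run are assumptions to verify.

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

        // Names each output file after the record's key, so every file ends up
        // holding only the values for that one key.
        public class KeyBasedOutputFormat extends MultipleTextOutputFormat<Text, Text> {
            @Override
            protected String generateFileNameForKeyValue(Text key, Text value, String name) {
                // "name" is the default part-XXXXX file name; it is replaced by the key here.
                return key.toString();
            }
        }

    For a Streaming job the compiled class would then be shipped and selected with something along the lines of -libjars keyformat.jar -outputformat KeyBasedOutputFormat; the exact flags vary by Hadoop version, so treat that invocation as a starting point rather than a recipe.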

    Read the article

  • Linux server apache httpd processes take i/o wait to close to 100% and lock down server

    - by user3682065
    For about 5 days now, and seemingly out of the blue, my linux server has started locking up from time to time. The pattern is always the same as far as I can tell from top and iotop commands around the time it starts happening: One or more httpd processes (usually one) hang and start using up 100% of CPU power, the %wa goes close to 100% and in the iotop I see several httpd processes with 99.99% in the IO column. I'm also running an SVN server on this machine through apache and the one way that I've been consistently able to reproduce this is to do an SVN commit of new files or an SVN update from the repository on this server (I am the only one using this SVN repository). This will always reproduce this scenario successfully, but until very recently I had no problems at all checking in/out of SVN. But sometimes it just happens for no detectable reason at all it seems. So it seems like there is some issue with my Apache that leads it to have processes use up a lot of read/write upon certain triggers. I was wondering if anyone could help me uncover that issue. EDIT: OK now it's happening again: This is top: [root@server ~]# top top - 10:56:54 up 2:59, 5 users, load average: 171.46, 70.35, 27.01 Tasks: 328 total, 2 running, 326 sleeping, 0 stopped, 0 zombie Cpu(s): 1.9%us, 2.0%sy, 0.0%ni, 0.0%id, 96.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 2021144k total, 1968192k used, 52952k free, 2500k buffers Swap: 4194288k total, 2938584k used, 1255704k free, 39008k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 10390 apache 20 0 2774m 936m 6200 D 2.0 47.4 1:52.27 httpd 2149 root 20 0 927m 13m 1040 S 0.7 0.7 1:50.46 namecoind 11 root 20 0 0 0 0 R 0.3 0.0 0:30.10 events/0 23 root 20 0 0 0 0 S 0.3 0.0 0:17.88 kblockd/1 2049 root 20 0 382m 4932 2880 D 0.3 0.2 0:03.67 httpd 2144 root 20 0 1702m 69m 1164 S 0.3 3.5 5:19.68 bitcoind 6325 root 20 0 15164 1100 656 R 0.3 0.1 0:11.09 top 10311 apache 20 0 387m 9496 7320 D 0.3 0.5 0:01.89 httpd 10313 apache 20 0 391m 10m 7364 D 0.3 0.5 0:02.40 httpd 10466 apache 20 0 399m 12m 7392 D 0.3 0.7 0:02.41 httpd 10599 apache 20 0 391m 9324 7340 D 0.3 0.5 0:00.15 httpd 10628 apache 20 0 384m 7620 4052 D 0.3 0.4 0:00.01 httpd 10633 apache 20 0 384m 7048 3504 D 0.3 0.3 0:00.01 httpd 10634 apache 20 0 384m 8012 4048 D 0.3 0.4 0:00.02 httpd 10638 apache 20 0 400m 22m 9.8m D 0.3 1.1 0:01.93 httpd 10640 apache 20 0 385m 8288 4028 D 0.3 0.4 0:00.03 httpd 10641 apache 20 0 401m 21m 6376 D 0.3 1.1 0:01.45 httpd 10759 apache 20 0 385m 8816 3480 D 0.3 0.4 0:01.45 httpd 10773 apache 20 0 384m 8044 3464 D 0.3 0.4 0:00.02 httpd This is an iotop snapshot: Total DISK READ: 5.93 M/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 10732 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 58.48 % httpd 876 be/3 root 0.00 B/s 52.68 K/s 0.00 % 52.98 % [jbd2/dm-1-8] 10906 be/4 root 124.17 K/s 0.00 B/s 0.00 % 23.03 % sh -c [ -x /usr/local/psa/admin/sbin/backupmng ] && /usr/local/psa/admin/sbin/backupmng >/dev/null 2>&1 2156 be/4 root 206.94 K/s 0.00 B/s 0.00 % 21.15 % bitcoind 10904 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 18.94 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 10773 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 14.77 % httpd 10641 be/4 apache 15.05 K/s 0.00 B/s 0.00 % 11.57 % httpd 10399 be/4 apache 1057.29 K/s 0.00 B/s 43.16 % 10.56 % httpd 10682 be/4 sw-cp-se 158.03 K/s 0.00 B/s 0.00 % 7.45 % sw-engine-cgi -c /usr/local/psa/admin/conf/php.ini -d 
auto_prepend_file=auth.php3 -u psaadm 10774 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 6.53 % httpd 10624 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 5.53 % httpd 10356 be/4 apache 899.26 K/s 0.00 B/s 35.52 % 4.01 % httpd 10795 be/4 apache 0.00 B/s 0.00 B/s 0.00 % 3.93 % httpd 10804 be/4 apache 7.53 K/s 0.00 B/s 0.00 % 3.08 % httpd 4379 be/4 root 2.89 M/s 0.00 B/s 99.99 % 0.00 % namecoind 10619 be/4 apache 462.80 K/s 0.00 B/s 7.80 % 0.00 % httpd 10636 be/4 apache 3.76 K/s 0.00 B/s 0.00 % 0.00 % httpd 10716 be/4 mysql 105.35 K/s 0.00 B/s 5.92 % 0.00 % mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock 1988 be/4 root 18.81 K/s 0.00 B/s 0.00 % 0.00 % spamd_full.sock I also ran lsof -p for pid 10390 which was way up top under the top command and this is the bottom line where I can sort of see what request this was and it says CLOSE_WAIT: httpd 10390 apache 34u IPv6 315879 0t0 TCP default-domain.com:https->crawl-66-249-65-91.googlebot.com:42907 (CLOSE_WAIT) I'm still not sure what exactly is causing this all to happen though? I killed that service but %wa and load average remain high, I also stopped mysqld and other services. It really only goes down once I stop httpd altogether, and even then I can't start it without finding remaining hanging httpd processes via "netstat -tulpn", killing those or doing "killall -9 httpd" and after waiting a while for it to cycle through all those then doing /etc/init.d/httpd start
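    A diagnostic step not shown in the excerpt, sketched here only as a suggestion: with mod_status and ExtendedStatus enabled, a PID taken from top or iotop can be matched directly to the client and request line it is serving, which is usually quicker than working backwards from lsof. The access rule below is an assumption; restrict it to wherever the check will actually be run from.

        # httpd.conf sketch (assumes mod_status is loaded, as it is in stock distro builds)
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1   # adjust to the admin host
        </Location>

    Requesting /server-status while the %wa spike is in progress then lists each child with its PID, state, client address and current request.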

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data Connectors Downloads here, includes: Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0 Oracle Loader for Hadoop Release 2.1.0 Oracle Data Integrator Companion 11g Oracle R Connector for Hadoop v 2.1 Oracle Big Data Documentation The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB) Integrated Software and Big Data Connectors User's Guide HTML PDF Oracle Data Integrator (ODI) Application Adapter for Hadoop Apache Hadoop is designed to handle and process data that is typically from data sources that are non-relational and data volumes that are beyond what is handled by relational databases. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities: Loading data into Hadoop from the local file system and HDFS Performing validation and transformation of data within Hadoop Loading processed data from Hadoop to an Oracle database for further processing and generating reports Oracle Database Loader for Hadoop Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance. Oracle R Connector for Hadoop Oracle R Connector for Hadoop is a collection of R packages that provide: Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files You install and load this package as you would any other R package. 
Using simple R functions, you can perform tasks such as: Access and transform HDFS data using a Hive-enabled transparency layer Use the R language for writing mappers and reducers Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations Oracle SQL Connector for Hadoop Distributed File System Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats: Data Pump files in HDFS Delimited text files in HDFS Hive tables For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS. Related Documentation Cloudera's Distribution Including Apache Hadoop Library HTML Oracle R Enterprise HTML Oracle NoSQL Database HTML Recent Blog Posts Big Data Appliance vs. DIY Price Comparison Big Data: Architecture Overview Big Data: Achieve the Impossible in Real-Time Big Data: Vertical Behavioral Analytics Big Data: In-Memory MapReduce Flume and Hive for Log Analytics Building Workflows in Oozie

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent directive in my Apache config file and then restarted Apache. When I try to access the old domain foo.com, I get a 403 error. This is what my Apache config file looks like:
        <VirtualHost *:80>
        ServerName foo.com
        #ServerAlias www.foo.com
        #ServerAdmin [email protected]
        Redirect permanent / http://www.foobar.org/
        DocumentRoot /path/to/project/foo/web
        DirectoryIndex index.php
        # CustomLog with format nickname
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common
        LogLevel notice
        ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log"
        <Directory />
        Order Deny,Allow
        Deny from all
        </Directory>
        <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        </Files>
        <Directory /path/to/project/foo/web>
        Options -Indexes -Includes
        AllowOverride All
        Allow from All
        RewriteEngine On
        # We check if the .html version is here (cacheing)
        RewriteRule ^$ index.html [QSA]
        RewriteRule ^([^.])$ $1.html [QSA]
        RewriteCond %{REQUEST_FILENAME} !-f
        # No, so we redirect to our front end controller
        RewriteRule ^(.*)$ index.php [QSA,L]
        </Directory>
        <Directory /path/to/project/foo/web/uploads>
        Options -ExecCGI -FollowSymLinks -Indexes -Includes
        AllowOverride None
        php_flag engine off
        </Directory>
        Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf
        <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf>
        # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf
        # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf>
        Options -Indexes -Includes
        AllowOverride All
        Allow from All
        </Directory>
        </VirtualHost>
    Can anyone spot what I may be doing wrong? The site foobar.org does exist, so I don't know why this error occurs. Help?
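    The excerpt does not say how this was resolved, so the following is only a common migration pattern rather than a diagnosis of the config above: give the old name its own bare VirtualHost whose only job is the redirect, so the old site's DocumentRoot, rewrite rules and <Directory> access restrictions never come into play for redirected requests. Names mirror the question.

        # Sketch: a dedicated vhost for the old domain that does nothing but redirect
        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias www.foo.com
            Redirect permanent / http://www.foobar.org/
        </VirtualHost>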

    Read the article

  • hadoop implementing a generic list writable

    - by Guruprasad Venkatesh
    I am working on building a MapReduce pipeline of jobs (with one MR job's output feeding into another as input). The values being passed around are fairly complex, in that there are lists of different types and hash maps whose values are lists. The Hadoop API does not seem to have a ListWritable. I am trying to write a generic one, but it seems I can't instantiate a generic type in my readFields implementation unless I pass in the class type itself:
        public class ListWritable<T extends Writable> implements Writable {
            private List<T> list;
            private Class<T> clazz;

            public ListWritable(Class<T> clazz) {
                this.clazz = clazz;
                list = new ArrayList<T>();
            }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeInt(list.size());
                for (T element : list) {
                    element.write(out);
                }
            }

            @Override
            public void readFields(DataInput in) throws IOException {
                int count = in.readInt();
                this.list = new ArrayList<T>();
                for (int i = 0; i < count; i++) {
                    try {
                        T obj = clazz.newInstance();
                        obj.readFields(in);
                        list.add(obj);
                    } catch (InstantiationException e) {
                        e.printStackTrace();
                    } catch (IllegalAccessException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    But Hadoop requires all Writables to have a no-argument constructor to read the values back. Has anybody tried to do the same and solved this problem? TIA.
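    Not from the thread, just a hedged sketch of one way the no-argument-constructor requirement is commonly met when all elements of a given list share one concrete type: keep a no-arg constructor and write the element class name into the stream, then rebuild the elements with ReflectionUtils. Text.writeString/readString and ReflectionUtils.newInstance are standard Hadoop APIs; the rest of the class is illustrative.

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.io.Writable;
        import org.apache.hadoop.util.ReflectionUtils;

        public class ListWritable<T extends Writable> implements Writable {
            private List<T> list = new ArrayList<T>();

            public ListWritable() { }                      // the no-arg constructor Hadoop needs

            public void add(T element) { list.add(element); }
            public List<T> get() { return list; }

            @Override
            public void write(DataOutput out) throws IOException {
                out.writeInt(list.size());
                if (!list.isEmpty()) {
                    // record the concrete element type so readFields can rebuild it
                    Text.writeString(out, list.get(0).getClass().getName());
                }
                for (T element : list) {
                    element.write(out);
                }
            }

            @SuppressWarnings("unchecked")
            @Override
            public void readFields(DataInput in) throws IOException {
                int count = in.readInt();
                list = new ArrayList<T>(count);
                if (count == 0) {
                    return;
                }
                String className = Text.readString(in);
                try {
                    Class<T> clazz = (Class<T>) Class.forName(className);
                    for (int i = 0; i < count; i++) {
                        T element = ReflectionUtils.newInstance(clazz, null);
                        element.readFields(in);
                        list.add(element);
                    }
                } catch (ClassNotFoundException e) {
                    throw new IOException("Unknown element class " + className, e);
                }
            }
        }

    Heterogeneous lists would need a per-element type tag, which is essentially what org.apache.hadoop.io.GenericWritable already provides.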

    Read the article

  • Hadoop File Read

    - by user3684584
    This is the Hadoop DistributedCache word-count example on Hadoop 2.2.0. I copied a file into the HDFS filesystem to be used inside the setup of the mapper class:
        protected void setup(Context context) throws IOException, InterruptedException {
            Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration());
            cacheData = new HashMap();
            for (Path urifile : uris) {
                try {
                    BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString()));
                    String line;
                    while ((line = readBuffer1.readLine()) != null) {
                        System.out.println("**************" + line);
                        cacheData.put(line, line);
                    }
                    readBuffer1.close();
                } catch (Exception e) {
                    System.out.println(e.toString());
                }
            }
        }
    Inside the driver main class:
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 3) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word_count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        Path outputpath = new Path(otherArgs[1]);
        outputpath.getFileSystem(conf).delete(outputpath, true);
        FileOutputFormat.setOutputPath(job, outputpath);
        System.out.println("CachePath****************" + otherArgs[2]);
        DistributedCache.addCacheFile(new URI(otherArgs[2]), job.getConfiguration());
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    But I am getting the exception:
        java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory)
    So the cache functionality is not working properly. Any idea?
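    The excerpt stops at the exception, so the sketch below is just one commonly used arrangement on Hadoop 2.x rather than a verified fix for this job: add the cache file with an explicit hdfs:// URI plus a #name fragment, and open it in the mapper through the symlink YARN creates in the task's working directory. The path, the fragment name wordlist and the trimmed-down mapper are assumptions for illustration.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private Map<String, String> cacheData;

            @Override
            protected void setup(Context context) throws IOException, InterruptedException {
                // Driver side (assumed): job.addCacheFile(new URI("hdfs:///user/user12/cache/wordlist.txt#wordlist"));
                // With a #fragment, YARN symlinks the localized file into the task's working
                // directory under that name, so plain java.io can open it.
                cacheData = new HashMap<String, String>();
                BufferedReader reader = new BufferedReader(new FileReader("wordlist"));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        cacheData.put(line, line);
                    }
                } finally {
                    reader.close();
                }
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // ... the word-count logic from the original Map class would go here ...
            }
        }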

    Read the article

  • Apache Derby running in Tomcat shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great. However, when I shut down Tomcat I get the following errors: 04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop INFO: Stopping service Catalina 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc SEVERE: The web application [/derby] registered the JBDC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered. 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc SEVERE: The web application [/derby] registered the JBDC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered. 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak. 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak. 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak. 04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak. 04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy INFO: Stopping Coyote HTTP/1.1 on http-8080 I'm currently starting and stopping Tomcat with the following commands: ./catalina run ./catalina stop Is there a better way to shutdown Tomcat with Derby or can this be solved by a configuration change?
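    The excerpt leaves the question open; what follows is only a hedged sketch of a common arrangement, not a confirmed remedy for this setup: stop the Derby network server and deregister its JDBC drivers from a ServletContextListener's contextDestroyed, so the DRDAConnThread/NetworkServerStarter threads and the ClientDriver/AutoloadedDriver registrations are gone before Tomcat's leak detection runs. NetworkServerControl.shutdown() is Derby's documented stop call; the listener still has to be declared in web.xml (or via @WebListener on a Servlet 3.0 container), which is assumed here.

        import java.sql.Driver;
        import java.sql.DriverManager;
        import java.util.Enumeration;

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        import org.apache.derby.drda.NetworkServerControl;

        public class DerbyShutdownListener implements ServletContextListener {

            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // server start-up is assumed to happen elsewhere in the webapp
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) {
                try {
                    new NetworkServerControl().shutdown();   // stops the DRDAConnThread_* / NetworkServerStarter threads
                } catch (Exception e) {
                    sce.getServletContext().log("Derby shutdown failed", e);
                }
                // Deregister drivers loaded by this webapp so Tomcat does not have to force it.
                Enumeration<Driver> drivers = DriverManager.getDrivers();
                while (drivers.hasMoreElements()) {
                    try {
                        DriverManager.deregisterDriver(drivers.nextElement());
                    } catch (Exception e) {
                        sce.getServletContext().log("Driver deregistration failed", e);
                    }
                }
            }
        }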

    Read the article

  • Apache > 2.2.22 rewrite rule not working?

    - by EBAH
    Since yesterday I've been trying to figure out how to fix the following: running phpipam (http://www.phpipam.net/) with WAMP (Windows environment). The problem I am facing is related to RewriteRule functionality, so forget phpipam for a moment and concentrate on a few lines of code. Here is the directory structure of my test website, which emulates the first steps phpipam does (you can download http://goo.gl/ksvuGc):
        C:\wamp\www\rewrite-tst\
        C:\wamp\www\rewrite-tst\.htaccess
        C:\wamp\www\rewrite-tst\index.php
        C:\wamp\www\rewrite-tst\install
        C:\wamp\www\rewrite-tst\install\index.php
    It seems that the following rewrite rules in C:\wamp\www\rewrite-tst\.htaccess don't work:
        # install
        RewriteRule ^install$ install/ [R]
        RewriteRule ^install/$ index.php?page=install
    When opening C:\wamp\www\rewrite-tst\index.php, the first step checks the URL for an "install" argument. Since the URL is http://localhost/rewrite-tst, no arguments are supplied and the browser is redirected with header("Location: /rewrite-tst/install/"). At this point the browser opens the page C:\wamp\www\rewrite-tst\install\index.php >> http://localhost/rewrite-tst/install. Apache, thanks to C:\wamp\www\rewrite-tst\.htaccess, should intercept this URL and redirect to http://localhost/rewrite-tst/index.php?page=install. Here is what I have tried: Win Apache 2.2.22: works; Win Apache 2.4.4: KO; Win Apache 2.4.6: KO. In the attached zip file you can also find two traces from the Apache RewriteLog which I can't understand very well. Why doesn't Apache 2.4 work on Windows? Is it possible that there's a bug in the Windows version of Apache (2.4.4 and 2.4.6), or am I doing something wrong somewhere? Thanks for your help!!! Evan -- UPDATE 12 Oct 2013: Now I'm really confused! Working on Linux, Kubuntu 13.04: Linux Apache 2.2.22: works; Linux Apache 2.4.6: KO. I guess there's something wrong in my rules at this point, or some change happened from Apache 2.2 to 2.4 ...
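    One documented 2.2-to-2.4 change worth noting here, though whether it explains the KO results is not established in the excerpt: the RewriteLog and RewriteLogLevel directives were removed in Apache 2.4, and rewrite tracing is requested through LogLevel instead, for example:

        # Apache 2.4 equivalent of the old RewriteLog: per-request rewrite trace in the error log
        LogLevel alert rewrite:trace6

    The trace lands in the error log, which makes it possible to watch how 2.4 applies these two rules step by step and compare that with the 2.2.22 behaviour.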

    Read the article

  • java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FilterDispatcher

    - by user714698
    log when start the tomcat Apr 28, 2011 10:52:57 AM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: D:\software\jdk1.5.0_06\bin;.;C:\WINDOWS\system32;C:\WINDOWS;D:/software/jdk1.5.0_06/bin/../jre/bin/client;D:/software/jdk1.5.0_06/bin/../jre/bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\TortoiseSVN\bin.;D:\software\jdk1.5.0_06\bin;D:\software\Ant 1.7\bin;D:\software\Axis2-1.5.4\axis2-1.5.4-bin\axis2-1.5.4\bin;C:\Program Files\IDM Computer Solutions\UltraEdit\ Apr 28, 2011 10:52:58 AM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:StrutsHelloWorld' did not find a matching property. Apr 28, 2011 10:52:58 AM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:StrutsHelloWorld1' did not find a matching property. Apr 28, 2011 10:53:00 AM org.apache.coyote.http11.Http11Protocol init INFO: Initializing Coyote HTTP/1.1 on http-8080 Apr 28, 2011 10:53:00 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 3657 ms Apr 28, 2011 10:53:00 AM org.apache.catalina.core.StandardService start INFO: Starting service Catalina Apr 28, 2011 10:53:00 AM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/6.0.32 Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext filterStart SEVERE: Exception starting filter struts2 java.lang.ClassNotFoundException: org.apache.struts2.dispatcher.FilterDispatcher at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:269) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:115) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4071) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4725) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardHost.start(StandardHost.java:840) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at org.apache.catalina.core.StandardService.start(StandardService.java:525) at org.apache.catalina.core.StandardServer.start(StandardServer.java:754) at org.apache.catalina.startup.Catalina.start(Catalina.java:595) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext start SEVERE: Error filterStart Apr 28, 2011 10:53:01 AM org.apache.catalina.core.StandardContext start SEVERE: Context 
[/StrutsHelloWorld1] startup failed due to previous errors log4j:WARN No appenders could be found for logger (com.opensymphony.xwork2.config.providers.XmlConfigurationProvider). log4j:WARN Please initialize the log4j system properly. Apr 28, 2011 10:53:05 AM org.apache.coyote.http11.Http11Protocol start INFO: Starting Coyote HTTP/1.1 on http-8080 Apr 28, 2011 10:53:05 AM org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening on /0.0.0.0:8009 Apr 28, 2011 10:53:05 AM org.apache.jk.server.JkMain start INFO: Jk running ID=0 time=0/46 config=null Apr 28, 2011 10:53:05 AM org.apache.catalina.startup.Catalina start INFO: Server startup in 4951 ms
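    The log already names the missing class; the snippet below is only a reminder of the usual ingredients, with the filter class name for Struts 2.0/2.1 as it appears in the log and the newer name used by Struts 2.1.3+ noted as an alternative. The key point, which is not expressible as web.xml, is that struts2-core (plus its xwork and ognl dependencies) must sit in WEB-INF/lib of the deployed webapp, not only on the IDE classpath.

        <!-- web.xml sketch: filter declaration matching the class the log is looking for -->
        <filter>
            <filter-name>struts2</filter-name>
            <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
            <!-- For Struts 2.1.3 and later the class is
                 org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter -->
        </filter>
        <filter-mapping>
            <filter-name>struts2</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>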

    Read the article

  • Hadoop Hive web interface options

    - by Garethr
    I've been experimenting with Hive for some data mining activities and would like to make it easily available to less command-line-oriented colleagues. Hive does now ship with a web interface (http://wiki.apache.org/hadoop/Hive/HiveWebInterface) but it's very basic at this stage. My question is: does a visually polished and fully featured interface to Hive (either desktop or, preferably, web based) exist yet? Are there any open source efforts outside the Hive project working on this?

    Read the article

  • Very basic question about Hadoop and compressed input files

    - by Luis Sisamon
    I have started to look into Hadoop. If my understanding is right, I could process a very big file and it would get split over different nodes. However, if the file is compressed then it cannot be split and would need to be processed by a single node (effectively destroying the advantage of running MapReduce over a cluster of parallel machines). My question is, assuming the above is correct, is it possible to split a large file manually into fixed-size chunks, or daily chunks, compress them, and then pass a list of compressed input files to a MapReduce job?
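    A minimal sketch of that manual-chunking idea, assuming gzip chunks already sitting under one HDFS directory and the standard mapreduce API of that era; it is not taken from the question, and the paths are placeholders. The point it illustrates is that each .gz file is unsplittable and therefore becomes exactly one input split, i.e. one map task, so many modest chunks restore the parallelism that a single huge compressed file would lose.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class ChunkedGzipJob {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Job job = new Job(conf, "daily-gzip-chunks");
                job.setJarByClass(ChunkedGzipJob.class);
                job.setInputFormatClass(TextInputFormat.class); // .gz input is decompressed transparently
                // job.setMapperClass(...), job.setReducerClass(...), output key/value classes:
                // plug in the real processing here; the defaults just pass records through.

                // A glob over pre-split, individually compressed chunks: gzip is not splittable,
                // so each .gz file becomes exactly one input split and one map task.
                FileInputFormat.setInputPaths(job, new Path("/data/logs/2010-05-*/*.gz"));
                FileOutputFormat.setOutputPath(job, new Path("/data/out/chunked-run"));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }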

    Read the article

  • Having two sets of input combined on hadoop

    - by aeolist
    I have a rather simple Hadoop question which I'll try to present with an example. Say you have a list of strings and a large file, and you want each mapper to process a piece of the file together with one of the strings, in a grep-like program. How are you supposed to do that? I am under the impression that the number of mappers is a result of the input splits produced. I could run subsequent jobs, one for each string, but it seems kinda... messy?
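    Not an answer given in the excerpt, only a sketch of one common workaround when the string list is small: serialize it into the job Configuration in the driver and unpack it in each mapper's setup, so a single job greps every split against every string, and keying the output by the matching string keeps the results separable. The property name grep.patterns is made up for the example.

        import java.io.IOException;

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class MultiPatternGrepMapper extends Mapper<LongWritable, Text, Text, Text> {
            private String[] patterns;

            @Override
            protected void setup(Context context) {
                // Driver side would have done: conf.setStrings("grep.patterns", "foo", "bar", ...);
                patterns = context.getConfiguration().getStrings("grep.patterns", new String[0]);
            }

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                for (String p : patterns) {
                    if (line.toString().contains(p)) {
                        context.write(new Text(p), line);   // key by pattern so results stay grouped
                    }
                }
            }
        }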

    Read the article

  • Hadoop write directory

    - by FaultyJuggler
    Simple question, but reading through the documentation and configuration I can't quite seem to figure it out. How do I A) know where Hadoop is writing to on the local disk and B) change that? For initial testing I set up HDFS on a 20 GB Linux VM; to it we've added a 500 GB networked drive for moving towards prototyping the full system. So now, how do I point HDFS at that drive, or do I simply move the home directory/install with some slight change in setup and restart the process?
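    A hedged sketch of the usual answer, with the mount point /mnt/bigdrive as an assumption: by default the storage locations hang off hadoop.tmp.dir (typically /tmp/hadoop-${user.name}), which covers the "where is it writing now" half, and overrides in hdfs-site.xml point them elsewhere. The property names below are the 1.x spellings; 2.x releases call them dfs.namenode.name.dir and dfs.datanode.data.dir.

        <!-- hdfs-site.xml sketch: point HDFS storage at the mounted drive (path assumed) -->
        <property>
          <name>dfs.name.dir</name>
          <value>/mnt/bigdrive/hdfs/name</value>
        </property>
        <property>
          <name>dfs.data.dir</name>
          <value>/mnt/bigdrive/hdfs/data</value>
        </property>

    Changing these requires restarting the daemons, and blocks already written under the old directories do not migrate by themselves.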

    Read the article

  • Hadoop On Azure C#Isotope

    - by Sreesankar
    During the initial release of HadoopOnAzure, Microsoft provided the C# Isotope SDK as a programmatic interface to the Hadoop cluster on Azure. After the HDInsight release it was removed from the downloads. Moreover, when trying the previous version of the SDK we get a 500 - Internal Server Error. Any idea if this service has been disabled? If so, what's the alternative way to programmatically interface with the HDInsight cluster on Azure?

    Read the article

  • Efficient way to store a graph for calculation in Hadoop

    - by user337499
    I am currently trying to perform calculations like the clustering coefficient on huge graphs with the help of Hadoop. Therefore I need an efficient way to store the graph such that I can easily access a node, its neighbors, and the neighbors' neighbors. The graph is quite sparse and is stored in a huge tab-separated file where the first field is the node from which an edge goes to the node in the second field. Thanks in advance!
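    Not from the thread, just a sketch of the representation most Hadoop graph jobs of this kind start from, assuming the tab-separated edge list described above: convert edges into adjacency lists, which a follow-up pass can then ship to each node's neighbors to expose neighbors-of-neighbors on a single reducer. The class names and the undirected-graph assumption are illustrative.

        import java.io.IOException;

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        // Edge list (src \t dst) -> adjacency list (node \t n1,n2,n3,...), treating edges as undirected.
        public class AdjacencyListBuilder {

            public static class EdgeMapper extends Mapper<LongWritable, Text, Text, Text> {
                @Override
                protected void map(LongWritable offset, Text line, Context context)
                        throws IOException, InterruptedException {
                    String[] fields = line.toString().split("\t");
                    if (fields.length < 2) {
                        return;                              // skip malformed lines
                    }
                    context.write(new Text(fields[0]), new Text(fields[1]));
                    context.write(new Text(fields[1]), new Text(fields[0]));  // emit both directions
                }
            }

            public static class NeighborReducer extends Reducer<Text, Text, Text, Text> {
                @Override
                protected void reduce(Text node, Iterable<Text> neighbors, Context context)
                        throws IOException, InterruptedException {
                    StringBuilder sb = new StringBuilder();
                    for (Text n : neighbors) {
                        if (sb.length() > 0) {
                            sb.append(',');
                        }
                        sb.append(n.toString());
                    }
                    context.write(node, new Text(sb.toString()));
                }
            }
        }

    A second job can map over these adjacency lines and emit each node's list keyed by every one of its neighbors, which is the usual route to triangle counts and clustering coefficients.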

    Read the article

  • Changing the block size of a dfs file in Hadoop

    - by Sam
    I found that my map tasks are currently inefficient when parsing one particular set of files (2 TB in total). I'd like to change the block size of these files in the Hadoop DFS (from 64 MB to 128 MB). I can't find how to do it in the documentation for only one set of files rather than the entire cluster. Does anyone know the command that would change the block size when I upload the files (i.e. copy from local to the DFS)? Thanks!
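    A hedged sketch of the per-upload route (the excerpt does not say which release is in use): the file-system shell accepts generic -D options, so the block size can be overridden just for the copy. dfs.block.size is the classic property name, in bytes; newer 2.x releases spell it dfs.blocksize. The paths are placeholders.

        # 128 MB = 134217728 bytes; applies only to the files written by this command
        hadoop fs -D dfs.block.size=134217728 -put /local/path/bigdata/*.log /user/sam/bigdata/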

    Read the article

  • Apache logs: "::1 ... "OPTIONS * HTTP/1.0" 200 -

    - by Meltemi
    Just looking at the logs of a not-so-busy site on one of our Apache servers, and I notice tons of these in the log:
        ::1 - - [15/Apr/2011:12:11:40 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [15/Apr/2011:12:11:41 -0700] "OPTIONS * HTTP/1.0" 200 -
        ::1 - - [15/Apr/2011:12:11:44 -0700] "OPTIONS * HTTP/1.0" 200 -
    They seem to appear multiple times just below the GET requests where Apache has served a page and its related images. What do they mean? What IP is "::1"? If they're benign, can I suppress them?
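    For context (well-established Apache behaviour rather than anything stated in the excerpt): ::1 is the IPv6 loopback address, and OPTIONS * requests from it are typically httpd's own "internal dummy connection" wake-up requests from the parent process to its children, so they are harmless. If suppressing them is still desired, one common pattern is a conditional access log; the log path and format name below should be adapted to the existing CustomLog line.

        # Mark requests arriving over the IPv6 loopback and leave them out of the access log
        SetEnvIf Remote_Addr "::1" internal_dummy
        CustomLog logs/access_log common env=!internal_dummy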

    Read the article

  • Configure PHP and Apache in Windows 7

    - by manxing
    I installed the Apache server successfully on Windows 7 (32-bit). It showed "It works" on the webpage. I also created an info.php file containing <?php phpinfo(); ?>. But when I try to open http://localhost/info.php in the browser, all I get is exactly <?php phpinfo(); ?> in plain text. I restarted the Apache server every time I made changes. Can anyone help with this? Many thanks in advance!
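    The excerpt does not show httpd.conf, so the lines below are only the usual ingredients for hooking PHP 5 into Apache 2.2 on Windows, with the install path C:/php and the DLL name as assumptions to adjust for the actual PHP build (the DLL must match the Apache major version and be the thread-safe variant):

        # httpd.conf sketch -- paths and DLL name are assumptions for a typical Windows PHP 5 install
        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"

    Raw <?php ... ?> appearing in the browser usually means no handler is mapped for .php, which is what these directives provide; Apache needs another restart after they are added.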

    Read the article
