Search Results

Search found 371 results on 15 pages for 'cassandra clark'.

Page 5/15 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Are there any examples/tutorials of using Spring 3.0 with Cassandra as a backend?

    - by zeroDivisible
    Hello. As I wrote in the title, I am trying to learn Spring 3.0 (I already know Django, Pylons and a few simpler MVC frameworks) and want to use Cassandra as the backend for my web application. Are there any real-world examples of doing this? Or maybe some tutorials? I know documentation exists for both technologies, yet I am looking for something "faster" to read that gets me rolling.
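
    In the absence of a ready-made tutorial, here is a minimal sketch of how a Cassandra-backed DAO might sit behind Spring 3.x stereotypes. The CassandraClient interface and its in-memory implementation are hypothetical placeholders for whatever client library you end up choosing (Hector, Pelops, raw Thrift); only the Spring annotations and context API are real, and the column family and column names are made up for illustration.

      // Sketch only: wiring a (hypothetical) Cassandra client behind Spring 3.x beans.
      import java.util.HashMap;
      import java.util.Map;
      import org.springframework.beans.factory.annotation.Autowired;
      import org.springframework.context.annotation.AnnotationConfigApplicationContext;
      import org.springframework.stereotype.Repository;
      import org.springframework.stereotype.Service;

      interface CassandraClient {                        // hypothetical thin wrapper around a real client
          void put(String columnFamily, String key, String column, String value);
          String get(String columnFamily, String key, String column);
      }

      @Repository
      class InMemoryCassandraClient implements CassandraClient {
          private final Map<String, String> store = new HashMap<String, String>();  // stands in for Cassandra
          public void put(String cf, String key, String column, String value) {
              store.put(cf + "/" + key + "/" + column, value);
          }
          public String get(String cf, String key, String column) {
              return store.get(cf + "/" + key + "/" + column);
          }
      }

      @Service
      class UserService {
          @Autowired private CassandraClient cassandra;  // injected like any other Spring bean

          public void register(String userId, String email) {
              cassandra.put("Users", userId, "email", email);
          }
          public String emailOf(String userId) {
              return cassandra.get("Users", userId, "email");
          }
      }

      public class SpringCassandraSketch {
          public static void main(String[] args) {
              AnnotationConfigApplicationContext ctx =
                      new AnnotationConfigApplicationContext(InMemoryCassandraClient.class, UserService.class);
              UserService users = ctx.getBean(UserService.class);
              users.register("u:123", "steve@example.com");
              System.out.println(users.emailOf("u:123"));  // prints steve@example.com
              ctx.close();
          }
      }

    Swapping the in-memory client for a real one would be a matter of providing another CassandraClient bean; the service layer stays unchanged.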

    Read the article

  • Is there any documentation for the Cassandra Erlang interface?

    - by Zubair
    I have looked everywhere, and to use Cassandra from Erlang you end up having to download (amongst others) Boost and Thrift, and then you have to generate the Erlang library by hand and copy the lib files and beam files. Once you have the whole thing working, there is absolutely zero documentation anywhere. If anyone could show me some user-friendly documentation it would be much appreciated.

    Read the article

  • Pig_Cassandra integration causes ERROR 1070: Could not resolve CassandraStorage using imports

    - by Le Dude
    I'm following a basic Pig, Cassandra and Hadoop installation. Everything works just fine stand-alone, with no errors. However, when I try to run the example script provided with the Pig_cassandra example, I get this error:

    [root@localhost pig]# /opt/cassandra/apache-cassandra-1.1.6/examples/pig/bin/pig_cassandra -x local -x local /opt/cassandra/apache-cassandra-1.1.6/examples/pig/example-script.pig
    Using /opt/pig/pig-0.10.0/pig-0.10.0-withouthadoop.jar.
    2012-10-24 21:14:58,551 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0 (r1328203) compiled Apr 19 2012, 22:54:12
    2012-10-24 21:14:58,552 [main] INFO org.apache.pig.Main - Logging error messages to: /opt/pig/pig_1351138498539.log
    2012-10-24 21:14:59,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
    2012-10-24 21:14:59,472 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
    Details at logfile: /opt/pig/pig_1351138498539.log

    Here is the log file:

    Pig Stack Trace
    ---------------
    ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
    org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
    at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1597)
    at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
    at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
    at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
    at org.apache.pig.Main.run(Main.java:555)
    at org.apache.pig.Main.main(Main.java:111)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
    Caused by: Failed to parse: Cannot instantiate: CassandraStorage
    at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
    at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
    ... 14 more
    Caused by: java.lang.RuntimeException: Cannot instantiate: CassandraStorage
    at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:510)
    at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791)
    at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780)
    at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583)
    at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3115)
    at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1291)
    at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
    at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
    at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
    at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
    ... 15 more
    Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
    at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:495)
    at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:507)
    ... 24 more
    ================================================================================

    I googled around and found another Stack Overflow user who identified the potential problem but not the solution: "Cassandra and pig integration cause error during startup". I believe my configuration is correct and the paths have already been defined properly. I didn't change anything in the pig_cassandra file. I'm not quite sure how to proceed from here. Please help?

    Read the article

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of quorums, i.e. the number of synchronously written replicas (W) and the number of replicas read (R) are chosen such that W + R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent, fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W + R > N scheme?
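
    As a side note, the overlap argument behind the quorum condition can be checked mechanically. The toy Java sketch below is my own illustration (not code from Cassandra or ZooKeeper): it only shows that any write set of size W and any read set of size R drawn from N replicas must intersect whenever W + R > N, which is why a quorum read is guaranteed to see the latest acknowledged quorum write.

      // Toy illustration of the quorum-overlap condition W + R > N.
      public class QuorumOverlap {
          public static void main(String[] args) {
              int n = 3;  // replication factor
              for (int w = 1; w <= n; w++) {
                  for (int r = 1; r <= n; r++) {
                      // Worst case: the write set and read set are chosen to overlap as little as
                      // possible; they still share at least max(0, w + r - n) replicas.
                      int guaranteedOverlap = Math.max(0, w + r - n);
                      System.out.printf("N=%d W=%d R=%d -> guaranteed overlap %d (%s)%n",
                              n, w, r, guaranteedOverlap,
                              guaranteedOverlap > 0 ? "read sees latest write" : "may read stale data");
                  }
              }
          }
      }

    Note that this only covers replica overlap; it says nothing about leader election or ordering, which is where a Paxos-based system differs.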

    Read the article

  • Scalable distributed file system for blobs like images and other documents

    - by Pinnacle
    Cassandra & HBase both do not efficiently support storage of blobs like images. Storing directly on HDFS stresses the Namenode because of huge number of files. Facebook uses Haystack for images and attachments storage, but this is not open source. So is Lustre a good choice for distributed blob storage? I have read that Amazon S3 is used by many, but this would cost money and personally, I would not like to rely on third party system. What are other suggestions?

    Read the article

  • Searches (and general querying) with HBase and/or Cassandra (best practices?)

    - by alexeypro
    I have a User model object with quite a few fields (properties, if you wish) in it, say "firstname", "lastname", "city" and "year-of-birth". Each user also gets a unique id. I want to be able to search by these fields. How do I do that properly? How do I do that at all?

    My understanding (which will work for pretty much any key-value storage, where first goes the key, then the value): u:123456789 = serialized_json_object ("u" is a simple prefix for user keys, 123456789 is the unique id). Now, since I want to be able to search by firstname and lastname, I can save: f:Steve = u:384734807,u:2398248764,u:23276263 and f:Alex = u:12324355,u:121324334, so the key prefix "f" marks firstnames and "Steve" is the actual firstname. Under "f:Steve" we save as the value the ids of all users named Steve. That makes every search very easy. Querying by a few fields (properties), say by firstname ("f:Steve") and lastname ("l:Anything"), is still easy: first get the list of user ids from "f:Steve", then the list from "l:Anything", intersect the ids, and here you go.

    Problems (and there are quite a few): Saving, updating and deleting a user is a pain; it has to be an atomic and consistent operation. Also, if the size of a value is limited, then we are in (potential) trouble, and I really have no answer here; only compressing the list of user ids, which is not too cool. What if we eventually want to add a new field to search by, say "city"? We can certainly do the same thing with "c:Los Angeles" = ..., "c:Chicago" = ..., but if we didn't foresee all those search choices from the very beginning, then we will have to run some nightly job to walk all existing User records and fill in those "c:CITY" entries for them. Quite a big job! There are also problems with locking: user "u:123" updates his name to "Alex", and user "u:456" updates his name to "Alex"; they both have to update "f:Alex" with their ids. That means we either get an overwriting problem, or one update has to wait for the other (and imagine if there are many of them!).

    What's the best way of doing this, keeping in mind that I want to search by many fields? P.S. Please, the question is about HBase/Cassandra/NoSQL/key-value storages. Please, no advice to use MySQL and "read about SELECTs", or to worry about scaling problems "later". There is a reason why I asked my question exactly the way I did. :-)
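
    For what it's worth, the manual secondary-index scheme described in the question can be sketched with nothing more than in-memory maps standing in for column families: index rows keyed like "f:Steve" hold the matching user ids, and a multi-field query intersects the id sets. The Java sketch below is only a model of that layout; the class and method names are made up for illustration, and no Cassandra or HBase client is involved.

      import java.util.*;

      // In-memory stand-in for manual secondary indexes: one "column family" per
      // indexed field, rows keyed by prefix + value ("f"/"Steve"), ids as columns.
      public class ManualIndexSketch {
          // field prefix -> (field value -> set of user ids)
          private final Map<String, Map<String, Set<String>>> indexes = new HashMap<>();

          public void index(String prefix, String value, String userId) {
              indexes.computeIfAbsent(prefix, p -> new HashMap<>())
                     .computeIfAbsent(value, v -> new HashSet<>())
                     .add(userId);
          }

          // Multi-field query: fetch each index row, then intersect the id sets.
          public Set<String> query(Map<String, String> criteria) {
              Set<String> result = null;
              for (Map.Entry<String, String> c : criteria.entrySet()) {
                  Set<String> ids = indexes.getOrDefault(c.getKey(), Collections.emptyMap())
                                           .getOrDefault(c.getValue(), Collections.emptySet());
                  if (result == null) result = new HashSet<>(ids);
                  else result.retainAll(ids);
              }
              return result == null ? Collections.emptySet() : result;
          }

          public static void main(String[] args) {
              ManualIndexSketch idx = new ManualIndexSketch();
              idx.index("f", "Steve", "u:123"); idx.index("l", "Anything", "u:123");
              idx.index("f", "Steve", "u:456"); idx.index("l", "Other", "u:456");
              System.out.println(idx.query(Map.of("f", "Steve", "l", "Anything"))); // [u:123]
          }
      }

    The sketch also makes the stated problems visible: index(...) and query(...) touch several rows, which is exactly where the atomicity, re-indexing and concurrent-update concerns come from.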

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database makeover. The general data model: there are records with record ids (RIDs), grouped together by group ids (GIDs). The records have arbitrary data fields (maybe 5-15), a few of them mandatory and the rest optional, and thus sparse. The general usage model: there are LOTS and LOTS of writes; millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes with existing GIDs. There aren't as many reads, but when they happen they need to be pretty fast, or at least constant speed regardless of the database size, and a read needs to retrieve all the records/RIDs with a certain GID. I don't need to search by the record field values; primarily I will query by the GID and maybe the RID. What database implementation should I use? I did some initial research comparing document-oriented and column-oriented databases, and the document-oriented ones seem a good fit model-wise: I could store all the records of a group together under the same document key using the GID, but I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, with a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID, with each row holding a column for the GID value? Which results in faster writes and reads under this model?
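
    To make the first layout concrete, here is a minimal sketch of the wide-row shape being considered (GID as the row key, one column per RID), with plain sorted maps standing in for a column family. It is only a model of the data shape under those assumptions, not a Cassandra client call, and all names are illustrative.

      import java.util.*;

      // Sketch of the wide-row layout: the group id (GID) is the row key and each
      // record id (RID) becomes one column in that row, so "fetch all records for a
      // GID" is a single row read. Plain maps stand in for the column family.
      public class WideRowModel {
          // GID -> (RID -> serialized record)
          private final SortedMap<String, SortedMap<String, String>> rows = new TreeMap<>();

          public void write(String gid, String rid, String record) {
              rows.computeIfAbsent(gid, g -> new TreeMap<>()).put(rid, record);
          }

          // The common read path: every record in one group, one "row" lookup.
          public SortedMap<String, String> readGroup(String gid) {
              return rows.getOrDefault(gid, new TreeMap<>());
          }

          public static void main(String[] args) {
              WideRowModel cf = new WideRowModel();
              cf.write("gid-42", "rid-1", "{\"f1\":\"a\"}");
              cf.write("gid-42", "rid-2", "{\"f1\":\"b\"}");
              System.out.println(cf.readGroup("gid-42").keySet()); // [rid-1, rid-2]
          }
      }

    The alternative (RID as row key with a GID column) would turn "read all records of a group" into a scan or an index lookup rather than one row fetch, which is the trade-off the question is really about.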

    Read the article

  • Cassandra API equivalent of "SELECT ... FROM ... WHERE id IN ('...', '...', '...');"

    - by knorv
    Assume the following data set:

    id      age  city      phone
    ==      ===  ====      =====
    alfred  30   london    3281283
    jeff    43   sydney    2342734
    joe     29   tokyo     1283881
    kelly   54   new york  2394929
    molly   20   london    1823881

    To get the following result set:

    id      age  phone
    ==      ===  =====
    alfred  30   3281283
    joe     29   1283881
    molly   20   1823881

    using SQL one would issue:

    SELECT id, age, phone FROM dataset WHERE id IN ('alfred', 'joe', 'molly');

    What is the corresponding Cassandra API call that would yield the same result set in one command?
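
    For context: in the Thrift-based API of that era, the closest single call was a multiget (for example multiget_slice), which takes a list of row keys plus a column selection and returns the requested columns per key in one round trip; later CQL versions also accept an IN clause on the key directly. The Java sketch below only models that multiget semantics with an in-memory map so the shape of the request and response is visible; it is not a client-library example, and the class and method names are made up.

      import java.util.*;

      // Model of what a Cassandra multiget does for the query above: given a list of
      // row keys and wanted columns, return the matching columns for each key.
      public class MultigetModel {
          private final Map<String, Map<String, String>> dataset = new HashMap<>();

          public void put(String id, Map<String, String> columns) {
              dataset.put(id, columns);
          }

          public Map<String, Map<String, String>> multiget(List<String> ids, List<String> columns) {
              Map<String, Map<String, String>> result = new LinkedHashMap<>();
              for (String id : ids) {
                  Map<String, String> row = dataset.get(id);
                  if (row == null) continue;                     // missing keys are simply absent
                  Map<String, String> slice = new LinkedHashMap<>();
                  for (String col : columns) {
                      if (row.containsKey(col)) slice.put(col, row.get(col));
                  }
                  result.put(id, slice);
              }
              return result;
          }

          public static void main(String[] args) {
              MultigetModel cf = new MultigetModel();
              cf.put("alfred", Map.of("age", "30", "city", "london", "phone", "3281283"));
              cf.put("joe",    Map.of("age", "29", "city", "tokyo",  "phone", "1283881"));
              cf.put("molly",  Map.of("age", "20", "city", "london", "phone", "1823881"));
              System.out.println(cf.multiget(List.of("alfred", "joe", "molly"),
                                             List.of("age", "phone")));
          }
      }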

    Read the article

  • Scuttlebutt Reconciliation in the paper “Efficient Reconciliation and Flow Control for Anti-Entropy Protocols”

    - by soulmachine
    I am reading the paper "Efficient Reconciliation and Flow Control for Anti-Entropy Protocols", and I can't clearly understand Section 3.2, "Scuttlebutt Reconciliation". Here I extract some sentences from the paper that especially confuse me: "If gossip messages were unlimited in size, then the sets contain the exact differences, just like with precise reconciliation." "Scuttlebutt requires that if a certain delta (r, k, v, n) is omitted, then all the deltas with higher version numbers for the same r should be omitted as well." "Scuttlebutt satisfies the global invariant C(p,q) for any two processes p and q:"
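
    To make the quoted omission rule concrete, here is a toy Java sketch (my own illustration, not code from the paper): pending deltas for each endpoint r are considered in increasing version order, and as soon as one delta for r is left out because the message is full, every higher-versioned delta for r is left out too. That way the highest version the receiver sees for r is still a consistent cut of r's updates.

      import java.util.*;

      // Toy illustration of the Scuttlebutt omission rule quoted above.
      public class ScuttlebuttOmission {
          static class Delta {
              final String endpoint, key, value; final long version;
              Delta(String endpoint, String key, String value, long version) {
                  this.endpoint = endpoint; this.key = key; this.value = value; this.version = version;
              }
              public String toString() { return "(" + endpoint + "," + key + "," + value + "," + version + ")"; }
          }

          // Pick at most maxDeltas, never sending a higher version for an endpoint
          // after a lower version for that endpoint has been omitted.
          static List<Delta> buildMessage(List<Delta> pending, int maxDeltas) {
              pending.sort(Comparator.comparing((Delta d) -> d.endpoint)
                                     .thenComparingLong(d -> d.version));
              List<Delta> message = new ArrayList<>();
              Set<String> truncatedEndpoints = new HashSet<>();
              for (Delta d : pending) {
                  if (truncatedEndpoints.contains(d.endpoint)) continue;  // higher versions dropped too
                  if (message.size() < maxDeltas) message.add(d);
                  else truncatedEndpoints.add(d.endpoint);                // first omission truncates r
              }
              return message;
          }

          public static void main(String[] args) {
              List<Delta> pending = new ArrayList<>(List.of(
                      new Delta("r1", "load", "0.3", 5), new Delta("r1", "load", "0.4", 6),
                      new Delta("r2", "load", "0.9", 2), new Delta("r2", "disk", "70%", 3)));
              System.out.println(buildMessage(pending, 3));
          }
      }

    The sketch leaves out the flow-control part of the paper entirely; it only shows why omitting a delta forces omission of all later deltas for the same endpoint.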

    Read the article

  • Trying to install datastax opscenter - Failed to load application: cannot import name _parse

    - by gansbrest
    I'm not familiar with Python; maybe someone could explain what's going on here?

    ec2-user@prod-opscenter-01:~ % java -version
    java version "1.7.0_45"
    Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
    Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
    ec2-user@prod-opscenter-01:~ % python -V
    Python 2.6.8
    ec2-user@prod-opscenter-01:~ % openssl version
    OpenSSL 1.0.1e-fips 11 Feb 2013

    And now the error:

    ec2-user@prod-opscenter-01:~ % sudo /etc/init.d/opscenterd start
    Starting Cassandra cluster manager opscenterd
    Starting opscenterdUnhandled Error
    Traceback (most recent call last):
    File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 652, in run
        runApp(config)
    File "/usr/lib64/python2.6/site-packages/twisted/scripts/twistd.py", line 23, in runApp
        _SomeApplicationRunner(config).run()
    File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 386, in run
        self.application = self.createOrGetApplication()
    File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 451, in createOrGetApplication
        application = getApplication(self.config, passphrase)
    --- <exception caught here> ---
    File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 462, in getApplication
        application = service.loadApplication(filename, style, passphrase)
    File "/usr/lib64/python2.6/site-packages/twisted/application/service.py", line 405, in loadApplication
        application = sob.loadValueFromFile(filename, 'application', passphrase)
    File "/usr/lib64/python2.6/site-packages/twisted/persisted/sob.py", line 210, in loadValueFromFile
        exec fileObj in d, d
    File "bin/start_opscenter.py", line 1, in <module>
        from opscenterd import opscenterd_tap
    File "/usr/lib/python2.6/site-packages/opscenterd/opscenterd_tap.py", line 37, in <module>
    File "/usr/lib/python2.6/site-packages/opscenterd/OpsCenterdService.py", line 13, in <module>
    File "/usr/lib/python2.6/site-packages/opscenterd/ClusterServices.py", line 22, in <module>
    File "/usr/lib/python2.6/site-packages/opscenterd/WebServer.py", line 40, in <module>
    File "/usr/lib/python2.6/site-packages/opscenterd/Agents.py", line 18, in <module>
    exceptions.ImportError: cannot import name _parse
    Failed to load application: cannot import name _parse

    Maybe there are open-source alternatives for monitoring Cassandra that I should look at? Thanks a lot.

    Read the article

  • How to achieve zero downtime

    - by Hiral Lakdavala
    For an application, we want to achieve zero database and application downtime using an active-active configuration. Our DB is Oracle. Following are my questions: How can we achieve an active-active configuration in Oracle? Would introducing Cassandra/HBase (or any other NoSQL DB) help achieve zero downtime, or are they only useful for fast retrieval of data from a large DB? Any other options? Thanks and regards, Hiral

    Read the article

  • PostgreSQL vs Cassandra vs MongoDB vs Voldemort?

    - by ramonrails
    Which database should we decide upon? Any comparisons?

    Existing: PostgreSQL. Issues:
    - Not easily scalable horizontally; needs sharding etc.
    - Clustering does not solve the data growth problem.

    Looking for: any database that is easily horizontally scalable.
    - Cassandra (Twitter uses that?)
    - MongoDB (rapidly gaining popularity)
    - Voldemort
    - Other?

    Why?
    - Data is growing with a snowball effect.
    - The existing PostgreSQL locks tables etc. for periodic vacuum tasks.
    - Archiving data is currently tedious.
    - Human interaction is involved in the existing periodic archive/vacuum process.
    - We need a "set it, forget it, just add another server when the data grows" type of solution.

    Read the article

  • CentOS: OpsCenter does not see other node's agent

    - by Alice
    I'm new to Apache Cassandra. I am trying to install a small sample cluster using two CentOS servers. I followed the documentation (tarball installation) and the nodes are up. However, when I go to OpsCenter, the nodes cannot see each other's agent (there is always "1 of 2 agents connected"; I tried to fix it, but nothing changed). I tried both disabling and enabling SSL, I tried setting the incoming_interface in opscenter.conf, and I tried almost everything the network suggested to me, but the problem persists. Now I have SSL enabled, and the agent log tells me: "There was an error when attempting to load stored rollups." Is there someone who could help me, please?

    Read the article

  • Can't upgrade my Ubuntu server, it gets stuck on openjdk-6-jre-headless

    - by Jean-Nicolas Boulay Desjardins
    I am using Ubuntu Server. When I do apt-get upgrade it gets stuck on:

    Setting up openjdk-6-jre-headless (6b20-1.9.7-0ubuntu1) ...

    Why? And what can I do to stop it? I tried removing it with apt-get and I get this error:

    E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So then I tried dpkg --purge openjdk-6-jre-headless and I got this:

    dpkg: dependency problems prevent removal of openjdk-6-jre-headless:
    openjdk-6-jre-lib depends on openjdk-6-jre-headless (>= 6b17).
    ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
    Package openjdk-6-jre-headless is to be removed.
    Package java6-runtime-headless is not installed.
    Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
    ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
    Package openjdk-6-jre-headless is to be removed.
    Package java6-runtime-headless is not installed.
    Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
    dpkg: error processing openjdk-6-jre-headless (--purge):
    dependency problems - not removing
    Errors were encountered while processing:
    openjdk-6-jre-headless

    The thing is, I think my DB is using it... not sure... I am using Cassandra with Thrift... yes, it's getting a bit more complex. Running # dpkg --configure -a I get:

    dpkg: dependency problems prevent configuration of openjdk-6-jre:
    openjdk-6-jre depends on openjdk-6-jre-headless (>= 6b20-1.9.7-0ubuntu1); however:
    Package openjdk-6-jre-headless is not configured yet.
    dpkg: error processing openjdk-6-jre (--configure):
    dependency problems - leaving unconfigured
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    dpkg: dependency problems prevent configuration of libaccess-bridge-java:
    libaccess-bridge-java depends on default-jre | openjdk-6-jre | sun-java6-jre; however:
    Package default-jre is not installed.
    Package openjdk-6-jre is not configured yet.
    Package sun-java6-jre is not installed.
    dpkg: error processing libaccess-bridge-java (--configure):
    dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of icedtea-6-jre-cacao:
    icedtea-6-jre-cacao depends on openjdk-6-jre-headless (= 6b20-1.9.7-0ubuntu1); however:
    Package openjdk-6-jre-headless is not configured yet.
    dpkg: error processing icedtea-6-jre-cacao (--configure):
    dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of libaccess-bridge-java-jni:
    libaccess-bridge-java-jni depends on libaccess-bridge-java (>= 1.26.2-5); however:
    Package libaccess-bridge-java is not configured yet.
    dpkg: error processing libaccess-bridge-java-jni (--configure):
    dependency problems - leaving unconfigured
    Errors were encountered while processing:
    openjdk-6-jre libaccess-bridge-java icedtea-6-jre-cacao libaccess-bridge-java-jni

    Thanks again for any help.

    Read the article

  • MySQL hangs if connection comes from outside the LAN

    - by Subito
    I have a MySQL server operating just fine if I access it from its local LAN (192.168.100.0/24). If I try to access it from another LAN (192.168.113.0/24 in this case), it hangs for a really long time before delivering the result. SHOW PROCESSLIST; shows the connection's process in Sleep, with an empty State. If I strace -p the corresponding mysqld process (PID 23512) I get the following output:

    Process 23512 attached - interrupt to quit
    restart_syscall(<... resuming interrupted call ...>) = 1
    fcntl(10, F_GETFL) = 0x2 (flags O_RDWR)
    fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    accept(10, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 33
    fcntl(10, F_SETFL, O_RDWR) = 0
    rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0
    getpeername(33, {sa_family=AF_INET, sin_port=htons(51696), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0
    getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
    open("/etc/hosts.allow", O_RDONLY) = 64
    fstat(64, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
    read(64, "# /etc/hosts.allow: list of host"..., 4096) = 580
    read(64, "", 4096) = 0
    close(64) = 0
    munmap(0x7f9ce9839000, 4096) = 0
    open("/etc/hosts.deny", O_RDONLY) = 64
    fstat(64, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
    read(64, "# /etc/hosts.deny: list of hosts"..., 4096) = 880
    read(64, "", 4096) = 0
    close(64) = 0
    munmap(0x7f9ce9839000, 4096) = 0
    getsockname(33, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
    fcntl(33, F_SETFL, O_RDONLY) = 0
    fcntl(33, F_GETFL) = 0x2 (flags O_RDWR)
    setsockopt(33, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
    setsockopt(33, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
    fcntl(33, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    setsockopt(33, SOL_IP, IP_TOS, [8], 4) = 0
    setsockopt(33, SOL_TCP, TCP_NODELAY, [1], 4) = 0
    futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
    futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1
    poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=10, revents=POLLIN}])
    fcntl(10, F_GETFL) = 0x2 (flags O_RDWR)
    fcntl(10, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    accept(10, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 31
    fcntl(10, F_SETFL, O_RDWR) = 0
    rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f9ce7ca34f0}, ) = 0
    getpeername(31, {sa_family=AF_INET, sin_port=htons(51697), sin_addr=inet_addr("192.168.113.4")}, [16]) = 0
    getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
    open("/etc/hosts.allow", O_RDONLY) = 33
    fstat(33, {st_mode=S_IFREG|0644, st_size=580, ...}) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
    read(33, "# /etc/hosts.allow: list of host"..., 4096) = 580
    read(33, "", 4096) = 0
    close(33) = 0
    munmap(0x7f9ce9839000, 4096) = 0
    open("/etc/hosts.deny", O_RDONLY) = 33
    fstat(33, {st_mode=S_IFREG|0644, st_size=880, ...}) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9ce9839000
    read(33, "# /etc/hosts.deny: list of hosts"..., 4096) = 880
    read(33, "", 4096) = 0
    close(33) = 0
    munmap(0x7f9ce9839000, 4096) = 0
    getsockname(31, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("192.168.100.190")}, [16]) = 0
    fcntl(31, F_SETFL, O_RDONLY) = 0
    fcntl(31, F_GETFL) = 0x2 (flags O_RDWR)
    setsockopt(31, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
    setsockopt(31, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
    fcntl(31, F_SETFL, O_RDWR|O_NONBLOCK) = 0
    setsockopt(31, SOL_IP, IP_TOS, [8], 4) = 0
    setsockopt(31, SOL_TCP, TCP_NODELAY, [1], 4) = 0
    futex(0x7f9cea5c9564, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f9cea5c9560, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
    futex(0x7f9cea5c6fe0, FUTEX_WAKE_PRIVATE, 1) = 1
    poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1^C <unfinished ...>
    Process 23512 detached

    This output repeats until the answer finally gets sent. It can take up to 15 minutes until the request is served; on the local LAN it is a matter of milliseconds. Why is this, and how can I debug it further?

    [Edit] tcpdump shows a ton of this:

    14:49:44.103107 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [S.], seq 1434117703, ack 1793610733, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
    14:49:44.135187 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 106:145, ack 182, win 4345, length 39
    14:49:44.135293 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 182:293, ack 145, win 115, length 111
    14:49:44.167025 IP 192.168.X.6.64624 > cassandra-test.mysql: Flags [.], ack 444, win 4280, length 0
    14:49:44.168933 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [.], ack 1, win 4390, length 0
    14:49:44.169088 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 1:89, ack 1, win 115, length 88
    14:49:44.169672 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 145:171, ack 293, win 4317, length 26
    14:49:44.169726 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [P.], seq 293:419, ack 171, win 115, length 126
    14:49:44.275111 IP 192.168.X.6.64626 > cassandra-test.mysql: Flags [P.], seq 1:74, ack 89, win 4368, length 73
    14:49:44.275131 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [.], ack 74, win 115, length 0
    14:49:44.275149 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 171:180, ack 419, win 4286, length 9
    14:49:44.275189 IP cassandra-test.mysql > 192.168.X.6.64626: Flags [P.], seq 89:100, ack 74, win 115, length 11
    14:49:44.275264 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [P.], seq 180:185, ack 419, win 4286, length 5
    14:49:44.275281 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 185, win 115, length 0
    14:49:44.275295 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [F.], seq 419, ack 185, win 115, length 0
    14:49:44.275650 IP 192.168.X.6.64625 > cassandra-test.mysql: Flags [F.], seq 185, ack 419, win 4286, length 0
    14:49:44.275660 IP cassandra-test.mysql > 192.168.X.6.64625: Flags [.], ack 186, win 115, length 0
    14:49:44.275910 IP 192.168.X.6.64627 > cassandra-test.mysql: Flags [S], seq 2336421549, win 8192, options [mss 1351,nop,wscale 2,nop,nop,sackOK], length 0
    14:49:44.275921 IP cassandra-test.mysql > 192.168.X.6.64627: Flags [S.], seq 3289359778, ack 2336421550, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0

    Read the article

  • Java error: Bad version number in .class file error when trying to run Cassandra on OS X

    - by Sam Lee
    I am trying to get Cassandra to work on OS X. When I run bin/cassandra, I get the following error:

    ~/apache-cassandra-incubating-0.4.1-src > bin/cassandra -f
    Listening for transport dt_socket at address: 8888
    Exception in thread "main" java.lang.UnsupportedClassVersionError: Bad version number in .class file
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:675)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:280)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)

    From what I could determine by searching, this error is related to incompatible versions of Java. However, as far as I can tell, I have the latest version of Java:

    ~/apache-cassandra-incubating-0.4.1-src > java -version
    java version "1.6.0_13"
    Java(TM) SE Runtime Environment (build 1.6.0_13-b03-211)
    Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02-83, mixed mode)
    ~/apache-cassandra-incubating-0.4.1-src > javac -version
    javac 1.6.0_13
    ~/Downloads/apache-cassandra-incubating-0.4.1-src > echo $JAVA_HOME
    /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home

    Any ideas on what I'm doing wrong?
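
    For background, UnsupportedClassVersionError with "Bad version number in .class file" means the JVM that actually loads the classes is older than the compiler that produced them: every .class file records a major version (49 = Java 5, 50 = Java 6, 51 = Java 7, 52 = Java 8). One hedged way to investigate is to read the major version of a suspect class file and compare it with the JVM that bin/cassandra really launches (which may not be the one on the shell PATH). The small diagnostic below is my own sketch; the class name is made up, and it takes the path to a .class file as its argument.

      import java.io.DataInputStream;
      import java.io.FileInputStream;
      import java.io.IOException;

      // Usage: java ClassVersionCheck path/to/Some.class
      // Prints the class-file major version and the version of the JVM running this tool.
      public class ClassVersionCheck {
          public static void main(String[] args) throws IOException {
              DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
              try {
                  int magic = in.readInt();            // should be 0xCAFEBABE
                  int minor = in.readUnsignedShort();  // class-file minor version
                  int major = in.readUnsignedShort();  // class-file major version (50 = Java 6)
                  System.out.printf("magic=0x%X minor=%d major=%d%n", magic, minor, major);
                  System.out.println("running JVM: " + System.getProperty("java.version"));
              } finally {
                  in.close();
              }
          }
      }

    If the printed major version is higher than what the launching JVM supports, the fix is to point the startup script at a newer JVM or rebuild the classes with an older target.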

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project that is developing a new social media app for web and mobile. We are at the beginning, defining functionality; nevertheless, I'm thinking ahead about architecture. So I'm asking: 1 - What's the best platform to develop the core of this application, which will expose a REST API? 2 - What's the best database that will scale and grow with my application? As far as I have researched, these were the answers I found most interesting. For the database: Cassandra (NoSQL) has amazing scalability, amazing write performance and good read performance (to be improved in 0.6); I think I will choose that one, with ZooKeeper for transactions on top of Cassandra. I think those two technologies are really good for that purpose. What do you think, guys? For the front end that will serve the REST API, I don't have a final candidate. For this one I have questions based on performance vs. scalability vs. fast development/maintenance. Java or .NET: as far as I have researched, they bring the best balance of these requirements. Python, Perl and Rails have the best development/maintenance speed but lose on all the rest. C or C++ I don't even consider, because their development/maintenance speed is poor. So, what do you guys think about it?

    Read the article

  • Apache Cassandra 0.7 released: the flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter

    Apache Cassandra 0.7 released: the flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter. Update of 12/01/2011 by Idelways. The Apache Foundation has just announced the availability of version 0.7 of Cassandra, the NoSQL database management system designed to handle massive amounts of data. Among the new features of this release is the arrival of secondary indexes, an efficient way to run queries against local storage nodes from the client side. Another new feature is support for "Large Rows", which allow up to 2 billion co...

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >