Search Results

Search found 4291 results on 172 pages for 'cluster analysis'.


  • How do I hook all HTTP GET requests and distinguish files downloaded by the standard download manager?

    - by Ivan
    I want to write a Firefox add-on for advanced history tracking and bookmarking. It will send the URLs the browser encounters during use (and all available metadata about the context) to a web service, which will keep track of them in an SQL database for further access and analysis. I'd like to divide the tracked URLs into 5 groups: those I explicitly bookmark, those I download via Firefox's standard built-in download manager, all other URLs accessed, all URLs appearing as hrefs on viewed pages, and all other URLs mentioned in the HTML source of viewed pages. Any ideas on how to capture these in an extension?

    Read the article

  • Help regarding NoSQL databases like Hadoop, HBase, etc.

    - by user560370
    I am new to distributed NoSQL databases like Hadoop, Cassandra, etc. I have a few questions for which I seek expert advice: Can you list the problems/challenges one will generally face when shifting from a conventional database like MySQL to these large cluster-based databases? What are the difficulties, if any, in adapting to newer versions of these open-source projects? Can you list the things that are generally kept in memcached for fast page rendering? How can I understand the source code of open-source projects so that I can build on it and maybe give back to the community? These questions may sound basic, but I ask the experts to answer them in detail, to the best of their abilities.

    Read the article

  • Full GC real time is much more than user+sys times

    - by Stas
    Hi. We have a Java-based web application running on JBoss with a maximum allowed heap size of about 1.2 GB (total machine physical memory is 2 GB). At some point the application stops responding (to clients) for several minutes. After some analysis we found out that the culprit is the Full GC. Here's an excerpt from the verbose GC log:

        74477.402: [Full GC [PSYoungGen: 3648K->0K(332160K)] [PSOldGen: 778476K->589497K(819200K)] 782124K->589497K(1151360K) [PSPermGen: 102671K->102671K(171328K)], 646.1546860 secs] [Times: user=3.84 sys=3.72, real=646.17 secs]

    What I don't understand is how the real time spent on the Full GC can be about 11 minutes (646 seconds) while the user+sys times are just 7.5 seconds. 7.5 seconds sounds like a much more plausible amount of time for cleaning <200 MB out of the old generation. Where does all the other time go? Thanks a lot.

    Read the article

  • efficient video format/codec for sparse & binary blob tracking

    - by user391339
    I am working on a blob tracking project and have many high-definition videos that I would like to reduce in size for storage and downstream tracking/shape analysis. I want to use a lossless method that takes advantage of the black-and-white nature of the video, as well as the fact that not much moves between individual frames. The videos are quite sparse, with 5 to 10 b&w blobs per frame occupying <30% of the frame in total; each blob moves <5-10% of the field of view between frames and does not change shape much over 2-3 frames. I will work in Python, MATLAB, or LabVIEW for this project, and could use a batch utility if available. It may be worthwhile to export the files as compressed image stacks if a proper video format can't be found. What are the pros and cons of this? A video codec uses correlations between neighboring frames, so it should be more efficient, but not if the wrong one is chosen or if it is improperly configured.
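
    For concreteness, a minimal sketch of the batch route in Python, driving ffmpeg's lossless H.264 mode (file names are assumptions, and ffmpeg with libx264 must be installed; this is one option among the codecs under discussion, not a recommendation from the question itself):

        import subprocess

        def compress_lossless(src, dst):
            # libx264 with -qp 0 is mathematically lossless and, unlike a purely
            # intra-frame codec, can exploit how little moves between frames
            subprocess.check_call([
                'ffmpeg', '-i', src,
                '-c:v', 'libx264', '-qp', '0', '-preset', 'veryslow',
                dst,
            ])

        def export_image_stack(src, pattern):
            # the compressed-image-stack alternative: one PNG per frame
            subprocess.check_call(['ffmpeg', '-i', src, pattern])

        compress_lossless('blobs_001.avi', 'blobs_001.mkv')
        export_image_stack('blobs_001.avi', 'frames/f_%05d.png')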

    Read the article

  • Cocos2D v1.0.1: Crash when changing CCMenuItemImage normal image

    - by Max
    I retrieved a crash log file which, after Xcode analysis of my archive, shows the problematic line of code:

        Date/Time: 2012-12-08 23:48:08.930 +0100
        OS Version: iPhone OS 5.1.1 (9B206)
        Report Version: 104
        Exception Type: EXC_CRASH (SIGABRT)
        Exception Codes: 0x00000000, 0x00000000
        Crashed Thread: 0
        Last Exception Backtrace:
        0 CoreFoundation      0x31a4088f __exceptionPreprocess + 163
        1 libobjc.A.dylib     0x3188b259 objc_exception_throw + 33
        2 CoreFoundation      0x31a40789 +[NSException raise:format:] + 1
        3 Foundation          0x374c73a3 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 91
        4 Killer              0x0017ed35 -[CCSprite initWithFile:] (CCSprite.m:201)
        5 Killer              0x0017e419 +[CCSprite spriteWithFile:] (CCSprite.m:93)
        6 Killer              0x00123101 -[Player makeZombie] (Player.m:1363)
        7 Killer              0x00105a51 -[PlayScene endOfKilling:] (PlayScene.m:1438)

    This clearly indicates that the second of the two following lines is crashing:

        NSLog(@"images %@ %@", self.zombieImage, self.zombieImageDown);
        [self.characterSprite setNormalImage:[CCSprite spriteWithFile:self.zombieImage]];

    I know the crash seems to happen when the user is touching the corresponding CCMenuItemImage. Is there a problem if the user is touching it while we change its normal and selected images? Is this the right way to change its image (I do it several times during the game)? Thanks for your ideas.

    Read the article

  • FlockDB - What is it? And best use cases for it.

    - by Guru
    Just came across the FlockDB graph database. Details at github/flockdb. Twitter claims it uses FlockDB for the following: "Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter." At first glance, setting it up and trying it out doesn't look straightforward. Has anyone already set this up or used it? If so, please answer the following general queries: What kind of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means, though.) How is FlockDB better than other graph DBs / NoSQL DBs? Have you set up FlockDB and used it for an application? Any early advice? Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application for that.

    Read the article

  • J2EE/EJB + service locator: is it safe to cache EJB Home lookup results?

    - by Guillaume
    In a J2EE application, we are using EJB2 in WebLogic. To avoid losing time building the initial context and looking up the EJB home interface, I'm considering the Service Locator pattern. But after a bit of searching on the web I found that even though this pattern is often recommended for InitialContext caching, there are some negative opinions about EJB home caching. Questions: Is it safe to cache EJB home lookup results? What will happen if one of my cluster nodes stops working? What will happen if I install a new version of the EJB without refreshing the service locator's cache?
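
    For reference, a minimal sketch of the caching variant under discussion, with an eviction hook so callers can drop a home whose create() starts failing after a redeploy or node failure (class and method names are hypothetical; this sketches the pattern, it does not settle whether caching is safe on a given cluster):

        // ServiceLocator.java
        import java.util.concurrent.ConcurrentHashMap;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.rmi.PortableRemoteObject;

        public final class ServiceLocator {
            private static final ConcurrentHashMap<String, Object> HOME_CACHE =
                    new ConcurrentHashMap<String, Object>();

            // Look up an EJB home once and cache it under its JNDI name.
            public static Object getHome(String jndiName, Class homeClass)
                    throws NamingException {
                Object home = HOME_CACHE.get(jndiName);
                if (home == null) {
                    home = PortableRemoteObject.narrow(
                            new InitialContext().lookup(jndiName), homeClass);
                    HOME_CACHE.put(jndiName, home);
                }
                return home;
            }

            // Callers evict an entry when create() throws, forcing a fresh lookup.
            public static void invalidate(String jndiName) {
                HOME_CACHE.remove(jndiName);
            }
        }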

    Read the article

  • Horizontal Scaling of Tomcat in Microsoft Azure

    - by Fabe
    Hey everyone, I have been working on this for quite a while, but still no conclusion. I want to do horizontal scaling of Tomcat instances in Microsoft Azure (1, 2, 3, ... Tomcat instances for one service). I have read lots of articles about session replication, clustering, etc. with Tomcat. Since Azure does not support multicast, there is no easy way to cluster Tomcat. Sticky sessions are also not an option, because Azure does round-robin load balancing. Setting up two services - one with Terracotta or Apache mod_jk, the other with Tomcat instances - seems like overkill to me (if it is even doable)... Is this even possible? Thanks in advance for reading and answering my question. Every comment/idea is highly appreciated.

    Read the article

  • How to analyse Dalvik GC behaviour?

    - by HRJ
    I am developing an application on Android. It is a long-running application that continuously processes sensor data. While running the application I see a lot of GC messages in logcat, about one every second. This is most probably because of objects being created and immediately de-referenced in a loop. How do I find out which objects are being created and released immediately? All the Java heap analysis tools that I have tried (*) are concerned with the counts and sizes of objects on the heap. While they are useful, I am more interested in finding the sites where short-lived temporary objects are created the most. (*) I tried jcat and Eclipse MAT. I couldn't get hat to work on the Android heap dumps; it complained of an unsupported dump file version.

    Read the article

  • How should I build a simple database package for my python application?

    - by Carson Myers
    I'm building a database library for my application using sqlite3 as the base. I want to structure it like so:

        db/
            __init__.py
            users.py
            blah.py
            etc.py

    So I would do this in Python:

        import db
        db.users.create('username', 'password')

    I'm suffering analysis paralysis (oh no!) about how to handle the database connection. I don't really want to use classes in these modules; it doesn't really seem appropriate to be able to create a bunch of "users" objects that can all manipulate the same database in the same ways -- so inheriting a connection is a no-go. Should I have one global connection to the database that all the modules use, and then put this in each module:

        # users.py
        from db_stuff import connection

    Or should I create a new connection for each module and keep that alive? Or should I create a new connection for every transaction? How are these database connections supposed to be used? The same goes for cursor objects: do I create a new cursor for each transaction? Create just one for each database connection?
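
    For what it's worth, a minimal sketch of the one-global-connection option (module layout and the app.db path are placeholders; a fresh cursor per call is cheap in sqlite3):

        # db/__init__.py
        import sqlite3

        _connection = None

        def connect(path='app.db'):
            # lazily create the single shared connection on first use
            global _connection
            if _connection is None:
                _connection = sqlite3.connect(path)
            return _connection

        # db/users.py
        from db import connect

        def create(username, password):
            conn = connect()
            with conn:  # one transaction per call; commits or rolls back
                conn.execute(
                    'INSERT INTO users (username, password) VALUES (?, ?)',
                    (username, password))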

    Read the article

  • How to insert variables in R twitteR updates?

    - by analyticsPierce
    Hello, I am using the twitteR package in R to update my Twitter status with results from analysis. The static tweet function works:

        library(twitteR)
        sess = initSession('username', 'password')
        tweet = tweet('I am a tweet', sess)

    However, when I add a variable to display some specific results, I get an error:

        library(twitteR)
        sess = initSession('username', 'password')
        res = c(3,5,8)
        msg = cat('Results are: ', res, ', that is nice right?')
        tweet = tweet(msg, sess)

    Results in:

        Error in twFromJSON(rawToChar(out)) : Error: Client must provide a 'status' parameter with a value.

    Any suggestions are appreciated.

    Read the article

  • How to optimize a SQL Server table for faster response?

    - by Thomas
    I found a table with 50 thousand records where it takes one minute to fetch data from SQL Server just by issuing a SQL query. There is one primary key, which means a clustered index is already there. I just do not understand why it takes one minute. Besides an index, what are the ways to optimize a table to get the data faster? In this situation, what do I need to do for a faster response? Also, tell me how we can always write optimized SQL. Please tell me all the steps for optimization in detail. Thanks.

    Read the article

  • k-means based on MapReduce in Python

    - by user3616059
    I am going to write a mapper and reducer for the k-means algorithm. I think the best course of action is to put the distance calculation in the mapper and send its output to the reducer with the cluster ID as key and the coordinates of the row as value. In the reducer, the centroids would be updated. I am writing this in Python. As you know, I have to use Hadoop Streaming to transfer data via STDIN and STDOUT. To my knowledge, when we print (key + "\t" + value), it is sent to the reducer. The reducer receives the data and calculates the new centroids, but when we print the new centroids, I don't think they are sent back to the mapper to calculate new clusters; they just go to STDOUT. And as you know, k-means is an iterative algorithm. So, my question is whether Hadoop Streaming suffers when running iterative programs, and whether we should employ MRJob for iterative programs instead.
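
    To make the mapper half concrete, here is a minimal Hadoop Streaming sketch of what is described above (centroids.txt and the comma-separated input format are assumptions; the centroid file would be shipped to each node, e.g. with the streaming -file option, and each k-means iteration would be one streaming job):

        #!/usr/bin/env python
        # mapper.py: emit <nearest-centroid-id> TAB <point>
        import sys

        def load_centroids(path):
            with open(path) as f:
                return [[float(x) for x in line.split(',')] for line in f]

        centroids = load_centroids('centroids.txt')

        for line in sys.stdin:
            line = line.strip()
            if not line:
                continue
            point = [float(x) for x in line.split(',')]
            # squared Euclidean distance from this point to every centroid
            dists = [sum((p - c) ** 2 for p, c in zip(point, cen))
                     for cen in centroids]
            print('%d\t%s' % (dists.index(min(dists)), ','.join(map(str, point))))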

    Read the article

  • How can I compute the average cost for this solution of the element uniqueness problem?

    - by Alceu Costa
    In the book Introduction to the Design & Analysis of Algorithms, the following solution is proposed to the element uniqueness problem:

        ALGORITHM UniqueElements(A[0 .. n-1])
        // Determines whether all the elements in a given array are distinct
        // Input: An array A[0 .. n-1]
        // Output: Returns "true" if all the elements in A are distinct
        //         and "false" otherwise.
        for i := 0 to n - 2 do
            for j := i + 1 to n - 1 do
                if A[i] = A[j] return false
        return true

    How can I compute the average cost (i.e. number of comparisons for a given n) for this algorithm? What is a reasonable assumption about the input?
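
    For orientation, the worst case is a plain double sum, while the average case cannot be computed until an input distribution is assumed (e.g. all elements distinct, or some probability that a matching pair exists) — which is exactly what the question's "reasonable assumption" must supply:

        C_{worst}(n) = \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1
                     = \sum_{i=0}^{n-2} (n - 1 - i)
                     = \frac{n(n-1)}{2} \in \Theta(n^2)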

    Read the article

  • Spreadsheet::WriteExcel create chart

    - by yaohung
    Hi, I used csv2xls.pl to convert a text log into a .xls file, and then applied the create-chart function as follows:

        my $chart3 = $workbook->add_chart( type => 'line', embedded => 1 );

        # Configure the series.
        $chart3->add_series(
            categories => '=Sheet1!$B$2:$B$64',
            values     => '=Sheet1!$C$2:$C$64',
            name       => 'Test data series 1',
        );

        # Add some labels.
        $chart3->set_title( name => 'Bridge Rate Analysis' );
        $chart3->set_x_axis( name => 'Packet Size' );
        $chart3->set_y_axis( name => 'BVI Rate' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'G2', $chart3 );

    I can see the chart in the .xls file; however, all the data is in text format, not numbers, so the chart looks wrong. Can you tell me how to convert the text into numbers before applying this create-chart function? Also, any idea how to sort the data in the .xls file before creating the chart? Thanks. Yaohung

    Read the article

  • How should I capture clickstream data?

    - by editor
    I'd like to start using clickstream analysis to improve a dynamic site's user experience. I'd like to rule out two options: parameterizing URLs (index.php?src=http://www.example.com) and immediate database logging. The former makes pretty ugly URLs and isn't great for SEO and the latter might slow down page render when there are lots of concurrent users. Assuming these aren't viable options, I think I'm left with doing an asynchronous POST to a server side script that runs a database query and returns a 204 (no data) response. Is this the best option for capturing clickstream data?
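
    For concreteness, a minimal sketch of such a 204 endpoint in Python (stdlib WSGI; the queue-instead-of-inline-insert comment is an assumption, made to keep page render unaffected):

        # tracker.py: receive asynchronous clickstream POSTs, reply 204 immediately
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            length = int(environ.get('CONTENT_LENGTH') or 0)
            payload = environ['wsgi.input'].read(length)  # e.g. URL, referrer, timestamp
            # hand `payload` to a queue/batch writer here rather than a blocking DB insert
            start_response('204 No Content', [])
            return [b'']

        if __name__ == '__main__':
            make_server('', 8000, app).serve_forever()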

    Read the article

  • If I use a larger datatype, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I use a larger datatype than is sufficient for the possible values I will insert into a table, will it affect performance in SQL Server, in terms of speed or in any other way? E.g. IsActive is (0,1,2,3), never more than 3 in any case. I know I should use tinyint, but for certain reasons, treat it as a compulsion: I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to help me overcome that compulsion. I need some solid analysis that can really make someone rethink before choosing a datatype.

    Read the article

  • Asterisk: Originate API - Which card to use to detect busy/ringing/answer event for FXO

    - by spkhaira
    I want to use the Originate API of Asterisk to place an outbound call on an FXO channel. For testing purposes I am using an X100P card and, as expected, the card is not able to detect whether the number is busy/ringing or when it is answered. I want to know which card I should use so that I can get such basic events ... I am not really interested in detailed call progress analysis for answering machines or live voice. I just need basic busy/ringing and answer events, and maybe a disconnect event. Thanks.

    Read the article

  • Use C function in C++ program; "multiply-defined" error

    - by eom
    I am trying to use this code for the Porter stemming algorithm in a C++ program I've already written. I followed the instructions near the end of the file for using the code as a separate module. I created a file, stem.c, that ends after the definition and has extern int stem(char * p, int i, int j) ... It worked fine in Xcode, but it does not work for me on Unix with gcc 4.1.1 -- strange, because usually I have no problem moving between the two. I get the error:

        ld: fatal: symbol `stem(char*, int, int)' is multiply-defined:
                (file /var/tmp//ccrWWlnb.o type=FUNC; file /var/tmp//cc6rUXka.o type=FUNC);
        ld: fatal: File processing errors. No output written to cluster

    I've looked online and it seems like there are many things I could have wrong, but I'm not sure what combination of a header file, extern "C", etc. would work.
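
    In case it helps frame the question, the conventional shape of a shared header for calling C from C++ looks like this (the file name stem.h is an assumption); the prototype must be seen as extern "C" by the C++ side, and the function must be defined in exactly one translation unit:

        /* stem.h */
        #ifndef STEM_H
        #define STEM_H

        #ifdef __cplusplus
        extern "C" {
        #endif

        int stem(char *p, int i, int j);  /* defined once, in stem.c */

        #ifdef __cplusplus
        }
        #endif

        #endif /* STEM_H */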

    Read the article

  • ASP.NET Web API crash MSVCR90.dll

    - by user858931
    I have a Web API app developed in VS2010. The application calls an external program, and it runs just fine if I execute it from VS2010. But when I deploy the Web API to IIS 7 or 7.5, in some cases it crashes. Below are the details I got from Event Viewer/Application:

        Fault bucket , type 0
        Event Name: BEX
        Response: Not available
        Cab Id: 0
        Problem signature:
        P1: TestProgram.exe
        P2: 1.0.4728.17141
        P3: 50c76dea
        P4: MSVCR90.dll
        P5: 9.0.30729.6161
        P6: 4dace5b9
        P7: 0003024a
        P8: c0000417
        P9: 00000000
        P10:
        Attached files:
        These files may be available here:
        C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_InferenceGenerat_95abb43ace91480da6b8f27f9937db667bc58f_7bb1549d
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: da8f304e-44c0-11e2-b4e8-0026b97a5242
        Report Status: 0

    Any idea why this happens and how to fix it? Thanks.

    Read the article

  • NOT LIKE not working on comparison to a column

    - by rodling
    The data is fairly large and takes a few minutes to run every time, so debugging this problem is taking a lot of time. When I run LIKE concat('%',T.item,'%') on smaller data, it seems to identify items properly. However, when I run it on the main DB (the code shown), it still shows many (maybe even all) of the exceptions. EDIT: it seems that when I add NOT, it stops identifying items.

        select distinct T.comment
        from (select comment, source, item
              from data, non_informative
              where ticker != "O" and source != 7 and source != 6) as T
        where T.comment not like concat('%',T.item,'%')
        order by T.comment;

    comment and source are in data; item is in non_informative. Some items from T.item: 'Stock Analysis -', '#InsideTrades', 'IIROC Trade'. Example comment which should be removed: '#InsideTrades #4 | MACNAB CRAIG (Director,Officer,Chief Executive Officer): Filed Form 4 for $NNN (NATIONAL RETA'. Can't seem to figure out why it shows all the items.

    Read the article

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?

    Read the article

  • Does declaring many identical anonymous classes waste memory in Java?

    - by depsypher
    I recently ran across the following snippet in an existing codebase I'm working on and added the comment you see there. I know this particular piece of code can be rewritten to be cleaner, but I just wonder if my analysis is correct. Will Java create a new class declaration and store it in perm-gen space for every call of this method, or will it know to reuse an existing declaration?

        protected List<Object> extractParams(HibernateObjectColumn column, String stringVal) {
            // FIXME: could be creating a *lot* of anonymous classes which wastes perm-gen space, right?
            return new ArrayList<Object>() {
                {
                    add("");
                }
            };
        }
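
    For comparison, an equivalent of the snippet without the anonymous subclass (same behavior, but no extra class declaration at this site, whatever the reuse semantics turn out to be):

        protected List<Object> extractParams(HibernateObjectColumn column, String stringVal) {
            List<Object> params = new ArrayList<Object>();
            params.add("");
            return params;
        }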

    Read the article

  • How does Contract.Exists add value?

    - by Scott Bilas
    I am just starting to learn about the code contracts library that comes standard with VS2010. One thing I am running into right away is what some of the contract clauses really mean. For example, how are these two statements different?

        Contract.Requires(!mycollection.Any(a => a.ID == newID));
        Contract.Requires(!Contract.Exists(mycollection, a => a.ID == newID));

    In other words, what does Contract.Exists do in practical terms, either for a developer using my function or for the static code analysis system?

    Read the article

  • DB2: increased bufferpool size and compressed tables don't equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode, created one more bufferpool, and set both of them to 1024 MB, then the performance of my queries should increase because of fewer I/Os to the disks. However, when I run my timing analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all along. Every time I search Google I come across the claim that an increased bufferpool size, several bufferpools, AND table compression SHOULD yield better performance. I'm very puzzled by this totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika

    Read the article
