Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now I need to analyze that data internally (for algorithmic improvement, performance tracking, etc.), and it is potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, etc. - that is on the order of billions of entries, if not more. The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons: as is typical with those kinds of scripts, they fall into the write-once-and-never-touched-again category; tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment); and it is slow and non-"agile". Although I have some experience dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of issue.

    Read the article

  • Drawbacks of using class methods in Objective-C

    - by RickiG
    Hi, I was wondering if there are any memory/performance drawbacks, or just drawbacks in general, to using class methods like: + (void)myClassMethod:(NSString *)param { // much to be done... } or + (NSArray*)myClassMethod:(NSString *)param { // much to be done... return [NSArray autorelease]; } It is convenient to place a lot of functionality in class methods, especially in an environment where I have to deal with memory management (iPhone), but there is usually a catch when something is convenient. An example could be a hypothetical web service consisting of a lot of classes with very simple functionality, i.e. TomorrowsXMLResults; TodaysXMLResults; YesterdaysXMLResults; MondaysXMLResults; TuesdaysXMLResults; ... n. I collect a ton of these in my web service class, just instantiate the web service class, and let methods on this class call class methods on the 'Results' classes. The classes are simple, but they handle large amounts of XML, instantiate lots of objects, etc. I guess I am asking whether class methods live or are treated differently on the stack and in memory than messages to instantiated objects? Or are they just instantiated and pulled down again behind the scenes and thus just a way of saving a few lines of code?

    Read the article

  • Generic InBetween Function.

    - by Luiscencio
    I am tired of writing x > min && x < max, so I want to write a simple function, but I am not sure I am doing it right... actually I am not, because I get an error: bool inBetween<T>(T x, T min, T max) where T:IComparable { return (x > min && x < max); } errors: Operator '>' cannot be applied to operands of type 'T' and 'T'; Operator '<' cannot be applied to operands of type 'T' and 'T'. Maybe I have a bad understanding of the where clause in the function declaration. Note: for those who are going to tell me that I will be writing more code than before... think about readability =) Any help will be appreciated. EDIT: deleted because it was resolved =) ANOTHER EDIT: so after some headache I came up with this (umm) thing, following @Jay's idea of extreme readability: public static class test { public static comparision Between<T>(this T a,T b) where T : IComparable { var ttt = new comparision(); ttt.init(a); ttt.result = a.CompareTo(b) > 0; return ttt; } public static bool And<T>(this comparision state, T c) where T : IComparable { return state.a.CompareTo(c) < 0 && state.result; } public class comparision { public IComparable a; public bool result; public void init<T>(T ia) where T : IComparable { a = ia; } } } Now you can compare anything with extreme readability =) What do you think? I am no performance guru, so any tweaks are welcome.

    Read the article

  • Best XML format for log events in terms of tool support for data mining and visualization?

    - by Thorbjørn Ravn Andersen
    We want to be able to create log files from our Java application that are suited for later processing by tools to help investigate bugs and gather performance statistics. Currently we use the traditional "log stuff which may or may not be flattened into text form and appended to a log file", but this works best for small amounts of information read by a human. After careful consideration the best bet has been to store the log events as XML snippets in text files (which are then treated like any other log file), and then download them to the machine with the appropriate tool for post-processing. I'd like to use as widely supported an XML format as possible, and right now I am in the "research-then-make-decision" phase. I'd appreciate any help both in terms of XML formats and tools, and I'd be happy to write glue code to get what I need. What I've found so far: log4j XML format: supported by Chainsaw and Vigilog. Lilith XML format: supported by Lilith. Uninvestigated tools: Microsoft Log Parser: apparently supports XML. OS X log viewer. Plus there are a lot of tools on http://www.loganalysis.org/sections/parsing/generic-log-parsers/ Any suggestions?

    Read the article

  • Search and replace in Word RTF

    - by Perello
    I'm working on an application which has a workflow for postal mail. These postal mails are generated according to my application's business rules. Templates are in HTML or RTF, and it works perfectly as long as the user does not create the RTF with Word. This is not within the specs, but my hierarchy would welcome Word compatibility if it doesn't involve too much work, and it would please our customers and ease their lives. The RTF templates have tags which are replaced by application values. In most RTF, tags are not split, so the search and replace works perfectly. I wish to handle Word with few modifications. Example data: [[FooBuzz]] in most RTF is not split. In Word 2003 it becomes: {\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 [[}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid2708730 FooBuzz}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 ]]} And newer Word (Word 2007) also splits it, like Foo{garbage inside}Buzz. So I wish to be able to handle common RTF perfectly, and detect tags even if they are split. I have two constraints: first, no regression; second, it has to stay simple. Performance is not an issue here. I'm using symfony 1.4. The current relevant search code: $regExpression = '/\[\[([^\[\]]*)\]\]/'; preg_match_all($regExpression, $sTemplate, $outKeys);
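
    One way to cope with Word splitting a tag across runs is to strip the RTF control words and group braces from a working copy before applying the original pattern, so the pieces of "[[FooBuzz]]" become contiguous again. A rough sketch of that idea, written in Python only for illustration (the question's code is PHP, where preg_replace and preg_match_all can do the same); the exact substitution pattern is an assumption and has not been tested against everything Word emits:

        import re

        def find_tags(rtf_text):
            # Drop control words (e.g. \rtlch, \insrsid5517131) and group braces, so a tag
            # that Word split across runs becomes contiguous, then reuse the original pattern.
            cleaned = re.sub(r'\\[a-zA-Z]+-?\d* ?|[{}]', '', rtf_text)
            return re.findall(r'\[\[([^\[\]]*)\]\]', cleaned)

    This only covers the detection half; the replacement still has to locate the matched span in the raw RTF, so treat it as the search side of the problem.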

    Read the article

  • How can I handle dynamically calculated attributes in a model in Django?

    - by bullfish
    In Django I calculate the breadcrumb (a list of fathers) for a geographical object. Since it is not going to change very often, I am thinking of pre-calculating it once the object is saved or initialized. 1.) What would be better? Which solution would have better performance: calculating it in __init__ or calculating it when the object is saved (the object takes about 500-2000 characters in the DB)? 2.) I tried to override the __init__ or save() methods but I don't know how to use attributes of the just-saved object. Accessing *args, **kwargs did not work. How can I access them? Do I have to save, access the father and then save again? 3.) If I decide to save the breadcrumb, what's the best way to do it? I used http://www.djangosnippets.org/snippets/1694/ and have crumb = PickledObjectField(). That's the method to calculate the attribute crumb(): def _breadcrumb(self): breadcrumb = [ ] x = self while True: x = x.father try: if hasattr(x, 'country'): breadcrumb.append(x.country) elif hasattr(x, 'region'): breadcrumb.append(x.region) elif hasattr(x, 'city'): breadcrumb.append(x.city) else: break except: break breadcrumb.reverse() return breadcrumb That's my save method: def save(self,*args, **kwargs): # how can I access the father of the object? father = self.father # does obviously not work father = kwargs['father'] # does not work either # the breadcrumb gets calculated here self.crumb = self._breadcrumb(father) super(GeoObject, self).save(*args,**kwargs) Please help me out. I have been working on this for days now. Thank you.
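
    For what it's worth, Django populates field values on the instance before save() runs, so the father can be read straight from self rather than fished out of *args/**kwargs. A minimal sketch of that approach (model and field names are taken from the question, the rest is an assumption and not the poster's final code; _breadcrumb() is the method shown above, which walks self.father itself and takes no argument):

        from django.db import models

        class GeoObject(models.Model):
            # ... father, crumb = PickledObjectField(), and _breadcrumb() as in the question ...

            def save(self, *args, **kwargs):
                # self.father is already set here; no need to dig into kwargs
                self.crumb = self._breadcrumb()
                super(GeoObject, self).save(*args, **kwargs)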

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage. Quick summary: I am trying to intercept all form submits, onclick, and every single keydown. My library of choice is either jQuery, or jQuery + Prototype.js. I figure I can batch up the events into a queue/stack and send them back to the server in time-interval batches to keep performance relatively stable. Concerns: Form submits and changes would be relatively simple. Something like $("form :inputs").bind("change", function() { ... record event... }); But how do I ensure I get precedence over the application's handlers, as I have a habit of putting return false on a lot of my form handlers when there is a validation issue? As I understand it, that effectively stops the event in its tracks. My project: For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system; given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry; then, depending on credentials, it makes an internal request, relaying headers and rewriting URLs as needed. Every single html/text or application/* request is compressed and shoved into a SQLite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.

    Read the article

  • How does someone without a CS degree get an interview in a sluggish economy?

    - by Anon
    I've been programming off and on since 4th grade (for about 20 years). Technology is one of my passions but after working in the field for a couple years out of High School, I spent nine months and $15,000 getting an accredited certificate in music performance instead of CS. I've been doing lots of self study but I think a CS degree is overkill for most line of business applications. Even so, HR departments can't be expected to know that... How does one get their foot in the proverbial door without a degree, especially in a smaller "fly-over country" market? ...or... Where can I get the cheapest/easiest degree that will pass muster (including testing out of as much as possible)? Don't get me wrong, I'm down with learning new things but I don't necessarily need the expense or coaching to motivate me. EDIT Consolidating good answers: Networking/User Groups Portfolio/Open Source Contributions Look for hybrid jobs (How I got my start :) ) Seek un-elitist companies/hiring managers. (Play the numbers game) Start my own business. (This is a bit challenging for a family man but a very good answer. My reason for searching is to reduce my commute thereby allowing more time to cultivate income on the side) Avail myself of political subsidies to constituents in the teachers' unions ;) .

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP) I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM as an array or object for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers (written in C++ or whatever) which are also capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
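
    For a feel of the event-loop model the question is weighing up, here is a tiny single-process broadcast-chat sketch. It is in Python (asyncio) purely because that is the language used for the other examples on this page; node.js, or a PHP loop built on stream_select, follows the same shape: one thread, no process per user, all state in RAM.

        import asyncio

        clients = set()                                   # open connections, kept in memory only

        async def handle(reader, writer):
            clients.add(writer)
            try:
                while line := await reader.readline():    # b"" on disconnect ends the loop
                    for other in list(clients):           # broadcast to every connected user
                        other.write(line)
                        await other.drain()
            finally:
                clients.discard(writer)

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", 9000)
            async with server:
                await server.serve_forever()

        asyncio.run(main())

    Nothing here runs "at the same time": the single loop services whichever socket is ready next, which is exactly the node.js model, and idle connections cost almost nothing while they wait.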

    Read the article

  • Is there any appreciable difference between if and if-else?

    - by Drew
    Given the following code snippets, is there any appreciable difference? public boolean foo(int input) { if(input > 10) { doStuff(); return true; } if(input == 0) { doOtherStuff(); return true; } return false; } vs. public boolean foo(int input) { if(input > 10) { doStuff(); return true; } else if(input == 0) { doOtherStuff(); return true; } else { return false; } } Or would the single exit principle be better here with this piece of code... public boolean foo(int input) { boolean toBeReturned = false; if(input > 10) { doStuff(); toBeReturned = true; } else if(input == 0) { doOtherStuff(); toBeReturned = true; } return toBeReturned; } Is there any perceptible performance difference? Do you feel one is more or less maintainable/readable than the others?

    Read the article

  • High runtime for Dictionary.Add with a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function: private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections) { Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count); foreach (NodeConnection con in connections) { ... resultSet.Add(nodeIdPair, newEdge); } return resultSet; } In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary(); (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (although I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms per item. Any idea how to make the Add operation faster? Thanks in advance, Frank

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these? import numpy as np import time def reshape_vector(v): b = np.empty((3,1)) for i in range(3): b[i][0] = v[i] return b def unit_vectors(r): return r / np.sqrt((r*r).sum(0)) def calculate_dipole(mu, r_i, mom_i): relative = mu - r_i r_unit = unit_vectors(relative) A = 1e-7 num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i) den = np.sqrt(np.sum(relative*relative, 0))**3 B = np.sum(num/den, 1) return B N = 20000 # number of dipoles r_i = np.random.random((3,N)) # positions of dipoles mom_i = np.random.random((3,N)) # moments of dipoles a = np.random.random((3,3)) # three basis vectors for this crystal n = [10,10,10] # points at which to evaluate sum gamma_mu = 135.5 # a constant t_start = time.clock() for i in range(n[0]): r_frac_x = np.float(i)/np.float(n[0]) r_test_x = r_frac_x * a[0] for j in range(n[1]): r_frac_y = np.float(j)/np.float(n[1]) r_test_y = r_frac_y * a[1] for k in range(n[2]): r_frac_z = np.float(k)/np.float(n[2]) r_test = r_test_x +r_test_y + r_frac_z * a[2] r_test_fast = reshape_vector(r_test) B = calculate_dipole(r_test_fast, r_i, mom_i) omega = gamma_mu*np.sqrt(np.dot(B,B)) # write r_test, B and omega to a file frac_done = np.float(i+1)/(n[0]+1) t_elapsed = (time.clock()-t_start) t_remain = (1-frac_done)*t_elapsed/frac_done print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
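
    One direction worth trying: the work in the three nested Python loops is mostly just enumerating grid points and forming r_test, and that enumeration can be done in a single NumPy expression, after which the points can be processed in chunks so progress reporting and incremental file writes still work. A rough sketch reusing the names from the question (the chunk count of 10 is an arbitrary assumption):

        import numpy as np

        n = [10, 10, 10]
        a = np.random.random((3, 3))                  # three basis vectors, one per row

        # fractional coordinates of every grid point at once, shape (n0*n1*n2, 3)
        i, j, k = np.mgrid[0:n[0], 0:n[1], 0:n[2]]
        frac = np.vstack([i.ravel() / float(n[0]),
                          j.ravel() / float(n[1]),
                          k.ravel() / float(n[2])]).T

        # cartesian positions: each row is r_frac_x*a[0] + r_frac_y*a[1] + r_frac_z*a[2]
        r_test_all = np.dot(frac, a)

        for chunk_no, chunk in enumerate(np.array_split(r_test_all, 10)):
            for r_test in chunk:
                pass  # call calculate_dipole(reshape_vector(r_test), r_i, mom_i) here
            # report progress / flush results to file once per chunk

    Vectorising over the grid points inside calculate_dipole as well is possible, but it multiplies the temporary arrays by the chunk size, which is exactly the memory trade-off the question raises.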

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods how to generate the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL. Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to timeout after 5 minutes. Scaling is achieved by varying this timeout. Hypothesised method: A task runs in the background and for each new, unprocessed item in event_log, it creates entries in the database table user_feed pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user. The problems with the standard method are well known – what if a lot of people's caches expire at the same time? The solution also does not scale well – the brief is for feeds to update as close to real-time as possible The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database. Question: The question boils down to two points: Is this worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
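
    As a concrete picture of the hypothesised method, the background task amounts to "for each unprocessed event, look up the friends and bulk-insert one feed row per friend". A rough sketch with a MySQLdb-style DB-API connection, shown in Python for illustration only; the table and column names follow the question, while the processed flag and the batch size are assumptions:

        BATCH = 1000

        def fan_out_new_events(db):
            cur = db.cursor()
            cur.execute("SELECT id, user_id FROM event_log WHERE processed = 0")
            for event_id, user_id in cur.fetchall():
                cur.execute("SELECT friend_id FROM friends WHERE user_id = %s", (user_id,))
                rows = [(friend_id, event_id) for (friend_id,) in cur.fetchall()]
                # batch the inserts so a 100,000-friend user is neither one huge
                # statement nor 100,000 round trips
                for i in range(0, len(rows), BATCH):
                    cur.executemany(
                        "INSERT INTO user_feed (user_id, event_id) VALUES (%s, %s)",
                        rows[i:i + BATCH])
                cur.execute("UPDATE event_log SET processed = 1 WHERE id = %s", (event_id,))
                db.commit()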

    Read the article

  • How to code a 'Next in Results' within search results in PHP

    - by thebluefox
    Right, bit of a head scratcher, although I've got a feeling there's an obvious answer and I'm just not seeing the wood for the trees. Basically, I'm using Solr as a search engine for my site, bringing back 15 results per page. When you click on a result, you get a detail page that has a "Next in Results" link on it, which obviously forwards you on to the next result. What's the best way of doing this? I've come up with a few solutions, but they're either impractical or just don't work. I could store all the IDs in a session array, then grab the one after the current one and put that in the link. But with possibly hundreds/thousands of results, the memory that array would need, and the performance hit of dealing with it, aren't practical. I could take the same approach and put it into the DB, but I'd still have to deal with a potentially huge array when I grab them out of the DB. Or I could do the search again, only returning the IDs, and grab the one after the one we're currently looking at. I think this could be the best option? Although it does seem kind of messy, namely when I have to select the ID that's on a different 'page' (i.e. the 16th, 31st, etc. result). Unless I pass through where it was in the results and select from there, but that still doesn't seem like the right way to do it. I'm really sorry if this is just complete nonsense; any help is massively appreciated as always. Cheers guys!
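
    The third option can be lighter than it sounds: if the result's absolute position in the result set is passed through to the detail page, the "Next in Results" link only ever needs one more document, so the query is re-run with start set to position + 1, rows=1 and fl=id. A sketch of that request, in Python here just to show the Solr parameters (host, core and field names are assumptions; the same select URL works from PHP):

        import json, urllib.parse, urllib.request

        def next_result_id(query, current_position, solr="http://localhost:8983/solr/mycore"):
            params = urllib.parse.urlencode({
                "q": query,
                "start": current_position + 1,   # absolute offset of the next hit
                "rows": 1,                       # only that one hit
                "fl": "id",                      # and only its id
                "wt": "json",
            })
            with urllib.request.urlopen(solr + "/select?" + params) as resp:
                docs = json.load(resp)["response"]["docs"]
            return docs[0]["id"] if docs else None

    Because start is an absolute offset, the 16th or 31st result is no more awkward than any other; the 15-per-page grouping only matters for the listing page itself.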

    Read the article

  • VB6 Game Development

    - by CVS-2600Hertz-wordpress-com
    Hi all, I am developing a game in VB6 (please don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a "pure software rendering" approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist: 2D alpha transparency required to implement overlays. Parallax implementation to give a depth-of-field illusion. Capturing mouse-scroll events globally (as in FPSes; mapping them to weapon changes). Async sound playback with absolute "near-zero lag". Any ideas, anyone? Please suggest any well-documented library/OCX or sample code. Please suggest solutions with good performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (any well-known VB games whose source code I can study?). UPDATE: Here is a screenshot of GearHead Garage. This picture ought to describe what I was attempting in words above... :) Thank you

    Read the article

  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmapped file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty when data is deleted. Should I use mmap and munmap when a block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
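
    The usual mechanism for this on Linux is madvise(2) on the released block's range rather than a munmap/mmap cycle: MADV_DONTNEED tells the kernel it may evict those pages now (for a shared file mapping, a later access simply re-reads the file; MADV_FREE is the lazier variant but applies to anonymous mappings). A rough sketch of the idea, using Python's mmap wrapper (3.8+) only because the other examples on this page are in Python; in C it is the same madvise call with the block's offset and length:

        import mmap

        BLOCK = 64 * 1024 * 1024                        # one 64 MB block

        with open("buffer.dat", "r+b") as f:
            buf = mmap.mmap(f.fileno(), 0)              # map the whole buffer file
            offset = 0                                  # start of the block just released
            # ... data in the block at `offset` has been consumed / deleted ...
            # hint that the pages of this block can be evicted now
            buf.madvise(mmap.MADV_DONTNEED, offset, BLOCK)

    As for the colleagues' concern, reserving a large mapping is cheap in itself: pages only consume RAM once they are touched and only generate write-back once they are dirtied.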

    Read the article

  • Design patterns for Agent / Actor based concurrent design.

    - by nso1
    Recently I have been getting into alternative languages that support an actor/agent/shared-nothing architecture, i.e. Scala, Clojure, etc. (Clojure also supports shared state). So far most of the documentation that I have read focuses on the intro level. What I am looking for is more advanced documentation along the lines of the Gang of Four, but shared-nothing based. Why? It helps to grok the change in design thinking. Simple examples are easy, but in a real-world Java application (single-threaded) you can have object graphs with thousands of members with complex relationships. Agent-based concurrency development introduces a whole new set of ideas to comprehend when designing large systems, e.g. agent granularity - how much state should one agent manage, the implications for performance, etc. - or whether there are good patterns for mapping shared-state object graphs to an agent-based system, and tips on mapping domain models to a design. I'm after discussions not on the technology but more on how to BEST use the technology in design (real-world "complex" examples would be great).
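
    For readers new to the terminology, the core pattern under discussion is small enough to sketch: an actor owns its state exclusively, and the only way to affect that state is to send a message to the actor's mailbox. A deliberately minimal illustration in plain Python (threads and a queue standing in for what Scala/Akka actors or Clojure agents provide; not any particular framework's API):

        import queue
        import threading

        class Actor:
            """Shared-nothing building block: state is touched only by the actor's own
            thread; other code interacts with it solely by sending messages."""
            def __init__(self):
                self._mailbox = queue.Queue()
                self._state = {}                       # owned exclusively by this actor
                threading.Thread(target=self._run, daemon=True).start()

            def send(self, message):                   # the only way in
                self._mailbox.put(message)

            def _run(self):
                while True:
                    self.receive(self._mailbox.get())

            def receive(self, message):                # subclasses define the behaviour
                raise NotImplementedError

        class HitCounter(Actor):
            def receive(self, message):
                self._state[message] = self._state.get(message, 0) + 1

        counter = HitCounter()
        counter.send("/index")                         # no locks anywhere in the caller

    The granularity question in the post is then: how much of a thousand-member object graph should live behind one such mailbox, versus being split across many cooperating actors.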

    Read the article

  • Are Dynamic Prepared Statements Bad? (with php + mysqli)

    - by John
    I like the flexibility of Dynamic SQL and I like the security + improved performance of Prepared Statements. So what I really want is Dynamic Prepared Statements, which is troublesome to make because bind_param and bind_result accept "fixed" number of arguments. So I made use of an eval() statement to get around this problem. But I get the feeling this is a bad idea. Here's example code of what I mean // array of WHERE conditions $param = array('customer_id'=>1, 'qty'=>'2'); $stmt = $mysqli->stmt_init(); $types = ''; $bindParam = array(); $where = ''; $count = 0; // build the dynamic sql and param bind conditions foreach($param as $key=>$val) { $types .= 'i'; $bindParam[] = '$p'.$count.'=$param["'.$key.'"]'; $where .= "$key = ? AND "; $count++; } // prepare the query -- SELECT * FROM t1 WHERE customer_id = ? AND qty = ? $sql = "SELECT * FROM t1 WHERE ".substr($where, 0, strlen($where)-4); $stmt->prepare($sql); // assemble the bind_param command $command = '$stmt->bind_param($types, '.implode(', ', $bindParam).');'; // evaluate the command -- $stmt->bind_param($types,$p0=$param["customer_id"],$p1=$param["qty"]); eval($command); Is that last eval() statement a bad idea? I tried to avoid code injection by encapsulating values behind the variable name $param. Does anyone have an opinion or other suggestions? Are there issues I need to be aware of?
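
    One way to frame the eval()-free alternative: the SQL text may be assembled dynamically, but only from a whitelist of column names, while the values always travel separately as bound parameters. Sketched below with Python's DB-API purely for illustration - in PHP the same effect is usually reached by call_user_func_array() on bind_param, or PDO's execute(array(...)) - and the table/column whitelist is an assumption:

        ALLOWED_COLUMNS = {"customer_id", "qty"}       # names never come from user input

        def select_where(db, conditions):
            cols = [c for c in conditions if c in ALLOWED_COLUMNS]
            where = " AND ".join("%s = %%s" % c for c in cols)
            sql = "SELECT * FROM t1 WHERE " + where                # dynamic SQL text...
            cur = db.cursor()
            cur.execute(sql, tuple(conditions[c] for c in cols))   # ...statically bound values
            return cur.fetchall()

        # e.g. select_where(db, {"customer_id": 1, "qty": 2})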

    Read the article

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi, I have a web service that is doing a huge computation and is taking more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course I generated the proxy DLL). My client-side code is TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since it is taking too much time for the server to compute, after 1 minute I receive an error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me on what to do? It is very urgent. Thanks

    Read the article

  • How many users are sufficient to make a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application's performance drops drastically, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the DB. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP; I can't do deep profiling analysis or code refactoring, so I'm considering two options: my tables are MyISAM, and I heard they use table-level locking, which is very slow - is that right? Could I change them to InnoDB without worry? And what if I move MySQL to a dedicated machine with a lot of RAM?

    Read the article

  • optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics: const int cols = 500; const int rows = 100; int arr[rows][cols]; I access array arr in the following manner to do some work: for(int k = 0; k < T; ++k) { // for each trainee myscore[k] = 0; for(int i = 0; i < N; ++i) { // for each sample for(int j = 0; j < E[i]; ++j) { // for each expert myscore[k] += delta(i, anotherArray[k][i], arr[j][i]); } } } So I am worried about the array 'arr' and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking perhaps transposing the array but I wasn't sure how to do that. My implementation turns out to only work for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array into a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes because I'm accessing columns by columns instead of rows by rows so that is not cache friendly at all. Thanks, Hristo

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing that I'm not sure about is this. I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description and description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

        Membership table               Gender table
        ----------------               --------------
        id | created_at                id | created_at
        1  - 22.03.2001                1  - 14.08.2002
        2  - 22.03.2001                2  - 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | .... | id_language
        1         - membership   regular       null                       1 (english)
        1         - membership   normale       null                       2 (italian)
        1         - gender        man           null                       1 (english)
        1         - gender        uomo          null                       2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1               regular   null                1
        1               normale   null                2

        gender_translation table
        -----------------------------
        gender_id | name | alternative_title | id_lang
        1           man    null                1
        1           uomo   null                2

    and so on, so I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code in the table definition. Also, after learning a bit about floating point (and how evil it is) vs. integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole units and one for the cents. While playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (and the reason I am is to give sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole units, one for the cents. In my big dream, I would be able to do the following: script/generate scaffold Cake name:string description:text cost:mycustom where mycustom would create two integer columns (one for whole units, one for cents). Right now I could do this with: script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer I also had the idea of creating a "cost" model, which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defy the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message is understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails: require 'test_helper' class UserTest < ActiveSupport::TestCase # Replace this with your real tests. test "the truth" do assert true end end Results in the following error: <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError) from <internal:lib/rubygems/custom_require>:29:in `require' from user_test.rb:1:in `<main>' Commenting out the require 'test_helper' line and attempting to run the test results in this error: user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError) The action pack gems appear to be properly installed and up to date: actionmailer (3.0.3, 2.3.5) actionpack (3.0.3, 2.3.5) activemodel (3.0.3) activerecord (3.0.3, 2.3.5) activeresource (3.0.3, 2.3.5) activesupport (3.0.3, 2.3.5) Ruby is at 1.9.2p0 and Rails is at 3.0.3. The sample dump of my test directory is as follows: /fixtures /functional /integration /performance /unit -- /helpers -- user_helper_test.rb -- user_test.rb test_helper.rb I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment. Edit Xavier Holt's suggestion, explicitly specifying the path to the test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above): user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError) But as you can see above, Action Pack is all installed and update to date.

    Read the article

  • Converting ntext to nvarchar(max) - Getting around the size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and I'm encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ. The two SQL statements I've tried are: update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null And the error I get is: Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060. This is using SQL Server 2008. Any help appreciated, thanks. Update / solution: The marked answer below is correct; SQL Server 2008 can change the column to the correct data type in my situation, and there are no dramas with the LINQ-utilising application we use on top of it: alter table [TBL] alter column [COL] nvarchar(max) I've also been advised to follow it up with: update [TBL] set [COL] = [COL] which completes the conversion by moving the data from the LOB structure into the table (if the length is less than 8k), which improves performance and keeps things proper.

    Read the article
