Search Results

Search found 14545 results on 582 pages for 'design patterns'.

Page 215/582

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do this is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only ways I can think of to track the progress of which groups have been processed are a log file or a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flat file, then read the file to initialize the classes and insert into database B. This also means that I've got to do some logging, but it should narrow any error down to the exact row in the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A (the DBD driver name is case-sensitive)
        my $dbh = DBI->connect('dbi:Oracle:my_db', $user, $password,
            { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;    # I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                print $fh "$g\n";    # log inside the loop, once per finished group
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both of these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.
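
    A sketch of the commit-per-group idea with the progress tracked in database B itself, written in Java/JDBC since the pattern is language-agnostic; the load_progress table and all names here are assumptions for illustration, not part of the poster's schema:

        import java.sql.*;
        import java.util.List;

        public class CheckpointedLoader {
            // Copies one group at a time; the checkpoint row and the group's
            // data inserts commit together, so the progress record can never
            // disagree with the data and a re-run resumes cleanly.
            public void run(Connection a, Connection b, List<String> groups) throws SQLException {
                b.setAutoCommit(false);
                for (String g : groups) {
                    if (isProcessed(b, g)) continue;   // skip groups already committed
                    // ... select group g from A, manipulate, insert rows into B ...
                    markProcessed(b, g);
                    b.commit();                        // one commit per group
                }
            }

            private boolean isProcessed(Connection b, String g) throws SQLException {
                try (PreparedStatement ps = b.prepareStatement(
                        "SELECT 1 FROM load_progress WHERE group_key = ?")) {
                    ps.setString(1, g);
                    try (ResultSet rs = ps.executeQuery()) { return rs.next(); }
                }
            }

            private void markProcessed(Connection b, String g) throws SQLException {
                try (PreparedStatement ps = b.prepareStatement(
                        "INSERT INTO load_progress (group_key) VALUES (?)")) {
                    ps.setString(1, g);
                    ps.executeUpdate();
                }
            }
        }

    Because the checkpoint insert rides in the same transaction as the group's data, a crash mid-group rolls both back together; the log-file variant cannot give that guarantee.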


  • Proper reconstitution of Aggregate objects in the Repository?

    - by Jebb
    Assuming that no ORM (e.g. Doctrine) is used inside the Repository, my question is: what is the proper way of instantiating the Aggregate objects? Should the Repository instantiate the child objects directly and just assign them to the Aggregate Root through its setters, or is the Aggregate Root responsible for constructing its child entities/objects?

    Example 1:

        class UserRepository
        {
            public function find($id)   // wrapped in a method so the snippet is valid PHP
            {
                // Create user domain entity.
                $user = new User();
                $user->setName('Juan');

                // Create child orders entity directly in the repository.
                $orders = new Orders($orderRows);   // $orderRows: raw data fetched earlier
                $user->setOrders($orders);
            }
        }

    Example 2:

        class UserRepository
        {
            public function find($id)
            {
                // Create user domain entity.
                $user = new User();
                $user->setName('Juan');

                // Get orders.
                $orders = $ordersDao->findByUser(1);
                $user->setOrders($orders);
            }
        }

    whereas in example 2, instantiation of the orders is taken care of inside the user entity.
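
    A common DDD-flavoured answer is a third shape: the repository gathers all child data first and passes it through the aggregate root's constructor, so the root enforces its invariants and is never observable half-built. A minimal sketch in Java (class and method names are illustrative, not from the question):

        import java.util.List;

        class Order { /* child entity */ }

        interface OrderDao {
            List<Order> findByUser(int userId);
        }

        class User {
            private final String name;
            private final List<Order> orders;

            // The aggregate root receives its children at construction time,
            // so no setter can leave it in an inconsistent state.
            User(String name, List<Order> orders) {
                this.name = name;
                this.orders = List.copyOf(orders);
            }
        }

        class UserRepository {
            private final OrderDao orderDao;

            UserRepository(OrderDao orderDao) { this.orderDao = orderDao; }

            User findById(int id) {
                List<Order> orders = orderDao.findByUser(id);  // fetch children first
                return new User("Juan", orders);               // reconstitute in one step
            }
        }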


  • where to enlist transaction with parent child delete (repository or bll)?

    - by Caroline Showden
    My app uses a business layer which calls a repository which uses LINQ to SQL. I have an Item class that has an enum type property and an ItemDetail property. I need to implement a delete method that (1) always deletes the Item, and (2) if the item.type is XYZ and the ItemDetail is not null, deletes the ItemDetail as well. My question is: where should this logic be housed? If I put it in my business logic, which I would prefer, it involves two separate repository calls, each of which uses a separate DataContext. I would have to wrap both calls in a System.Transactions transaction, which (in SQL 2005) gets promoted to a distributed transaction, which is not ideal. I can move it all to a single repository call, and the transaction will be handled implicitly by the DataContext, but I feel that this is really business logic, so it does not belong in the repository. Thoughts? Carrie
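
    The distributed-transaction promotion only happens because the two repository calls each open their own connection; if both deletes run on one connection inside one local transaction, a single commit covers parent and child. A language-agnostic sketch of that shape, in Java/JDBC for illustration (table and column names are assumptions):

        import java.sql.*;

        public class ItemDeleter {
            // Both deletes share one connection and one local transaction,
            // so no distributed-transaction coordinator is involved.
            public void delete(Connection con, int itemId, boolean deleteDetail) throws SQLException {
                con.setAutoCommit(false);
                try {
                    if (deleteDetail) {
                        try (PreparedStatement ps = con.prepareStatement(
                                "DELETE FROM item_detail WHERE item_id = ?")) {
                            ps.setInt(1, itemId);
                            ps.executeUpdate();
                        }
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "DELETE FROM item WHERE id = ?")) {
                        ps.setInt(1, itemId);
                        ps.executeUpdate();
                    }
                    con.commit();    // one local commit covers both rows
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }

    The business rule itself (type is XYZ and the detail exists) can stay in the business layer, which merely passes the decision down as the deleteDetail flag.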


  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where:

        - more than one combination of business-value columns can become a unique PK
        - the composite PK values get duplicated across the detail tables
        - you cannot change the business values inside that composite PK

    I know Hibernate can support both types of PK, but I'm left wondering, because experienced colleagues have told me that a composite PK is easier to deal with when doing complex SQL queries and stored procedure processes. They went on to say that using surrogate keys complicates things when doing joins, and that there are several conditions where it's impossible to do some things with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll put more details next time.

    I'm currently starting a project and want to try out surrogate keys, since they don't get duplicated across tables and the business-column values can change. When some business-value combination needs to be unique, I can use something like:

        @Table(name = "MY_TABLE", uniqueConstraints = {
            @UniqueConstraint(columnNames = { "FIRST_NAME", "LAST_NAME" })   // name + lastName combination must be unique
        })

    But I'm still in doubt because of the previous discussion about the composite key. Could you share your experiences in this matter? Thank you!
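
    For reference, a minimal sketch of the approach described above: a generated surrogate @Id plus a unique constraint guarding the business-level identity. The Person entity and its fields are illustrative, not from the question:

        import javax.persistence.*;

        @Entity
        @Table(name = "MY_TABLE", uniqueConstraints = {
            @UniqueConstraint(columnNames = { "FIRST_NAME", "LAST_NAME" })
        })
        public class Person {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;    // surrogate key: carries no business meaning, never changes

            @Column(name = "FIRST_NAME", nullable = false)
            private String firstName;

            @Column(name = "LAST_NAME", nullable = false)
            private String lastName;

            protected Person() { }    // no-arg constructor required by JPA

            public Person(String firstName, String lastName) {
                this.firstName = firstName;
                this.lastName = lastName;
            }

            public Long getId() { return id; }
        }

    This keeps joins simple (always on the single id column) while the database still rejects duplicate first/last name combinations.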


  • Constructing human readable sentences based on a survey

    - by Joshua
    The following is a survey given to course attendees to assess an instructor at the end of the course.

        Communication Skills
        1. The instructor communicated course material clearly and accurately. (Yes/No)
        2. The instructor explained course objectives and learning outcomes. (Yes/No)
        3. In the event of not understanding course materials, the instructor was available outside of class. (Yes/No)
        4. Was instructor feedback and the grading process clear and helpful? (Yes/No)
        5. Do you feel that your oral and written skills have improved while in this course? (Yes/No)

    We would like to summarize each attendee's selections based on the choices he made. If the provided answers were [No, No, Yes, Yes, Yes], then we would summarize this as: "The instructor was not able to explain course objectives and learning outcomes clearly, but was available and usually helpful outside of class. The instructor feedback and grading process was clear and helpful, and I feel that my oral and written skills have improved because of this course."

    Based on the selections chosen by the attendee, the summary would be quite different. This leads to many possible summaries, based on the choices selected and the number of such questions in the survey. The questions are usually provided by the training organization. How do you come up with a generic solution so that this can be effectively translated into a human-readable form? I am looking for tools or libraries (Java based) and suggestions which will help me create such human-readable output. I would like to hide the complexity from the end users as much as possible.
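
    One common way to attack this is a clause template per question, with the answer selecting the positive or negative wording and a small joiner producing the final sentence. A minimal Java sketch under that assumption (the templates and grammar handling are deliberately simplistic; real output would need contrast words like "but" and better aggregation):

        import java.util.List;

        public class SurveySummarizer {
            // One pair of clause templates per question; the boolean answer
            // picks which clause ends up in the summary.
            record Clause(String positive, String negative) { }

            static final List<Clause> TEMPLATES = List.of(
                new Clause("communicated course material clearly",
                           "did not communicate course material clearly"),
                new Clause("explained course objectives and learning outcomes",
                           "did not explain course objectives and learning outcomes"),
                new Clause("was available outside of class",
                           "was not available outside of class")
            );

            static String summarize(List<Boolean> answers) {
                StringBuilder sb = new StringBuilder("The instructor ");
                for (int i = 0; i < answers.size(); i++) {
                    Clause c = TEMPLATES.get(i);
                    sb.append(answers.get(i) ? c.positive() : c.negative());
                    if (i < answers.size() - 2) sb.append(", ");
                    else if (i == answers.size() - 2) sb.append(", and ");
                }
                return sb.append('.').toString();
            }

            public static void main(String[] args) {
                System.out.println(summarize(List.of(false, false, true)));
            }
        }

    For richer output, natural-language-generation libraries such as SimpleNLG take the same question-to-clause mapping but handle agreement, conjunctions and punctuation for you.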


  • Suggest Cassandra data model for an existing schema

    - by Andriy Bohdan
    Hello guys! I hope there's someone who can help me suggest a suitable data model to be implemented using the NoSQL database Apache Cassandra. More than that, I need it to work under high load and large amounts of data.

    Simplified, I have 3 types of objects:

        Product:
            key  - string key
            name - string
            ...  - some other fields

        Tag:
            key  - string key
            name - unique tag words

        ProductTag:
            product_key - foreign key referring to a product
            tag_key     - foreign key referring to a tag
            rating      - the rating of this tag for this product

    Each product may have 0 or many tags. A tag may be assigned to 1 or many products. That means the relation between products and tags is many-to-many in terms of relational databases. The value of "rating" is updated "very" often.

    I need to run the following queries:

        1. Select objects by key
        2. Select tags for a product, ordered by rating
        3. Select products for a tag, ordered by rating
        4. Update rating by product_key and tag_key

    The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.
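
    The usual Cassandra answer is to denormalize: store the relation twice, once keyed by product and once by tag, with rating leading the column ordering so both "ordered by rating" reads are single-row scans; an update then writes both copies. A conceptual in-memory analogue in Java to show the shape (in Cassandra these would be two column families / tables with rating in the clustering key; all names are illustrative):

        import java.util.*;

        public class TagIndex {
            // rating-first keys keep each "row" sorted, mirroring a
            // Cassandra clustering key of (rating DESC, name)
            record Entry(double rating, String name) { }

            static final Comparator<Entry> BY_RATING_DESC =
                Comparator.comparingDouble(Entry::rating).reversed()
                          .thenComparing(Entry::name);

            // "tags_by_product": row key = product key, entries sorted by rating
            final Map<String, NavigableSet<Entry>> tagsByProduct = new HashMap<>();
            // "products_by_tag": row key = tag key, entries sorted by rating
            final Map<String, NavigableSet<Entry>> productsByTag = new HashMap<>();

            void rate(String productKey, String tagKey, double rating) {
                // an update is delete + re-insert in both indexes:
                // write twice so that both queries read once
                upsert(tagsByProduct, productKey, tagKey, rating);
                upsert(productsByTag, tagKey, productKey, rating);
            }

            private void upsert(Map<String, NavigableSet<Entry>> index,
                                String rowKey, String name, double rating) {
                NavigableSet<Entry> row =
                    index.computeIfAbsent(rowKey, k -> new TreeSet<>(BY_RATING_DESC));
                row.removeIf(e -> e.name().equals(name));
                row.add(new Entry(rating, name));
            }
        }

    The trade-off is deliberate: duplicated writes buy strictly sequential, pre-sorted reads, which is what survives high volume.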


  • "Public" nested classes or not

    - by Frederick
    Suppose I have a class 'Application'. In order to be initialised it takes certain settings in the constructor. Let's also assume that the number of settings is so many that it's compelling to place them in a class of their own. Compare the following two implementations of this scenario.

    Implementation 1:

        class Application
        {
            Application(ApplicationSettings settings)
            {
                // Do initialisation here
            }
        }

        class ApplicationSettings
        {
            // Settings related methods and properties here
        }

    Implementation 2:

        class Application
        {
            Application(Application.Settings settings)
            {
                // Do initialisation here
            }

            class Settings
            {
                // Settings related methods and properties here
            }
        }

    To me, the second approach is very much preferable. It is more readable because it strongly emphasises the relation between the two classes. When I write code to instantiate the Application class anywhere, the second approach is going to look prettier. Now just imagine that the Settings class itself in turn had some similarly "related" class, and that class in turn did so too. Go only three such levels and the class naming gets out of hand in the 'non-nested' case. If you nest, however, things still stay elegant. Despite the above, I've read people saying on StackOverflow that nested classes are justified only if they're not visible to the outside world, that is, if they are used only for the internal implementation of the containing class. The commonly cited objection is bloating the size of the containing class's source file, but partial classes are the perfect solution for that problem. My question is: why are we wary of the "publicly exposed" use of nested classes? Are there any other arguments against such use?


  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
            (
             Cust INT NOT NULL
            ,Category VARCHAR(10) NOT NULL
            ,Income DECIMAL(10, 2) NOT NULL
            ,Expense DECIMAL(10, 2) NOT NULL
            ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT Cust
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Cust

        SELECT Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Category

        SELECT Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?


  • Memcache key generation strategy

    - by Maxim Veksler
    Given function f1, which receives n String arguments, which would be considered the better key generation strategy for memcache in the scenario described below? Our memcache client does internal md5sum hashing on the keys it gets:

        public class MemcacheClient {
            public Object get(String key) {
                String md5 = Md5sum.md5(key);
                // Talk to memcached to get the serialization...
                return memcached(md5);
            }
        }

    First option:

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = s1 + s2 + s3 + s4;
            return get(key);
        }

    Second option:

        /**
         * Calculate hash from Strings
         *
         * @param strings vararg list of String's
         *
         * @return calculated md5sum hash
         */
        public static String stringHash(Object... strings) {
            if (strings == null)
                throw new NullPointerException("D'oh! Can't calculate hash for null");
            MD5 md5sum = new MD5();
            // if (prevHash != null)
            //     md5sum.Update(prevHash);
            for (int i = 0; i < strings.length; i++) {
                if (strings[i] != null) {
                    md5sum.Update("_" + strings[i] + "_");   // Convert to String...
                } else {
                    // If the object is null, allow minimum entropy by hashing its position
                    md5sum.Update("_" + i + "_");
                }
            }
            return md5sum.asHex();
        }

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = stringHash(s1, s2, s3, s4);
            return get(key);
        }

    Note that the possible problem with the second option is that we are doing a second md5sum (in the memcache client) on an already md5sum'ed digest result. Thanks for reading, Maxim.
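
    One point worth separating from the hashing question: the first option's plain concatenation is not injective, so f1("ab", "c", "", "") and f1("a", "bc", "", "") silently share a cache entry. Delimiting (as the second option does) or, stricter, length-prefixing each field removes that ambiguity. A sketch using only the JDK's MessageDigest, as an illustration rather than the poster's API:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class CacheKeys {
            // Length-prefixing makes the encoding injective: no two different
            // argument lists can produce the same byte stream.
            public static String keyFor(String... parts) {
                try {
                    MessageDigest md5 = MessageDigest.getInstance("MD5");
                    for (int i = 0; i < parts.length; i++) {
                        String p = (parts[i] == null) ? ("null@" + i) : parts[i];
                        byte[] bytes = p.getBytes(StandardCharsets.UTF_8);
                        md5.update(Integer.toString(bytes.length)
                                          .getBytes(StandardCharsets.UTF_8));
                        md5.update((byte) ':');
                        md5.update(bytes);
                    }
                    StringBuilder hex = new StringBuilder();
                    for (byte b : md5.digest()) hex.append(String.format("%02x", b));
                    return hex.toString();
                } catch (NoSuchAlgorithmException e) {
                    throw new AssertionError("MD5 is a mandatory JDK algorithm", e);
                }
            }

            public static void main(String[] args) {
                System.out.println(CacheKeys.keyFor("ab", "c"));   // differs from the next line
                System.out.println(CacheKeys.keyFor("a", "bc"));
            }
        }

    As for the double-md5sum worry: hashing a hash costs microseconds and does not meaningfully increase collision risk, so the second option's shape is fine.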


  • SDL side-scroller scrolls inconsistently

    - by SDLFunTimes
    So I'm working on an upgrade from my previous project (which I posted here for code review), this time implementing a repeating background (like what is used in cartoons) so that SDL doesn't have to load really big images for a level. There's a strange inconsistency in the program, however: the first time the user scrolls all the way to the right, two fewer panels are shown than specified. Going backwards (left), the correct number of panels is shown (that is, the panels repeat the number of times specified in the code). After that, it appears that going right again (once all the way at the left) the correct number of panels is shown, and the same going backwards. Here's some selected code, and here's a .zip of all my code.

    The constructor:

        Game::Game(SDL_Event* event, SDL_Surface* scr, int level_w, int w, int h, int bpp)
        {
            this->event = event;
            this->bpp = bpp;
            level_width = level_w;
            screen = scr;
            w_width = w;
            w_height = h;

            // load images and set rects
            background = format_surface("background.jpg");
            person = format_surface("person.png");

            background_rect_left = background->clip_rect;
            background_rect_right = background->clip_rect;
            current_background_piece = 1;    // we are displaying the first clip
            rect_in_view = &background_rect_right;
            other_rect = &background_rect_left;
            person_rect = person->clip_rect;

            background_rect_left.x = 0;
            background_rect_left.y = 0;
            background_rect_right.x = background->w;
            background_rect_right.y = 0;

            person_rect.y = background_rect_left.h - person_rect.h;
            person_rect.x = 0;
        }

    And here's the move method, which is probably causing all the trouble:

        void Game::move(SDLKey direction)
        {
            if(direction == SDLK_RIGHT) {
                if(move_screen(direction)) {
                    if(!background_reached_right()) {
                        // move background right
                        background_rect_left.x += movement_increment;
                        background_rect_right.x += movement_increment;

                        if(rect_in_view->x >= 0) {
                            // move the other rect in to fill the empty space
                            SDL_Rect* temp;
                            other_rect->x = -w_width + rect_in_view->x;
                            temp = rect_in_view;
                            rect_in_view = other_rect;
                            other_rect = temp;
                            current_background_piece++;
                            std::cout << current_background_piece << std::endl;
                        }
                        if(background_overshoots_right()) {
                            // sees if the next blit is past the surface;
                            // used only for re-aligning the rects when
                            // the end of the screen is reached
                            background_rect_left.x = 0;
                            background_rect_right.x = w_width;
                        }
                    }
                }
                else {
                    // move the person instead
                    person_rect.x += movement_increment;
                    if(get_person_right_side() > w_width) {
                        // person went too far right
                        person_rect.x = w_width - person_rect.w;
                    }
                }
            }
            else if(direction == SDLK_LEFT) {
                if(move_screen(direction)) {
                    if(!background_reached_left()) {
                        // moves background left
                        background_rect_left.x -= movement_increment;
                        background_rect_right.x -= movement_increment;

                        if(rect_in_view->x <= -w_width) {
                            // swap the rect in view
                            SDL_Rect* temp;
                            rect_in_view->x = w_width;
                            temp = rect_in_view;
                            rect_in_view = other_rect;
                            other_rect = temp;
                            current_background_piece--;
                            std::cout << current_background_piece << std::endl;
                        }
                        if(background_overshoots_left()) {
                            background_rect_left.x = 0;
                            background_rect_right.x = w_width;
                        }
                    }
                }
                else {
                    // move the person instead
                    person_rect.x -= movement_increment;
                    if(person_rect.x < 0) {
                        // person went too far left
                        person_rect.x = 0;
                    }
                }
            }
        }

    Without the rest of the code this doesn't make too much sense; since there is too much of it, I'll upload it here for testing. Anyway, does anyone know how I could fix this inconsistency?


  • Codechef practice question help needed - find trailing zeros in a factorial

    - by manugupt1
    I have been working on this for 24 hours now, trying to optimize it. The question is how to find the number of trailing zeroes in the factorial of a number, with inputs in the range up to 10000000 and 10 million test cases, in about 8 seconds. The code is as follows:

        #include <iostream>
        using namespace std;

        int count5(int a) {
            int b = 0;
            for (int i = a; i > 0; i = i / 5) {
                if (i % 15625 == 0) {
                    b = b + 6;
                    i = i / 15625;
                }
                if (i % 3125 == 0) {
                    b = b + 5;
                    i = i / 3125;
                }
                if (i % 625 == 0) {
                    b = b + 4;
                    i = i / 625;
                }
                if (i % 125 == 0) {
                    b = b + 3;
                    i = i / 125;
                }
                if (i % 25 == 0) {
                    b = b + 2;
                    i = i / 25;
                }
                if (i % 5 == 0) {
                    b++;
                }
                else break;
            }
            return b;
        }

        int main() {
            int l;
            int n = 0;
            cin >> l;                  // number of test cases taken as input
            int *T = new int[l];
            for (int i = 0; i < l; i++)
                cin >> T[i];           // the numbers, one per test case
            for (int i = 0; i < l; i++) {
                n = 0;
                for (int j = 5; j <= T[i]; j = j + 5) {
                    n += count5(j);    // number of trailing zeroes accumulated
                }
                cout << n << endl;     // printed for each test case
            }
            delete[] T;
        }

    Please help me by suggesting a new approach, or suggesting some modifications to this one.
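
    The inner loop over every multiple of 5 is what makes this too slow: it does O(n/5) work per query. The trailing-zero count of n! can instead be computed directly with Legendre's formula, floor(n/5) + floor(n/25) + floor(n/125) + ..., where each term counts the numbers contributing one more factor of 5 (factors of 2 are always plentiful). That is O(log n) per test case. A sketch in Java; the same dozen lines port directly to C++:

        import java.util.Scanner;

        public class TrailingZeros {
            // exponent of 5 in n! = n/5 + n/25 + n/125 + ... (integer division)
            static long trailingZeros(long n) {
                long count = 0;
                for (long p = 5; p <= n; p *= 5) {
                    count += n / p;
                }
                return count;
            }

            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                int t = in.nextInt();                     // number of test cases
                StringBuilder out = new StringBuilder();  // buffer: per-line printing is slow
                while (t-- > 0) {
                    out.append(trailingZeros(in.nextLong())).append('\n');
                }
                System.out.print(out);
            }
        }

    With n up to 10,000,000 the loop runs at most 10 times per case, so even 10 million test cases are dominated by I/O rather than arithmetic; buffered input would be the next thing to tune.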


  • GRAPH PROBLEM: find an algorithm to determine the shortest path from one point to another in a rectangular maze

    - by newba
    I'm getting such a headache trying to elaborate an appropriate algorithm to go from a START position to an EXIT position in a maze. For what it's worth, the maze is rectangular, max size 500x500, and, in theory, is resolvable by DFS with some branch and bound techniques...

        10
        3 4
        7 6
        3 3 1 2 2 1 0
        2 2 2 4 2 2 5
        2 2 1 3 0 2 2
        2 2 1 3 3 4 2
        3 4 4 3 1 1 3
        1 2 2 4 2 2 1

        Output: 5 1 4 2

    Explanation: our agent loses energy with every step he takes, and he can only move UP, DOWN, LEFT and RIGHT. Also, if the agent arrives with a remaining energy of zero or less, he dies, so we print something like "Impossible". So, in the input, 10 is the agent's initial energy, 3 4 is the START position (i.e. column 3, line 4), and we have a 7x6 maze. Think of this as a kind of labyrinth, in which I want to find the exit that leaves the agent with the best remaining energy (shortest path). In case there are paths which lead to the same remaining energy, we choose the one which has the smaller number of steps, of course.

    I need to know if a DFS on a 500x500 maze is feasible in the worst case with these limitations, and how to do it, storing the remaining energy at each step and the number of steps taken so far. The output means the agent arrived with remaining energy 5 at exit position 1 4 in 2 steps. If we look carefully, in this maze it's also possible to exit at position 3 1 (column 3, row 1) with the same energy but in 3 steps, so we choose the better one. With this in mind, can someone help me with some code or pseudo-code? I'm having trouble working this out with a 2D array and how to store the remaining energy, the path (or number of steps taken)...
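
    Since every step costs energy and all costs are non-negative, this is a single-source cheapest-path problem, and Dijkstra (uniform-cost search) fits better than plain DFS: each cell is settled once with the minimum energy spent, using step count as a tie-breaker. A Java sketch under two stated assumptions — a move costs the value of the cell stepped onto, and reaching any border cell is an exit — since the question doesn't pin either rule down:

        import java.util.*;

        public class MazeEscape {
            // per cell: cheapest energy spent so far, steps as tie-breaker
            static int[] solve(int[][] cost, int startRow, int startCol, int energy) {
                int rows = cost.length, cols = cost[0].length;
                int[][] spent = new int[rows][cols];
                int[][] steps = new int[rows][cols];
                for (int[] r : spent) Arrays.fill(r, Integer.MAX_VALUE);

                // queue entries: {energySpent, steps, row, col}
                PriorityQueue<int[]> pq = new PriorityQueue<>(
                    Comparator.<int[]>comparingInt(e -> e[0]).thenComparingInt(e -> e[1]));
                spent[startRow][startCol] = 0;
                pq.add(new int[] { 0, 0, startRow, startCol });
                int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };

                while (!pq.isEmpty()) {
                    int[] cur = pq.poll();
                    int s = cur[0], st = cur[1], r = cur[2], c = cur[3];
                    if (s > spent[r][c]) continue;   // stale queue entry
                    boolean border = r == 0 || c == 0 || r == rows - 1 || c == cols - 1;
                    if (border && energy - s > 0) {
                        // first border cell settled = max remaining energy, min steps;
                        // returns {energy, column, row, steps} 1-based like the sample
                        return new int[] { energy - s, c + 1, r + 1, st };
                    }
                    for (int[] d : dirs) {
                        int nr = r + d[0], nc = c + d[1];
                        if (nr < 0 || nc < 0 || nr >= rows || nc >= cols) continue;
                        int ns = s + cost[nr][nc];
                        if (ns < spent[nr][nc]
                                || (ns == spent[nr][nc] && st + 1 < steps[nr][nc])) {
                            spent[nr][nc] = ns;
                            steps[nr][nc] = st + 1;
                            pq.add(new int[] { ns, st + 1, nr, nc });
                        }
                    }
                }
                return null;   // no reachable exit with positive energy: "Impossible"
            }
        }

    At 500x500 this is about 250,000 cells and a million edges, well within reach, whereas an exhaustive DFS over all paths blows up combinatorially.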


  • Most Astonishing Violation of the Principle of Least Astonishment

    - by Adam Liss
    The Principle of Least Astonishment suggests that a system should operate as a user would expect it to, as much as possible. In other words, it should never "astonish" the user with unexpected behavior. In your experience as the "astonishee," what types of systems are the worst offenders, and if you were the project manager, how would you correct the problem? Bonus if your answer describes how you'd retrain the developers!


  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there! This is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all,
            :select => '*',
            :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                       JOIN users ON campaigns.user_id = users.id
                       LEFT JOIN countries ON countries.campaign_id = campaigns.id
                       LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
            :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND
                            campaigns.cenabled = 1 AND (countries.country IS NULL OR
                            countries.country = ?) AND ads.enabled = 1 AND
                            campaigns.dailyenabled = 1 AND users.uenabled = 1",
                            kw, format, viewer['country'][0]],
            :order => order,
            :limit => limit)

    My questions:

        1. Is there an alternative database to MySQL that has JOIN support but is much faster? (I know there's Postgres; I'm still evaluating it.)
        2. Otherwise, would firing up a MySQL instance, loading a local database into memory, and re-loading that every 5 minutes help?
        3. Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL?

    Thank you!


  • Tooltips with infinite timeout?

    - by romkyns
    I'm thinking of setting the timeout on all my tooltips in a WinForms application to infinity (or an extremely large value). The motivation is that it's annoying for users if a tooltip disappears while they're still reading it, without providing any extra value whatsoever as far as I can tell. Normally I wouldn't ask something like this on StackOverflow, but the overwhelming majority of all software sets timeouts on tooltips, so it makes me wonder whether there is some important consideration I'm missing. Or is this just an old convention that nobody gives further thought to? If you would hate an infinite timeout as opposed to a short one, please explain why. (If you just think tooltips are a bad idea altogether, that's a separate consideration; this question is specifically about the infinite timeout.)


  • should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third-party clients, i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right, and I want to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while.

    The first version of my service will have the URL http://host/app/v1service.svc, and when the time comes my new version will live at http://host/app/v2service.svc. However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes:

        namespace Company.Product.V1
        {
            [DataContract(Namespace = "company-product-v1")]
            public class Widget
            {
                [DataMember]
                string widgetName;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    When the time comes for a fundamental change to the service, I will introduce some classes like:

        namespace Company.Product.V2
        {
            [DataContract(Namespace = "company-product-v2")]
            public class Widget
            {
                [DataMember]
                int widgetCode;

                [DataMember]
                int widgetExpiry;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    The advantages as I see them are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as distinct sets of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget.

    Can anyone tell me why this is a stupid idea? I have a nagging feeling that this is a bit smelly.

    Notes: I am obviously not proposing that every single new version of the service would be in a new namespace. Presumably I will make as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc., but I think this question is tangential to that type of versioning. But I could be wrong.


  • Tinyxml Multi Task

    - by shaimagz
    I have a single XML file, and every new thread of the program (a BHO) uses the same TinyXML file. Every time a new window is opened in the program, it runs this code:

        const char * xmlFileName = "C:\\browsarityXml.xml";
        TiXmlDocument doc(xmlFileName);
        doc.LoadFile();
        // ... add some new lines in the xml ...
        // and then save:
        doc.SaveFile(xmlFileName);

    The problem is that after the first window adds new data to the XML and saves it, the next window can't add to it. Although the next one can read the data in the XML, it can't write to it. I thought about two possibilities to make it work, but I don't know how to implement them:

        1. Destroy the doc object when I'm done with it.
        2. Some function in the TinyXML library to unload the file.

    Any help or better understanding of the problem will be great. Thanks.


  • UnitOfWork & StructureMap & Desktop Application

    - by Afshin Gh
    When developing a web app and using StructureMap, I normally use HybridHttpOrThreadLocalScoped for my UnitOfWork part (the unit of work starts up per web request, and so on). What about a desktop app (Windows service, console, ...)? There is no session or request in a desktop app. How should I manage the UoW in this situation? Any good reference or article? I don't want to make it a singleton. What should I do?


  • Perl vs Python: implementation of algorithms to deal with advanced data structures

    - by user350571
    I'm learning Perl, and every time I search for Perl stuff on the internet I get some random page with people saying that Perl should die because code written in it looks like a lesson in steganography. Then they say that Python is clean, and things like that. Now, I know that those comparisons are always stupid and made by fellows who feel that languages are an extension of their boring personality, so let me ask instead: can you give me an implementation of a widely known algorithm for dealing with a data structure, like red-black trees, in both languages so I can compare?


  • Why use SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is the place for such a general question, but let's give it a try. Whenever I've needed to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that: the whole world seems to be using these databases in software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question - why?

    OK, say we have some object-oriented logic in our application, and the objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that - to put it simply, our database rows will sometimes need to hold references to other tables' rows. But why use the SQL language for interaction with such a database?

    An SQL query is a text message. I can understand that this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by id numbers, not ASCII strings. In the case of changes in the table structure, those byte queries could be recompiled according to the new DB schema, stored in XML or something like that.

    What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?


  • iphone: caching and updating xml fields

    - by pJosh
    Thanks for your help. Here I have another question. I get the data through XML parsing; now I want to store it in the iPhone's cache. The XML fields are updated every 12 hours. How can I check whether the XML fields have changed or not? And how can I store the data in the iPhone's cache memory so that it does not have to interact with the web every time? Can anybody please help me?


  • I cannot grok MVC, what it is, and what it is not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so that the MVC stuff can instantly "lightbulb" in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming?

    OOP is intuitive and easier: objects are all around us, and the benefits of code reuse using the OOP paradigm instantly click with anyone. You can probably talk to anybody about OOP in a few minutes, lecture through some examples, and they will get it. While OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) would look down on me for not using MVC technology. Though I probably get the skinnable aspect of MVC, when I try to apply it to my own project, I don't know where to start.

    Also, some programmers have diverging views on how to accomplish MVC properly. Take, for instance, this from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed. If you want a subset of some data, for example, my opinion is that is a responsibility of the model." So maybe some programmers use MVC but somehow inadvertently use the View or the Controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? And also, when I search for MVC .NET programs, most of what I find applies to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; there's not as much of a problem with intermixed view (HTML) and controller (program code) in desktop apps.
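
    Since no single definition seems to "lightbulb", a concrete minimal example sometimes works better than prose. One tiny illustration of the split in Java - not the one true MVC, just a common reading in which the model owns the data (including any subsetting), the view only renders, and the controller turns input into model updates:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Consumer;

        class CounterModel {                          // Model: state and rules
            private int value;
            private final List<Consumer<Integer>> listeners = new ArrayList<>();
            void addListener(Consumer<Integer> l) { listeners.add(l); }
            void increment() { value++; listeners.forEach(l -> l.accept(value)); }
        }

        class CounterView {                           // View: rendering only
            void render(int value) { System.out.println("Count: " + value); }
        }

        class CounterController {                     // Controller: input -> model
            private final CounterModel model;
            CounterController(CounterModel model) { this.model = model; }
            void onClick() { model.increment(); }
        }

        public class MvcDemo {
            public static void main(String[] args) {
                CounterModel model = new CounterModel();
                CounterView view = new CounterView();
                model.addListener(view::render);      // the view observes the model
                CounterController controller = new CounterController(model);
                controller.onClick();                 // prints "Count: 1"
                controller.onClick();                 // prints "Count: 2"
            }
        }

    The point of the exercise: you could swap CounterView for a GUI label or a web template without touching the model, which is the "skinnable" benefit in its smallest form.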


  • O(log n) algorithm for computing rank of union of two sorted lists?

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a Merge procedure to merge the 2 lists and then find the A[i] element in constant time, but the Merge would take O(n) time. How do we solve it in O(log n) time?
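
    The standard trick is to binary-search on how many of the first i+1 elements come from one list: a split (ta from list a, tb = i+1-ta from list b) is correct exactly when neither taken prefix contains an element larger than the other side's next element, and the rank-i element is then the largest element taken. A sketch in Java (0-based rank, distinct elements as the question assumes):

        public class RankOfUnion {
            // rank i = i-th smallest of the union, 0-based; O(log(min(m, n)))
            static double kth(double[] a, double[] b, int i) {
                if (a.length > b.length) return kth(b, a, i);  // search the shorter list
                int lo = Math.max(0, i + 1 - b.length);        // fewest taken from a
                int hi = Math.min(a.length, i + 1);            // most taken from a
                while (true) {
                    int ta = (lo + hi) / 2;                    // elements taken from a
                    int tb = i + 1 - ta;                       // elements taken from b
                    double aLeft  = (ta == 0) ? Double.NEGATIVE_INFINITY : a[ta - 1];
                    double aRight = (ta == a.length) ? Double.POSITIVE_INFINITY : a[ta];
                    double bLeft  = (tb == 0) ? Double.NEGATIVE_INFINITY : b[tb - 1];
                    double bRight = (tb == b.length) ? Double.POSITIVE_INFINITY : b[tb];
                    if (aLeft > bRight)      hi = ta - 1;      // took too many from a
                    else if (bLeft > aRight) lo = ta + 1;      // took too few from a
                    else return Math.max(aLeft, bLeft);        // valid split found
                }
            }

            public static void main(String[] args) {
                double[] a = { 1.0, 3.0, 8.0 };
                double[] b = { 2.0, 4.0, 9.0, 10.0 };
                for (int i = 0; i < a.length + b.length; i++) {
                    System.out.print(kth(a, b, i) + " ");      // 1 2 3 4 8 9 10
                }
            }
        }

    Halving over the shorter list gives O(log min(m, n)), which is O(log n) when both lists have n elements.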

