Search Results

Search found 9916 results on 397 pages for 'counting sort'.


  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year.

    What, in your opinion, is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while ($row = mysql_fetch_assoc($query)) {
            foreach ($_GET as $key => $val) {
                if ($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should, in effect, only query the database once, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside, it makes pagination a bit of a headache.

    The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('$search_term')";
        foreach ($_GET as $key => $var) {
            $sql .= " AND " . $key . " = " . $var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!
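    A sketch of the second route done a bit more safely, written here in Python rather than PHP: the extra WHERE clauses are built from a whitelist of filterable columns and bound parameters, so user input never goes into the SQL string. The column names, placeholder style and driver call are illustrative assumptions, not from the post.

        ALLOWED_FILTERS = {'year', 'category', 'author'}   # hypothetical column names

        def build_query(search_term, filters):
            """Build a full-text query plus AND filters; return SQL and its params."""
            sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST (%s)"
            params = [search_term]
            for column, value in filters.items():
                if column not in ALLOWED_FILTERS:
                    continue                      # ignore anything not whitelisted
                sql += " AND {} = %s".format(column)
                params.append(value)
            return sql, params

        sql, params = build_query('bicycle', {'year': '2009', 'category': 'books'})
        # cursor.execute(sql, params)   # e.g. with MySQLdb or mysql.connector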

    Read the article

  • How are developers using source control? I am trying to find the most efficient way to do source control

    - by RJ
    I work in a group of 4 .NET developers. We rarely work on the same project at the same time, but it does happen from time to time. We use TFS for source control.

    My most recent example is a project I just placed into production last night that included 2 WCF services and a web application front end. I worked out of a branch called "prod" because the application is brand new and has never seen the light of day. Now that the project is live, I need to branch off the prod branch for features, bugs, etc. So what is the best way to do this? Do I simply create a new branch and sort of archive the old branch and never use it again? Do I branch off and then merge my branch changes back into the prod branch when I want to deploy to production?

    And what about the file and assembly versions? They are currently at 1.0.0.0. When do they change, and why? If I fix a small bug, which number changes, if any? If I add a feature, which number changes, if any?

    What I am looking for is what you have found to be the best way to efficiently manage source control. Most places I have worked always seem to bang heads with the source control system in one way or another, and I would just like to find out what you have found that works the best.

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes that run whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours - so, for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. My configuration:

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written-to one is InnoDB; the others are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4GB RAM, running 32-bit Ubuntu.

    Read the article

  • What's the best way to "shuffle" a table of database records?

    - by Darth
    Say that I have a table with a bunch of records which I want to present to users in random order. I also want users to be able to paginate back and forth, so I have to preserve some sort of order, at least for a while.

    The application is basically AJAX-only and it uses a cache for already visited pages, so even if I always served random results, when the user tries to go back he will get the previous page, because it will load from the local cache.

    The problem is that if I return only random results, there might be some duplicates. Each page contains 6 results, so to prevent this I'd have to do something like WHERE id NOT IN (1,2,3,4 ...), where I'd put all the previously loaded IDs. A huge downside of that solution is that it won't be possible to cache anything on the server side, as every user will request different data.

    An alternative solution might be to create another column for ordering the records, and shuffle it every <insert time unit here>. The problem there is that I'd need to assign a random number out of a sequence to every record in the table, which would take as many queries as there are records.

    I'm using Rails and MySQL, if that's of any relevance.
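    One common answer to this is a per-visitor seed that makes the "random" order deterministic, so pagination stays stable without sending ID lists back to the server. A minimal sketch of the idea in Python (the seed storage and page size are assumptions, not from the post):

        import random

        def shuffled_page(ids, seed, page, per_page=6):
            """Return one page of record IDs in a per-seed stable random order."""
            order = list(ids)
            random.Random(seed).shuffle(order)      # deterministic for a given seed
            start = page * per_page
            return order[start:start + per_page]

        record_ids = range(1, 101)
        print(shuffled_page(record_ids, seed=42, page=0))  # same result every call
        print(shuffled_page(record_ids, seed=42, page=1))  # next page, no duplicates

    Because the order is a pure function of the seed, any user sharing a seed can share a server-side cache entry, and going "back" simply re-slices the same permutation.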

    Read the article

  • Suggestions for a django db structure

    - by rh0dium
    Hi. Say I have an unknown number of questions, for example:

        Is the sky blue [y/n]
        What date were you born on [date]
        What is pi [3.14]
        What is a large integer [100]

    Each of these questions poses a different but very type-specific answer (boolean, date, float, int). Natively, Django can happily deal with these in a model:

        class SkyModel(models.Model):
            question = models.CharField("Is the sky blue")
            answer = models.BooleanField(default=False)

        class BirthModel(models.Model):
            question = models.CharField("What date were you born on")
            answer = models.DateTimeField(default=today)

        class PiModel(models.Model):
            question = models.CharField("What is pi")
            answer = models.FloatField()

    But this has the obvious problem that each question has a specific model - so if we need to add a question later, I have to change the database. Yuck. So now I want to get fancy: how do I set up a model where the answer type conversion happens automagically?

        ANSWER_TYPES = (
            ('boolean', 'boolean'),
            ('date', 'date'),
            ('float', 'float'),
            ('int', 'int'),
            ('char', 'char'),
        )

        class Questions(models.Model):
            question = models.CharField()
            answer = models.CharField()
            answer_type = models.CharField(choices=ANSWER_TYPES)
            default = models.CharField()

    So in theory this would do the following: when I build up my views, I look at the type of answer and ensure that I only put in a value of that type. But when I want to pull that answer back out, it will return the data in the format specified by the answer_type - for example, 3.14 comes back out as a float, not as a str. How can I perform this sort of automagic transformation? Or can someone suggest a better way to do this? Thanks much!!
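    A minimal sketch of one way to do the "automagic" conversion with the single-model layout above. The max_length values, the date format and the typed_answer() helper are illustrative assumptions, not an established Django feature:

        import datetime
        from django.db import models

        ANSWER_TYPES = (
            ('boolean', 'boolean'),
            ('date', 'date'),
            ('float', 'float'),
            ('int', 'int'),
            ('char', 'char'),
        )

        class Question(models.Model):
            question = models.CharField(max_length=255)
            answer = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16, choices=ANSWER_TYPES)
            default = models.CharField(max_length=255, blank=True)

            def typed_answer(self):
                """Return the stored string coerced to the declared Python type."""
                converters = {
                    'boolean': lambda v: v.lower() in ('1', 'true', 'yes', 'y'),
                    'date': lambda v: datetime.datetime.strptime(v, '%Y-%m-%d').date(),
                    'float': float,
                    'int': int,
                    'char': lambda v: v,
                }
                return converters[self.answer_type](self.answer)

    The view always stores strings; callers use typed_answer() instead of reading the raw answer field, so 3.14 comes back as a float.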

    Read the article

  • Perl 500 internal server error. Something wrong with my code.

    - by Nitish
    I get a 500 internal server error when I try to run the code below on a web server which supports Perl:

        #! /usr/bin/perl
        use LWP;
        my $ua = LWP::UserAgent->new;
        $ua->agent("TestApp/0.1 ");
        $ua->env_proxy();
        my $req = HTTP::Request->new(POST => 'http://www.google.com/loc/json');
        $req->content_type('application/jsonrequest');
        $req->content('{ "cell_towers": [{"location_area_code": "55000", "mobile_network_code": "95", "cell_id": "20491", "mobile_country_code": "404"}], "version": "1.1.0", "request_address": "true"}');
        my $res = $ua->request($req);
        if ($res->is_success) {
            print $res->content, "\n";
        } else {
            print $res->status_line, "\n";
            return undef;
        }

    But there is no error when I run the code below:

        #! /usr/bin/perl
        use CGI::Carp qw(fatalsToBrowser);
        print "Content-type: text/html\n\n";
        print "<HTML>\n";
        print "<HEAD><TITLE>Hello World!</TITLE></HEAD>\n";
        print "<BODY>\n";
        print "<H2>Hello World!</H2> <br /> \n";
        foreach $key (sort keys(%ENV)) {
            print "$key = $ENV{$key}<p>";
        }
        print "</BODY>\n";
        print "</HTML>\n";

    So I think there is some problem with my code. When I run the first Perl script on my local machine with the -wc switch, it says that the syntax is OK. Help me please.

    Read the article

  • Elegant Disjunctive Normal Form in Django

    - by Mike
    Let's say I've defined this model:

        class Identifier(models.Model):
            user = models.ForeignKey(User)
            key = models.CharField(max_length=64)
            value = models.CharField(max_length=255)

    Each user will have multiple identifiers, each with a key and a value. I am 100% sure I want to keep the design like this; there are external reasons why I'm doing it that I won't go through here, so I'm not interested in changing it.

    I'd like to develop a function of this sort:

        def get_users_by_identifiers(**kwargs):
            # something goes here
            return users

    The function will return all users that have one of the key=value pairs specified in **kwargs. Here's an example usage:

        get_users_by_identifiers(a=1, b=2)

    This should return all users for whom a=1 or b=2. I've noticed that the way I've set this up, this amounts to a disjunctive normal form... the SQL query would be something like:

        SELECT DISTINCT(user_id) FROM app_identifier
        WHERE (key = "a" AND value = "1")
           OR (key = "b" AND value = "2")
        ...

    I feel like there's got to be some elegant way to take the **kwargs input and do a Django filter on it, in just 1-2 lines, to produce this result. I'm new to Django though, so I'm just not sure how to do it. Here's my function now, and I'm completely sure it's not the best way to do it :)

        def get_users_by_identifiers(**identifiers):
            users = []
            for key, value in identifiers.items():
                for identifier in Identifier.objects.filter(key=key, value=value):
                    if not identifier.user in users:
                        users.append(identifier.user)
            return users

    Any ideas? :) Thanks!
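    For what it's worth, the usual trick for an OR of AND-pairs in Django is to combine Q objects. A sketch, assuming the Identifier model above and Django's default reverse lookup name for the ForeignKey ("identifier"); treat it as untested:

        import operator
        from functools import reduce

        from django.contrib.auth.models import User
        from django.db.models import Q

        def get_users_by_identifiers(**kwargs):
            """Users having at least one matching key=value identifier."""
            if not kwargs:
                return User.objects.none()
            condition = reduce(operator.or_,
                               (Q(identifier__key=k, identifier__value=str(v))
                                for k, v in kwargs.items()))
            return User.objects.filter(condition).distinct()

        # get_users_by_identifiers(a=1, b=2)  ->  users where a=1 or b=2

    Because all the Q objects go into a single filter() call, they apply to one join of the identifier table, which matches the DNF-style SQL shown in the question.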

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection.

    Since the GUI won't require loading ALL of these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now.

    I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, I need the best way to reload previously viewed data. Re-run the parser, repopulate the collection and the GUI? Or find a way to keep the collection in memory, or serialize/deserialize the collection itself?

    I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory.

    I've read that google-collections is good and efficient when handling big amounts of data, and offers methods that simplify lots of things, so this might offer an alternative that allows me to keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex thing.

    Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.

    Read the article

  • Best Practice for Summary Footer (and the like) in MVC

    - by benpage
    A simple question on best practice. Say I have:

        public class Product
        {
            public string Name { get; set; }
            public string Price { get; set; }
            public int CategoryID { get; set; }
            public bool IsAvailable { get; set; }
        }

    and I have a view using IEnumerable<Product> as the model. I iterate through the Products on the page and want to show the total of the prices at the end of the list. Should I use:

        <%= Model.Sum(x => x.Price) %>

    or should I use some other method? This may extend to include more involved things like:

        <%= Model.Where(x => x.CategoryID == 5 && x.IsAvailable).Sum(x => x.Price) %>

    and even:

        <% foreach (Product p in Model.Where(x => x.IsAvailable)) { %>
            -- insert html --
        <% } %>

        <% foreach (Product p in Model.Where(x => !x.IsAvailable)) { %>
            -- insert html --
        <% } %>

    I guess this comes down to: should I have that sort of code within my view, or should I be passing it to my view in ViewData? Or perhaps some other way?

    Read the article

  • PHP Outputting File Attachments with Headers

    - by OneNerd
    After reading a few posts here I formulated this function, which is sort of a mishmash of a bunch of others:

        function outputFile($filePath, $fileName, $mimeType = '') {
            // Setup
            $mimeTypes = array(
                'pdf'  => 'application/pdf',
                'txt'  => 'text/plain',
                'html' => 'text/html',
                'exe'  => 'application/octet-stream',
                'zip'  => 'application/zip',
                'doc'  => 'application/msword',
                'xls'  => 'application/vnd.ms-excel',
                'ppt'  => 'application/vnd.ms-powerpoint',
                'gif'  => 'image/gif',
                'png'  => 'image/png',
                'jpeg' => 'image/jpg',
                'jpg'  => 'image/jpg',
                'php'  => 'text/plain'
            );

            // Send headers
            // -- next line fixed as per suggestion --
            header('Content-Type: ' . $mimeTypes[$mimeType]);
            header('Content-Disposition: attachment; filename="' . $fileName . '"');
            header('Content-Transfer-Encoding: binary');
            header('Accept-Ranges: bytes');
            header('Cache-Control: private');
            header('Pragma: private');
            header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');

            readfile($filePath);
        }

    I have a PHP page (file.php) which does something like this (lots of other code stripped out):

        // I run this thru a safe function not shown here
        $safe_filename = $_GET['filename'];
        outputFile("/the/file/path/{$safe_filename}", $safe_filename, substr($safe_filename, -3));

    Seems like it should work, and it almost does, but I am having the following issues:

        1. When it's a text file, I get a strange symbol as the first character of the text document.
        2. When it's a Word doc, it is corrupt (presumably that same first bit or byte throwing things off).
        3. I presume all other file types will be corrupt - I have not even tried them.

    Any ideas on what I am doing wrong? Thanks.

    UPDATE: changed the line of code as suggested - still the same issue.

    Read the article

  • Does it exist: smart pointer, owned by one object allowing access.

    - by Noah Roberts
    I'm wondering if anyone's run across anything that exists which would fill this need. Object A contains an object B. It wants to provide access to that B to clients through a pointer (maybe there's the option it could be 0, or maybe the clients need to be copyable and yet hold references... whatever). Clients, let's call them object C, would normally, if we were perfect developers, be written carefully so as not to violate the lifetime semantics of any pointer to B they might have... but we're not perfect; in fact we're pretty dumb half the time.

    So what we want is for object C to have a pointer to object B that is not "shared" ownership but that is smart enough to recognize a situation in which the pointer is no longer valid, such as when object A is destroyed or it destroys object B. Accessing this pointer when it's no longer valid would cause an assertion/exception/whatever. In other words, I wish to share access to data in a safe, clear way but retain the original ownership semantics.

    Currently, because I've not been able to find any shared pointer in which one of the objects owns it, I've been using shared_ptr in place of having such a thing. But I want clear ownership, and shared/weak pointers don't really provide that. It would be nice, further, if this smart pointer could be attached to member variables and not just hold pointers to dynamically allocated memory regions.

    If it doesn't exist I'm going to make it, so I first want to know if someone's already released something out there that does it. And, BTW, I do realize that things like references and pointers do provide this sort of thing... I'm looking for something smarter.

    Read the article

  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me when designing code, especially libraries. There seems to be some interest in it, so I thought it might make a good community wiki.

    The fail method in Monad is considered by some to be a wart: a somewhat arbitrary addition to the class that does not come from the original category theory. But of course, in the current state of things, many Monad types have logical and useful fail instances.

    The MonadPlus class is a subclass of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad, or restrict his code to the MonadPlus class just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in the wiki page about proposals to reform the MonadPlus class.

    So I guess I have one specific question: what monad instances, if any, have a natural fail method but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks!

    EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern match failures, as in (x:xs) <- return [], call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax in their inclusion of fail in Monad.

    Read the article

  • How to implement a collection (list, map?) of complicated strings in Java?

    - by Alex Cheng
    Hi all. I'm new here. Problem: I have something like the following entries, 1000 of them:

        args1=msg args2=flow args3=content args4=depth args6=within ==> args5=content
        args1=msg args2=flow args3=content args4=depth args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=content args6=within ==> args5=content
        args1=msg args2=flow args3=content args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=flow ==> args4=flowbits
        args1=msg args2=flow args3=flow args5=content ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args5=content
        args1=msg args2=flow args4=depth ==> args3=content
        args1=msg args2=flow args4=depth args5=content ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within args7=distance ==> args3=content

    I'm doing some sort of suggestion method. Say, for

        args1=msg args2=flow args3=flow ==> args4=flowbits

    if the sentence contains msg, flow, and another flow, then I should return the suggestion of flowbits. How can I go about doing it? I know I should scan (whenever a character is pressed on the textarea) a list or array for a match and return the result, but with 1000 entries, how should I implement it? I'm thinking of a HashMap, but can I do something like this?

        <"msg,flow,flow", "flowbits">

    Also, in a sentence the arguments might not be in order, so if it's flow,flow,msg then I can't match anything in the HashMap, as the key is "msg,flow,flow". What should I do in this case? Please help. Thanks a million!
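    The ordering problem is usually solved by normalising the key before the lookup - sort the argument names so that "flow,flow,msg" and "msg,flow,flow" map to the same key. A language-agnostic sketch of that idea, written here in Python (the rule table holds just the examples from the post; in Java the analogue would be a HashMap keyed by a sorted List of the argument names):

        # Rule table: canonical (sorted) tuple of present arguments -> suggestion.
        RULES = {
            tuple(sorted(('msg', 'flow', 'flow'))): 'flowbits',
            tuple(sorted(('msg', 'flow', 'content', 'depth', 'within'))): 'content',
        }

        def suggest(arguments):
            """Return the suggestion for this multiset of arguments, or None."""
            return RULES.get(tuple(sorted(arguments)))

        print(suggest(['flow', 'msg', 'flow']))   # -> 'flowbits', order no longer matters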

    Read the article

  • How do I order on a common attribute of two models in the DB?

    - by Will
    Say I have two tables, Books and CDs, with corresponding models. I want to display to the user a combined list of books and CDs. I also want to be able to sort this list on common attributes (release date, genre, price, etc.). I also have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them, and then sorting them. This is very inefficient. It also breaks down when I use paging and filtering: if I am on page 2, how do I know what page of each individual table I am really on? There is no way to determine this without getting all items from each table.

    I have thought that if I create a new class called Item that has a one-to-one relationship with either a Book or a CD, I could do something like

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "genre ASC")

    However this gives back an ambiguous-column error, so it can only be refined as

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "books.genre ASC")

    and that does not interleave the books and CDs in the way that I want. Any suggestions?

    Read the article

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large Wordpress blog. Currently it gets around 20k+ pageviews a day, and its always a struggle to keep the bad boy running quickly - I currently run a vps.net with CentOS 5.3 I am also Drupal developer by trade so I love the CMS Framework for its versatility and the portability (I can take work from one site and implement on another with great ease) MY QUESTION IS: What is faster then? Wordpress 3.x & Drupal 6.x I'd love to migrate my site to Drupal to be able to roll out new features etc (which I find awkward to do in Wordpress) but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog but I'm not under any illusions that Drupal can be a real hog. Thanks for any/all help! Please try to avoid server optimization talk unless it pertains to Wordpress or Drupal 6.x specifically, I love to learn more about optimizations but I do want to sort out which platform is quicker :) p.s - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc but assume that isn't an option ;)

    Read the article

  • Why is UseCompatibleTextRendering needed here?

    - by HotOil
    Hi, I think I'm missing something fundamental. Please tell me what it is, if you can. I have developed a little C++ WinForms app using VS2008. So it is built using .NET 3.5 SP1. My development box is Win7, if that matters. The default value of UseCompatibleTextRendering property in WinForms controls is false in this version of VStudio. And this should not matter to me, I don't think. I don't have any custom-drawn text or controls. The app looks good running on my Win7 box. If I package it up (dragging along .NET 3.5) and install it on one of our WinXP desktops, the buttons and labels don't look good; the text is chopped off in them. If I set UseCompatibleTextRendering to true and then run it on the XP boxes, the text fits into the buttons and labels. My question is: Why? The installation puts .Net 3.5 on the XP boxes, so the app should be able to find and use the right version of WinForms, right? I should note that before I put my app + .NET 3.5 on these boxes, they have no .NET at all. They do not get automatic Microsoft updates; our IT guy gates the patches and upgrades. [ This sort of thing has happened before with apps I create.. they look/work great on the Engineering machines, because we maintain those and they mostly have up-to-date stuff. When they are run on the corporate boxes, they usually don't run and need the VCredist installed. ] Back to the question at hand: The text looks better with the UseCompatibleTextRendering set to false, so I'd rather keep it that way, if I can. I'd like to understand what might be missing on those XP boxes that is making the text not fit. Thanks S

    Read the article

  • How to Alphabetize a CSS file in Vim

    - by Kev
    I get a CSS file:

        div#header h1 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }

        div#header h2 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }

    I want to alphabetize the lines between the {...}:

        div#header h1 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }

        div#header h2 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }

    I map F7 to do it:

        nmap <F7> /{/+1<CR>vi{:sort<CR>

    But I need to press F7 over and over again to get the work done. If the CSS file is big, it's time-consuming and I easily get bored. I want to chain the commands so that I only press F7 once! Any idea? Thanks!

    Read the article

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (In java) on a recursive image processing algorithm that recursively traverses the pixels of the image, outward from a center point. Unfortunately... That causes stack overflows, so I have decided to switch to a Queue-based algorithm. Now, this is all fine and dandy -- But considering the fact that its queue will be analyzing THOUSANDS of pixels in a very short amount of time, while constantly popping and pushing, WITHOUT maintaining a predictable state (It could be anywhere between length 100, and 20000); The queue implementation needs to have significantly fast popping and pushing abilities. A linked list seems attractive due to its ability to push elements unto its self without rearranging anything else in the list, but in order for it to be fast enough, it would need easy access to both its head, AND its tail (or second-to-last node if it were not doubly-linked). Sadly, though I cannot find any information related to the underlying implementation of linked lists in Java, so it's hard to say if a linked list is really the way to go... This brings me to my question... What would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue -- I do not wish to do any sort of rearranging, or anything. On the flip side, I DO intend to do a lot of pushing and popping, and the queue will be changing size quite a bit, so preallocating would be inefficient)

    Read the article

  • SQLAlchemy, one to many vs many to one

    - by sadvaw
    Dear everyone, I have the following data:

        CREATE TABLE `groups` (
            `bookID` INT NOT NULL,
            `groupID` INT NOT NULL,
            PRIMARY KEY (`bookID`),
            KEY (`groupID`)
        );

    and a book table which basically has books(bookID, name, ...), but WITHOUT groupID. There is no way for me to determine what the groupID is at the time a book is inserted.

    I want to do this in SQLAlchemy, so I tried mapping Book to books joined with groups on book.bookID = groups.bookID. I made the following:

        tb_groups = Table('groups', metadata,
            Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True),
            Column('groupID', Integer),
        )
        tb_books = Table('books', metadata,
            Column('bookID', Integer, primary_key=True),
        )
        tb_joinedBookGroup = sql.join(tb_books, tb_groups,
            tb_books.c.bookID == tb_groups.c.bookID)

    and defined the following mappers:

        mapper(Group, tb_groups, properties={
            'books': relation(Book, backref='group')
        })
        mapper(Book, tb_joinedBookGroup)

    However, when I executed this piece of code, I realized that each book object has a field groups, which is a list, and each group object has a books field which is a singular assignment. I think my definition here must be causing SQLAlchemy to be confused about the many-to-one vs. one-to-many relationship. Can someone help me sort this out?

    My desired goal is g.books = [b, b, b, ...] and book.group = g, where g is an instance of Group and b is an instance of Book.
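    For comparison, the conventional way to get g.books as a list and book.group as a scalar is to keep the group key on the book side and declare the relationship once. The sketch below deviates from the two-table layout above (that is an assumption on my part - it supposes a nullable groupID column filled in after insert is acceptable) and uses the declarative style of newer SQLAlchemy:

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import declarative_base, relationship

        Base = declarative_base()

        class Group(Base):
            __tablename__ = 'groups'
            groupID = Column(Integer, primary_key=True)
            # one Group -> many Books: the collection lives on this side
            books = relationship('Book', back_populates='group')

        class Book(Base):
            __tablename__ = 'books'
            bookID = Column(Integer, primary_key=True)
            name = Column(String(255))
            groupID = Column(Integer, ForeignKey('groups.groupID'), nullable=True)
            # many Books -> one Group: the scalar lives on this side
            group = relationship('Group', back_populates='books')

    With the foreign key on Book, the collection naturally ends up on Group and the scalar on Book, which is the direction the question is after.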

    Read the article

  • folder structure in a mercurial repo?

    - by ajsie
    I have just switched from svn to mercurial and have read some tutorials about it. I've still got some confusions that I hope you could help me sort out. I wonder if I have understood the folder structure in a mercurial repo right.

    In an svn repo I usually have these folders:

        branches (branches/chat, branches/new_login etc)
        tags (version1.0, version2.0 etc)
        sandbox
        trunk

    Should a branch actually be another clone of the original/central repo in mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox - should that be another clone too? So basically a repo just contains all the folders/files that you would have in the trunk folder?

    For mercurial, then:

        central repo: project folders/files (not in any parent folder)
        tag repo: cloned from central repo at a given moment for release (version1.0, version2.0 etc)
        branch repo: cloned from central repo for adding features (chat, new_login etc)
        sandbox repo: experimental repo (could be pushed to central repo, or just deleted)

    Is this correct?

    Read the article

  • PHP - dynamic page via subdomain

    - by Phil Jackson
    Hi, I'm creating profile pages based on subdomains using the wildcard DNS setting. The problem is that if the subdomain is incorrect, I want to redirect to the same page but without the subdomain in front, i.e.:

        if (preg_match('/^(www\.)?([^.]+)\.domainname\.co.uk$/', $_SERVER['HTTP_HOST'], $match)) {
            $DISPLAY_NAME = $match[2];
            $query = "SELECT * FROM `" . ACCOUNT_TABLE . "` WHERE DISPLAY_NAME = '$DISPLAY_NAME' AND ACCOUNT_TYPE = 'premium_account'";
            $q = mysql_query($query, $CON) or die("_error_" . mysql_error());
            if (mysql_num_rows($q) != 0) {

            } else {
                mysql_close($CON);
                header("location: http://www.domainname.co.uk");
                exit;
            }
        }

    I get a browser error: "Firefox has detected that the server is redirecting the request for this address in a way that will never complete."

    I think it's because when using header("location: http://www.domainname.co.uk"); it still puts the subdomain in front, i.e.:

        header("location: http://www.sub.domainname.co.uk");

    Does anyone know how to sort this out and/or what the problem is? Regards, Phil

    Read the article

  • What are salesforce.com and Apex like as an application development platform?

    - by mhollers
    I have recently discovered that salesforce.com is much more than an online CRM after coming across a Morrison's case study in which they develop a works management application. I've been trying it out with a view to recreating our own works management system on the platform.

    My background is in Microsoft and .NET, and the obvious first choice would be ASP.NET. However, there's only really myself with .NET experience and my manager with a more legacy Synergy programming background; I am self-taught and am looking at evaluating other RAD options (e.g. Ironspeed).

    The nature of the business is, in the main, 2-5 concurrent construction-type contracts that run for 3-5 years each, each requiring 15-50 system users. Traditionally we have used our character-based works management system for everything and tweaked it for each contract. The Salesforce licensing model on the face of it suits this sort of flexibility, but I'm worried about the development flexibility, the learning curve, and all the issues that surround lock-in. There doesn't seem to be much neutral, sober analysis of the platform on the web that isn't salesforce's own material/blogs.

    Has anyone any experience of developing an application on salesforce as compared to the more 'traditional' .NET route?

    Read the article

  • Java/Hibernate using interfaces over the entities.

    - by Dennetik
    I am using annotated Hibernate, and I'm wondering whether the following is possible. I have to set up a series of interfaces representing the objects that can be persisted, and an interface for the main database class containing several operations for persisting these objects (... an API for the database). Below that, I have to implement these interfaces and persist them with Hibernate. So I'll have, for example:

        public interface Data {
            public String getSomeString();
            public void setSomeString(String someString);
        }

        @Entity
        public class HbnData implements Data, Serializable {
            @Column(name = "some_string")
            private String someString;

            public String getSomeString() {
                return this.someString;
            }
            public void setSomeString(String someString) {
                this.someString = someString;
            }
        }

    Now, this works fine, sort of. The trouble comes when I want nested entities. The interface of what I'd want is easy enough:

        public interface HasData {
            public Data getSomeData();
            public void setSomeData(Data someData);
        }

    But when I implement the class, I can follow the interface, as below, and get an error from Hibernate saying it doesn't know the class "Data":

        @Entity
        public class HbnHasData implements HasData, Serializable {
            @OneToOne(cascade = CascadeType.ALL)
            private Data someData;

            public Data getSomeData() {
                return this.someData;
            }
            public void setSomeData(Data someData) {
                this.someData = someData;
            }
        }

    The simple change would be to change the type from "Data" to "HbnData", but that would obviously break the interface implementation and thus make the abstraction impossible. Can anyone explain to me how to implement this in a way that will work with Hibernate?

    Read the article

  • Web Applications Development: Security practices for Application design

    - by Shyam
    Hi. As I am creating more web applications that are targeted at multiple users, I have figured out that I have to start thinking about user management and security.

    At a glance, and in my ideal world, all users belong to a group. Permissions and access are thus defined per group (and inherited by the users of that group). Logically, I have my group of administrators, identified with a level "7" (integer) clearance. A group of web users has, for example, level "1". This generally all works great for me, but I need some kind of checklist of things to keep in mind while securing my system, and some general practices. I am not looking for a specific environment; I want to learn the whys and the hows.

    An example is privilege escalation. If someone were able to "push" themselves into a group with higher privileges, for example the administrators, how can I prevent this, or what measures should I take as a precaution? I'd rather not walk into that kind of trap blindly.

    My question is basically: where can I find a good resource, list, policy, or book that explains the security of web applications - the whys and the hows - and that is readable if you don't have any experience in the realm of advanced security? I prefer a free resource, as I believe I can't be the first one who has thought about this.

    Thank you for your answers, comments and feedback.

    Read the article

  • What to call factory-like (java) methods used with immutable objects

    - by StaxMan
    When creating classes for "immutable objects" ("immutable" meaning that the state of instances cannot be changed; all fields are assigned in the constructor) in Java (and similar languages), it is sometimes useful to still allow creation of modified instances. That is, using an instance as a base, and creating a new instance that differs by just one property value, the other values coming from the base instance. To give a simple example, one could have a class like:

        public class Circle {
            final double x, y; // location
            final double radius;

            public Circle(double x, double y, double r) {
                this.x = x;
                this.y = y;
                this.radius = r;
            }

            // method for creating a new instance, moved in x-axis by specified amount
            public Circle withOffset(double deltaX) {
                return new Circle(x + deltaX, y, radius);
            }
        }

    So: what should a method like withOffset be called? (Note: NOT what its name ought to be, but what this class of methods is called.) Technically it is a kind of factory method, but somehow that does not seem quite right to me, since factories are often just given basic properties (and are either static methods, or are not members of the result type but of a factory type). So I am guessing there should be a better term for such methods. Since these methods can be used to implement a "fluent interface", maybe they could be "fluent factory methods"? Better suggestions?

    EDIT: as suggested by one of the answers, java.math.BigDecimal is a good example, with its 'add', 'subtract' (etc.) methods. Also: I noticed that there's a question (by Jon Skeet, no less) that is sort of related (although it asks about the specific name for a method).

    Read the article
