Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • mySQL - Separate Lastname,Firstname and CompanyName entries from a single column

    - by Decalmo
    I've got a column in a database which contains company names and customer names all in one field. What I'd like to do is keep the CompanyName column completely intact, but wherever there is a comma in the CompanyName I'd like to take that information and populate it into FirstName and LastName fields. So basically:

    Before:
        CompanyName
        Big Company Inc
        Smith, John
        Sue, Maggie

    After:
        CompanyName        LastName    FirstName
        Big Company Inc
        Smith, John        Smith       John
        Sue, Maggie        Sue         Maggie

    This one is pretty dang tricky for me... Any help is greatly appreciated!
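
    One possible approach (not part of the original post) is MySQL's SUBSTRING_INDEX. This is a minimal sketch that assumes a hypothetical table named customers holding the CompanyName, FirstName and LastName columns:

    -- Hypothetical table/column names; only rows containing a comma are touched.
    UPDATE customers
    SET LastName  = TRIM(SUBSTRING_INDEX(CompanyName, ',', 1)),
        FirstName = TRIM(SUBSTRING_INDEX(CompanyName, ',', -1))
    WHERE CompanyName LIKE '%,%';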

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest?

    Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

    void method_one()
    {
        std::vector<Data*> channel_cursors;
        for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
        {
            Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
            channel_cursors.push_back(current_item);
        }

        while(true)
        {
            for(std::size_t i = 0; i < channel_list.size(); ++i)
            {
                Data* current_item = channel_cursors[i];

                ACQUIRE_MEMORY_BARRIER;
                if(current_item->state_ == NOT_FILLED_YET) {
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);
                channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
            }
        }
    }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

    void method_two()
    {
        std::vector<Data*> channel_cursors;
        for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
        {
            Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
            channel_cursors.push_back(current_item);
        }

        UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

        while(true)
        {
            if(update_cursor->state_ == NOT_FILLED_YET) {
                continue;
            }

            ::uint32_t update_id = update_cursor->list_id_;
            Data* current_item = channel_cursors[update_id];

            if(current_item->state_ == NOT_FILLED_YET) {
                std::cerr << "This should never print." << std::endl; // it doesn't
                continue;
            }

            log_latency(current_item->tv_sec_, current_item->tv_usec_);

            channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
            update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
        }
    }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

    void method_three()
    {
        std::vector<Data*> channel_cursors;
        for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
        {
            Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
            channel_cursors.push_back(current_item);
        }

        UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

        while(true)
        {
            for(std::size_t i = 0; i < channel_list.size(); ++i)
            {
                std::size_t idx = i;

                ACQUIRE_MEMORY_BARRIER;
                if(update_cursor->state_ != NOT_FILLED_YET) {
                    //std::cerr << "Found via update" << std::endl;
                    i--;
                    idx = update_cursor->list_id_;
                    update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                }

                Data* current_item = channel_cursors[idx];

                ACQUIRE_MEMORY_BARRIER;
                if(current_item->state_ == NOT_FILLED_YET) {
                    continue;
                }

                found_an_update = true;
                log_latency(current_item->tv_sec_, current_item->tv_usec_);
                channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
            }
        }
    }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

  • Speeding Up Slow, CPU-Intensive Scrolling in WinForms

    - by S B
    How can I speed up the scrolling of UserControls in a WinForms app? My main form has trouble scrolling quickly on slow machines--painting for each of the small scroll increments is CPU intensive. My form has roughly fifty UserControls (with multiple fields) positioned one below the other. I’ve tried intercepting OnScroll and UserPaint in order to eliminate some of the unnecessary re-paints for very small scroll events, but the underlying Paint gets called anyway. How can I streamline scrolling on slower machines?

  • mysql real escape string error

    - by user547995
    Code:

    mysql_query("INSERT INTO Account(User, Pw, email) VALUES('mysql_real_escape_string($_POST[user])', '$pw','mysql_real_escape_string($_POST[email]) ) ") or die(mysql_error());

    Error:

    You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''mysql_real_escape_string(123) )' at line 1

    Please help.

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do is to transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

    +---------+---------+------------+-------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute | Output |
    +---------+---------+------------+-------------------+--------+
    | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
    +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
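
    One way this can be done entirely in MySQL (not from the original post; a sketch that assumes a hypothetical source table production_events, a hypothetical target table production_minutes, and a helper table minutes(n) pre-filled with the integers 0, 1, 2, ...) is an INSERT INTO ... SELECT joined against the helper table:

    -- Names are hypothetical; output is split evenly across the minutes spanned by each event.
    INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
    SELECT e.user_id,
           e.work_id,
           e.machine_id,
           DATE_FORMAT(e.event_start_time + INTERVAL m.n MINUTE, '%Y-%m-%d %H:%i'),
           ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
    FROM production_events AS e
    JOIN minutes AS m
      ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    For the sample row this yields 16 minute rows (16:19 through 16:34) with 2120 / 16 rounded to 133, matching the desired output above.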

  • mongodb php finding a subobject and removing it, $inc other subobject

    - by Mark
    I'm trying to delete the subobject 'apples' from my documents and update the 'fruitInventory' property, decreasing it by the number of apples. I'm confused about how to proceed: should I use dot notation or do a full text search for apples? I don't know if this matters, but you can assume apples will always be in field 1.

    // Document 1
    { "1": { "apples": 3, "fruitInventory": 21, "oranges": 12, "kiwis": 3, "lemons": 3 }, "2": { "bananas": 4, "fruitInventory": 12, "oranges": 8 }, "_id": "1" }
    // Document 2
    { "1": { "apples": 5, "fruitInventory": 10, "oranges": 2, "pears": 3 }, "2": { "bananas": 4, "fruitInventory": 6, "cherries": 2 }, "_id": "2" }

    The result should look like this:

    // Document 1
    { "1": { "fruitInventory": 18, "oranges": 12, "kiwis": 3, "lemons": 3 }, "2": { "bananas": 4, "fruitInventory": 12, "oranges": 8 }, "_id": "1" }
    // Document 2
    { "1": { "fruitInventory": 5, "oranges": 2, "pears": 3 }, "2": { "bananas": 4, "fruitInventory": 6, "cherries": 2 }, "_id": "2" }

    Thanks in advance for your help.

  • Practical approach to concurrency control

    - by Industrial
    Hi everyone, I read this article recently and am very interested in how to take a practical approach to concurrency control on a web server. The server will run CentOS + PHP + MySQL with Memcached. How would you set it up to work? http://saasinterrupted.com/2010/02/05/high-availability-principle-concurrency-control/ Thanks!

  • [MYSQL] Select users who own both a dog and a cat

    - by matte
    Hi, I have this sample table:

    CREATE TABLE `dummy` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `userId` int(11) NOT NULL,
      `pet` varchar(50) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7;

    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(1, 1, 'dog');
    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(2, 1, 'cat');
    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(3, 2, 'dog');
    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(4, 2, 'cat');
    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(5, 3, 'cat');
    INSERT INTO `dummy` (`id`, `userId`, `pet`) VALUES(6, 4, 'dog');

    How can I write the statements below in MySQL:
    Retrieve all users who own both a dog and a cat
    Retrieve all users who own a dog or a cat
    Retrieve all users who own only a cat
    Retrieve all users who don't own a cat
    Thanks!
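
    Possible queries (not part of the original question; minimal sketches against the dummy table exactly as defined above):

    -- 1. Users who own both a dog and a cat
    SELECT userId FROM dummy WHERE pet IN ('dog', 'cat')
    GROUP BY userId HAVING COUNT(DISTINCT pet) = 2;

    -- 2. Users who own a dog or a cat
    SELECT DISTINCT userId FROM dummy WHERE pet IN ('dog', 'cat');

    -- 3. Users who own only a cat (at least one cat, nothing else)
    SELECT userId FROM dummy
    GROUP BY userId HAVING SUM(pet = 'cat') > 0 AND SUM(pet <> 'cat') = 0;

    -- 4. Users who don't own a cat
    SELECT userId FROM dummy GROUP BY userId HAVING SUM(pet = 'cat') = 0;

    The GROUP BY / HAVING forms rely on MySQL treating boolean expressions like (pet = 'cat') as 0 or 1 inside SUM.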

  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires memory allocation at each iteration, whereas a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?

  • Lucene search taking TOOO long.

    - by Josh Handel
    I'm using Lucene.net (2.9.2.2) on a (currently) 70Gig index. I can do a fairly complicated search and get all the document IDs back in 1 ~ 2 seconds. But to actually load up all the hits (about 700 thousand in my test queries) takes 5+ minutes. We aren't using Lucene for UI; this is a datastore between processes where we have hundreds of millions of pre-cached data elements, and the part I am working on exports a few specific fields from each found document. (Ergo, pagination doesn't make sense as this is an export between processes.) My question is: what is the best way to get all of the documents in a search result? Currently I am using a custom collector that does a get on the document (with a MapFieldSelector) as it's collecting. I've also tried iterating through the list after the collector has finished, but that was even worse. I'm open to ideas :-). Thanks in advance.

  • HTML Chrome Audit Specify Image Dimensions

    - by AKRamkumar
    I just started using the Chrome developer tools for some basic HTML websites, and I used the audit tool. I had two identical images, one with the height and width attributes and one without. In the Resources section, both the latency and the download time were identical. However, the Audit showed: "Specify image dimensions (1): A width and height should be specified for all images in order to speed up page display." Does this actually help? And are there any other ways to speed up page load time? This is only a splash page for the website I am building, and as such it is only HTML, no CSS or JavaScript or anything. I have already compressed the images, but I want to speed up load time even more. Is there a way?

  • PHP behaves weird, mixing up HTML markup.

    - by adardesign
    Thanks for looking at this problem. I have a page that is completely valid, and there is a PHP loop that brings in a <li> for each entry of the table. When I check this page locally it looks 100% OK, but when viewing the page online the left sidebar (which this markup creates) is broken, randomly mixing <div>'s and <li>'s, and I have no clue what the problem is. See the page (the problem is on the left side). PHP code:

    <?php do { ?> <li class="clear-block" id="<?php echo $row_Recordset1['penSKU']; ?>"> <a title="Click to view the <?php echo $row_Recordset1['penName']; ?> collection" rel="<?php echo $row_Recordset1['penSKU']; ?>"> <img src="prodImages/small/<?php echo $row_Recordset1['penSKU']; ?>.png" alt="" /> <div class="prodInfoCntnr"> <div class="basicInfo"> <span class="prodName"><?php echo $row_Recordset1['penName']; ?></span> <span class="prodSku"><?php echo $row_Recordset1['penSKU']; ?></span> </div> <div class="secondaryInfo"> <span>As low as .<?php echo $row_Recordset1['price25000']; ?>¢ <!--<em>(R)</em>--></span> <div class="colorPlacholder" rel="<?php echo $row_Recordset1['penColors']; ?>"></div> </div> </div> <div class="additPenInfo"> <div class="imprintInfo"><span>Imprint area: </span><?php echo $row_Recordset1['imprintArea']; ?></div> <div class="colorInfo"><span>Available in: </span><?php echo $row_Recordset1['penColors']; ?></div> <table border="0" cellspacing="0" cellpadding="0"> <tr> <th>Amount</th> <th>500</th> <th>1,000</th> <th>2,500</th> <th>5,000</th> <th>10,000</th> <th>20,000</th> </tr> <tr> <td>Price <span>(R)</span></td> <td><?php echo $row_Recordset1['price500'];?>¢</td> <td><?php echo $row_Recordset1['price1000'];?>¢</td> <td><?php echo $row_Recordset1['price2500'];?>¢</td> <td><?php echo $row_Recordset1['price5000'];?>¢</td> <td>Please Contact</td> <td>Please Contact</td> </tr> </table> </div> </a> </li> <?php } while ($row_Recordset1 = mysql_fetch_assoc($Recordset1)); ?>

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python as we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be manually added regularly. My question is this: would it be faster to have a single file (500k-1Mb) and have my application open it, loop through, find the record and close the file, OR would it be faster to have the records separated and named using some appropriate convention so that the application could simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as suggestions. Thanks very much in advance for your time, Dan

  • atomic operation cost

    - by osgx
    Hello. What is the cost of an atomic operation? How many cycles does it consume? Will it pause other processors on SMP or NUMA, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU? What effects will it have on the cache? Thanks.

  • displaying a group once in php mysql

    - by JPro
    I have some data like this:

    1  TC1   PASS
    2  TC2   FAIL
    3  TC3   INCONC
    4  TC1   FAIL
    5  TC21  FAIL
    6  TC4   PASS
    7  TC3   PASS
    8  TC2   FAIL
    9  TC1   TIMEOUT
    10 TC21  FAIL

    If I try the code below:

    <?php mysql_connect("localhost", "root", "pop") or die(mysql_error()); mysql_select_db("jpd") or die(mysql_error()); $oustanding_fails = mysql_query("select * from SELECT_PASS ") or die(mysql_error()); $resultSetArray = array(); $platform = ""; while($row1 = mysql_fetch_array( $oustanding_fails )) { if(trim($row1['TESTCASE']) <> trim($platform)) { echo $row1['TESTCASE']."-"; $platform = $row1['TESTCASE']; } echo $row1['RESULT'] ."<br>"; } ?>

    to get a result like this:

    TC1 PASS FAIL TIMEOUT
    TC2 FAIL FAIL
    TC3 INCONC PASS
    TC4 PASS

    and so on, I am unable to get the result I want. Any ideas where exactly I am making a mistake? Thanks.
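
    A likely first step (not from the original post): the grouping logic above only prints a test case name once when rows for the same test case arrive consecutively, so the query needs to sort by test case, for example:

    -- Sketch only; sorts rows so each TESTCASE's results are adjacent.
    SELECT * FROM SELECT_PASS ORDER BY TESTCASE;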

  • Are these tables too big for SQL Server or Oracle

    - by Jeffrey Cameron
    Hey all, I'm not much of a database guru so I would like some advice.

    Background: We have 4 tables that are currently stored in Sybase IQ. We don't currently have any choice over this; we're basically stuck with what someone else decided for us. Sybase IQ is a column-oriented database that is perfect for a data warehouse. Unfortunately, my project needs to do a lot of transactional updating (we're more of an operational database) so I'm looking for more mainstream alternatives.

    Question: Given these tables' dimensions, would anyone consider SQL Server or Oracle to be a viable alternative?

    Table 1: 172 columns * 32 million rows
    Table 2: 453 columns * 7 million rows
    Table 3: 112 columns * 13 million rows
    Table 4: 147 columns * 2.5 million rows

    Given the size of the data, what are the things I should be concerned about in terms of database choice, server configuration, memory, platform, etc.?

  • Mysql Groupby and Orderby problem

    - by luvboy
    Here is my data structure. When I try this SQL:

    select rec_id, customer_id, dc_number, balance from payments where customer_id='IHS050018' group by dc_number order by rec_id desc;

    something is wrong somewhere, I don't know what. I need:

    rec_id  customer_id  dc_number  balance
    2       IHS050018    DC3        -1
    3       IHS050018    52         600

    Thanks
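
    If the goal is the most recent payment row for each dc_number (an assumption, not stated in the original post), note that MySQL's GROUP BY alone does not pick a specific row for the non-aggregated columns; one possible sketch is to join back against the per-group maximum rec_id:

    -- Sketch only; assumes `payments` as used above and that "latest" means the highest rec_id.
    SELECT p.rec_id, p.customer_id, p.dc_number, p.balance
    FROM payments AS p
    JOIN (
        SELECT dc_number, MAX(rec_id) AS max_rec_id
        FROM payments
        WHERE customer_id = 'IHS050018'
        GROUP BY dc_number
    ) AS latest
      ON p.dc_number = latest.dc_number AND p.rec_id = latest.max_rec_id;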

  • Scalable Ticketing / Festival Website

    - by Luke Lowrey
    I've noticed that major music festivals (at least in Australia) and other events that experience a peak in traffic when tickets go on sale have huge problems keeping their websites running well. I've seen a few different techniques used to try to combat this, such as short sessions and virtual queues, but they don't seem to have much effect. If you were to design a website to sell a lot of tickets in a short amount of time, how would you handle scalability? What technologies and programming techniques would you use?

  • Lazy/deferred loading of a CollectionViewSource?

    - by Shimmy
    When you create a CollectionViewSource in the Resources section, is the Source that was set loaded when the resources are initialized (i.e. when the Resources holder is initialized) or when data is bound? Is there a XAML way to make a CollectionViewSource lazy-load, deferred-load, or explicit-load?

  • Javascript Running slow in IE

    - by SharePoint Newbie
    Hi, JavaScript is running extremely slowly in IE on some pages of our site. Profiling seems to show that the following methods are taking the most time:

    Method                          Count   Inclusive time   Exclusive time
    JScript - window script block   2,332   237.98           184.98
    getDimensions                   4       33               33
    eh                              213     32               32
    extend                          446     30               30
    tt_HideSrcTagsRecurs            1,362   26               26
    String.split                    794     18               18
    $                               717     49               17
    findElements                    104     184.98           14

    What does "JScript - window script block" do? We are using jQuery and Prototype. Thanks,

  • C++: how to truncate a double in an efficient way?

    - by Arman
    Hello, I would like to truncate a double to 4 digits. Is there an efficient way to do that? My current solution is:

    double roundDBL(double d, unsigned int p = 4)
    {
        unsigned int fac = pow(10, p);                            // needs <cmath>
        double facinv = 1.0 / static_cast<double>(fac);
        double x = static_cast<unsigned int>(d * fac) * facinv;   // truncate, then scale back down
        return x;
    }

    but using pow and the division seems to me not so efficient. Kind regards, Arman.

  • A question about a complex SQL statement

    - by william
    Table A has columns ID and AName; Table B has columns BName and ID. B.ID is a foreign key referencing A.ID. Write a SQL statement that prints column AName along with a computed column C describing whether Table B contains that ID: 1 if it exists, 0 otherwise. So if A is:

    1 aaa
    2 bbb

    and B is:

    something,2

    the output is:

    aaa,0
    bbb,1
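
    One portable way to express this (not from the original question; a sketch assuming tables A and B exactly as described):

    -- C is 1 if any row in B references the A row's ID, otherwise 0.
    SELECT a.AName,
           CASE WHEN EXISTS (SELECT 1 FROM B WHERE B.ID = a.ID) THEN 1 ELSE 0 END AS C
    FROM A AS a;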

  • Optimizing Haskell code

    - by Masse
    I'm trying to learn Haskell, and after an article on reddit about Markov text chains, I decided to implement Markov text generation first in Python and now in Haskell. However I noticed that my Python implementation is way faster than the Haskell version, even though Haskell is compiled to native code. I am wondering what I should do to make the Haskell code run faster. For now I believe it's so much slower because of using Data.Map instead of hashmaps, but I'm not sure. I'll post the Python code and the Haskell as well. With the same data, Python takes around 3 seconds and Haskell is closer to 16 seconds. It goes without saying that I'll take any constructive criticism :).

    The Python version:

    import random
    import re
    import cPickle

    class Markov:
        def __init__(self, filenames):
            self.filenames = filenames
            self.cache = self.train(self.readfiles())
            picklefd = open("dump", "w")
            cPickle.dump(self.cache, picklefd)
            picklefd.close()

        def train(self, text):
            splitted = re.findall(r"(\w+|[.!?',])", text)
            print "Total of %d splitted words" % (len(splitted))
            cache = {}
            for i in xrange(len(splitted)-2):
                pair = (splitted[i], splitted[i+1])
                followup = splitted[i+2]
                if pair in cache:
                    if followup not in cache[pair]:
                        cache[pair][followup] = 1
                    else:
                        cache[pair][followup] += 1
                else:
                    cache[pair] = {followup: 1}
            return cache

        def readfiles(self):
            data = ""
            for filename in self.filenames:
                fd = open(filename)
                data += fd.read()
                fd.close()
            return data

        def concat(self, words):
            sentence = ""
            for word in words:
                if word in "'\",?!:;.":
                    sentence = sentence[0:-1] + word + " "
                else:
                    sentence += word + " "
            return sentence

        def pickword(self, words):
            temp = [(k, words[k]) for k in words]
            results = []
            for (word, n) in temp:
                results.append(word)
                if n > 1:
                    for i in xrange(n-1):
                        results.append(word)
            return random.choice(results)

        def gentext(self, words):
            allwords = [k for k in self.cache]
            (first, second) = random.choice(filter(lambda (a,b): a.istitle(), [k for k in self.cache]))
            sentence = [first, second]
            while len(sentence) < words or sentence[-1] is not ".":
                current = (sentence[-2], sentence[-1])
                if current in self.cache:
                    followup = self.pickword(self.cache[current])
                    sentence.append(followup)
                else:
                    print "Wasn't able to. Breaking"
                    break
            print self.concat(sentence)

    Markov(["76.txt"])

    The Haskell version:

    module Markov
    ( train
    , fox
    ) where

    import Debug.Trace
    import qualified Data.Map as M
    import qualified System.Random as R
    import qualified Data.ByteString.Char8 as B

    type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int)

    train :: [B.ByteString] -> Database
    train (x:y:[]) = M.empty
    train (x:y:z:xs) = let l = train (y:z:xs) in M.insertWith' (\new old -> M.insertWith' (+) z 1 old) (x, y) (M.singleton z 1) `seq` l

    main = do
        contents <- B.readFile "76.txt"
        print $ train $ B.words contents

    fox="The quick brown fox jumps over the brown fox who is slow jumps over the brown fox who is dead."

  • clarification on yslow rules

    - by ooo
    I ran YSlow and got a bad score on expires headers. Here was the message: "Grade F on Add Expires headers: There are 45 static components without a far-future expiration date." I am using IIS in a hosted environment. What do I need to do on my CSS or JS files to fix this issue?
