Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that would take a couple of minutes to register on 10g or 11g was taking upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query, the one that reads ALL_TABLES and joins it with a few other dictionary views to get back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic. Here's the (edited) execution plan on 10g:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |  108K |  939 |
        |   1 | SORT ORDER BY         |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
        |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
        |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
        |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
        |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
        -----------------------------------------------------------------------

    And the same query on 9i:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        |   16P |  55G |
        |   1 | SORT ORDER BY         |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
        -----------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939; 9i's is 55,000,000,000 (or, more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advises that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.

    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

        -----------------------------------------------------------------------
        |  Id | Operation             | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------
        |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
        |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -----------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method, the right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES results, and furthermore re-hash them for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we wanted a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we preferred not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing the queries for their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern: a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns), and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be made for the specific query pattern we were using, we read all the tables individually and do the hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing on disk any row data not used in the hash key, we use very specific memory-efficient data structures to store all the information we need.

    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra ~1.4MB of memory used during population). Next: fun with the 9i dictionary views.
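
    The client-side join has a simple core. A minimal sketch of the idea in Java, with a hypothetical DictRow type standing in for the product's actual row representation: build a lookup per subsidiary view keyed on (owner, name), compute each base row's key once, and probe every lookup with it.

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical row type; 'columns' holds the view's remaining data.
        record DictRow(String owner, String name, Map<String, Object> columns) {}

        class ClientSideJoiner {
            // Index one subsidiary view (e.g. ALL_PART_TABLES) by (owner, name).
            static Map<String, DictRow> index(List<DictRow> rows) {
                Map<String, DictRow> byKey = new HashMap<>(rows.size());
                for (DictRow r : rows) {
                    byKey.put(r.owner() + "." + r.name(), r);
                }
                return byKey;
            }

            // Left-join the base view (e.g. ALL_TABLES) against every subsidiary
            // index. The key is built once per base row and reused for each probe,
            // instead of re-hashing the join columns for every join.
            static void join(List<DictRow> baseRows, List<Map<String, DictRow>> indexes) {
                for (DictRow base : baseRows) {
                    String key = base.owner() + "." + base.name();
                    for (Map<String, DictRow> idx : indexes) {
                        DictRow extra = idx.get(key); // null = no match, as in a LEFT JOIN
                        if (extra != null) {
                            base.columns().putAll(extra.columns());
                        }
                    }
                }
            }
        }

    (Since Java strings cache their hash code, each key really is hashed only once, however many indexes are probed.)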

    Read the article

  • Using AWS or Azure, what to do about emails?

    - by Paul
    I'm coming from a background of paying a hosting company X amount per month for a server. This server comes with IIS, WebsitePanel and SmarterMail all bundled together. When I create a new domain using WebsitePanel, it automatically creates my email account; all I then need to do is configure my DNS to point to the server. I've decided that it is more cost-efficient to move to AWS/Azure. Has anyone come from a similar background and moved onto a cloud system? I'd be interested to know what you did regarding email. So far, these are the suggestions I've seen:
    - Use Google Apps for each domain
    - Use something like Elastic Email to send out emails
    - Launch a new instance and host an email server on that
    The first option seems like quite a lot of manual configuration; the second works well for outgoing email, but what about receiving? Option 3 would make the move less cost-effective. What is your experience?

    Read the article

  • Is it possible to auto-mount sshfs

    - by Mark D
    Is it possible to auto-mount a remote FS using sshfs upon establishing a VPN connection? Allow me to explain the scenario: I'm working remotely, and to do that it helps if I can mount my home dir from a server in the office. To do that I need to VPN in, so within Network Manager I select the relevant VPN and connect. It connects, but now I have to drop to the command line and mount my home dir on several machines. If I forget to do one machine, my local dev environment isn't as efficient. I suppose I could write a quick bash script to do this, but I'd rather have it run automatically when I connect.

    Read the article

  • Kernel panic when booting from USB

    - by maaartinus
    I downloaded ubuntu-11.04-desktop-amd64.iso and used Universal-USB-Installer-1.8.6.3.exe to format my USB stick and put the ISO on it. When I tried to install from it, I got a kernel panic just like here, except for the version number (mine was 2.6.38-8-generic #42-ubuntu). My ISO image seems to work, as I installed it into a VMware Player without problems. Booting Linux from USB certainly works too, as I did it some time ago with an older Ubuntu version. I can imagine things to try, e.g., write the image again, try another version, pray, look for patches, etc. However, I'm looking for a time-efficient solution, something that will most probably work. Advice of the sort "wait two weeks until it's surely fixed, then download again" is acceptable; I'm determined to switch to Linux, but it can wait a bit.

    Read the article

  • 2D fighting bounding boxes

    - by user36420
    I'm prototyping a 2D platformer/brawler game for uni and I'm having some trouble with creating collision/bounding boxes. This is most likely going to end up on a Vita, so I do have some library constraints as well as performance implications. None of this has been implemented yet; it is all theory. My idea was to have the artist create a sprite sheet for the character animation, and then a second, identical sprite sheet with the corresponding collisions in solid colours (e.g. green for where the character can be hit, and red for dealing damage, near the foot if kicking, etc.). With this, I would then parse the collision sheet, generate the various collision boxes required, and store them in the character model. This is the point I feel would be most inefficient. While I think this is a possible solution, I was wondering if there is a more standard or more efficient way of doing this, as I feel it could have severe performance problems.
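
    For what it's worth, the parse can be a one-off load-time (or build-time) step, so its cost never hits the frame loop. A minimal Java sketch of the colour-scan, assuming pure, fully opaque green/red pixels and one box per colour per frame (frame geometry parameters are hypothetical):

        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;

        class CollisionSheetParser {
            static final int GREEN = 0xFF00FF00; // "can be hit" (hurt box)
            static final int RED   = 0xFFFF0000; // "deals damage" (hit box)

            // Scan one frame of the collision sheet and return the bounding
            // rectangle (in frame-local coordinates) of all pixels of the
            // given colour, or null if the frame has none.
            static Rectangle boundsOf(BufferedImage sheet, int frameX, int frameY,
                                      int frameW, int frameH, int colour) {
                int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
                int maxX = -1, maxY = -1;
                for (int y = frameY; y < frameY + frameH; y++) {
                    for (int x = frameX; x < frameX + frameW; x++) {
                        if (sheet.getRGB(x, y) == colour) {
                            minX = Math.min(minX, x); minY = Math.min(minY, y);
                            maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
                        }
                    }
                }
                if (maxX < 0) return null;
                return new Rectangle(minX - frameX, minY - frameY,
                                     maxX - minX + 1, maxY - minY + 1);
            }
        }

    Running this offline and shipping the resulting rectangles as data would remove the parse from the Vita entirely.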

    Read the article

  • Fixing IP Renewal After Laptop Suspend

    - by Cerin
    I have a laptop running Ubuntu 10.04, and I've noticed that with some wifi networks, but not all, I'm unable to reconnect after coming out of suspend. I've tried both the Network Manager and Wicd management daemons; both get through validation but hang on "Acquiring IP address...". The only solution I've found is to reboot the machine, after which it acquires an IP address very quickly. What's the underlying issue here? What would be a more efficient way to resolve the problem? EDIT: I've noticed that if I open Wicd and manually press "Connect", it fails to acquire an IP. However, if I do nothing and let it automatically try to connect, it acquires an IP and connects just fine...

    Read the article

  • WordPress: Automatically transfer media files to Amazon S3

    - by Ron Ranieri
    I've been using a VPS to host 7 WordPress websites; most of them require a lot of storage but very little RAM and traffic. So I'm thinking of moving the static files (the uploads folder) to Amazon S3, and I'm looking for the most viable solution to this. I want every website to have its own bucket, and newly uploaded media files to be automatically uploaded to Amazon S3 without using a plugin. I'm OK with a cron job; for example, the files are uploaded first to my server, then transferred to S3 and deleted from my server every 24 hours. Or is there any way for me to change the default upload directory to my S3 bucket without sacrificing any WordPress functionality (resizing/titles etc.)? What do you think is the most efficient way to do this? Currently I'm looking at this plus a cron job, but I would like to know a better option if one exists.
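
    A minimal sketch of the cron-driven transfer in Java, assuming the AWS SDK for Java (v1) and hypothetical bucket and path names. Note that deleting the local copies means WordPress attachment URLs would still need rewriting to point at S3 for the resized images to keep working:

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        // Run from cron (e.g. daily): copy everything under wp-content/uploads
        // to the site's bucket, then delete the local copy to free disk space.
        public class UploadsToS3 {
            public static void main(String[] args) throws IOException {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                String bucket = "my-site-uploads";                       // hypothetical bucket
                Path root = Paths.get("/var/www/site/wp-content/uploads"); // hypothetical path

                List<Path> files;
                try (Stream<Path> walk = Files.walk(root)) {
                    files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
                }
                for (Path p : files) {
                    // Key mirrors the uploads-relative path, e.g. "2012/06/photo.jpg".
                    String key = root.relativize(p).toString().replace('\\', '/');
                    s3.putObject(bucket, key, p.toFile());
                    Files.delete(p);
                }
            }
        }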

    Read the article

  • OpenGL ES 2.0: Sprite Sheet Animation

    - by Project Dumbo Dev
    I've found a bunch of tutorials on how to make this work on OpenGL 1 & 1.1, but I can't find it for 2.0. I would work it out by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do the thing I'm proposing, you are constantly changing the VBOs, and that that is not good. Edit: I've been doing some research myself. I came upon these two: Updating Texture and, referring to the one before, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to use FBOs; but what I still don't get is whether I should create a sprite atlas/batch and make an FBO/load a texture for each frame, or whether I should load every frame into the buffer and change just the texture coordinates.
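
    One common way to avoid touching the VBO at all is to keep a single static quad holding frame 0's texture coordinates and shift them in the vertex shader with a per-frame uniform. A sketch using Android's GLES20 bindings, assuming a shader with a "uniform vec2 uvOffset" (all names hypothetical):

        import android.opengl.GLES20;

        class SpriteAnimator {
            // Sheet layout: 'cols' x 'rows' equal-sized frames.
            final int cols, rows;
            final int uvOffsetLoc; // location of "uniform vec2 uvOffset"

            SpriteAnimator(int program, int cols, int rows) {
                this.cols = cols;
                this.rows = rows;
                this.uvOffsetLoc = GLES20.glGetUniformLocation(program, "uvOffset");
            }

            // Call once per draw; the quad's VBO never changes. The vertex
            // shader does: v_texCoord = a_texCoord + uvOffset; where
            // a_texCoord holds frame 0's range (0..1/cols, 0..1/rows).
            void setFrame(int frame) {
                float u = (frame % cols) / (float) cols;
                float v = (frame / cols) / (float) rows;
                GLES20.glUniform2f(uvOffsetLoc, u, v);
            }
        }

    A single uniform update per frame is far cheaper than re-uploading vertex data, and it needs neither PBOs nor FBOs.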

    Read the article

  • Why did the team at LMAX use Java and design the architecture to avoid GC at all cost?

    - by kadaj
    Why did the team at LMAX design the LMAX Disruptor in Java when all their design points to minimizing GC use? If one does not want the GC to run, then why use a garbage-collected language? Their optimizations, their level of hardware knowledge, and the thought they put in are just awesome, but why Java? I'm not against Java or anything, but why a GC language? Why not use something like D, or any other language without GC that allows efficient code? Is it that the team is most familiar with Java, or does Java possess some unique advantage that I am not seeing? Say they developed it in D with manual memory management; what would be the difference? They would have to think low-level (which they already do), but they could squeeze the best performance out of the system, as it would be native.
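
    For reference, the core trick that makes Java viable here is to preallocate every event object up front and mutate it in place, so the steady state allocates nothing and the GC has nothing to collect. A toy sketch of the pattern (not LMAX's actual code; the real Disruptor adds lock-free sequencing, memory barriers, and cache-line padding):

        // Toy version of the preallocate-and-reuse pattern.
        class Ring {
            static final class Event { long price; long quantity; } // mutable slot

            final Event[] slots;
            long next = 0;

            Ring(int sizePow2) { // size must be a power of two for the mask below
                slots = new Event[sizePow2];
                for (int i = 0; i < sizePow2; i++) slots[i] = new Event(); // allocate once
            }

            void publish(long price, long quantity) {
                Event e = slots[(int) (next++ & (slots.length - 1))];
                e.price = price;       // overwrite in place: no allocation,
                e.quantity = quantity; // so no garbage in the steady state
            }
        }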

    Read the article

  • Point me to info about constructing filters (of lists)

    - by jah
    I would like some pointers to information which would help me understand how to go about providing the ability to filter a list of entities by their attributes, as well as by attributes of related entities. As an example, imagine a web app which provides order management of some kind. Orders and related entities are stored in a relational database, and the app has an interface which lists the orders. The problem is: how does one allow the list to be filtered by, for example:
    - order number (an attribute)
    - line item name (an attribute of an n-n related entity)
    - some text in an administrative note related to the order (text found in an attribute of a 1-1 related entity)
    I'm trying to discover whether there is something like a standard, efficient way to construct the queries and the filtering form; or some possible strategies; or any theory on the topic; or some example code. My Google-fu fails me.
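
    One common strategy is to translate each active filter into its own SQL predicate (a plain column test for the entity's own attributes, an EXISTS subquery for related entities) and AND them together as a parameterized query. A minimal Java sketch, with a made-up orders/line_items/admin_notes schema:

        import java.util.ArrayList;
        import java.util.List;

        class OrderFilter {
            String orderNumber;   // attribute on the order itself
            String lineItemName;  // attribute of an n-n related entity
            String noteText;      // text in a 1-1 related entity

            // Build the SQL and collect parameters; names are hypothetical.
            String toSql(List<Object> params) {
                List<String> preds = new ArrayList<>();
                if (orderNumber != null) {
                    preds.add("o.order_number = ?");
                    params.add(orderNumber);
                }
                if (lineItemName != null) {
                    preds.add("EXISTS (SELECT 1 FROM line_items li " +
                              "WHERE li.order_id = o.id AND li.name LIKE ?)");
                    params.add("%" + lineItemName + "%");
                }
                if (noteText != null) {
                    preds.add("EXISTS (SELECT 1 FROM admin_notes n " +
                              "WHERE n.order_id = o.id AND n.body LIKE ?)");
                    params.add("%" + noteText + "%");
                }
                return "SELECT o.* FROM orders o" +
                       (preds.isEmpty() ? "" : " WHERE " + String.join(" AND ", preds));
            }
        }

    The filtering form can then mirror the same structure: one input per predicate, with empty inputs simply contributing no clause.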

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics and I'd like to know the right way to render some 2D figures on different points of a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures; however, that would be a waste of memory and thus less efficient. What do you suggest?

    Read the article

  • Problem refactoring multiple countdown timers

    - by jowan
    I created my multiple countdown timers from an easy/simple script (entire code). The problem happens when I want to add another countdown timer: I have to declare another variable like current_total_second,

        elapsed_seconds = tampilkan("#time1");

    and another timer set with setInterval:

        timer = setInterval(function() {
            if (elapsed_seconds != 0) {
                elapsed_seconds = elapsed_seconds - 1;
                $('#time1').text(get_elapsed_time_string(elapsed_seconds));
            } else {
                $('#time1').parent().slideUp('slow', function() {
                    $(this).find('.post').text("Post has been deleted");
                });
                $('#time1').parent().slideDown('slow');
                clearInterval(timer);
            }
        }, 1000);

    I already know about refactoring and have tried different ways, but I'm stuck refactoring this code. I want to make it flexible: when I add more countdown timers, the script should handle them automatically or dynamically, without my having to add a bunch of code, so the code becomes clearer and more efficient. Thanks in advance.

    Read the article

  • Best strategy for supporting multiple server communication from iPhone/android app?

    - by tipycalFlow
    I'm making an app that will be used in multiple hospitals in the US. As per HIPAA compliance requirements, every hospital will have its own server that complies with the requirements for ensuring patient data security, etc. Now the task is that the app should communicate with a particular server based on the login info. An additional requirement is that new hospitals (servers) are likely to be added along the way, even after the app is available on the market. So basically, according to the login credentials, the app should communicate with the server of the hospital assigned to that person. One pretty crude way is to set up our own server which links the hospitals with the login info and accordingly provides a base URL for data exchange. Is there a more efficient way to handle this?
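
    A minimal Java sketch of that directory-service idea, with hypothetical URLs: the app asks one well-known endpoint which server handles a given login, then exchanges all further (patient) data only with that base URL, so adding a hospital is just a new directory entry:

        import java.net.URI;
        import java.net.URLEncoder;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.charset.StandardCharsets;

        class ServerDirectory {
            private static final String DIRECTORY =
                    "https://directory.example.com/lookup"; // hypothetical endpoint

            // Resolve the hospital server for this user; only the mapping
            // lives centrally, never the patient data itself.
            static String baseUrlFor(String username) throws Exception {
                String q = URLEncoder.encode(username, StandardCharsets.UTF_8);
                HttpRequest req = HttpRequest.newBuilder(
                        URI.create(DIRECTORY + "?user=" + q)).build();
                HttpResponse<String> resp = HttpClient.newHttpClient()
                        .send(req, HttpResponse.BodyHandlers.ofString());
                return resp.body().trim(); // e.g. "https://hospital-a.example.org/api"
            }
        }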

    Read the article

  • What are the benefits of a disk install vs. Wubi? And can I migrate my settings easily?

    - by Alex Bixel
    I chose to do the Wubi install because it was short, simple, and easy to reverse (no messing with partitions required). To be honest, I can handle the lack of a hibernate function. I haven't really heard of many benefits of installing on a separate partition other than hibernation and negligibly faster hard disk reads/writes, yet almost everyone I encounter seems to have opted for the disk installation. Are there more benefits I should be aware of, especially as a college student who wants a fast, efficient machine for documents, web browsing, etc. (nothing big like gaming; I can run that on Windows)? Also, I have a fair number of settings and packages installed that I spent a bit of time on and would rather not have to set up again. Is there any way I can migrate all of these settings from the virtual disk on my C: drive (the Wubi installation) to a disk installation in another partition? (I have a 16GB USB drive if that'll do the trick.)

    Read the article

  • Instapaper Updates; Sports Native Social Media Sharing, Browsing, and More

    - by Jason Fitzpatrick
    Popular web content manager Instapaper has been updated to version 3.0, bringing a host of new features like native support for social media sharing, a recommendation system, in-app web browsing, and more. Last year we shared a detailed guide on how to use Instapaper to save content from the web to your iOS device for later reading; definitely check it out if you're unfamiliar with Instapaper. Some of the new features in Instapaper 3.0 include a social recommendation system where you can follow other Instapaper users and see the articles they are liking/sharing, native support for sharing to Twitter, Facebook, and other social media systems, smart rotation lock on the display, and more efficient article downloading and storage. Check out the link below to read a full rundown of the new features on the Instapaper blog. Instapaper 3.0 Is Here! [Instapaper via O'Reilly Radar]

    Read the article

  • Passing class names or objects?

    - by nischayn22
    I have a switch statement:

        switch ($id) {
            case 'abc': return 'Animal';
            case 'xyz': return 'Human';
            // many more
        }

    I am returning class names and using them to call some of their static functions via call_user_func(). Instead, I could also create an object of that class, return it, and then call the static function from that object as $object::method($param):

        switch ($id) {
            case 'abc': return new Animal;
            case 'xyz': return new Human;
            // many more
        }

    Which way is more efficient? To make this question broader: I have classes that contain mostly static methods right now; putting the functions into classes is a grouping idea here (for example, the DB table structure of Animal is given by class Animal, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class.

    Read the article

  • Non-mathematical Project Euler (or similar)?

    - by Juha Untinen
    I checked the post (Where can I find programming puzzles and challenges?) where there are a lot of programming challenges and such, but after checking several of them, they all seem to be about algorithms and mathematics. Is there a similar site for purely logic/functionality-based challenges? For example:
    - Retrieve data using a web service
    - Generate output X from a CSV file
    - Protect this code against SQL injection
    - Make this code more secure
    - What is wrong with this code (where the error is in logic, not syntax)
    - Make this loop more efficient
    Does a challenge site like that exist? Especially one that provides hints and/or correct solutions. That would be a very helpful learning site.

    Read the article

  • How can I create multiple mini-sites with similar/duplicate content without hurting my search engine rank?

    - by ekpyrotic
    Essential background: I run a small company that lets members of the public post handwritten letters to their local politician (UK-based). Every week a number of early-stage bills (called Early Day Motions) are submitted for debate in the House of Commons, and supporters of a motion will contact their local Members of Parliament, asking them to sign it. The crux: I want to target these EDMs with customised mini-sites, so that when people search "EDM xxx", they find my customised mini-site specifically targeting that EDM (i.e., "Send a handwritten letter to your MP asking them to sign EDM xxx"). At the moment, all these mini-sites (and my homepage) have duplicate content, with only the relevant EDM name, number, and background image changed. (For example, http://mailmymp.com and http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php.) The question: firstly, will this hurt my potential search engine ranking? And if so, what's the best way to target these political campaigns in an efficient manner without hurting my SEO prospects?

    Read the article

  • What is the correct way to reset and load new data into GL_ARRAY_BUFFER?

    - by Geto
    I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, colorBuffer);
        glBufferData(GL_ARRAY_BUFFER, SIZE, colorsData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(shader->attrib("color"));
        glVertexAttribPointer(shader->attrib("color"), 3, GL_FLOAT, GL_TRUE, 0, NULL);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

    It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call:

        glDeleteBuffers(1, &colorBuffer);
        glGenBuffers(1, &colorBuffer);

    before transferring the new data into the buffer?

    Read the article

  • Where to prepare (legal) "embedded" documents for the web [on hold]

    - by WHITECOLOR
    Say I have to have terms of use on a site, placed as marked-up HTML. Where is it best to prepare this document initially, given that it is going to be published on the web? Considering that such documents are prepared and changed by lawyers, it is not very practical for them to work in a plain text editor. And it is not very efficient to manually process the content to prepare it for the web every time it changes. For now my solution is: prepare the document text in Google Docs, keeping some simple structure (title, ordered list of features, simple paragraphs); save as HTML (it will contain some unnecessary markup); and then use a custom conversion tool (some script) to convert the HTML saved from Google Docs to simple HTML and inject the new version into the site. What is common practice for this problem? What is the workflow for preparing and publishing such documents?

    Read the article

  • Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures on different points of a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have a 2D png for players' placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with a big picture containing all the placeholder figures. However, it would be a waste of memory and thus less efficient. What do you suggest?

    Read the article

  • How can I support scrolling when using batched rendering for my tiles?

    - by dardanel
    I have a tiled map of 100x75 tiles, each 32x32 pixels, and I want to use batching for performance. I can't figure it out, because my game needs scrolling, and every frame I draw 22x16 tiles (my screen is 20x16 tiles). I thought of batching the tiles every frame. Is that good, or does anyone have a suggestion? Edit, to clarify: I want to use occlusion culling and batching at the same time. I thought of drawing only the visible area and batching it together, but there is something I couldn't figure out. When scrolling the screen with a translate matrix, if one row becomes invisible, I bind a new row and batch everything again; every batched object needs to be buffered again. So I batch the tiles and buffer them to a VBO every time a row becomes invisible. I don't know whether this approach is efficient or not. This is my question, and I am open to any suggestions.
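
    One workable scheme is to rebuild the batch only when the camera crosses a tile boundary (a new row or column becomes visible) and draw the same batch with a plain translation in between. A library-neutral Java sketch that fills a vertex array for the visible window; the camera is assumed clamped inside the map, and the atlas layout is hypothetical:

        class TileBatcher {
            static final int TILE = 32, VIEW_COLS = 22, VIEW_ROWS = 16;
            final int[][] map;                       // map[y][x] = tile id (100x75)
            final float[] vertices =                 // 6 verts * (x, y, u, v) per tile
                    new float[VIEW_COLS * VIEW_ROWS * 6 * 4];
            int lastCol = -1, lastRow = -1;

            TileBatcher(int[][] map) { this.map = map; }

            // Call every frame with the camera position. Returns true only when
            // a new row/column scrolled into view and the VBO needs re-uploading;
            // otherwise the cached batch is drawn with just a translation.
            boolean update(float camX, float camY) {
                int col = (int) (camX / TILE), row = (int) (camY / TILE);
                if (col == lastCol && row == lastRow) return false;
                lastCol = col; lastRow = row;
                int i = 0;
                for (int y = row; y < row + VIEW_ROWS; y++)
                    for (int x = col; x < col + VIEW_COLS; x++)
                        i = emitQuad(i, x * TILE, y * TILE, map[y][x]);
                return true;
            }

            private int emitQuad(int i, float px, float py, int tileId) {
                float u0 = tileId / 16f, u1 = u0 + 1 / 16f; // hypothetical 16-tile atlas row
                float[][] corners = {
                    {px, py, u0, 0}, {px + TILE, py, u1, 0}, {px, py + TILE, u0, 1},
                    {px + TILE, py, u1, 0}, {px + TILE, py + TILE, u1, 1}, {px, py + TILE, u0, 1},
                };
                for (float[] c : corners) for (float f : c) vertices[i++] = f;
                return i;
            }
        }

    With this, the re-buffer happens at most once per 32 pixels of scroll rather than once per frame.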

    Read the article

  • Vertex data split into separate buffers or one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:

        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };

    or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because they'd always be the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like say a cube, where the texture coordinates for each face may be the same but that causes them to differ where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) and non-indexed buffers in the same VBO?
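
    For the interleaved option, the usual layout is a single packed buffer with a stride, bound several times with different offsets. A minimal Java sketch of the packing (offsets assume tightly packed floats; the cube case simply duplicates corner vertices, 24 instead of 8, so all attributes can share one index buffer):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        class InterleavedMesh {
            // Per-vertex layout: x,y,z | u,v | nx,ny,nz -> stride = 8 floats (32 bytes).
            static final int FLOATS_PER_VERTEX = 8;

            static FloatBuffer pack(float[][] vertices) {
                FloatBuffer buf = ByteBuffer
                        .allocateDirect(vertices.length * FLOATS_PER_VERTEX * 4)
                        .order(ByteOrder.nativeOrder())
                        .asFloatBuffer();
                for (float[] v : vertices) buf.put(v); // each v holds the 8 components
                buf.flip();
                return buf;
            }
            // glVertexAttribPointer then reads the same buffer three times with
            // stride 32 and byte offsets 0 (position), 12 (uv), 20 (normal).
        }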

    Read the article

  • MEA Oracle University Partner Enablement Update (22nd March)

    - by swalker
    Become an Oracle GoldenGate 10 Certified Implementation Specialist. Let Oracle University help you become an Oracle GoldenGate 10 Certified Implementation Specialist. The following Boot Camp has been scheduled so that you can gain the required knowledge, not only to develop and implement solutions that will drive your customers' organizations to make better decisions, take informed actions, and run more-efficient business processes, but also to pass the associated exam and get yourself specialized:

        Boot Camp:  OPN Only Oracle GoldenGate 10 Implementation Boot Camp
        Dates:      26-28 Mar
        Location:   Dubai

    Oracle University OPN Only Boot Camps are co-funded by Oracle Alliances and Channels, so they are offered to you at very attractive prices. For prices, more information, and assistance with registering, please contact: Ion Georgescu, eMail: [email protected], Telephone: +40 21.367.93.72

    Read the article

  • 3D array in a 2D array

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, where the third dimension comes in when moving down into caves and whatnot. That is not memory-efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile cannot occupy the same (x, y) space as a tile on a different z-level; my current structure means that each block carries its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    however, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.
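
    One way to keep the 2D array's smaller footprint while letting several blocks share an (x, y) column is to store a small per-column map from z to Block, so only occupied levels cost memory. A minimal sketch in Java (type names hypothetical):

        import java.util.HashMap;
        import java.util.Map;

        class Block { /* tile data */ }

        class BlockColumn {
            // Only z-levels that actually contain a block take up memory.
            private final Map<Integer, Block> byZ = new HashMap<>(4);

            void set(int z, Block b) { byZ.put(z, b); }
            Block get(int z)         { return byZ.get(z); } // null = empty space
            void remove(int z)       { byZ.remove(z); }
        }

        // Usage, with blockData now a BlockColumn[width][height]:
        //   map.blockData[x][y].set(z, new Block());  // cave floor below is z - 1
        //   Block b = map.blockData[x][y].get(z);

    For a mostly surface-level world this stays close to the plain 2D array's cost, since empty columns hold only a small, mostly empty map.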

    Read the article
