Search Results

Search found 48441 results on 1938 pages for 'create folders'.

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up but if I copy a large file, the file cache grows by more than double the size of the file. By transferring a single 3-4 GB virtualbox image or video file to an external drive, this huge cache seems to remove all the preloaded applications from memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e. bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?
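
    One common approach on Linux is to drop the cached pages as the copy proceeds, rather than trying to bypass the cache outright. A minimal Python sketch of the idea (assuming Linux and Python 3.3 or later for os.posix_fadvise; the paths are placeholders):

        import os

        CHUNK = 16 * 1024 * 1024  # copy in 16 MiB chunks

        def copy_without_caching(src_path, dst_path):
            # Sketch: copy a file while advising the kernel to evict its pages.
            src = os.open(src_path, os.O_RDONLY)
            dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            try:
                offset = 0
                while True:
                    chunk = os.read(src, CHUNK)
                    if not chunk:
                        break
                    os.write(dst, chunk)
                    os.fsync(dst)  # pages must reach the disk before they can be dropped
                    os.posix_fadvise(src, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                    os.posix_fadvise(dst, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                    offset += len(chunk)
            finally:
                os.close(src)
                os.close(dst)

        copy_without_caching("/home/user/vm.vdi", "/media/external/vm.vdi")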

  • Is there a limit on the number of threads that can be spawned simultaneously?

    - by georgesl
    Yesterday I came across this question: How can i call robocopy within a python script to bulk copy multiple folders?, and I thought it might be a good exercise for multithreading. I thought of spawning as many threads as there are files to be copied, each routine having exception handling to prevent the whole copying process from crashing (and logging, using a mutex on the log file, if there was an error). My question: is there a limit on the number of threads you can spawn almost simultaneously? If yes, what is the limiting factor? My question is focused on desktop PCs, but I welcome answers for different hardware (embedded systems, compute clusters, etc.).
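
    Whatever the exact per-process limit turns out to be (it is bounded by address space, per-thread stack size and OS policy), the usual way to sidestep it is not to spawn one thread per file but to feed the file list to a fixed-size pool. A minimal Python sketch (worker count, paths and the log file name are placeholders):

        import shutil
        import threading
        from concurrent.futures import ThreadPoolExecutor

        log_lock = threading.Lock()  # serializes writes to the shared log file

        def copy_one(src, dst):
            try:
                shutil.copy2(src, dst)
            except OSError as exc:
                with log_lock:
                    with open("copy_errors.log", "a") as log:
                        log.write(f"{src}: {exc}\n")

        jobs = [("C:/data/report.docx", "D:/backup/report.docx")]  # placeholder file list

        # The pool keeps at most 16 threads alive no matter how many files are queued.
        with ThreadPoolExecutor(max_workers=16) as pool:
            for src, dst in jobs:
                pool.submit(copy_one, src, dst)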

  • The error indicates that IIS is in 32 bit mode, while this application is a 64 bit application and thus not compatible.

    - by Patrick Olurotimi Ige
    I was trying to install a new WSS v3 SharePoint on a 64-bit Windows 2003 server today, but the installation gave an error saying I would need to allow ASP.NET 2.0 in the web server extensions in IIS. Looking at IIS, ASP.NET 2.0 was allowed for 32-bit but not for 64-bit. I tried registering aspnet_regiis, but had no luck doing so: for the 32-bit version, %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i; for the 64-bit version, %SYSTEMROOT%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i. I got the error "The error indicates that IIS is in 32 bit mode, while this application is a 64 bit application and thus not compatible." The difference is the \Framework64 folder. So my next guess was to find a way to disable the 32-bit mode and then allow the 64-bit version, and luckily enough I found this link: MS to the rescue. So I just ran cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 0, then registered %SYSTEMROOT%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i, and that was it.

  • Separating portion of website to its own server

    - by Brett
    So my job is to take the homepage (or maybe I should say "homesite", because it encompasses a few interrelated pages) and move it onto its own Apache server. The problem I'm having right now is weeding out the jumbled/bundled files (such as folders of JS, CSS, and other files that I can't even identify) and knowing what is necessary to keep the homesite running. I'm new to this stuff (I'm an intern), so feel free to ask questions if I'm leaving vital information out. What I'm asking of you guys here is basically any pointers or tips you may be able to give me in order to get the job done. I could use some advice from people with a little more experience in web development. BTW: this question may appear as though I have not completed any prior research, and that is, for the most part, true. But the problem is I really am not sure how to research this. If you guys could throw me some keywords to play with, that would be really helpful. Thanks!

  • More problems in one

    - by Susie
    Yesterday I installed Ubuntu 12.04 because of some previous problems. I had to recover files through PhotoRec, but it filled my whole disk overnight and didn't even finish. When I turn on my notebook, there's an error: "The system is running in low-graphics mode" (I don't know if it's somehow connected with this). I need to delete those recup_dir folders, but I also need to finish recovering and get past the error. So I'm absolutely desperate. I hope I've described it OK. Thanks in advance.
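
    PhotoRec writes everything it recovers into numbered recup_dir.N folders under the destination it was given, so clearing them is a matter of removing those directories. A minimal Python sketch (it assumes the folders sit under the home directory; adjust BASE before running, since this deletes them permanently):

        import glob
        import os
        import shutil

        BASE = os.path.expanduser("~")  # placeholder: wherever PhotoRec saved its output

        for folder in glob.glob(os.path.join(BASE, "recup_dir.*")):
            if os.path.isdir(folder):
                print("removing", folder)
                shutil.rmtree(folder)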

  • disk not accessible

    - by user107044
    I formatted my hard drive yesterday and it was working well even after the formatting. But when I restarted my system again, it shows that the space is allotted to my files but they are inaccessible. I have even tried to unhide the files and folders, in case they got hidden somehow, but nothing works. The hard drive is shown as empty, but the properties say it still contains the data: http://imgur.com/ObjTE. In the image, it shows that the directory has only 1 file of size 4.8 KB, but the space being used by the drive is 11.6 GB. Please suggest a solution.
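
    One way to check whether the 11.6 GB is held by files that simply are not being listed is to total the file sizes on the drive yourself. A minimal Python sketch (the drive letter is a placeholder):

        import os

        ROOT = "E:\\"  # placeholder: the drive that reports 11.6 GB in use

        total = 0
        count = 0
        for dirpath, dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                    count += 1
                except OSError:
                    pass  # skip entries that cannot be read (permissions, locks)

        print(count, "files,", round(total / 1024**3, 2), "GiB in total")

    If the script finds far more data than the file manager shows, the files are there but hidden or protected; if it does not, the space is being used outside the visible file system (e.g. a recycle bin or file system metadata).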

  • LAN or workgroup with Ubuntu server and windows clients

    - by Kenneth Fernando
    I have 35 Windows 7 standalone computers right now and am planning to set up a LAN/workgroup using Ubuntu as a server. Purpose of the network: files from client computers to be accessed by me, and vice versa, through a shared folder on the server. The files only include Word documents and very small project files. The network won't have Internet at the moment. What I would like to know: how would I configure the Ubuntu server to recognize the clients, and will the clients be able to view the shared folders through their Windows machines simultaneously? Would appreciate feedback. Thanks
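
    The usual way to do this is Samba on the Ubuntu server, which lets the Windows 7 clients browse the share like any other network folder, with many clients connected at once. A minimal sketch of the relevant /etc/samba/smb.conf entries (the share name, path and group are assumptions; the samba package must be installed and users added with smbpasswd):

        [global]
            workgroup = WORKGROUP

        [projects]
            comment = Shared project files
            path = /srv/projects
            browseable = yes
            read only = no
            valid users = @office

    Windows clients would then reach the folder as \\<server-name>\projects.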

  • Does saving my progress on a U1-synced file/folder put unnecessary strain on the servers?

    - by Chauncellor
    I love Ubuntu One and I use it all the time. I have my documents and music composition folders set to sync. It's been a real boon. However, sometimes I feel that constantly saving my progress forces the file to sync dozens and dozens of times to the servers. It seems wasteful to me so I've been disconnecting U1 until I'm finished working on a project. Is this an unnecessary action that I am taking? I know it's using Amazon's storage but I'm still paranoid that I'm costing Canonical money when I constantly save my progress.

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description: I have an Oracle stored procedure that has been running for 7 or so years, both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results, but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated, but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time the cursor is opened, the view shows one or more rows; then, after the cursor is opened, an update statement runs to flip the flag in the header table, a commit is issued, and the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag, and running the procedure a second time would yield no data. On 11g, the cursor never contains the row; it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes.

    Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g, the issue is reproducible 100% of the time, regardless of whether I'm running the real-world code against the actual objects or the following simple example.

    Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table.
    Simple Example

        CREATE TABLE tbl1 (
          col1 VARCHAR2(10),
          col2 NUMBER(1)
        );

        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        /* View is no longer required to demonstrate the problem
        CREATE OR REPLACE VIEW vw1 (col1, col2) AS
          SELECT col1, col2 FROM tbl1 WHERE col2 = 0;
        */

        CREATE OR REPLACE PACKAGE pkg1 AS
          TYPE refWEB_CURSOR IS REF CURSOR;
          PROCEDURE proc1 (crs OUT refWEB_CURSOR);
        END pkg1;

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
          PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
          BEGIN
            OPEN crs FOR
              SELECT col1 FROM tbl1 WHERE col1 = 'TEST1' AND col2 = 0;
            UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1';
            COMMIT;
          END proc1;
        END pkg1;

    Anonymous Block Demo

        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD (
            col1 tbl1.col1%TYPE
          );
          rec1 rectype1;
        BEGIN
          pkg1.proc1(crs1);
          DBMS_OUTPUT.PUT_LINE('begin first test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end first test');
        END;

        /* After creating this index, the problem is seen */
        CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);

        /* Reset data to initial values */
        TRUNCATE TABLE tbl1;
        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        DECLARE
          crs1 pkg1.refWEB_CURSOR;
          TYPE rectype1 IS RECORD (
            col1 tbl1.col1%TYPE
          );
          rec1 rectype1;
        BEGIN
          pkg1.proc1(crs1);
          DBMS_OUTPUT.PUT_LINE('begin second test');
          LOOP
            FETCH crs1 INTO rec1;
            EXIT WHEN crs1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(rec1.col1);
          END LOOP;
          DBMS_OUTPUT.PUT_LINE('end second test');
        END;

    Example of the output on 10g:

        begin first test
        TEST1
        end first test
        begin second test
        TEST1
        end second test

    Example of the output on 11g:

        begin first test
        TEST1
        end first test
        begin second test
        end second test

    Clarification: I can't remove the COMMIT because in the real-world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure, it issues an implicit COMMIT when disconnecting from the database anyway. So if I removed the COMMIT in the procedure then yes, the anonymous block demo would work, but the real-world scenario would not, because the COMMIT would still happen.

    Question: Why is 11g behaving differently? Is there anything I can do other than re-write the code?

  • What You Said: How You Deal with Bacn

    - by Jason Fitzpatrick
    Earlier this week we asked you how you deal with Bacn (email you want, but not right now) and you responded. Read on to see the three principal ways HTG readers deal with Bacn. The approaches you took fell into three distinct categories: filtering, obfuscating, and procrastinating. Readers like Ray and jigglypuff use filters: "I use Thunderbird as my email client. I have different folders that I filter the email I receive into. The newsletters and other subscribed emails go into a lower-priority folder." "One word: filters. I just set up filters for all of this type of mail. Some I let go to the inbox, others I let go straight to a folder without seeing them first. Then, when I have time or want to go through them, I do."

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: either hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision (which had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes in trunk), or make customers wait until the next official release, which is usually a few months away. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch and create a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add files to or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project.
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
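
    For the branching part of the plan (separate from the installer question), the proposed workflow maps onto a handful of Subversion commands. A sketch with placeholder URLs and revision numbers:

        # Cut a maintenance branch when 2.0 is released
        svn copy ^/trunk ^/branches/2.0.x -m "Create maintenance branch for the 2.0 releases"

        # Cherry-pick a specific trunk fix (r1234 is a placeholder) into a
        # working copy of the maintenance branch, then tag the maintenance release
        svn merge -c 1234 ^/trunk .
        svn commit -m "Merge r1234 from trunk into 2.0.x"
        svn copy ^/branches/2.0.x ^/tags/2.0.1 -m "Tag the 2.0.1 maintenance release"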

  • MinGW Multiple Definitions

    - by makuto
    I'm trying to get the MinGW C++ compiler set up so I can compile my code for Windows computers, and I'm having trouble. I originally installed MinGW32 but then found that mingw-w64 was a better fit for me, so I uninstalled MinGW32 and installed mingw-w64. The problem is that when I try to compile a simple hello-world application I get multiple-definition errors (which are not from my code). I'm thinking it has something to do with removing w32 and installing w64 without a clean directory. How do/should I clean the necessary folders and get rid of those multiple definitions?

  • Syncing Files between workgroup server and Ubuntu workstation

    - by dotdawtdaught
    Recently I decided that I can't make Windows 8 my primary OS on my laptop, as it is just too cumbersome to deal with. I made the switch to Ubuntu and so far so good. Using Windows I have been able to cache folders from my workgroup server using a feature called "Client Side Cache" that allows me to take a copy of my personal files offline while I am in the field; later, when I return, any changes get pushed up to the server and my local cache is refreshed. This feature is completely client driven, although characteristics of it (who and what can be cached, and whether caching is automatic) can be controlled via a policy assigned as part of a directory membership. Can anyone suggest a Linux replacement for this feature? Is there a better way of handling this?
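
    One low-tech approximation (not a full equivalent, since it has no policy control and no conflict handling) is a pull/push pair of rsync runs over SSH. A sketch with placeholder host and paths (each run is one-way, so edits made on both sides between syncs are not merged):

        # Before leaving: pull the server copy into a local cache
        rsync -av user@server:/srv/share/ ~/offline-share/

        # After returning: push local changes back to the server
        rsync -av ~/offline-share/ user@server:/srv/share/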

  • Using NPM to share resources between UI projects [on hold]

    - by guy mograbi
    I am a UI team leader. My team has a lot of projects using different languages/technologies. In some parts we will rewrite the application (gradually - @Ampt, this is for you) in order to bring fresh new technologies in and get old dinosaurs out. I am going to use Node Package Manager (NPM) to set up an "all powerful" build/dependency manager. Can I use NPM to depend on a private GitHub repository? Can I use NPM to depend on SVN? Will NPM play nice with QuickBuild? Since each project might have a slightly different structure (think jetty/maven or play!framework), can I configure NPM to install some dependencies in different folders while still running it from the project's root? How can I, using NPM, get development resources out and build a packaged product (like a WAR)? Yes/No - is there a reason to use grunt? No discussion, just one-liners.
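
    On the private-repository question: npm accepts git URLs as dependency versions, which covers private GitHub repositories as long as the machine running npm install has access to them. A minimal package.json sketch (package names, organisation and branch are placeholders, and SSH access to the repository is assumed):

        {
          "name": "ui-build",
          "version": "0.1.0",
          "dependencies": {
            "shared-widgets": "git+ssh://git@github.com/your-org/shared-widgets.git#master"
          }
        }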

  • Dash search does not show applications

    - by To Do
    Since the upgrade (actually a fresh install) to 13.10, many times when I open the Dash and search for an application I only get results for files and folders. Sometimes I get some applications but not others. I haven't found a pattern that reproduces the issue 100% of the time. If I open the application lens and search again, it works as it should. So, many times, to launch an application I have to use the Super+A key combination to open the application scope instead of simply the Super key. It is annoying. Has anyone had the same issue? I searched for bugs on Launchpad but didn't find any. I didn't open a bug report yet because it is not clear how to reproduce the problem reliably. Even more importantly, does anyone have a solution to this issue?

  • TStringList and TThread that does not free all of its memory

    - by VanillaH
    Version used: Delphi 7. I'm working on a program that does a simple for loop on a virtual ListView. The data is stored in the following record:

        type
          TList = record
            Item: Integer;
            SubItem1: String;
            SubItem2: String;
          end;

    Item is the index, SubItem1 the status of the operation (success or not), and SubItem2 the path to the file. The for loop loads each file, does a few operations and then saves it. The operations take place in a TStringList. Files are about 2 MB each. Now, if I do the operations on the main form, it works perfectly. Multi-threaded, there is a huge memory problem. Somehow, the TStringList doesn't seem to be freed completely. After 3-4k files, I get an EOutOfMemory exception. Sometimes the software gets stuck at 500-600 MB, sometimes not. In any case, the TStringList always raises an EOutOfMemory exception and no file can be loaded anymore. On computers with more memory, it takes longer to get the exception. The same thing happens with other components. For instance, if I use THTTPSend from Synapse, after a while the software cannot create any new threads because the memory consumption is too high: it's around 500-600 MB while it should be, at most, 100 MB. On the main form, everything works fine. I guess the mistake is on my side. Maybe I don't understand threads well enough. I tried freeing everything in the Destroy event. I tried the FreeAndNil procedure. I tried with only one thread at a time. I tried freeing the thread manually (no FreeOnTerminate...). No luck. So here is the thread code. It's only the basic idea, not the full code with all the operations. If I remove the LoadFile procedure, everything works fine. A thread is created for each file, according to a thread pool.

        unit OperationsFiles;

        interface

        uses
          Classes, SysUtils, Windows;

        type
          TOperationFile = class(TThread)
          private
            Position: Integer;
            TPath, StatusMessage: String;
            FileStringList: TStringList;
            procedure UpdateStatus;
            procedure LoadFile;
          protected
            procedure Execute; override;
          public
            constructor Create(Path: String; LNumber: Integer);
          end;

        implementation

        uses Form1;

        procedure TOperationFile.LoadFile;
        begin
          try
            FileStringList.LoadFromFile(TPath);
            // Operations...
            StatusMessage := 'Success';
          except
            on E: Exception do
              StatusMessage := E.ClassName;
          end;
        end;

        constructor TOperationFile.Create(Path: String; LNumber: Integer);
        begin
          inherited Create(False);
          TPath := Path;
          Position := LNumber;
          FreeOnTerminate := True;
        end;

        procedure TOperationFile.UpdateStatus;
        begin
          FileList[Position].SubItem1 := StatusMessage;
          Form1.ListView4.UpdateItems(Position, Position);
        end;

        procedure TOperationFile.Execute;
        begin
          FileStringList := TStringList.Create;
          LoadFile;
          Synchronize(UpdateStatus);
          FileStringList.Free;
        end;

        end.

    What could be the problem? I thought at one point that maybe too many threads are created: if a user loads 1 million files, ultimately 1 million threads are going to be created, although only 50 threads are created and running at the same time. Thanks for your input.

  • How can I change the folder icon?

    - by Jakob
    I know how to change an icon this way. What I'm looking for is an equivalent of changing the icon for an application in the launcher, e.g. the Home folder, via gedit ~/.local/share/applications/nautilus-home.desktop. That way you can set an icon by name, which is helpful when you later want to change the icon set or theme, or when you want the best resolution for each size of the icon. So, I know how I can do this for applications in the launcher - but how can I achieve the same for folder icons in Nautilus? (In which file are these settings stored, so that they can be edited with, e.g., Gedit?)

  • Ubuntu 11.04 seems like a laggy OS

    - by user772401
    I'm new to Linux/Ubuntu in general. I just dual-booted Ubuntu 11.04 with Windows 7 on my Lenovo laptop (Intel i7 quad-core 2 GHz, 4 GB RAM, etc.) and for some reason Ubuntu is very laggy and slow. When I'm switching between programs (Chromium, folders, Software Center, etc.) it doesn't run as smoothly as Windows 7 (I have no more than 3 programs/windows open at a time). I don't think it's my system requirements, because Linux OSes are known to use few system resources. Could it be a bad install, do people find it slow and laggy in general, or is it my PC make and model? I installed it using Wubi - should I do a reinstall? I've already done all the recommended updates.

  • Shortcuts/links to Windows partition disappear on reboot

    - by al kirsch
    Ubuntu 11.10 recognizes my Windows 7 partition OK. Since that is where my work has historically been, I created links to various files and folders there. Everything's fine until I reboot. Then the icon reverts to the generic one and I am informed the link is no longer valid. I created the links by right-clicking the folder or file in the Windows folder and selecting "Make Link", then dragging the link to the Ubuntu desktop. How can I fix this? BTW, it worked OK with Kubuntu; I got seduced by the sexy Unity desktop...

  • Is there a COMPLETE tutorial for upgrading for dummies?

    - by Windwood Trader
    I have tried upgrading in the past with zero success: backing up my stuff and then futilely attempting to bring it into the new version - my email accounts and folders, my bookmarks and web browser info, and of course my photos. In the past I have received messages saying, for example, that the backup files were made using version XXX and cannot be read by the new system. I need a hand-holding tutorial to go from 11.04 to 12.10. What are the actual step-by-step mechanics? Frustrated Non-Geek

  • How to organize the nautilus bookmarks in the "Places" panel applet?

    - by piedro
    There seems to be no configuration dialog for the "Places" menu in the panel. Yes, I know how to add, remove or change the order of the bookmarks in Nautilus. But I want to use folders and subfolders for the bookmarks; with more than 20 entries, the Nautilus bookmarks and the Places menu become inconvenient. Is there any editor for this? Any configuration file that does the job? Any other tool than the standard Places menu? Any extension for Nautilus to extend the bookmark organization? Thanks for reading, p.
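
    On the configuration-file question: the Places bookmarks are kept in a plain text file (~/.gtk-bookmarks on releases of that era), one URI per line with an optional label, so they can be reordered or relabelled in any text editor, although the format has no notion of sub-folders. A short sketch of the file (paths and labels are examples):

        file:///home/user/Documents/Projects Projects
        file:///home/user/Music/Incoming Incoming music
        sftp://user@server/var/www Web server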

  • xubuntu 12.04 restarts after suspend - only from my account

    - by Yoav Aner
    After installing a clean Xubuntu 12.04 I noticed that when I suspend, the computer suspends and turns itself off (the lights go off, and there's a click sound from the HD or fans), but then about 2 seconds later it turns itself back on again. The odd thing is: it doesn't happen when booting from the live CD; when I created another user account and log onto it, I can suspend fine and the computer stays off until I press the power button; and when I remove my .config folder so it's clean, I can also suspend without problems on my own account. So it seems that something in my user config is causing this, but I can't work out what it might be. I tried diffing the two .config folders, and also comparing all processes running under one account with the other (ps -ef | grep <username>), but couldn't find anything obvious that might be causing this...

  • How to Stop Browser from rejecting my downloads

    - by melki0795
    I have a portfolio site where I am trying to host some of my work so people can download it. Some of these files are exe executables, and some are jar executables which are run through batch files. When a user tries to download my apps, the browser says that the file is not commonly downloaded and may be harmful, and therefore blocks the download. If I zip the folders, it still does the same thing. Any format I choose still blocks the downloads. How can I stop Chrome from doing this? Is there a way I can verify my files so they will be considered trusted? Thanks in advance!

  • Will everything (downloaded fonts, files, apps etc) remain the same when I upgrade from Maverick to Natty?

    - by Suffi
    I'm planning to upgrade from Maverick to Natty and try out Unity, but I don't want to lose any settings, customizations and downloads in Maverick. I just want to know: will upgrading alter anything I mentioned above? I don't want to start over with a clean, fresh Natty; I hope I can keep all of my files, folders and settings from Maverick and have them in Natty after the upgrade. Is this possible? Should I use Update Manager to achieve this? And would all my Compiz settings be reset after I upgrade? Sorry for so many questions, but I'm asking because, you know, in Windows a version upgrade would require the disk to be reformatted. I hope Ubuntu is more convenient than that.

  • SkyDrive Pro Limits

    - by Sahil Malik
    I'm putting this here because I know I'm going to forget it :) Overall size limits: on an on-premises installation there are no fixed limits on storage; you can configure it as you wish. For Office 365 (SharePoint Online) the limit is 7 GB per user. Number of items (documents and folders): you can sync up to 20,000 items in your SkyDrive (personal library), and up to 5,000 items from other libraries, per library - in other words, you could sync 10 libraries (for example), with each library having up to 5,000 items. Site limits: no fixed limits; it depends on your computer's resources. Hooray, and Happy SharePointing!
