Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 147/837

  • Writing becomes slow after a few writes

    - by user1566277
    I am running embedded Linux on ARM with an SD card. While writing large amounts of data I see bizarre effects. E.g., when I dd a 15 MB file a few times, it normally writes the file in less than 2 seconds. But after, say, 3-4 times it sometimes takes 15 to 30 seconds to write the same file. If I sync after writing the file this does not happen, but the sync itself takes a long time too. If there is enough of a gap between writing two files, presumably the kernel syncs on its own. How can I optimize things so that a write always finishes within 2 seconds? The file system I am using is ext3. Any pointers?
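
    One common way to make each write's cost predictable is to pay the flush price per file instead of letting dirty pages pile up until the kernel forces a large writeback; whether that fits this workload is a judgment call. Below is a minimal sketch (the file paths and the loop are made up, not from the original post) that times a 15 MB write followed by an explicit fsync:

        # Time a 15 MB write that is flushed to storage immediately, so dirty
        # pages cannot accumulate and stall a later write.
        import os, time

        def timed_write(path, data):
            start = time.monotonic()
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # pay the flush cost now, not at a random later point
            return time.monotonic() - start

        payload = b"\0" * (15 * 1024 * 1024)   # roughly the file size mentioned above
        for i in range(6):
            print(f"write {i}: {timed_write(f'/tmp/testfile_{i}', payload):.2f} s")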

    Read the article

  • How to clone and restore a VirtualBox hard drive

    - by user23950
    What I want to do is clone my VirtualBox HDD, which dual-boots XP and Vista. I'm using Acronis and backing it up to a flash drive, and I end up with a flash drive that has two partitions, just like the VirtualBox hard disk. What do I do to restore it? I'm running Acronis inside VirtualBox. How do I make use of the backup, actually restore what I've backed up, and be able to boot into XP and Vista again inside VirtualBox? Please help.

    Read the article

  • Enterprise Level Monitoring Solution

    - by Garthmeister J.
    My company is currently looking to replace our current solution used for monitoring our web-based enterprise solutions for both uptime and performance. Please note this is not intended to be a network monitoring-type solution (internally we currently use Nagios). If anyone has a provider that they have had a positive experience with, it would be much appreciated. Here is a list of our requirements:
    • Must have a large number of probes/agents around the globe to be representative of our customer base
    • Must have a flexible scripting capability to automate multi-step user actions
    • 24-hour-a-day monitoring
    • Flexible alerting system
    • Report generation capability
    • Mimic browser-specific monitoring (optional, not a must-have)

    Read the article

  • Is it possible to put only the boot partition on a USB stick?

    - by Steve V.
    I've been looking at system encryption with Arch Linux, and I think I have it pretty much figured out, but I have a question about the /boot partition. Once the system is booted, is it possible to unmount the /boot partition and allow the system to continue to run? My thought was to install /boot to a USB stick, since it can't be left encrypted, and then boot from the USB stick, which would boot up the encrypted hard disk. Then I can take the USB key out and just use the system as normal. The reason I want to do this is that if an attacker got physical access to the machine, they could modify the /boot partition with a keystroke logger and steal the key, and if they already had a copy of the encrypted data they could just sit back and wait for the key. I guess I could come up with a system for verifying that /boot has been untouched at each startup. Has this been done before? Any guidance for implementing it on my own?
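
    On the last point (verifying that /boot is untouched at each startup), one low-tech approach is to keep a manifest of file hashes somewhere the attacker cannot rewrite, such as inside the encrypted root, and compare against it after unlocking. The script below is only a sketch of that idea; the manifest path is an assumption, and it does nothing to protect against a tampered BIOS or bootloader firmware:

        # Sketch: compare every file under /boot against a stored SHA-256 manifest.
        import hashlib, json, os, sys

        MANIFEST = "/root/boot-manifest.json"   # assumed location inside the encrypted root

        def hash_tree(root="/boot"):
            digests = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    with open(path, "rb") as f:
                        digests[path] = hashlib.sha256(f.read()).hexdigest()
            return digests

        if __name__ == "__main__":
            current = hash_tree()
            if "--init" in sys.argv:            # record a trusted baseline once
                json.dump(current, open(MANIFEST, "w"), indent=2)
                sys.exit(0)
            baseline = json.load(open(MANIFEST))
            changed = [p for p in sorted(set(baseline) | set(current))
                       if baseline.get(p) != current.get(p)]
            if changed:
                print("WARNING: /boot differs from baseline:", *changed, sep="\n  ")
                sys.exit(1)
            print("/boot matches baseline")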

    Read the article

  • Will the UUID be the same if a disk is moved from one machine to another?

    - by Sunry
    In Linux every disk gets a UUID. I'm just wondering: will the UUID be the same if I move the disk from one Linux box to another? Is it the same UUID on different machines with the same disk, or does a disk's UUID change with the machine it is attached to? A similar question: will the UUID stay the same after the Linux distribution is reinstalled on the same machine with the same disk? For example, the machine starts on CentOS 5 and is then reinstalled with CentOS 6.

    Read the article

  • Specific Apache + MySQL settings for a lightweight site

    - by Good Person
    I have a small website with a Joomla and a Moodle install. Both of them seem very slow. The server (CentOS release 5.5 (Final)) is a virtual dedicated server with about 2 GB of RAM. I don't expect ever to have more than 10-15 people on at the same time (and even that is high). What settings could I change in Apache, MySQL, or even the OS to increase the performance of my site? I'm not concerned about running out of resources if I get too many visitors. If you need more specific data, leave a comment and I'll edit the question.

    Read the article

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" App Pool (one without an active worker process) in IIS. When using a built-in account, the App Pool starts in sub-second time. When using a custom local account the App Pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file, to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the App Pool?

    Read the article

  • Is a virtual machine slower than the underlying physical machine?

    - by Michal Illich
    This question is quite general, but specifically I'm interested in whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much slower (1%, 5%, 10%)? Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)? If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks running 64-bit Ubuntu Enterprise Server, with just one virtual machine allowed to use all the available resources.

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see whether the issue is network latency or server performance. Has anyone encountered any discussion of ping time versus perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmmm, ping is over 200 ms, it might be a network problem". Ideally, this lives in a script on the user's machine so that we can see the latency that they are seeing... (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.
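
    As a rough first check along those lines, the sketch below (hostname and thresholds are made up, standard library only) compares the ICMP round-trip time with the time for a full page fetch from the same machine, which at least separates "the network is slow" from "the server is slow":

        # Compare raw network RTT with end-to-end HTTP fetch time.
        import re, subprocess, time, urllib.request

        HOST = "app.example.com"            # placeholder hostname
        URL = f"https://{HOST}/"

        # one ICMP ping (Linux ping syntax); parse the "time=" field
        out = subprocess.run(["ping", "-c", "1", HOST],
                             capture_output=True, text=True).stdout
        m = re.search(r"time=([\d.]+) ms", out)
        rtt_ms = float(m.group(1)) if m else float("nan")

        # full HTTP request, timed end to end
        start = time.monotonic()
        urllib.request.urlopen(URL).read()
        http_ms = (time.monotonic() - start) * 1000

        print(f"ping rtt: {rtt_ms:.0f} ms, full page fetch: {http_ms:.0f} ms")
        if rtt_ms > 200:
            print("rule of thumb: likely a network problem")
        elif http_ms > 5 * rtt_ms:
            print("rule of thumb: server-side slowness is more likely")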

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233 MHz Pentium MMX CPU and a graphics card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has vastly more performance and a mid-class GPU, yet it struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so theoretically it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making browsers slow, so I can build more efficient websites?

    Read the article

  • Windows always logs in to a temporary profile (thinks it is on D: while it is on C:)

    - by asdf
    I have Windows on C: (Disk 0, Partition 1). When I start, it works fine until the login screen. When I log in, it starts to display "Preparing your desktop..." and logs in to a temporary profile. I then have to run explorer.exe manually using Task Manager. If I execute %SystemRoot% it tells me that Windows could not find D:\Windows (while Windows is on C:). I have no D: drive, so why does Windows think it is on D:? I've tried the "Bootmanager is missing" fix, but it did not work. Bootrec /ScanOS from Windows Setup gives me "Total identified Windows installations: 0". Also note that Windows Setup correctly thinks Windows is installed on C:, but Windows itself thinks it is on D:.

    Read the article

  • Computers with Small Capacity SSD - For caching?

    - by RXC
    Recently, in newsletters from websites, I have been seeing computers for sale from manufacturers that include an HDD and an SSD, but the SSD has a small capacity, like 24 GB. I don't know if this still holds true, but I learned that when building a computer you want to install your OS on your fastest drive. I do a lot of PC gaming, so I install my OS and games on my SSD, because I learned that games and many applications make lots of system calls to the OS, and performance can only be as fast as the slowest piece. Why do these computers come with such small-capacity SSDs? Most OSes take up around 20 to 30 GB of space, so what is the benefit of such a small SSD? Are these small SSDs meant for caching? And what exactly does caching mean (what does it do and how does it help)?

    Read the article

  • How does Google manage to serve results so fast?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me; they look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance?
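
    One thing that usually helps before chasing a target number is breaking the 500 ms down into its phases. The sketch below (the URL is a placeholder, standard library only) times DNS, connect, and request/response separately, so you can see whether the remaining time is network round trips or server think time:

        # Split one autocomplete request's latency into DNS, connect, and response phases.
        import socket, time, http.client

        HOST = "www.example.com"            # placeholder; substitute the real endpoint
        PATH = "/suggest?q=test"

        t0 = time.monotonic()
        socket.gethostbyname(HOST)          # DNS, measured on its own
        t1 = time.monotonic()
        conn = http.client.HTTPSConnection(HOST)
        conn.connect()                      # TCP + TLS handshake (resolves again internally)
        t2 = time.monotonic()
        conn.request("GET", PATH)
        resp = conn.getresponse()
        resp.read()                         # server think time plus transfer
        t3 = time.monotonic()

        print(f"dns: {(t1 - t0) * 1000:.0f} ms, connect: {(t2 - t1) * 1000:.0f} ms, "
              f"request+response: {(t3 - t2) * 1000:.0f} ms (status {resp.status})")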

    Read the article

  • Should I store my code/projects on my SSD or my secondary drive?

    - by user37467
    I just got a new box. It has an SSD for the primary drive and a 1 TB SATA drive for the secondary. I'm going to run Windows and my binaries on the SSD and keep all my downloads/documents/music/etc. on the secondary drive. My question is: should I also keep my Visual Studio projects and code on the SSD, or keep them on the secondary drive? The faster SSD would presumably be better for compiling and indexed searches, but would keeping them on the second drive be better for more parallel disk I/O?

    Read the article

  • swapon --all --verbose : 'read swap header failed: Invalid argument'

    - by user66088
    Recently ran through EnableHibernateWithEncryptedSwap and ran the following command:

        swapon --all --verbose

    and received: 'read swap header failed: Invalid argument'. How do I fix this? Here's some more pertinent output. Output of sudo fdisk -l:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00006d20

           Device Boot    Start         End      Blocks   Id  System
        /dev/sda1   *      2048      499711      248832   83  Linux
        /dev/sda2        501758   156301311    77899777    5  Extended
        /dev/sda5        501760   156301311    77899776   8e  Linux LVM

        Disk /dev/mapper/ubuntu--t10194-root: 75.5 GB, 75539415040 bytes
        255 heads, 63 sectors/track, 9183 cylinders, total 147537920 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/ubuntu--t10194-root doesn't contain a valid partition table

        Disk /dev/mapper/ubuntu--t10194-swap_1: 4227 MB, 4227858432 bytes
        255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x08040000

        Disk /dev/mapper/ubuntu--t10194-swap_1 doesn't contain a valid partition table

        Disk /dev/mapper/cryptswap1: 4225 MB, 4225761280 bytes
        255 heads, 63 sectors/track, 513 cylinders, total 8253440 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd2236983

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

    Thanks for any and ALL help!

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created; they are cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around the issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around it is to create an index.

    I think there may be many customers in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used; I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • How do I mount a USB floppy disk drive?

    - by Jocky B
    I have tried to mount my USB floppy disk drive in 11.04 (Natty). So far I have managed to run the udisks --mount /dev/sdf command in a terminal, but I get a message that I have not specified a filesystem type:

        john@john-desktop:~$ udisks --mount /dev/sdf
        Mount failed: Error mounting: mount: you must specify the filesystem type

    This is the result I get when trying to mount the USB floppy disk drive. Anyone know how to proceed?

    Read the article

  • Why is my dual-boot Ubuntu partition showing up as a peripheral "root.disk"?

    - by Don
    I recently installed Ubuntu 12.04, which I had been booting from a USB key, as a dual-boot on my machine running Windows 7. From what I had read online while researching, I was prepared to have to shrink the Windows partition and all that. But I never had to - it really was just a few clicks here and there and it was installed. I'm still pretty confused about it, but whatever, it worked; the two peacefully coexist on my machine, and I have broken things to fix before I worry about fixing unbroken things. So yesterday I got it into my head to look at my partitions (I was considering making an all-new partition to install the Windows 8 Release Preview). What I saw confused me. Here's a screenshot of the disk utility. At this moment, there is nothing connected to my computer, and nothing in any of the optical drives/ports/card readers/etc. Can you help me figure out what's going on here? Don's Machine is, I believe, my Windows partition - that's the name I assigned my machine from Windows Explorer. PQSERVICE, from what I can find online, is also Windows-related, having to do with backup. And SYSTEM REQUIRED, if I browse it in Ubuntu, is definitely something to do with booting, and I believe it is also Windows'. According to the sizes shown, those three together should use up my 500 GB HD. Then further down, as a "peripheral device", it lists that 31 GB disk. This is obviously my Ubuntu install (Model: Linux Loop: root.disk), but why is it showing up as a peripheral? So, to sum up those questions and to add some more random ones I had:
    • Why is Ubuntu showing up as a peripheral device?
    • If the Windows sections take up all 500 GB, where does Ubuntu live?
    • If I renamed the disk partitions, would my life become a nightmare (seriously - can I safely rename them)?
    • Why didn't I have to resize the Windows partition in the first place?
    • Would giving Ubuntu more space improve its performance (it hangs a lot)?
    • Is it possible to have a partition for each OS (Windows 7 & 8, Ubuntu), a partition for files, and a separate partition for backups? Is this towards the good or bad idea end of the spectrum?
    @Elfy, would that explain why it keeps hanging? I guess I'll back up my files, rip it out, and reinstall it correctly later on today.

    Read the article

  • How many disks should I use to meet the capacity and IOPS needs?

    - by facebook-100005613813158
    An application needs 1.6 TB of storage capacity and performs 1000 IOPS. How many disks are required to meet the application's requirements and offer acceptable response time? The disk specifications are as follows: drive capacity = 100 GB, 15K RPM, and each disk can perform 50 IOPS. There are 4 candidate answers: 10, 12, 16, 20. Which one is most likely? In my opinion, 16 disks can only meet the capacity need but cannot meet the IOPS need, so the right answer should be 20 disks, right?
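
    For what it's worth, the arithmetic behind that reasoning can be checked directly: the required count is the larger of the capacity-driven and the IOPS-driven figures. A quick sketch, simply restating the numbers from the question:

        # Disk count is driven by whichever requirement needs more spindles.
        import math

        required_capacity_gb = 1600   # 1.6 TB
        required_iops = 1000
        disk_capacity_gb = 100
        disk_iops = 50

        disks_for_capacity = math.ceil(required_capacity_gb / disk_capacity_gb)  # 16
        disks_for_iops = math.ceil(required_iops / disk_iops)                     # 20
        print(max(disks_for_capacity, disks_for_iops))                            # prints 20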

    Read the article

  • How can I turn a VirtualBox disk partition into my main boot system? (read description)

    - by user75975
    I recently had Windows XP installed on my PC and chose to try Ubuntu, but I did a full install with Ubuntu. I luckily chose to create a backup Windows ISO before the install and now want to reinstall Windows, but I couldn't find a suitable program to burn the ISO to a CD. My question is: how can I burn the disc using VirtualBox, or alternatively, how can I turn a VirtualBox partition into my main OS? Thanks in advance.

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high-performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (e.g., CPU utilization, bandwidth, etc.). The tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded. The tools I have investigated so far, including SolarWinds and PRTG, are not designed to poll this frequently; they seem to be designed for a ~30-second interval. SolarWinds wouldn't go lower than 3 seconds, PRTG chokes at 1 second, and both default to 1 minute. These tools also seem more focused on outage monitoring and reporting than on metric collection. Given the bursty nature of our applications, such infrequent polling would result in a very inaccurate picture of performance. I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?
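
    If rolling your own does turn out to be necessary, a 1-second poller is not much code. The sketch below is only an illustration (it assumes the cross-platform psutil library can be installed on each server, which would need checking against these particular hosts) and writes basic CPU, memory, and network counters to a CSV for later graphing:

        # Rough 1-second metrics poller; requires psutil on the monitored host.
        import csv, time
        import psutil

        with open("metrics.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["timestamp", "cpu_percent", "mem_percent",
                             "net_bytes_sent", "net_bytes_recv"])
            while True:
                net = psutil.net_io_counters()
                writer.writerow([time.time(),
                                 psutil.cpu_percent(interval=None),
                                 psutil.virtual_memory().percent,
                                 net.bytes_sent,
                                 net.bytes_recv])
                out.flush()
                time.sleep(1)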

    Read the article

  • Preloading WinForms

    - by msarchet
    I am currently working on a project where we have a couple of very control-heavy user controls that are being used inside an MDI controller. This is a line-of-business app and it is very data driven. The problem we were facing was that the aforementioned controls would load very, very slowly. We dipped our toes into the waters of multi-threading for the control loading, but that was not a solution for a plethora of reasons. Our solution to increasing the performance of the controls ended up being to 'pre-load' the forms onto a hidden window, create a stack of the existing forms, and pop off of the stack as the user requested a form. Now, the current issue that I see arising as we push this 'fix' out to our testers, and ultimately our users, is this: Currently the 'hidden' window that contains the preloaded forms is visible in Task Manager and can be shut down, causing all of the controls to be lost. Then you have to create them on the fly, losing the performance increase. Secondly, when the user uses up the stack we lose the performance increase (our current solution to this is discussed below). For the first problem, is there a way to hide this window from Task Manager, perhaps by creating a parent form that encapsulates both the main form for the program and the hidden form? Our current solution to the second problem is to have an inactivity timer that, when it fires, checks the stack of forms and loads a new form onto the stack if it isn't full. However, this still has the potential of causing a hang in the UI while it creates the forms. A possible solution for this would be to put 'used' forms back onto the stack, but I feel like there may be a better way.

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question; just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEfaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at current count has about 700-900 tables and probably a billion total records. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application is looking to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems, as SQL Server was re-running the select statement in the CTE for every join instead of running it once and reusing its data set. The alternative, using temporary tables, worked much faster (when testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables. UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible: it takes 1m40 for my query to complete, whereas the CTE version only took 40 minutes. I believe table variables are slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for making my UDF perform quickly? Thanks a lot in advance.

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, a.k.a. position-specific weight matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment, and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp), so there are M-L+1 valid positions. I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of matrix so that applying the PSSM becomes matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher-order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix separately for mononucleotides and dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is spent just accessing the elements of the PSSM (with similar bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
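
    To make the one-hot idea concrete, here is a rough sketch in Python/NumPy (rather than the poster's C++, purely as an illustration of the indexing): the sequence becomes an M x 4 indicator matrix, each row of its product with the PSSM gives one base's contribution to every motif offset, and the score at offset m is a diagonal sum. Dinucleotide terms would use an analogous (M-1) x 16 indicator matrix.

        # Sketch: score a length-L PSSM at every valid offset of a length-M DNA
        # sequence via a one-hot encoding (mononucleotide terms only).
        import numpy as np

        BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

        def one_hot(seq):
            m = np.zeros((len(seq), 4))
            m[np.arange(len(seq)), [BASES[b] for b in seq]] = 1.0
            return m

        def score_all_positions(seq, pssm):
            """pssm has shape (L, 4); returns the scores for all M-L+1 offsets."""
            X = one_hot(seq)                  # (M, 4)
            L = pssm.shape[0]
            M = X.shape[0]
            contrib = X @ pssm.T              # (M, L): contrib[i, j] = pssm[j, base at i]
            return np.array([contrib[m:m + L, :].diagonal().sum()
                             for m in range(M - L + 1)])

        rng = np.random.default_rng(0)
        pssm = rng.normal(size=(8, 4))        # made-up 8 bp PSSM, just for the demo
        print(score_all_positions("ACGTACGTACGTACGTACGTACGTACGTAC", pssm))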

    Read the article
