Search Results

Search found 10106 results on 405 pages for 'fail fast'.

Page 170/405 | < Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under Hyper-V. I have integration services installed (part of the base system now), and CentOS shows the current clocksource is hyperv_clocksource; however, the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new integration components and pluggable clocksource is that this shouldn't happen any more. Is there any additional configuration necessary to get the pluggable clocksource to "work"? I know there are plenty of links about setting kernel options to PIT and similar workarounds, but those all seem to pre-date the integrated clocksource support and, as I understand it, shouldn't be needed any longer. Nor should ntpd or adjtimex.
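
    A quick sanity check of what the guest is actually using, and how far it has drifted, might look like the sketch below; this assumes the standard sysfs path on CentOS 6 and that the ntpdate package is installed (pool.ntp.org is only an example server).

        # Confirm the kernel really selected the Hyper-V clocksource
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource

        # List the clocksources the kernel considers available
        cat /sys/devices/system/clocksource/clocksource0/available_clocksource

        # Query (without setting) an NTP server to see the current offset
        ntpdate -q pool.ntp.org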

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel I have come close to what the issue is, but I still don't fully understand what is happening. I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node. When failing over the cluster to the "secondary" node, which is on the same subnet as the "primary", the failover initially works and the cluster resource continues to work for a couple of minutes on the new node, which means the receiving node does send out a gratuitous ARP reply that updates the ARP tables on the network. But after a while (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service does not answer to pings. So basically I start a ping to the Exchange cluster address while it is running on the "primary" node, and it works just fine. I fail the cluster resource group over to the "secondary" node and lose only one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and then all of a sudden the pings start timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen? Has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node uses Intel NIC teaming with ALB. Each node is on the same subnet and has its gateway and so on entered correctly as far as I can tell.
    Edit: I was wondering if it could be related to network binding order, because the only difference I can see from node to node is in the local ARP table: on the "primary" node the ARP table is generated with the cluster address as the source, while on the "secondary" it is generated from the node's own network card. Any input on this?
    Edit: OK, here is the connection layout. Cluster address: A.B.6.208/25. Exchange application address: A.B.6.212/25. Node A: 3 physical NICs, two teamed using Intel's teaming with the address A.B.6.210/25, called "public"; the last one used for cluster traffic, called "private", with 10.0.0.138/24. Node B: 3 physical NICs, two teamed using Intel's teaming with the address A.B.6.211/25, called "public"; the last one used for cluster traffic, called "private", with 10.0.0.139/24. Each node sits in a separate datacenter; the end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.
    Edit: I have been testing a little more. I have now created an empty application on the same cluster and given it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem: after one or two minutes, clients on other subnets cannot ping the virtual IP of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging it. And if I then fail it back to the original node, the situation is reversed: clients on the same subnet cannot ping it, and clients on other subnets can. We have another cluster set up the same way on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings, and there we are not seeing this, so it is somewhat confusing.
    Edit: OK, done some more research. I removed the NIC teaming on the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming; maybe some kind of bug?
    Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like it was a workaround. I then tried to reestablish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I cannot pinpoint the root of the problem; it just seems to be some kind of MS/Intel hiccup, which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which the old team was called, so something has not been cleaned up in Windows, even though the server HAS been restarted!
    Edit: OK, after reestablishing the ALB teaming the error came back, so I am now going to do some thorough testing and I will get back with my observations. One thing is for sure: it is related to Intel 82575EB NICs, ALB and gratuitous ARP.
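
    One way to see exactly when the entry flips is to watch the ARP table on an affected client while the failover happens. This is only a hypothetical diagnostic sketch using the anonymized application address from the question, run from a Windows client:

        :: Note which MAC the application address currently resolves to
        arp -a | findstr A.B.6.212

        :: Keep a continuous ping running against the application address during the failover
        ping -t A.B.6.212

        :: When the pings start timing out, run arp -a again; if the entry has flipped back
        :: to the primary node's team MAC, something re-announced the address on the wire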

    Read the article

  • Linux or Windows for a server?

    - by Matt
    I'm a Linux guy when it comes to (web) servers, for the following reasons: it's legally free; software updates are fast (unless you're running CentOS :); services can be managed powerfully from the CLI; it's easy to secure (in terms of users and groups); and web server software is, well, built for Linux. Apache, PHP, Python, etc. are Linux programs that get ported to Windows (I'm 90% sure of this). Unless the web server needed to run ASP, I wouldn't use Windows. My boss's IT friend is a Windows guy, though. He recently got a server set up in the office to run Microsoft Exchange and some other shit. What I'm asking is, if he wanted to start running websites on this thing, what would be good reasoning to convince him otherwise? He's not very bright in terms of IT and the IT friend is all Windows, so it's two against one here... What would you say to running a Windows web server?

    Read the article

  • distributed, fault-tolerant network block device

    - by gucki
    I'm looking for a distributed, fault-tolerant network storage system which exposes block devices (not filesystems) on the clients. Requirements: a client's block device should write simultaneously to several storage nodes; a client's block device should not fail as long as not all of the storage nodes backing it have gone down; the master should automatically redistribute the storage nodes' data when a storage node fails or gets added/removed; a single master (which is for metadata only) is fine. So ideally the architecture would be very similar to MooseFS (http://www.moosefs.org/), but instead of exposing a real filesystem mounted using a FUSE client it would expose block devices on the clients. I know of iSCSI and DRBD, but neither seems to offer what I'm looking for. Or am I missing something?

    Read the article

  • Best way to grow Linux software RAID 1 to RAID 10

    - by Hans Malherbe
    mdadm does not seem to support growing an array from level 1 to level 10. I have two disks in RAID 1. I want to add two new disks and convert the array to a four-disk RAID 10 array. My current strategy:
    1. Make a good backup.
    2. Create a degraded 4-disk RAID 10 array with two missing disks.
    3. rsync the RAID 1 array to the RAID 10 array.
    4. Fail and remove one disk from the RAID 1 array.
    5. Add the available disk to the RAID 10 array and wait for the resync to complete.
    6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.
    The problem is the lack of redundancy at step 5. Is there a better way?
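
    For reference, a minimal sketch of that strategy in mdadm commands; the device names (/dev/md0 for the existing RAID 1, /dev/md1 for the new RAID 10, /dev/sda1 and /dev/sdb1 for the old disks, /dev/sdc1 and /dev/sdd1 for the new ones) are assumptions for illustration, not taken from the question.

        # Step 2: create the degraded 4-disk RAID 10 with two slots left empty
        mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 missing missing

        # Step 3: after creating a filesystem on /dev/md1 and mounting both arrays, copy the data
        rsync -aHAX /mnt/old/ /mnt/new/

        # Step 4: fail and remove one disk from the RAID 1 array
        mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

        # Step 5: add the freed disk to the RAID 10 array and watch the resync
        mdadm --zero-superblock /dev/sdb1
        mdadm /dev/md1 --add /dev/sdb1
        cat /proc/mdstat

        # Step 6: stop the old array and add its remaining disk
        mdadm --stop /dev/md0
        mdadm --zero-superblock /dev/sda1
        mdadm /dev/md1 --add /dev/sda1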

    Read the article

  • Using the link command to keep backups on another drive

    - by Xavier
    I have a folder called /data/backup that does not have much space behind it. I have been told that if I link that folder (/data/backup) to a bigger area, /bigdata/backup for example, I will still be able to run backups to the /data/backup folder: /data/backup will then just be a link, the data will be visible in both folders, and the latter one (/bigdata/backup) will actually hold the backup results while showing up in both places. Since /bigdata/backup has far more disk space, the backup would no longer fail because of space problems in /data/backup. Is this true?
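
    That is essentially how a symbolic link behaves. A minimal sketch of the setup, assuming the existing /data/backup can be moved aside first and that the backup job only cares about the path name:

        # Move the existing (small) backup directory out of the way, keeping its contents
        mv /data/backup /data/backup.old

        # Create the symlink: anything written to /data/backup now lands on /bigdata/backup
        ln -s /bigdata/backup /data/backup

        # Optionally copy the old contents across, then check which filesystem the path really uses
        cp -a /data/backup.old/. /bigdata/backup/
        df -h /data/backup /bigdata/backup

    One caveat worth testing: some backup tools treat a symlinked destination specially (or refuse to follow it), so a quick trial run against the new path is worthwhile.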

    Read the article

  • Strange behavior in networking between 64bit and 32bit

    - by Rob
    I'm seeing some strange behavior in my network setup. I have 2 laptops, one (Lenovo) with Windows 7 Professional 64-bit and another (Acer) with Windows 7 Ultimate 32-bit, plus a wireless router. I'm connecting the two through the router, but with odd results: I can ping both machines, as well as the router, but when I try to access their shared folders (\\computer_name\shared_folder) the connection starts to fail and I need to reboot both machines to get it working again. This only happens sometimes; other times it works.

    Read the article

  • CentOS PAM+LDAP login and host attribute

    - by pianisteg
    My system is CentOS 6.3, OpenLDAP is configured correctly, and PAM authorization works fine. But after setting pam_check_host_attr to yes, all LDAP authentications fail with the message "Access denied for this host". hostname on the server returns the correct value, and the same value is listed in the user's profile. With "pam_check_host_attr no" everything works fine and everyone with a correct uid/password is allowed in. A piece of /var/log/secure:
    Sep 26 05:33:01 ldap sshd[1588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=my-host user=my-username
    Sep 26 05:33:01 ldap sshd[1588]: Failed password for my-username from 77.AA.BB.CC port 58528 ssh2
    Sep 26 05:33:01 ldap sshd[1589]: fatal: Access denied for user my-username by PAM account configuration
    Two other servers (CentOS 5.7 and Debian) authenticate against this LDAP server correctly, even with pam_check_host_attr yes! I didn't edit /etc/security/access.conf; it is empty apart from the default comments. I don't know what to do! How can I fix this?
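
    When pam_check_host_attr is enabled, the LDAP PAM module compares the local hostname against the host attribute values on the user's LDAP entry, so it is worth checking what that attribute contains as seen from the failing server. A hedged sketch, assuming anonymous search works and with dc=example,dc=com as a placeholder base DN:

        # What does this machine think its name is?
        hostname
        hostname -f

        # What host values does the user's entry actually carry, as seen from this server?
        ldapsearch -x -b "dc=example,dc=com" "(uid=my-username)" host

        # The value must match what the module compares against; some setups need the FQDN,
        # and a literal "*" value is commonly used to allow all hosts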

    Read the article

  • Arch linux ati graphics crash after grub install

    - by Jay
    OK, so I've had an Arch Linux install with GNOME 3 for a while now. A while ago I installed Ubuntu on another partition, I think to fix an issue that caused Arch to fail. Everything was working fine, but then I reinstalled GRUB 1 on the Arch partition, since Ubuntu had overwritten it during its install. When I then tried to boot into Arch it booted, but the graphics weren't working correctly: GDM wouldn't even show, and there were weird colors instead. So I uninstalled xf86-video-ati and installed xf86-video-vesa. That made GDM run in fallback mode and I was able to boot into GNOME 3 fallback mode (or whatever it's called). But I can't seem to get the graphics working correctly.
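
    A reasonable first step on Arch would be to put the native driver back and see what the X server actually complains about; this is only a generic troubleshooting sketch, not a known fix for this particular card:

        # Reinstall the ATI driver that was removed (and update, in case the stack is half-upgraded)
        pacman -Syu
        pacman -S xf86-video-ati

        # After the next attempt to start X/GDM, look for errors and warnings in the X log
        grep -E "\(EE\)|\(WW\)" /var/log/Xorg.0.log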

    Read the article

  • Is a cluster the most cost effective redundancy method for windows server 2003?

    - by Ryan
    We had a server with bad RAM, which caused a long outage while it was figured out, and our client-facing apps had to go down for a while. We are coming up with a solution for instant failover but are not sure what the most cost-effective method would be. Is a Windows server cluster the best method for this? Also note we are using Parallels Virtuozzo, if that makes any difference here. We found that Parallels has a documented method for setting this up, but it said it required a domain controller as well as a fibre connection to shared storage; is all that really needed? Thanks.

    Read the article

  • Slow Jquery Animation

    - by Pyronaut
    I have this webpage: http://miloarc.pyrogenicmedia.com/ which at the moment is nothing special. It has a few effects, but none that break the bank. If you mouse over a tile, it should change its opacity to give it a fade effect. This is done through jQuery animation, not through CSS (I do this so it can fade, instead of being a straight change). Everything looks nice when the page loads, and the fades look perfect. In fact, if you drag your mouse all over the place, it gives you a "trail" almost like a snake. Anyway, my next problem is that you will see there is a box in the top left, which is going to tell you information about the tile you are hovering over. This seems to work fine. When you mouse over that information box, it switches its position (so that you can reach the tiles that were previously hidden underneath it). From my understanding, this is all working fine, and to the letter. However, after one move of the info box (one hover), viewing the page in Firefox turns sluggish. As in, after a successful move of the info box, the fade effects become very stuttered, and it doesn't pick up events as fast, meaning you can't draw a "trail" on the screen. Chrome doesn't have this issue; it seems to work fine no matter what. Safari also seems OK, although I do notice that if I move my mouse really fast it does jump a bit, but not as much as Firefox. Internet Explorer doesn't work at all; it seems to crash the browser, and there is an error with the rounded-corner plugin I'm using (not sure why...). All in all, I think whatever I'm doing inside my code must be heavily sluggish, but I don't know where it is happening. Here is the full code, but I would advise going to the link to view everything.
        <script type="text/javascript">
        $(document).ready(function() {
            var WindowWidth = $(window).width();
            var WindowMod = WindowWidth % 20;
            var WindowWidth = WindowWidth - WindowMod;
            var BoxDimension = WindowWidth / 20;
            var BoxDimensionFixed = BoxDimension - 12;
            var dimensions = BoxDimensionFixed + 'px';
            $('.gamebutton').each(function(i) {
                $(this).css('width', dimensions);
                $(this).css('height', dimensions);
            });
            var OuterDivHeight = BoxDimension * 10;
            var TopMargin = ($(window).height() - OuterDivHeight) / 2;
            var OuterDivWidth = BoxDimension * 20;
            var LeftMargin = ($(window).width() - OuterDivWidth) / 2;
            $('#gamePort').css('margin-top', TopMargin).css('margin-left', LeftMargin).css('margin-right', LeftMargin);
            $('.gamebutton img').each(function(i) {
                $(this).hover(
                    function () { $(this).animate({'opacity': 0.65}); },
                    function () { $(this).animate({'opacity': 1}); }
                );
            });
            $('.rounded').corners();
            $('.gamebutton').each(function(i) {
                $(this).hover(function() {
                    $('.gameTitlePopup').html($(this).attr('title'));
                    FadeActive();
                });
            });
            function FadeActive() {
                $('.activeInfo').fadeIn('slow');
            }
            $('#gameInfoLeft').hover(function() {
                $(this).removeClass('activeInfo');
                $(this).fadeOut('slow', function() {
                    $('#gameInfoRight').addClass('activeInfo');
                    FadeActive();
                });
            });
            $('#gameInfoRight').hover(function() {
                $(this).removeClass('activeInfo');
                $(this).fadeOut('slow', function() {
                    $('#gameInfoLeft').addClass('activeInfo');
                    FadeActive();
                });
            });
        });
        </script>
    Thanks for any help :).

    Read the article

  • how to install mpgtx from source code

    - by Ahmet vardar
    I am new to Linux servers. I have an mpgtx folder in my root directory; how can I install it? The README says to run ./configure && make, but when I type this I get a permission denied error. Thanks.
    EDIT: Here are the steps I did:
        root@server [/]# cd /mpgtx
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]# make
        -----------------------------------------------------------------------------
        Hello ! I'm afraid I'm a dummy Makefile. My goal in life is to politely ask
        you to run the configure script to actually generate a real Makefile. Would
        you be kind enough to type "./configure --help" to see the options that will
        suit your needs ? Please note that typing "./configure" without option will
        generate a Makefile that will suit most people needs. I wish you a good day.
        Please don't drive to fast.
        -----------------------------------------------------------------------------
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]#
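
    The "Permission denied" here usually just means the configure script has lost its execute bit (common after unpacking an archive as a different user or copying from a filesystem that drops permissions). A minimal sketch of the usual workarounds:

        cd /mpgtx

        # Either restore the execute bit on the script...
        chmod +x configure
        ./configure && make

        # ...or run it through a shell explicitly, which does not need the execute bit
        sh ./configure && make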

    Read the article

  • Will Dolphin emulator run (smoothly) on this machine?

    - by Mark
    Product specs: Intel® NM10 Express chipset with an Intel® Atom® D525 (dual-core, 1.8 GHz). I think the most I can put in there for RAM is 4 GB (they don't make 8 GB SODIMMs, do they?). The Dolphin site is pretty vague about requirements: Windows XP or higher, Linux, or Mac OS X (Intel); a fast CPU with SSE2; a GPU with Pixel Shader 2.0 or greater (some integrated graphics chips work, but it depends on the model, and only with DirectX 9). I'm trying to build a lightweight, quiet little machine for streaming video and playing emulators, and I'm trying to figure out the minimum requirements I will need to do what I want. I know I'm not supposed to ask for product recommendations here, so if you could just advise on the minimum requirements in terms of CPU, graphics card, and RAM, that'd be helpful.

    Read the article

  • Internet very slow when upgrading to Ubuntu 9.10

    - by roojoo
    I was running Ubuntu 8.x on my desktop and everything worked fine. I'm using wired internet and it worked perfectly; pages loaded pretty fast. However, when I decided to upgrade to 9.10 the upgrade failed at some point, yet I was left with what appeared to be Ubuntu 9.10. Since then the internet has been weird. When I go to a website it takes at least 10 seconds for the page to display, but if I'm on a site and navigate to other pages on the same website it loads quickly. This never happened prior to the upgrade. I thought this might be due to the upgrade not installing correctly, so I did a fresh install of Xubuntu 9.10, but the problems are still the same. I'm writing this on a Vista machine over the wireless network and the internet is fine. Does anyone have any ideas about the issue? Thanks.

    Read the article

  • Will this increase my VPS failing rate ?

    - by Spencer Lim
    Will it increase the failure rate of my virtual private server if I install Microsoft Windows Server 2008 Enterprise, SQL Server 2008 Enterprise, IIS 7.5, ASP.NET MVC 2, Microsoft Exchange and Team Foundation Server all on one mini VPS? The VPS is on a shared Dell PowerEdge R710 plan: 16 GB of DDR3 ECC RAM, of which 1 GB goes to this VPS; a Dell PERC 6i RAID controller (that alone costs about 1.5k-2k) with 15K RPM 146 GB SAS HDDs, of which 33 GB goes to this VPS (each HDD is freaking fast, over 300 MB/s read/write possible with proper tuning); a Dell motherboard with twin redundant PSUs (870 W, 85% efficiency); and two Intel Xeon 5502 (quad-core) CPUs, so about 8 physical cores, fairly shared. Is there any rule of thumb about how many services one VPS should run? Thanks for any reply.

    Read the article

  • Can KVM CPU assignment count differ from physical hosts CPU count?

    - by javano
    I have read this question. I already knew that I could, for example, have a quad-core machine with four guests, each having two vCPUs. Since they won't all require 100% CPU usage all the time, the scheduler handles this for me. My question is about how this relates to a failover or migration situation: if host1 has two dual-core CPUs and I assign guest1 four vCPUs (so it accesses all four physical cores), what will happen if I try to migrate it to host2, which only has one dual-core CPU? Can qemu-kvm emulate more vCPUs than there are physical cores? Or would I have to shut down the virtual machine, change the CPU assignment, migrate it, and then boot it back up (so no live migration)? Many thanks.
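
    For the offline variant described at the end, a minimal libvirt sketch of what that could look like, assuming the guest is managed with virsh and is simply named guest1 (a placeholder):

        # Shut the guest down cleanly and lower its vCPU count in the persistent config
        virsh shutdown guest1
        virsh setvcpus guest1 2 --config --maximum
        virsh setvcpus guest1 2 --config

        # Move the definition and storage to host2 by your usual means, then start it there
        virsh start guest1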

    Read the article

  • what might cause a print error in perl?

    - by Mark J Seger
    I have a long-running script that every hour opens a file, prints to it and closes the file. I've recently found that, very rarely, the print fails. I know this not because I'm testing the status of the print itself, but because entries are missing from the file until the system is actually rebooted! I do trap file open failures and write a message to syslog when that happens, and I'm not seeing any open failures, so I'm now guessing it may be the print that is failing. I'm not trapping print failures (which I suspect most people don't), but I am now going to update that one print. Meanwhile, my question is: does anyone know what kinds of situations could cause a print statement to fail when there is plenty of disk storage and no contention for a file that has been successfully opened in append mode?

    Read the article

  • Can't login to a new mysql user

    - by mostar
    Hi, when I create a new MySQL user it is impossible to log in with that user and password. Only if I create a user without a password can I log in. For example:
        mysql -u root -phererootpass
        grant all privileges on mydb.* to testuser@'%' identified by '' with grant option;
        grant all privileges on mydb.* to testuser2@'%' identified by 'mypass' with grant option;
        FLUSH PRIVILEGES;
        exit;
        mysql -u testuser           #<<< works fine
        mysql -u testuser2 -pmypass #<<< fails to login
        ERROR 1045 (28000): Access denied for user 'testuser2'@'localhost' (using password: YES)
    I'm using MySQL 5.0 on Red Hat 5. Please advise. Mostar
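
    One common cause on a default MySQL 5.0 install is the anonymous account ''@'localhost': because its host part is more specific than '%', it matches a local connection before 'testuser2'@'%' does, and since it has no password the password login is refused. A hedged sketch of how that could be checked and worked around (the exact accounts present on the server are an assumption):

        -- See which accounts exist; an anonymous row has an empty User column
        SELECT user, host FROM mysql.user ORDER BY user, host;

        -- Either drop the anonymous local account...
        DROP USER ''@'localhost';

        -- ...or create the user explicitly for localhost so the specific entry matches first
        GRANT ALL PRIVILEGES ON mydb.* TO 'testuser2'@'localhost' IDENTIFIED BY 'mypass' WITH GRANT OPTION;
        FLUSH PRIVILEGES;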

    Read the article

  • SSL FTP fails on Windows 7 but not Windows XP clients

    - by Andrew Neely
    We currently use a free SSL-FTP client called Move-It-Freely to transmit data from a custom data entry program at over forty facilities scattered around the state to our central server. Under XP, it works flawlessly. Some facilities have upgraded to Windows 7. On these machines, uploads (transfers to us) work, downloads (transfers from us to them) fail. Replacing the Windows 7 machine with an XP machine solves the problem. We have also verified that the network firewall settings have not changed. This problem persists even if Windows firewall is not running. We were able to remote into one of the Windows 7 machines to verify that the Windows firewall was indeed turned off. We cannot replicate the problem on our own Windows 7 machines, and are at a loss of how to fix this feature for our customers. The data contain health-related information, and needs to be encrypted (hence SSL-FTP.) Despite hours spent on Google, we cannot find a solution.

    Read the article

  • Best practice? Using DPM to backup VMs within each VM or through the host?

    - by andrew
    We've got two Hyper-V hosts running multiple VMs (all flavors of Windows Server). One of the VMs is running MS Data Protection Manager 2010, which runs beautifully (most of the time) and is connected to a separate NAS via iSCSI for the DPM storage. I noticed that when I installed the DPM agent on the Hyper-V hosts, it enumerates the VMs in the DPM protection listing. I don't want to burn through my storage space too fast with duplicate protection, so I was wondering: is it recommended to back up VMs through the host, or is it better to install the DPM agent on each VM and back it up as I would any other machine? It seems as though most people (currently including me) do it the second way, but is there any advantage to including the entries under Hyper-V (Backup Using Child Partition Snapshot)?

    Read the article

  • Postgresql init.d script not working

    - by Bram Jetten
    I installed PostgreSQL 8.4 on my VPS with Ubuntu 10.04. Default setup, nothing unusual. After the installation the database server is automatically started and runs great. The installer also puts an init.d script in place. This script, however, doesn't seem to affect Postgres.
        $ sudo /etc/init.d/postgresql stop
    The above line does not stop the server. The command does not fail or show any message, and the logs don't say anything either. After killing all postgres processes with killall I cannot get Postgres working again using the init script. When rebooting my VPS it somehow starts up and works again.
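
    On Debian/Ubuntu, PostgreSQL is managed per cluster through the postgresql-common wrappers, and on 10.04 the 8.4 package normally installs a versioned init script as well. A hedged sketch for checking what the wrappers think is running, assuming the default 8.4/main cluster:

        # Show which clusters exist and whether they are online
        pg_lsclusters

        # Try the versioned init script that the 8.4 package provides
        sudo /etc/init.d/postgresql-8.4 stop

        # Or drive the cluster directly through the wrapper
        sudo pg_ctlcluster 8.4 main stop
        sudo pg_ctlcluster 8.4 main start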

    Read the article

  • How to use multiple dns?

    - by Enrichman
    When I connect at work, the network assigns me a DNS server that works fine. Then, when I connect to the VPN, I receive a different DNS server. With this one I can reach the VPN owner's servers, but I'm not able to get out to the internet. BUT if I switch the DNS back to the old one I can surf again (still connected to the VPN, but then I cannot reach their servers). Recap:
    DNS1) MyPC - CompanyProxy - Internet
    DNS2) MyPC - CompanyProxy - VPN - no internet (can ping VPN servers)
    DNS1) MyPC - CompanyProxy - VPN - Internet (cannot ping VPN servers)
    Weirdest thing: I'm able to do an nslookup from anywhere, but ping fails. Is it possible to use both DNS servers? Or set up a DNS server just for the browser? I'm quite lost...

    Read the article

  • (PHP) User is being forced to RE-LOGIN after trying to do something on an admin page

    - by hatorade
    I have created an admin panel for a client in PHP, which requires a login. Here is the code at the top of the admin page requiring the user to be logged in:
    admin.php
        <?php
        session_start();
        require("_lib/session_functions.php");
        require("_lib/db.php");
        db_connect();
        //if the user has not logged in
        if(!isLoggedIn())
        {
            header('Location: login_form.php');
            die();
        }
        ?>
    Obviously, the if statement is what catches them and forces them to log in. Here is the code on the resulting login page:
    login_form.php
        <form name="login" action="login.php" method="post">
            Username: <input type="text" name="username" />
            Password: <input type="password" name="password" />
            <input type="submit" value="Login" />
        </form>
    Which posts info to this controller page:
    login.php
        <?php
        session_start(); //must call session_start before using any $_SESSION variables
        include '_lib/session_functions.php';
        $username = $_POST['username'];
        $password = $_POST['password'];
        include '_lib/db.php';
        db_connect(); // Connect to the DB
        $username = mysql_real_escape_string($username);
        $query = "SELECT password, salt FROM users WHERE username = '$username';";
        $result = mysql_query($query);
        if(mysql_num_rows($result) < 1) //no such user exists
        {
            header('Location: login_form.php?login=fail');
            die();
        }
        $userData = mysql_fetch_array($result, MYSQL_ASSOC);
        db_disconnect();
        $hash = hash('sha256', $password . $userData['salt']);
        if($hash != $userData['password']) //incorrect password
        {
            header('Location: login_form.php?login=fail');
            die();
        }
        else
        {
            validateUser(); //sets the session data for this user
        }
        header('Location: admin.php');
        ?>
    and the session functions page that provides the login functions contains this:
    session_functions.php
        <?php
        function validateUser()
        {
            session_regenerate_id(); //this is a security measure
            $_SESSION['valid'] = 1;
            $_SESSION['userid'] = $username;
        }

        function isLoggedIn()
        {
            if($_SESSION['valid'])
                return true;
            return false;
        }

        function logout()
        {
            $_SESSION = array(); //destroy all of the session variables
            if (ini_get("session.use_cookies"))
            {
                $params = session_get_cookie_params();
                setcookie(session_name(), '', time() - 42000,
                    $params["path"], $params["domain"],
                    $params["secure"], $params["httponly"]
                );
            }
            session_destroy();
        }
        ?>
    I grabbed the session_functions.php code from an online tutorial, so it could be suspicious. Any ideas why the user logs in to the admin panel, tries to do something, is forced to re-login, and THEN is allowed to do stuff like normal in the admin panel?

    Read the article

  • copy boot-able partition

    - by Dima
    I have a disk image with 3 partitions. The first partition (hd0,0) is bootable with GRUB 1, with the following GRUB configuration file:
        default=0
        timeout=5

        title Bank A
        root (hd0,1)
        chainloader +1

        title Bank B
        root (hd0,2)
        chainloader +1
    The partitions (hd0,1) and (hd0,2) are also bootable. I'm trying to clone partition (hd0,1) to (hd0,2) by creating a device map using kpartx and copying the whole partition with dd. The problem is that after cloning, the cloned partition does not boot (but all files are OK). What is wrong? I need both partitions to be identical (I'm using them for failover purposes in an embedded device).
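
    For reference, a minimal sketch of the clone step as described; the image name and the /dev/mapper entry names are placeholders, since the exact loop device numbering depends on the system:

        # Map the partitions inside the image to /dev/mapper entries
        kpartx -av disk.img

        # Copy partition 2 over partition 3, boot sector included
        dd if=/dev/mapper/loop0p2 of=/dev/mapper/loop0p3 bs=4M conv=notrunc
        sync

        # Remove the mappings again
        kpartx -d disk.img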

    Read the article
