Daily Archives

Articles indexed Monday, November 11, 2013

Page 5 of 19

  • Is there any well-known paradigm for iterating enum values?

    - by SadSido
    I have some C++ code in which the following enum is declared:

        enum Some {
            Some_Alpha = 0,
            Some_Beta,
            Some_Gamma,
            Some_Total
        };

        int array[Some_Total];

    The values of Alpha, Beta and Gamma are sequential, and I gladly use the following loop to iterate through them:

        for (int someNo = (int)Some_Alpha; someNo < (int)Some_Total; ++someNo) {}

    This loop is fine until I decide to change the order of the declarations in the enum, say, making Beta the first value and Alpha the second. That invalidates the loop header, because now I have to iterate from Beta to Total. So, what are the best practices for iterating through an enum? I want to iterate through all the values without changing the loop header every time. I can think of one solution:

        enum Some {
            Some_Start = -1,
            Some_Alpha,
            ...
            Some_Total
        };

        int array[Some_Total];

    and iterate from (Start + 1) to Total, but it seems ugly and I have never seen anyone do it in code. Is there any well-known paradigm for iterating through an enum, or do I just have to fix the order of the enum values? (Let's pretend I really have some awesome reasons for changing the order of the enum values)...
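
    A common workaround (a sketch, not from the question itself) is to keep a First alias alongside the trailing Total sentinel; reordering the named values between them then never breaks the loop header:

        // Sketch: Some_First aliases whichever enumerator happens to come
        // first, so the loop survives any reordering of the middle values.
        enum Some {
            Some_Beta = 0,
            Some_First = Some_Beta,
            Some_Alpha,
            Some_Gamma,
            Some_Total   // always declared last: doubles as the count
        };

        int array[Some_Total];

        void touchAll() {
            for (int someNo = Some_First; someNo < Some_Total; ++someNo) {
                array[someNo] = someNo;   // visits every enumerator once
            }
        }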

    Read the article

  • Turn off Windows Defender on your builds

    - by george_v_reilly
    I've spent some time this evening profiling a Python application on Windows, trying to find out why it was so much slower than on Mac or Linux. The application is an in-house build tool which reads a number of config files, then writes some output files. Using the RunSnakeRun Python profile viewer on Windows, two things immediately leapt out at me: we were running os.stat a lot, and file.close was really expensive.

    A quick test convinced me that we were stat-ing the same files over and over. It was a combination of explicit checks and implicit code, like os.walk calling os.path.isdir. I wrote a little cache that memoizes the results, which brought the cost of the os.stat calls down from 1.5 seconds to 0.6.

    Figuring out why closing files was so expensive was harder. I was writing 77 files, totaling just over 1 MB, and it was taking 3.5 seconds. It turned out that it wasn't the UTF-8 codec or newline translation; it was simply that closing those files took far longer than it should have. I decided to try a different profiler, hoping to learn more. I downloaded the Windows Performance Toolkit, recorded a couple of traces of my application running, and looked at them in the Windows Performance Analyzer, whereupon I saw that in each case the CPU spike of my app was followed by a CPU spike in MsMpEng.exe.

    What's MsMpEng.exe? It's Microsoft's antimalware engine, at the heart of Windows Defender. I added my build tree to the list of excluded locations, and my runtime halved. The 3.5 seconds of file closing dropped to 60 milliseconds, a 98% reduction. The moral of this story is: don't let your virus checker run on your builds.
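
    A minimal sketch of the stat-memoization idea described above (the helper names are illustrative, not from the post; note that the cache assumes the tree does not change mid-build):

        import os
        import stat
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def cached_stat(path):
            # Each distinct path touches the filesystem only once.
            return os.stat(path)

        def cached_isdir(path):
            # Mirrors os.path.isdir, reusing the memoized stat result.
            try:
                return stat.S_ISDIR(cached_stat(path).st_mode)
            except OSError:
                return False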

    Read the article

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense. I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs, not a contiguous block. Due to vastly improved connectivity (fibre) at home, I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do, though, is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to keep using the colo ones for now: first, ease of transition (lots of domains, DNS, hard-coded IPs in programs, etc.); second, connectivity fallback: if my primary line goes down and traffic switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs, which is only presented on the primary WAN interface. What I'd like to achieve is that my home virtualisation server (Proxmox/Debian-based) "dials in" to the server in the colo facility (also Proxmox/Debian) via a VPN or similar, and gets to use the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established over the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty; I just don't really know where to start.
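
    One possible shape for this (a sketch only; the tunnel type, interface names and addresses are illustrative, with 203.0.113.10 standing in for one of the colo /32s):

        # On the colo box: point the public /32 down a tunnel to home.
        ip tunnel add gre1 mode gre remote $HOME_ENDPOINT local $COLO_IP ttl 255
        ip link set gre1 up
        ip addr add 10.99.0.1/30 dev gre1
        ip route add 203.0.113.10/32 via 10.99.0.2 dev gre1

        # On the home Proxmox host: bring up the other tunnel end and
        # attach the public address to the VM bridge (or loopback).
        ip tunnel add gre1 mode gre remote $COLO_IP local $HOME_ENDPOINT ttl 255
        ip link set gre1 up
        ip addr add 10.99.0.2/30 dev gre1
        ip addr add 203.0.113.10/32 dev vmbr0

    An OpenVPN tunnel with the same static routes would work identically and survives NAT and endpoint changes more gracefully, which matters for the fallback-line scenario.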

    Read the article

  • Collectd on ubuntu with perl plugin support

    - by Roman
    For days I have been struggling to enable Perl plugin support in collectd. I have installed collectd 5.4.0 on an AWS Ubuntu 13.04 instance, and configured and compiled it. I have even installed libperl-dev, but when I run ./configure from the collectd source directory, it still reports "perl ....(needs libperl)". Enabling the Perl plugin in collectd.conf didn't help either; in the logs I see:

        plugin_load: Could not find plugin "perl" in /opt/collectd/lib/collectd

    and indeed there is no perl.so in that folder. Can someone help me out with this?
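
    A sketch of the usual build sequence (assuming the stock Ubuntu toolchain; flag names as in collectd's configure script):

        sudo apt-get install build-essential libperl-dev perl
        cd collectd-5.4.0
        ./configure --prefix=/opt/collectd --enable-perl
        # The summary printed at the end must now list the perl plugin as
        # enabled; if it still says "needs libperl", check config.log for
        # the failing test before building.
        make && sudo make install
        ls /opt/collectd/lib/collectd/perl.so   # should now exist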

    Read the article

  • How to update grub with puppet?

    - by Tombart
    I would like to use puppet to change a line in /etc/default/grub to this:

        GRUB_CMDLINE_LINUX="cgroup_enable=memory"

    I've tried to use augeas, which seems to do this magic:

        exec { "update_grub":
          command     => "update-grub",
          refreshonly => true,
        }

        augeas { "grub-serial":
          context => "/files/etc/default/grub",
          changes => [
            "set /files/etc/default/grub/GRUB_CMDLINE_LINUX[last()] cgroup_enable=memory",
          ],
          notify  => Exec['update_grub'],
        }

    It seems to work, but the resulting string is not in quotes, and I also want to make sure that any other values will be separated by spaces:

        GRUB_CMDLINE_LINUX=cgroup_enable=memory

    Is there some mechanism to append values and quote the whole thing?

        GRUB_CMDLINE_LINUX="quiet splash cgroup_enable=memory"
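
    One way to get the quotes into the file (a sketch; it assumes the Shellvars lens handles /etc/default/grub, and it sets the full argument list explicitly rather than appending blindly): wrap the value in literal double quotes inside the Augeas command, so Augeas writes them out verbatim:

        augeas { "grub-cmdline":
          context => "/files/etc/default/grub",
          changes => [
            # Outer Puppet quotes, inner escaped quotes end up in the file.
            "set GRUB_CMDLINE_LINUX '\"quiet splash cgroup_enable=memory\"'",
          ],
          notify  => Exec['update_grub'],
        }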

    Read the article

  • How to solve "Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\\... " Error?

    - by Kiran Rs
    I have a contact page where users can contact me via a form, but I'm getting this error:

        Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\INETPUB\VHOSTS\nextoption.in\httpdocs\auto-replay\contact.php on line 33

    My PHP code is:

        if (isset($_POST['send'])) { // if "email" is filled out, send the email
            $email1 = $_POST['email'];
            $headers  = "From: My site\r\n";
            $headers .= "Reply-To: [email protected]\r\n";
            $headers .= "Return-Path: [email protected]\r\n";
            $headers .= "X-Mailer: Drupal\n";
            $headers .= 'MIME-Version: 1.0' . "\n";
            $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
            $to = "[email protected]";
            $subject = "Test mail";
            $message = "Hello! This is a simple email message.";
            $from = $email1;
            mail($to, $subject, $message, $headers);
            echo '<script>alert("Enquiry form submitted successfully! We\'ll get back to you soon.");</script>';
        }

    What is my mistake? Is the fault in the SMTP server?
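
    For context, a 530 "sender domain not local" usually means the SMTP server only relays mail whose sender belongs to a domain it hosts. A hedged sketch of the usual fix (the mailbox name is a placeholder; on Windows, mail() relays through the SMTP server configured in php.ini):

        // Use a sender that is local to the server, and keep the
        // visitor's address in Reply-To rather than From.
        ini_set('sendmail_from', 'forms@nextoption.in');
        $headers  = "From: forms@nextoption.in\r\n";
        $headers .= "Reply-To: " . $email1 . "\r\n";
        mail($to, $subject, $message, $headers);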

    Read the article

  • 4 Magento requests per second = 210 Mbit/s memcache bandwidth?

    - by Karsten
    After searching Server Fault for similar questions without success, here are my numbers for one Magento instance running on multiple servers: behind Varnish, about 4 requests per second hit the web servers, and the Magento cache is configured to use one separate memcache server, where I'm measuring about 210 Mbit/s of bandwidth usage. Compared to other projects, Magento and non-Magento alike, this number seems way off (as in extremely high). I'd like to get some data to compare against, or even better, any idea of what exactly causes this, how to track it down, and how to improve the situation.

    Read the article

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table - errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2. Insert load: a 50 million row dataset, with 3 data nodes.

        LOAD DATA INFILE '/input_50m/Table_1_sorted.csv'
        IGNORE INTO TABLE nw_ndb
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'

    We recently set up a new MySQL Cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" after inserting 5 million rows into a single table. We are using the LOAD DATA INFILE command to load the data into the table from a CSV file. The servers (mysqld and ndb nodes) have good hardware: 126 GB RAM, with 32 GB allocated to mysqld. I tried the settings below with no effect:

        SET autocommit=0;
        SET FOREIGN_KEY_CHECKS=0;
        SET unique_checks=0;
        SET GLOBAL ndb_batch_size=8*1024*1024;
        SET GLOBAL ndb_cache_check_time = 1000;
        SET GLOBAL ndb_index_stat_cache_entries = 10000000;
        SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
        SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?

    Read the article

  • Move files from an FTP server to S3

    - by lev
    I would like to set up an FTP server where users upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here is what I have already tried, with no success:

      • Mounting the S3 bucket using s3fs. I followed the instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in production.

      • Using vsftpd and syncing the files periodically with s3cmd sync via cron. The problem with that approach is that s3cmd can start running in the middle of a file upload and start syncing the incomplete file. Also, s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing whether I can delete the files after the sync command finishes.

      • Using pure-ftpd's upload script feature (which allows running a script after a file finishes uploading), but I noticed that if the upload failed midway, the script runs anyway, and I have no way of knowing whether the upload was successful or not.

    I've been at it for a few days now, and I'm at a loss. Any suggestions will be welcomed.
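
    On the feedback point: s3cmd does report failure through its exit status, so an upload hook can delete the local copy only when the transfer succeeded. A sketch of such a pure-ftpd hook (the bucket name is a placeholder; pure-uploadscript passes the finished file's path as the first argument):

        #!/bin/sh
        # Invoked by pure-uploadscript once an upload completes.
        FILE="$1"
        if s3cmd put "$FILE" "s3://my-bucket/$(basename "$FILE")"; then
            rm -- "$FILE"   # only delete after a confirmed upload
        fi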

    Read the article

  • Can I configure a DNS cache not to forward AAAA queries?

    - by itsadok
    I'm setting up an internal DNS cache because my firewall is having trouble handling all the sessions created by DNS requests. I tried using BIND9, dnsmasq and DJB dnscache; they all help reduce the number of requests leaving my network, but there are still a lot of requests being made. Looking at the log files, and at tcpdump and dnstop output, it seems that requests that return SERVFAIL do not get cached at all. A lot of those failed requests are AAAA requests, which is a shame, because I do not have IPv6 enabled on any server. I've looked at several ways to help the situation, and I think that if I could somehow prevent AAAA queries from being forwarded by the DNS cache, it would reduce the number of requests significantly. The closest thing I found was the filter-aaaa-on-v4 option in BIND9; however, this only removes the record from the server's response, and does not prevent the query from being forwarded. Any help would be appreciated.
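
    For reference, this is the shape of the BIND 9 option mentioned above (a sketch; the option needs a BIND build with AAAA filtering compiled in, and as noted it rewrites answers rather than suppressing the upstream query):

        options {
            // Strip AAAA records from answers sent to IPv4 clients.
            // The recursive lookup for the AAAA still happens upstream.
            filter-aaaa-on-v4 yes;
        };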

    Read the article

  • Configuring nginx to check for files on disk in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

        +-- public
        |   +-- components
        |   +-- css
        |   +-- img
        +-- routes
        +-- views

    Essentially, the root is set to public. I want all requests destined for

        /components/
        /css/
        /img/

    to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an I/O operation:

        /foo/asdf
        /bar
        /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxying to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    I would just like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

    However, there is nothing I can find that will give me

        location / { try_files @proxy; }

    How do I get the effect I want?
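
    One arrangement that avoids the missing one-argument form (a sketch built from the config above): keep try_files for the three asset prefixes, and let the catch-all location proxy directly, so no disk lookup ever happens for other paths:

        location / {
            # No try_files here: straight to the app, disk untouched.
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }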

    Read the article

  • Windows Server Configuration Management Best Practices

    - by Anton Gogolev
    Chef/Puppet/Ansible are cool and all, but they are second-class citizens on Windows at best. We have a bunch of "snowflake" (one-of-a-kind) machines (bare-metal and virtual) that nobody really knows what's going on with. What I want is to start establishing basic configuration management for those servers: installing Windows, installing and enabling various Roles and Features, setting up Services, Shares and Users, and deploying web apps. PowerShell DSC looks promising, but it's not here yet and appears to be over-engineered; Puppet and the like are, again, not first-class. There's a bunch of tools and TLAs like Windows ADK, DISM, OCSetup, etc., and it seems to me that the configuration-management story on Windows is not precisely rainbows and unicorns. What I want is a Puppet/Chef-like, lightweight tool (no System Center Configuration Manager, please) which would allow us to "version-control our server infrastructure" and bring all the benefits of CM. So, where do I look for a tool that does this kind of thing?
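
    For a sense of what the DSC option looks like, a minimal sketch (node and path names are illustrative): a declarative document that can live in version control and be reapplied to converge a server:

        Configuration WebServerRole {
            Node "SNOWFLAKE01" {
                WindowsFeature IIS {
                    Name   = "Web-Server"
                    Ensure = "Present"
                }
                Service W3SVC {
                    Name        = "W3SVC"
                    StartupType = "Automatic"
                    State       = "Running"
                }
            }
        }
        # Compile to a MOF document, then push it at the node.
        WebServerRole -OutputPath C:\DSC
        Start-DscConfiguration -Path C:\DSC -Wait -Verbose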

    Read the article

  • Not able to connect to port 80 using telnet?

    - by pranitkothari
    When I try to connect to my Windows PC with telnet, it gives me the following error:

        Welcome to Microsoft Telnet Client
        Escape Character is 'CTRL+]'
        Microsoft Telnet> open 192.168.220.135 80
        Connecting To 192.168.220.135...Could not open connection to the host, on port 80: Connect failed
        Microsoft Telnet>

    I have the firewall off on both machines. Can you please suggest how to connect to port 80 using telnet? Thanks.
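
    A "Connect failed" with the firewall off usually just means nothing is listening on that port. A quick check worth running on the target machine itself (standard Windows commands, shown as a sketch):

        rem List anything bound to port 80 and the owning process ID.
        netstat -ano | findstr :80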

    Read the article

  • Capturing same interface with tshark with same or different capture filters

    - by Pankaj Goyal
    I am stuck in a situation where an interface will be captured more than once, like:

        $ tshark -i rpcap://1.1.1.1/lo -f "ip proto 1" -i rpcap://1.1.1.1/lo -f "ip proto 132"

    or (same filter):

        $ tshark -i rpcap://1.1.1.1/lo -f "ip proto 1" -i rpcap://1.1.1.1/lo -f "ip proto 1"

    What will happen in each case? In the first case, will the capture filters get OR'ed or AND'ed? In the second case, will the same packets be captured twice?

    Read the article

  • How to use 3 TB SAS HDDs which aren't supported by the Dell PERC 6/i

    - by Sunry
    We bought several 3 TB SAS hard disks to use with our Dell R710 server, but we found that the R710 can't recognize them, and then discovered that the RAID card is a PERC 6/i, which doesn't support them. Dell told us that the H700 and H800 support 3 TB drives, and we thought we could buy an H700, but the required SAS cable is not available on the market right now. It also seems very hard for us to swap the RAID card, and we have no suitable tools for the special screws. So I want to know whether there is any other method to use these 3 TB drives instead of putting them in the warehouse. Can I low-level format one to 2 TB and plug it into the R710? Or use a SAS-to-USB adapter to use it as an external USB disk? I just want to use them with the Dell R710, even externally.

    Read the article

  • How to limit disk performance?

    - by DrakeES
    I am load-testing a web application and studying the impact of some config tweaks (related to disk I/O) on the overall app performance, i.e. the number of users that can be handled simultaneously. The problem is that I hit 100% CPU before I can see any effect from the disk-related config settings. I am therefore wondering whether there is a way to deliberately limit disk performance so that it becomes the bottleneck, and the tweaks I am playing with actually start affecting performance. Should I just keep the hard disk busy with something else? What would serve this purpose best? More details (probably irrelevant, but anyway): PHP/Magento/Apache, studying the impact of apc.stat. Setting it to 0 stops APC checking PHP scripts for modification, which should increase performance where the disk is the bottleneck. Using JMeter for benchmarking.
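
    One way to make the disk the bottleneck artificially (a sketch for Linux with the cgroup v1 blkio controller; the device numbers, limit and PID are illustrative):

        # Create a cgroup and cap reads on /dev/sda (major:minor 8:0)
        # to 1 MB/s, then move the Apache process into it.
        mkdir /sys/fs/cgroup/blkio/slowdisk
        echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowdisk/blkio.throttle.read_bps_device
        echo 1234 > /sys/fs/cgroup/blkio/slowdisk/tasks   # Apache's PID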

    Read the article

  • Cannot start Oracle XE 11gR2 Net Listener and Database on Ubuntu 13.04

    - by hydrology
    I have been following the setup steps in this article for installing Oracle XE 11g R2 on Ubuntu 13.04. The environment variables PATH, ORACLE_HOME, ORACLE_SID, NLS_LANG and ORACLE_BASE have all been set up:

        simongao:~ 06:16:38$ echo $PATH
        /usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/simongao/adt-bundle-linux-x86_64-20130219/sdk/platform-tools:/u01/app/oracle/product/11.2.0/xe/bin
        simongao:~ 06:18:36$ echo $ORACLE_HOME
        /u01/app/oracle/product/11.2.0/xe
        simongao:~ 06:23:29$ echo $ORACLE_SID
        XE
        simongao:~ 06:23:35$ echo $ORACLE_BASE
        /u01/app/oracle
        simongao:~ 06:23:37$ sudo echo $LD_LIBRARY_PATH
        /u01/app/oracle/product/11.2.0/xe/lib
        simongao:~ 06:23:48$ echo $NLS_LANG
        /u01/app/oracle/product/11.2.0/xe/bin/nls_lang.sh

    However, when I try to start the service, I receive the following error:

        simongao:~ 06:18:40$ sudo service oracle-xe start
        Starting Oracle Net Listener.
        Starting Oracle Database 11g Express Edition instance.
        Failed to start Oracle Net Listener using /u01/app/oracle/product/11.2.0/xe/bin/tnslsnr and Oracle Express Database using /u01/app/oracle/product/11.2.0/xe/bin/sqlplus.

    Read the article

  • Directories shown as files when sharing a mounted CIFS drive

    - by Johan Sigfred Abildskov
    I have an issue where a directory is shown as a file when accessing a Samba share (on Ubuntu 12.10) from a Windows machine. The output of ls -ll in the folder on the Linux box is as follows:

        chubby@chubby:/media/blackhole/_Arkiv$ ls -ll
        total 0
        drwxrwxrwx 0 jv users 0 Jun 18  2012 _20
        drwxrwxrwx 0 jv users 0 Apr 17  2012 _2006
        drwxrwxrwx 0 jv users 0 Apr 17  2012 _2007
        drwxrwxrwx 0 jv users 0 May 12  2011 _2008
        drwxrwxrwx 0 jv users 0 Feb 19 09:53 _2009
        drwxrwxrwx 0 jv users 0 Dec 20  2011 _2010
        drwxrwxrwx 0 jv users 0 May  8  2012 _2011
        drwxrwxrwx 0 jv users 0 Mar  5 11:37 _2012
        drwxrwxrwx 0 jv users 0 Feb 28 10:09 _2013
        drwxrwxrwx 0 jv users 0 Feb 28 11:18 _Mailarkiv
        drwxrwxrwx 0 jv users 0 Jan  3  2011 _Praktikanter

    The entry in /etc/fstab is:

        # Mounting blackhole
        //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

    When I access the share directly from the NAS on my Windows box, there are no issues. The Samba version is 3.6.6, but I couldn't find anything in the changelogs that seems relevant. I've tried mounting it in different locations with different permissions, users and groups, but I have not made any progress. Due to my low reputation on Server Fault (I'm mostly a Stack Overflow user) I'm unable to post a screenshot showing the directories listed as files. If I type the full path in Explorer, the directory listing works excellently, except that any subdirectories are then shown as files. Any line of attack on this issue would be greatly appreciated; please let me know if I have provided insufficient details. Edit: the same share, when accessed from OS X, works perfectly, listing the directories as directories. Best regards!
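
    The zero hard-link counts on those directories hint that the CIFS unix extensions are mangling the metadata that Samba then re-serves to Windows. A hedged variation worth trying (the same fstab line with the extensions disabled via the nounix option from mount.cifs(8)):

        //192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,nounix,file_mode=0777,dir_mode=0777 0 0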

    Read the article

  • How to back up virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How do I back up virtual machines as quickly and storage-efficiently as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downside of copying whole VMDK files, not only their actually-used space; so, for a 30 GB VMDK of which only 1 GB is used, the backup would take a full 30 GB of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied and/or to blow up to their full provisioned size; they are also not recommended for actual VM performance. It is OK for me to shut down the VMs before backing them up (i.e. I don't need "live" backups), but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? And what about restores?
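
    A sketch of one storage-friendly approach (paths and datastore names are illustrative): with the VM powered off, vmkfstools on the ESXi shell can clone a disk into a thin copy, so only the blocks actually in use are written to the backup volume:

        # Clone vm1's disk to a thin-provisioned copy on the backup datastore.
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   -d thin /vmfs/volumes/backup/vm1/vm1.vmdk

    Restoring is the same operation in reverse, cloning back with whatever disk format (thin, zeroedthick) the restore target should have.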

    Read the article

  • ECMP Load Balancing in JUNOS

    - by SpacemanSpiff
    I'm trying to figure out how to use ECMP load balancing in JUNOS. I know this isn't the best way to load balance, but it's quick and dirty and gets done what I need. In ScreenOS this was pretty easy. Device: SRX220, JunOS 10.3R2.11. Here's what I've got so far:

        routing-options {
            static {
                route 0.0.0.0/0 {
                    next-hop [ 1.1.1.1 1.1.1.2 ];
                    metric 10;
                }
            }
            maximum-paths 2;
        }

    Will that do it? Tom
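
    For comparison, the usual Junos recipe adds a load-balancing policy exported to the forwarding table (a sketch per the standard documentation; despite its name, load-balance per-packet is per-flow on modern platforms):

        policy-options {
            policy-statement ecmp-policy {
                then {
                    load-balance per-packet;
                }
            }
        }
        routing-options {
            forwarding-table {
                export ecmp-policy;
            }
        }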

    Read the article

  • USB Virtual RAM is not working properly

    - by Haseeb Anwar
    I have installed Windows 7 64-bit and have tried to use a USB drive as virtual RAM, but it is not working properly. First I go to the USB drive's Properties > ReadyBoost, choose to use the device, and click Apply, then OK. Then I go to My Computer > Properties > Advanced system settings > Advanced (tab) > Settings > Advanced > Change. When I click the Change button, the USB drive is not listed there. What can I do?

    Read the article

  • Restore only one partition of Windows Backup

    - by VitoShadow
    I have a MacBook Pro with a partitioned HD. The disk was divided into two partitions: the first was about 650 GB, with OS X installed, and the second (created with Boot Camp Assistant) was about 100 GB, with Windows 7 installed. I needed more space for Windows, so I decided to back up the Windows partition using the Windows Backup tool from the Control Panel: I created an image of the partition, stored it on an external HD, and now I'm trying to use it. To give more space to Windows, I formatted the disk and recreated the partition table, with a first partition of about 250 GB (for OS X) and a second of exactly the size of the previous Windows partition (about 100 GB); the rest was empty space. Into the second partition, I tried to restore the Windows backup: I plugged in the Windows installation CD (with the HD containing the backup connected to the computer) and selected the option "Repair your computer", then chose the backup image (automatically recognized) and tried to restore it. The problem is that the System Recovery Tool now wants to format the whole HD in order to install only Windows! That way I would lose everything, including the OS X partition! Is there a way to restore the backup only to the Windows partition?

    Read the article

  • Unable to SSH into BeagleBone Black

    - by SamuraiT
    I wanted to install pip onto a BeagleBone Black, so I tried this:

        /usr/bin/ntpdate -b -s -u pool.ntp.org
        opkg update && opkg install python-pip python-setuptools

    It threw errors, but unfortunately I didn't log them; this happened a week ago and is still unsolved. I wanted to tackle it now, so I tried to connect by SSH, but failed. When I ping the BeagleBone it responds, and the Cloud9 IDE works too, but not SSH. I don't think this is a serious problem, since I can connect to the BeagleBone by other means such as Cloud9; however, to use Python on the board I need SSH. Before trying to update and install python-pip, I could connect by SSH. Do you have any ideas for solving this connection problem? Notes: I use the default OS (Angstrom), I don't use an SD card, the host PC is a Mac running OS X 10.9, and I connect over USB serial. I checked http://stackoverflow.com/questions/19233516/cannot-connect-to-beagle-bone-black but it wasn't helpful. I can connect via the GateOne SSH client, but am still unable to connect from a terminal.
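
    A first diagnostic step worth taking (a sketch; 192.168.7.2 is the BeagleBone's usual USB-network address, substitute yours):

        # -vvv shows where the handshake stops, e.g. a stale known_hosts
        # entry or a key-exchange failure, which would also explain why
        # GateOne still connects while the terminal client fails.
        ssh -vvv root@192.168.7.2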

    Read the article

  • When HDD becomes full, how to create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine with an X GB hard disk. There is a folder, say, /opt/software/data. The partition /dev/sda1 is almost full, and I have attached another disk as /dev/sda2, which is mounted at /hdd2. Is it possible for me to link /opt/software/data to /hdd2/software/data so that every file gets stored under /hdd2/software/data but can still be referred to as /opt/software/data? I can't reinstall the software that creates this data in order to change its default storage location.
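
    Yes; the usual pattern is move-and-symlink (a sketch; stop the software first so nothing writes during the copy):

        mkdir -p /hdd2/software
        cp -a /opt/software/data /hdd2/software/data   # -a preserves owners/permissions
        mv /opt/software/data /opt/software/data.old   # keep until verified
        ln -s /hdd2/software/data /opt/software/data

    A bind mount (mount --bind /hdd2/software/data /opt/software/data) achieves the same effect for software that refuses to follow symlinks.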

    Read the article

  • Screen randomly goes blue/black/white

    - by FubsyGamer
    Problem: randomly, while I'm using my computer, the monitor goes dark grey/almost black, or white with faint grey vertical lines, or blue with black vertical lines. It's as if the computer powers off: people tell me I sign out of Skype, Spotify stops playing, and so on. Looking at the tower, though, nothing seems off at all; fans are spinning, lights are on. If you were only looking at the tower, you'd never know there was a problem. The only way I can get the screen to come back is to push and hold the power button and turn the machine off and back on. This never happens while I'm playing video games; I've done 5-6 hour sessions of League of Legends without it doing anything. When I'm just browsing the web, reading email, checking Reddit, etc., it happens all the time, sometimes multiple times in a session; it usually takes only about 5 minutes from the time I start browsing to when the computer crashes. This started happening after I moved to a new apartment (this has to be relevant somehow; it was not happening where I lived before). There is nothing in the crash logs or event logs.

    System specs: i5 2500k CPU, AMD Radeon 6800 GPU, Gigabyte Z68A-D3H-B3 motherboard, WD VelociRaptor 1 TB HDD. (Screenshots of Device Manager and the About screen accompanied the original post.)

    Things I have tried: I was getting a WMI error in my event logs, but fixed it using Microsoft's fix, KB 2545227. I was using Windows 8; I wiped the HDD and downgraded to Windows 7 64-bit. I took out the video card, used a can of air to clean it, all the fans, and the inside of the case in general, made sure the video card pins were fine, and reconnected it. I tried to update my motherboard BIOS, but everything I downloaded from Gigabyte was for 32-bit machines only, not 64-bit, and I don't even know how to tell what BIOS version the board is on right now. I am using a power strip, and everything else connected to it works just fine. Re-seating the monitor cable while this is happening changes nothing.

    Please help me. I've been battling this for several weeks now, and it's so frustrating it makes me not even want to use the computer.

    Read the article
