Search Results

Search found 21131 results on 846 pages for 'binary log'.


  • Mac has become insanely slow: Processes SystemUIServer, UserEventAgent and loginwindow using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem, but recently it suddenly became insanely slow. I opened Activity Monitor to see what was happening. For three processes, SystemUIServer, UserEventAgent and loginwindow, the memory usage gradually increases and reaches up to 2 GB for each process. This completely hangs up my Mac. I tried the following:
    1. Restarting the Mac
    2. Restarting the Mac in safe mode
    3. Manually killing the processes
    4. Removing Date and Time from the menu bar (this was supposed to be the cause of the SystemUIServer process's memory growth, according to many users)
    5. Removing the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory)
    No luck with any of those. The moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.
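
    A command-line way to watch the same growth over time is to sample the resident memory of the three suspect processes from Terminal; this is only a minimal sketch using standard macOS tools (the 10-second interval is arbitrary):

        # Log resident memory (RSS, in KB) of the three suspects every 10 seconds
        while true; do
            date
            ps -axo rss,comm | egrep 'SystemUIServer|UserEventAgent|loginwindow'
            sleep 10
        done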

    Read the article

  • Login problems on SQL EXPRESS using a user

    - by meep
    Hello Serverfault. This is the first time I have set up a SQL Server, so I hope you can help me out. I have a problem logging in using SQL authentication on my SQL Server Express 2008. I added a user through the management interface, as you can see in the image below, but as soon as I try to log in using SQL authentication I get an error that the login failed for the user. The server log says:
        Login failed for user 'zebisgaard'. Reason: Could not find a login matching the name provided. [CLIENT: <named pipe>]
        Error: 18456, Severity: 14, State: 5.
    Do you have any idea why? I have triple-checked that the username/password is correct, tried recreating the user, and much more. And all of this is on localhost.
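
    The log's own reason ("could not find a login matching the name provided") means the server has no server-level login by that name; an account created only as a database user, without a matching login, produces exactly this. A minimal T-SQL sketch of the checks involved (the password and database name are placeholders, not from the post):

        -- Sanity check: 1 = Windows authentication only, 0 = mixed mode
        SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');

        -- Is there a server-level login (not just a database user) with this name?
        SELECT name, type_desc, is_disabled
        FROM sys.server_principals
        WHERE name = 'zebisgaard';

        -- If not, create it and map it into the target database
        CREATE LOGIN [zebisgaard] WITH PASSWORD = 'YourStrongPasswordHere';
        USE YourDatabase;   -- placeholder database name
        CREATE USER [zebisgaard] FOR LOGIN [zebisgaard];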

    Read the article

  • Can't communicate with Primary DNS Server

    - by horsley
    A Windows 7 computer suddenly can't access any website by domain name. The fault persists whether the computer uses a wired link or connects to the WLAN:
    - IP and DNS are obtained automatically and look normal (ipconfig /all returns the correct info)
    - I can visit websites through an HTTP proxy
    - The DNS server is available; other computers in my room work properly
    - I can ping myself, the gateway and any other IP, but not domains
    - I can use nslookup and obtain the correct IP info
    - There are error entries in the event log about DNS Client events, saying the client cannot verify that the DNS server is available
    - Windows network diagnosis says that Windows can't communicate with the device or resource (Primary DNS Server)
    I guess the DNS client is to blame. I tried the following things, but the fault persists:
    - Reinstalled the network adapter driver
    - Reset TCP/IP (netsh int ip reset)
    - Reset Winsock (netsh winsock reset)
    - Reset the LSP
    I don't want to reinstall the whole OS. What should I do?
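
    Since nslookup queries the DNS servers directly while normal resolution goes through the DNS Client (Dnscache) service, the symptoms point at that service or its cache rather than the network path. A sketch of standard commands worth running from an elevated prompt (nothing here is specific to this machine):

        :: Is the DNS Client service actually running?
        sc query Dnscache

        :: Flush and inspect the local resolver cache
        ipconfig /flushdns
        ipconfig /displaydns

        :: Restart the service and retest name resolution
        net stop Dnscache
        net start Dnscache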

    Read the article

  • FreeNX Server w/ nxagent 3.5 not able to create shadow sessions

    - by Jenna Whitehouse
    I am running a FreeNX server on Ubuntu 11.10 and am unable to do session shadowing. I get the authorization prompt, but the shadow client crashes afterwards. The NX server log in the user's .nx directory is as follows:
        Error: Aborting session with 'Server is already active for display 3000 If this server is no longer running, remove /tmp/.X3000-lock and start again'.
        Session: Aborting session at 'Mon Oct 1 14:26:44 2012'.
        Session: Session aborted at 'Mon Oct 1 14:26:44 2012'.
    This then deletes the lock file (which belongs to the initial Unix session) and crashes out. Everything works for a normal session, and shadowing works up to the authorization prompt. I am using this software:
        Ubuntu 11.10
        freenx-server 0.7.3.zgit.120322.977c28d-0~ppa11
        nx-common 0.7.3.zgit.120322.977c28d-0~ppa11
        nxagent 1:3.5.0-1-2-0ubuntu1ppa8
        nxlibs 1:3.5.0-1-2-0ubuntu1ppa8
    Any help is appreciated, thanks!

    Read the article

  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused. The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?
    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, though the site generally seemed to work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. OTOH, I wonder how many users have been alienated by this approach.
    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.

    Read the article

  • Nginx proxy cache (proxy_pass $request_uri;)

    - by imastar
    I need to create a web proxy using nginx. If I access http://myweb.com/http://www.target.com/, the proxy_pass target should be http://www.target.com/. Here is my configuration:
        location / {
            proxy_pass $request_uri;
            proxy_cache_methods GET;
            proxy_set_header Referer "$request_uri";
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_ignore_headers Cache-Control;
            proxy_hide_header Pragma;
            proxy_hide_header Set-Cookie;
            proxy_set_header Cache-Control Public;
            proxy_cache cache;
            proxy_cache_valid 200 10h;
            proxy_cache_valid 301 302 1h;
            proxy_cache_valid any 1h;
        }
    Here is the error from the log:
        2013/02/05 12:58:51 [error] 2118#0: *8 invalid URL prefix in "/http://www.target.com/", client: 108.59.8.83, server: myweb.com, request: "HEAD /http://www.target.com/ HTTP/1.1", host: "myweb.com"
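
    The error occurs because $request_uri always begins with a slash, so proxy_pass is handed "/http://www.target.com/" rather than an absolute URL. One hedged sketch of a workaround is to capture the target URL in a regex location and strip the leading slash before passing it on (the capture name and resolver address are illustrative; proxy_pass with a variable requires a resolver):

        location ~ ^/(?<target>https?://.*)$ {
            resolver 8.8.8.8;        # needed because the upstream host is only known at request time
            proxy_pass $target;      # e.g. http://www.target.com/
        }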

    Read the article

  • How to stop basic Postfix after-queue script from BCC-ing sender?

    - by mjbraun
    I'm building a content filter for Postfix (2.9.3, package installed via apt on an Ubuntu 12.04 test VM), starting with a very basic Ruby (1.9.3) template and building up functionality. Strangely, when the script is enabled, messages are forwarded on as normal but are also sent back to the sender, which is not normal. Disabling the script disables this behavior. Any suggestions about what I have to change to stop that from happening? Thanks for any advice!
    /etc/postfix/master.cf (only the lines changed from the default):
        smtp      inet  n       -       -       -       -       smtpd
          -o content_filter=dumper:dummy
        ...
        dumper    unix  -       n       n       -       10      pipe
          flags=RF user=mailuser argv=/home/mailuser/mailfilter/dumper.rb ${sender} ${recipient}
    /home/mailuser/mailfilter/dumper.rb:
        #!/usr/bin/env ruby
        require 'open3'
        dir="/home/mailuser/emails"
        logfile="maillog.log"
        message = $stdin.read
        cmd = "/usr/sbin/sendmail -G -i #{ARGV[0]} #{ARGV[1]}"
        stdin, stdouterr, wait_thr = Open3.popen2e(cmd)
        stdin.print(message)
        logfile = File.open("#{dir}/#{logfile}", 'a')
        logfile.write(stdouterr)
        stdin.close
        stdouterr.close
        exit(0)
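
    A likely culprit, going by the pipe-based filter example in Postfix's FILTER_README, is the re-injection command: without -f, the sender address in ARGV[0] is treated as just another recipient, which would produce exactly this extra copy to the sender. A hedged one-line sketch of the change (same script, only the command string differs):

        # Pass the envelope sender with -f and end option parsing with "--"
        # so ARGV[0] is not interpreted as an additional recipient.
        cmd = "/usr/sbin/sendmail -G -i -f #{ARGV[0]} -- #{ARGV[1]}"

    The stock FILTER_README master.cf entry follows the same convention, passing "-f ${sender} -- ${recipient}" to the filter script.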

    Read the article

  • Reader Poll: Are You Going to Buy the New iPad 2?

    - by Jason Fitzpatrick
    Steve Jobs announced the iPad 2 moments ago, which will touch off a flurry of new purchases, upgrades, and general Apple-centric muttering and fist shaking. Will you be buying an iPad 2? Photo courtesy of Engadget's liveblog coverage of the iPad 2 launch. The first iPad's sales exceeded everyone's expectations, Apple fans and detractors alike, with a crazy 15 million units moved last year. The new iPad rocks a dual-core processor, front- and rear-facing cameras, improved graphics, and a razor thinness (33% thinner than the current model), among other improvements. Are the improvements enough to entice you into buying one? Hit up the poll below to log your vote and then fill in the details in the comments. How-To Geek Polls require Javascript. Please Click Here to View the Poll.

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install, but (of course) now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the subscriber and the publisher to Full, or can I leave the subscriber in Simple and have just the publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks
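
    For context: the recovery model is a per-database setting changed with an online ALTER DATABASE (no downtime), and log backups only become meaningful once a full backup has been taken after the switch. A minimal T-SQL sketch of the sequence on the publisher (database name and backup paths are placeholders):

        ALTER DATABASE PublisherDB SET RECOVERY FULL;

        -- A full backup starts the log chain; log backups are not possible before this point.
        BACKUP DATABASE PublisherDB TO DISK = 'D:\Backups\PublisherDB_full.bak';

        -- Then schedule regular transaction log backups.
        BACKUP LOG PublisherDB TO DISK = 'D:\Backups\PublisherDB_log.trn';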

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was:
        nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &
    I've also tried including --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like:
        http://www.example.org//ht/d/sp/i/17770
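
    Two details worth checking when reproducing this: --convert-links only rewrites links after the entire retrieval finishes, so a crawl that dies partway (or a nohup session that is killed) leaves every page unconverted, and an unquoted -R *calendar* may be expanded by the shell before wget ever sees it. A hedged sketch of the same command with those two points addressed:

        # Quote the reject pattern and confirm afterwards that the
        # link-conversion phase actually ran at the end of the crawl.
        nohup wget --mirror --convert-links --backup-converted \
            --html-extension -l 10 -P afscSnapshot \
            -R '*calendar*' -o wget.log http://www.example.org &

        # Once it finishes, the conversion pass is reported in the log:
        grep -i 'converted' wget.log | tail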

    Read the article

  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database so we can send notifications, etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. By this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server in log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?

    Read the article

  • DNS nameserver, A record and CNAME records [closed]

    - by David
    I am inexperienced in the configuration of DNS and have an issue with my domain hosting setup. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings, and I asked Fasthosts why. The reply was: "The delegate hosting option for both domains was enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account." When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs without adding any records. 'www.mydomain1.com' currently still works, but 'www.mydomain2.com' no longer finds the site. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated. One thing which is confusing me is that, because I am on a shared server, I have to use 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David
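
    As context for the last question: an A record can only map a name to an IP address, and the '/mydomain1.com/' part of the path is handled by the web server's virtual host configuration, not by DNS, so nothing gets appended. A hedged sketch of the kind of records involved, in zone-file notation (the IP stays the placeholder from the post; using CNAMEs for the www names is an assumption about how the host sets things up):

        mydomain1.com.        IN  A      xxx.xxx.xxx.xxx
        www.mydomain1.com.    IN  CNAME  mydomain1.com.
        mydomain2.com.        IN  A      xxx.xxx.xxx.xxx
        www.mydomain2.com.    IN  CNAME  mydomain2.com.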

    Read the article

  • ASP.NET hashing (using CodeSmith) when upgrading from .NET 2.0 to 3.5

    - by user34505
    Hi, I'm administering servers running IIS 6, hosting a website on ASP.NET 2.0. Yesterday I installed the .NET Framework 3.5, and all of my user authentication stopped working. Users can't log in because their passwords aren't being authenticated - maybe because the hash function has changed in 3.5? I can't really get to the code, but I know it uses an extension called CodeSmith. Do you know of any breakage the 3.5 upgrade could cause? Please help. Thanks.

    Read the article

  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20*80 px and ball.png is 25*25 px. Code:
        @Override
        public void create() {
            // ...
            batch = new SpriteBatch();
            playerTex = new Texture(Gdx.files.internal("data/player.png"));
            ballTex = new Texture(Gdx.files.internal("data/ball.png"));

            player = new Rectangle();
            player.width = 20;
            player.height = 80;
            player.x = Gdx.graphics.getWidth() - player.width - 10;
            player.y = 300;

            ball = new Circle();
            ball.x = Gdx.graphics.getWidth() / 2;
            ball.y = Gdx.graphics.getHeight() / 2;
            ball.radius = ballTex.getWidth() / 2;
        }

        @Override
        public void render() {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            camera.update();

            // draw player, ball
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            batch.draw(ballTex, ball.x, ball.y);
            batch.draw(playerTex, player.x, player.y);
            batch.end();

            // update player position
            if (Gdx.input.isKeyPressed(Keys.DOWN))
                player.y -= 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.UP))
                player.y += 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.LEFT))
                player.x -= 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.RIGHT))
                player.x += 250 * Gdx.graphics.getDeltaTime();

            // don't let the player leave the field
            if (player.y < 0) player.y = 0;
            if (player.y > 600 - 80) player.y = 600 - 80;

            // check collision
            if (Intersector.overlaps(ball, player))
                Gdx.app.log("overlaps", "yes");
        }
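
    One thing that commonly makes this setup look buggy: Circle's x/y (which Intersector.overlaps treats as the circle's center) and SpriteBatch.draw's x/y (the texture's bottom-left corner) are not the same point, so the ball you see is offset from the circle you collide with by one radius. A hedged sketch of drawing the texture so it lines up with the collision shape:

        // Draw the ball texture centered on the collision circle, not cornered on it.
        batch.draw(ballTex, ball.x - ball.radius, ball.y - ball.radius);

        // If the ball ever moves, keep treating ball.x / ball.y as the center
        // and keep the draw call offset by the radius.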

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described in the closed SO question, sudo : "segmentation fault" Ubuntu maverick [closed]. My team is using Ubuntu 11.04 on VMware Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received the signal abort (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message "Segmentation fault". I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo (something that has always made me nervous) will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often enough that I am trying to lobby for VM changes so we can run the process without sudo.

    Read the article

  • Openfire: Closing session due to incorrect hostname

    - by cvista
    I have a fresh install on a Windows Server 2008 box. I can connect Adium to the server from a remote machine, and from the admin console's Sessions page I can see my session. My friend can also connect and I can see his session. I can send an admin message out and both of us can see it in the Adium clients. We can't see each other, though. I also get these entries in the warn window of the log viewer:
        Closing session due to incorrect hostname in stream header. Host: prjatk.com. Connection: org.jivesoftware.openfire.net.SocketConnection@1b1fd9c socket: Socket[addr=/109.109.248.82,port=56258,localport=5269] session: null
    prjatk.com is the server; however, in the server settings on the admin screen I see the computer name as the hostname - is that the issue? If so, how can I change that?

    Read the article

  • Can't boot into Windows 7/Ubuntu 12.04 after running boot-repair

    - by Rini
    I installed Ubuntu 12.04 on my preinstalled Windows 7 Sony Vaio E series laptop, following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ Everything went well and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my boot menu using EasyBCD (but forgot to add the Windows 7 entry). As a result, I lost the Windows 7 OS and couldn't boot into either OS; I then successfully repaired Windows 7 using the recovery CD. Now my problem is that I can't reinstall Ubuntu 12.04 using the live CD: it halts every time before the disk partition step with the error
        "ubi-partman crashed"
        "ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog. Do you want to try running this step again before continuing? If you do not, your installation may fail entirely or may be broken."
    and any choice to continue results in the same error. After that, following some posted solutions, I ran boot-repair commands in a terminal (in "Try Ubuntu" mode) and got the following URL: http://paste.ubuntu.com/1206434/ Now, after restarting, I can't boot into either Windows or Ubuntu. Even attempts to run Windows repair fail and I get the message: 'No operating system found'. I don't know what went wrong after running the boot-repair command. Please help in solving this issue. Thanks and Regards, R Shukla

    Read the article

  • "error 1723 there is a pr*blem with this windows installer package a dll required for this install to complete could not be run" while uninstall java

    - by user1650410
    I am having the following problem: I installed Java 1.6u33 on my Windows 7 machine. Everything was fine; I was running Eclipse, for example. But I made a mistake - I deleted the jre6 directory. Now I am trying to reinstall Java with no success. I get this message when I try to uninstall it: "error 1723 there is a problem with this windows installer package a dll required for this install to complete could not be run..." I've deleted everything I could find for Java in the registry and the Java directory, and I also tried JavaRa. I saw in the MSI**.LOG files which DLL was missing and put it where it was being looked for. No success. So is there a way I can reinstall Java without reinstalling Windows?

    Read the article

  • How to clear stuck locked maildrop pop3 process

    - by Joshua
    I am using Cyrus for IMAP and POP. One of my users is getting the following error:
        Unable to lock maildrop : Mailbox is locked by POP server.
    I can see where it starts in the log. I've read that there is no physical lock file anymore (I've tried looking for it anyway) and that the solution is to just wait for the timeout or kill the offending pop3 process. I know that this is happening because of a lossy connection on the part of the affected user, and that POP3 can only have one session active at a time. I need to manually clear the lock, and I am having trouble finding the offending pop3 process. I have tried lsof, but it doesn't say how long the individual files (sockets) have been open. I've reduced the TCP keepalive time down to 5 minutes, but I still need to reset this guy's lock. I could use some pointers. Thanks!
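
    One way to narrow it down is to match the stuck session to a network connection and a process start time rather than to open files; the sketch below assumes the daemon shows up as a process named pop3d listening on port 110 (adjust both if your Cyrus install differs):

        # Which pop3d processes hold established connections, and to which clients?
        sudo lsof -nP -i TCP:110 -s TCP:ESTABLISHED

        # Cross-check per-process start time so the oldest (stale) one stands out
        ps -o pid,lstart,etime,args -C pop3d

        # Once identified, kill just that daemon; the lock goes away with it
        sudo kill <PID>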

    Read the article

  • How important is graceful degradation of JavaScript? [closed]

    - by Stephen
    Should web developers continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?
    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, though the site generally seemed to work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach.
    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled?

    Read the article

  • Munin does not show Apache/mySQL stats in web view

    - by Chris
    I'm facing a very strange problem. I just set up Munin on a fresh Ubuntu slice with a common LAMP stack. Everything works great, except that Munin just does not show the Apache/MySQL stats in the web view. Everything else in the web view works fine; Apache works, MySQL works. I even tried calling the plugins via the console:
        sudo munin-run apache_accesses
    and it works fine. As far as I can tell, the Munin log files are not reporting any problems. My only hint: when I run munin-run without sudo it gives me a "Permission denied" - could this be the problem?
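
    Since the web pages are drawn from whatever munin-node hands to the master rather than from what munin-run prints as root, a useful check is to query the node directly and then look at the node's own log. A sketch assuming the default port and stock plugin names (adjust plugin names to whatever is linked in /etc/munin/plugins):

        # Query the node the same way the Munin master does
        # (interactive; type the fetch/quit lines at the prompt)
        telnet localhost 4949
        #   fetch apache_accesses
        #   fetch mysql_queries
        #   quit

        # Node-side plugin errors, if any, end up here on Ubuntu
        sudo tail -n 50 /var/log/munin/munin-node.log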

    Read the article

  • Must I have Exchange to use Blackberry Enterprise Server Express?

    - by John Spaz
    In the past I've set up BES (not Express) for a company that just wanted their users on the corporate network; they didn't care about email or any other enterprise feature, they just wanted to push a policy that the phones' internet traffic should be routed through the corporate network. I now want to set up BES Express for a customer that also just wants the phones on his network, but everywhere I look it says that BES Express requires Exchange. Is there a way to install BES Express without Exchange and without an AD domain? Basically, what the customer wants to accomplish is to be able to filter and log the internet access on the phones.

    Read the article

  • Load login shell inside user cronjob

    - by sa125
    I'm trying to run a rake task from a scheduled cron job. My crontab looks something like this:
        0 1 * * 1-7 /bin/bash -l -c "cd ~/jobs/rake && rake reports:create >> ~/jobs/logs/cron.log"
    Ruby on my account is provided by RVM, which is loaded via ~/.bashrc (before the no-interaction check):
        # load RVM env
        [[ -s $HOME/.rvm/scripts/rvm ]] && source $HOME/.rvm/scripts/rvm

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # ... rest of logic
    Time and again, this task fails to run because RVM isn't loaded when the task is called (it uses the system's /usr/bin/ruby instead), and gem dependencies are missing. How can I make crontab load my shell environment before executing my scheduled jobs? Thanks.
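
    One detail that often trips this up: bash -l starts a login shell, which reads ~/.bash_profile or ~/.profile rather than ~/.bashrc, so the RVM line in ~/.bashrc may never run under cron (depending on whether your profile sources ~/.bashrc). A hedged sketch of a crontab entry that sources RVM explicitly, so it does not depend on which startup file the shell picks:

        0 1 * * 1-7 /bin/bash -c 'source "$HOME/.rvm/scripts/rvm" && cd ~/jobs/rake && rake reports:create >> ~/jobs/logs/cron.log 2>&1'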

    Read the article

  • Wammu - USB Device Name?

    - by Paul
    I'm trying to get to my phone's filesystem through USB in Wammu, but I'm stuck in the configuration wizard when it asks for a USB device name. After about an hour of searching the Internet, here are the failed solutions I've already tried, starting with the relevant information returned by lsusb in a terminal:
        lsusb
        Bus 001 Device 003: ID 12d1:101e Huawei Technologies Co., Ltd.
    So I tried opening Wammu with sudo wammu in a terminal and entering "/dev/bus/usb/001/003" as the device name, which returns:
        Error opening device
        Device /dev/bus/usb/001/003 does not exist!
    and then "/dev/bus/usb/001/", which returns:
        Failed to connect to phone
        Description: Error opening device. Unknown, busy, or no permissions.
        Function: Init
        Error code: 2
    Another proposed solution was to try "tail -f /var/log/messages" in a terminal, but that only returned a "No such file or directory" message. Seemingly relevant dmesg info:
        [ 4739.716214] usb 1-1: new high-speed USB device number 8 using ehci_hcd
        [ 4739.854137] scsi9 : usb-storage 1-1:1.0
        [ 4740.854416] scsi 9:0:0:0: CD-ROM HUAWEI T Mass Storage 2.31 PQ: 0 ANSI: 2
        [ 4740.867051] sr0: scsi-1 drive
        [ 4740.867806] sr 9:0:0:0: Attached scsi CD-ROM sr0
        [ 4740.870464] sr 9:0:0:0: Attached scsi generic sg1 type 5
    I don't know why it is coming up as CD-ROM, but there it is. If you haven't noticed already, I'm an absolute beginner when it comes to Linux and the terminal, so speaking to me like I'm a three-year-old is welcome if you can propose a solution. I'm running Ubuntu 12.04 LTS, and the phone is a Huawei U1250. My computer is an Acer Aspire One D250/KAV60. Any help is much appreciated.
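
    The dmesg output shows the phone enumerating only as a virtual CD-ROM (the built-in driver disc many Huawei devices present), so there is no serial or modem device for Wammu to open yet. A hedged sketch of how to check for and nudge that, using only standard tools (device paths are the ones from the post):

        # Is there any modem/serial node to give to Wammu? None will appear while in CD-ROM mode.
        ls /dev/ttyUSB* /dev/ttyACM* 2>/dev/null

        # Ejecting the virtual driver CD makes many Huawei devices switch modes;
        # the usb-modeswitch package automates the same thing.
        sudo eject /dev/sr0
        sudo apt-get install usb-modeswitch

        # Re-check what the kernel sees afterwards
        dmesg | tail -n 20
        lsusb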

    Read the article

  • Fusion 2 - blank screen - no start button

    - by KDK2010
    I have Fusion 2 on my MacBook. It was working fine until this morning. I logged on and it took too long for my personal settings to load, so I shut it down, shut down the MacBook and restarted. I logged into Fusion and now I only get the Windows screen with no icons or Start button. I don't want to uninstall it for fear of losing all of my work, including QuickBooks! Does anyone have a solution?

    Read the article
