Search Results

Search found 16379 results on 656 pages for 'long pham'.


  • How can I use my Windows 7 computer to share WiFi connection with PCs that don't have wireless?

    - by Tom Auger
    Long story short: modem and wireless router are downstairs and we're having a LAN party where some visitors don't have wireless. There's no way to run the length of cabling required, so looking for options. My Windows 7 Home Premium PC has a wireless-n connection, and I'd like to see if I can use it as a "hub" or switch of sorts, running an ethernet cable out of the back and into a switch, then splitting off to the other PCs. Is this an option? I know with Internet sharing, you can set up your PC as a wireless access point, but I want to do the opposite.

    Read the article

  • “Why do transactional messages all have the same priority?”

    - by John Breakwell
    I answered this question on the MSMQ forum on MSDN and thought it worth sharing here. The poster wanted to know why all transactional messages have a fixed priority of zero (instead of 0 through 7). They wanted guaranteed delivery of messages to a queue but also wanted to assign different levels of priority.

    Some aspects of MSMQ were defined way back in the last century, and this is one of them. If I remember right, the reason was to avoid the following scenario:

      1. You have a single transaction of 3 messages (a, b and c) with priorities 5, 3 and 4 respectively.
      2. The messages are sent in order a, b, c.
      3. The messages arrive in the queue and are arranged in order a, c, b to reflect priority order.
      4. This breaks the guaranteed-order part of the transaction.

    I know that very few people send more than one message in a transaction, but that is a scenario MSMQ has to be able to handle; for the majority, including yourself, this scenario is irrelevant, which is why the absence of transactional priorities is surprising. The options available to the Microsoft developers were therefore to:

      1. implement code that allowed you to send messages with variable priority as long as all messages within the same transaction had the same priority, or
      2. define a set priority for all transactional messages.

    As you can understand, option 1 would be a complicated arrangement, with all the necessary enforcement, error handling, user education, documentation, etc. Sure, it would have been possible to implement option 1, but I expect the product group decided to invest the development time in some other aspect of MSMQ. Now, with five versions out there, it would be confusing to change how the product operates, in addition to potentially breaking existing systems that have been working fine for years. So, balancing cost and risk against customer demand, I would not expect this feature to ever change.
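    For illustration, here is a minimal System.Messaging sketch (queue path and payload made up) of where the fixed priority shows: the Priority property is settable, but MSMQ delivers the message at priority 0 once it is sent as part of a transaction.

        using System.Messaging;

        // Sketch only; assumes a transactional queue exists at this hypothetical path.
        var queue = new MessageQueue(@".\private$\orders");
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            var msg = new Message("order #1");
            msg.Priority = MessagePriority.High; // no effect: transactional messages
                                                 // always travel at priority 0
            queue.Send(msg, tx);
            tx.Commit();
        }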

    Read the article

  • SSH - SFTP/SCP only + additional command running in background

    - by Chris
    There are many solutions described for forcing an SSH connection to run SFTP only, by adding a group Match block to sshd_config and giving that group a ForceCommand internal-sftp. That works great, but I would love to have one more feature. My servers automatically ban IPs which try to connect too often in a short time, so when you use an SFTP client that opens multiple connections to work faster, it can get banned instantly by the server for a long time. The servers have a script, run by an administrator, to whitelist users. I've modified this script to whitelist the user who runs it. All I need now is to get the server to execute that script when somebody logs in. Over plain SSH that's no problem, just put it in .bashrc or something similar, but ForceCommand doesn't run these scripts on login. Is there any way to run such a shell script before, or at the same time as, the ForceCommand gets fired?
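    For what it's worth, a minimal sketch of one way this is commonly done: point ForceCommand at a wrapper that runs the extra step and then hands over to the external sftp-server binary (internal-sftp runs inside sshd and cannot be exec'd from a script). The group name, whitelist script path and sftp-server location are assumptions; the binary's path varies by distro.

        # /etc/ssh/sshd_config
        Match Group sftponly
            ForceCommand /usr/local/bin/sftp-wrapper.sh

        #!/bin/sh
        # /usr/local/bin/sftp-wrapper.sh
        /usr/local/bin/whitelist-user.sh "$USER"   # hypothetical whitelist script
        exec /usr/lib/openssh/sftp-server          # then become the SFTP server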

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective, to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned... really the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager. You include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO: Create properties
            }

            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO: Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships. The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0.

    The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me. I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to a file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data. Note that this is TWO XML files, Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • Bing New Tab Page on Chrome

    - by One Terrorist
    For a long while, my computer had the default new tab page, which I liked. Recently, it has turned into a Bing search bar tab. I hate Bing. I checked all my settings, removed unused extensions, and briefly checked my Chrome files, but I can't find how it keeps coming back! At first I thought it would be easy to fix, since it was set as the new tab page, but it isn't set there anymore and it's still here! After a few attempts at removing it, I installed a new extension called "X New Page Tab", and that worked for one night; then Bing overrode it. Any help please? I really don't want Bing as anything on my computer.

    Read the article

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at failure isn't consistent. The setup is a form which is accepted by PHP. I've Googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?
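    For reference, a sketch of the php.ini directives that usually govern large uploads; the values here are illustrative assumptions, not recommendations:

        ; php.ini
        upload_max_filesize = 64M
        post_max_size       = 64M    ; must be at least upload_max_filesize
        max_execution_time  = 600    ; seconds the script may run
        max_input_time      = 600    ; seconds allowed for receiving/parsing input
        memory_limit        = 128M

    Note that on Windows/IIS hosts the web server can impose its own request-size and timeout limits on top of PHP's, so php.ini alone may not be the whole story.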

    Read the article

  • Touchscreen on KDE and Ubuntu?

    - by The Quantum Physicist
    I just bought a Lenovo Yoga 2 Pro... I liked how the touchscreen behaved on Windows, where it makes sense just as it does on my smartphone. However, I'm not a regular Windows user, so I installed Kubuntu 14.04, and everything looks fine, except that the touchscreen's behaviour is so limited that it's useless. Why? Because all the touchscreen does is act as a single mouse with left click. For example, if I touch the screen for a relatively long time, I don't get the effect of a right click. How do I configure the touchscreen properly to get the behaviour expected on Ubuntu and KDE? Thanks for any efforts.

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "Persistent Disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions:

      1. I have read that FreeBSD can take instant disk snapshots, so if our databases trigger a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour with no more than a few seconds of service interruption. Is this true?
      2. Is there a way to take snapshots and back them up offsite easily?
      3. Can this be done incrementally, to save on how much disk space is actually used?
      4. If a rollback needed to be done, how long would that typically take? Is a rollback also instantaneous?

    Thanks!
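    Assuming ZFS on FreeBSD (UFS snapshots exist too, but ZFS fits this use case better), a rough sketch of the hourly-snapshot-plus-incremental-offsite idea; the dataset, snapshot names and backup host are made up:

        # take a named snapshot (nearly instantaneous)
        zfs snapshot tank/db@hourly-2012-10-01_03

        # ship only the changes since the previous snapshot offsite
        zfs send -i tank/db@hourly-2012-10-01_02 tank/db@hourly-2012-10-01_03 \
            | ssh backup.example.com zfs receive backup/db

        # roll back (also fast, but discards everything newer than the snapshot)
        zfs rollback -r tank/db@hourly-2012-10-01_03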

    Read the article

  • Software Suggestion for Managing Voice Recordings (Windows)

    - by Cbeppe
    I'm looking for Windows software that allows me to effectively manage voice recordings I've already made. I have a series of recordings taken from an iPhone and I have extracted the files. The problem is that these are very long recordings, and therefore I'm looking for software that allows me to:

      - Bookmark a time in the recording
      - Effectively manage multiple files (like Adobe Bridge does with images)
      - Freeware or payware
      - Possibly other features

    I haven't done this before, and I'm sorry I'm unable to give a more professional description. Thanks in advance to everyone who can help! If you have any other questions, please don't hesitate to ask; I will try my best to provide useful answers.

    Read the article

  • What should Oracle VirtualBox look like in terms of memory, CPU and performance? [duplicate]

    - by Nicholas DiPiazza
    This question already has an answer here: Can you help me with my capacity planning? (2 answers)

    I've got a need for a ton of VMs to simulate some realistic load-testing scenarios. I've got a bunch of different host machines that differ in RAM, CPUs, etc. What should my resource manager look like? Is there a standard way to know what the CPU, memory and disk utilization should be, given the CPUs, memory and disks available? For example, I have a box with MemTotal: 50 GB and 8 CPUs. The CPUs are pretty much at 100% all day long. Memory is at about 60%. Swap isn't getting hit. I'm a little bewildered by why the VMs, while running the exact same test script, are showing different virtual memory consumption.
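    There is no standard formula for this, but VirtualBox can at least report what each VM actually consumes; a sketch using VBoxManage's metrics facility (VM name is made up):

        # sample host and VM metrics every 5 seconds
        VBoxManage metrics setup --period 5 --samples 1 "loadtest-vm-01"

        # query current CPU load and RAM usage for that VM
        VBoxManage metrics query "loadtest-vm-01" CPU/Load/User,RAM/Usage/Used

    Comparing those numbers across hosts is probably a sounder basis for capacity planning than any fixed utilization target.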

    Read the article

  • Ubuntu Wireless not working on Lenovo T400

    - by VmaxBoss
    This problem started after upgrading to 12.04, and my system is up to date. I have tried most of the solution proposals found on the net.

        $ lspci -nnk | grep -iA2 net
        00:19.0 Ethernet controller [0200]: Intel Corporation 82567LF Gigabit Network Connection [8086:10bf] (rev 03)
                Subsystem: Lenovo Device [17aa:20ee]
                Kernel driver in use: e1000e
        03:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237]
                Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211]
                Kernel driver in use: iwlagn

        $ iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11abgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=15 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:off

        $ sudo lshw -C network
          *-network
               description: Ethernet interface
               product: 82567LF Gigabit Network Connection
               vendor: Intel Corporation
               physical id: 19
               bus info: pci@0000:00:19.0
               logical name: eth0
               version: 03
               serial: 00:22:68:1a:c4:75
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 32 bits
               clock: 33MHz
               capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.0.2-k2 duplex=full firmware=1.8-3 ip=192.168.2.154 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:29 memory:fc000000-fc01ffff memory:fc024000-fc024fff ioport:1820(size=32)
          *-network DISABLED
               description: Wireless interface
               product: PRO/Wireless 5100 AGN [Shiloh] Network Connection
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: wlan0
               version: 00
               serial: 00:26:c6:6c:2d:24
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlagn latency=0 multicast=yes wireless=IEEE 802.11abgn
               resources: irq:30 memory:f4300000-f4301fff

    Please help. Br/VB

    Read the article

  • Uptime concerns in case of AWS outage

    - by Aditya Patawari
    I am running an Elastic Load Balancer backed by 2 instances in different Availability Zones in US East, and I am using Multi-AZ RDS as well. Ideally this should ensure that if one AZ goes down, it does not affect the app, because everything is spread across multiple AZs. But the recent AWS outage took the app down for a long time. I am not sure how this could happen; it would be great if someone could point out what went wrong. The major question I have is: how can I avoid this in the future? I could set up app servers across different regions, or even providers, and use DNS for load balancing, but what do I do with MySQL? Read replicas will introduce some lag, which I would want to avoid.

    Read the article

  • Ubuntu Lucid (10.04) subpixel font rendering crashes Xorg

    - by user36066
    Hi everyone... I really don't know how to solve this on my own, so I thought I'd give this site a chance. After upgrading to Lucid I ran into some problems. With some experimenting I came to the conclusion that if I enable subpixel smoothing on fonts, then the moment I start any application not native to GTK+ (Wine, OpenOffice, wxWidgets, ...) my X server crashes. At first this seemed like something had gone wrong during installation. To cut a long story short, after 3 clean installations and a whole bunch of experimenting, the same thing happens all over again. The strange thing is, if I configure any other font smoothing besides subpixel, everything works like it should. Any thoughts?

    Read the article

  • Windows W8, L8 and now H8

    - by raccoon_tim
    Windows 8 is having to endure a lot of headwind at the moment, and the forecast doesn't appear to improve in the near future either, with prominent game developers and publishers taking to the barricades accusing Microsoft of building a closed ecosystem. I am forced to side with this opinion, as I too see services like Steam as playing an important role in the gaming world, which just happens to be an industry that cannot be sidelined.

    What Microsoft is attempting to do is merge the PC and mobile markets. The Windows Marketplace is to be the only place where you can purchase Windows applications in the future, starting now with Metro apps. This is what Apple, Google and Microsoft have been doing with mobile devices for some time now, and it's what we have all come to expect. The PC market is different, however. It has always been open, which has resulted in a diverse market that allows third parties to build successful distribution and marketing networks. You could argue that Microsoft is just doing something that Steam has been doing for a long time now, but the difference is that Microsoft would own both the marketplace AND the operating system, which would eventually give it dominance over the whole Windows application distribution network.

    Currently there is no real alternative to Windows in the PC gaming world, but I would expect to see Mac OS and Linux getting more popular if Microsoft does not notice the signals coming from the gaming industry and choose to once again open up the market on the PC.

    Read the article

  • How to force "Windows Explorer" to open new folders in the same window

    - by yoshiserry
    I have been searching for an answer to this question for a very long time. I have checked the "open folders in the same window" radio button in the General tab of Folder Options. I have also been told to uncheck the "launch folder windows in a separate process" option in the View tab of Folder Options. I'm thinking somehow this must be a registry issue. Does anyone know a registry hack that will fix this problem and force Windows Explorer to open folders in the same window? I'm sick to death of having so many windows open. I'm running Windows 7 Ultimate Beta, build 7100.

    Read the article

  • Download web server structure with empty files

    - by golimar
    I want to make a mirror of a web server, but downloading the actual files would take too long. So I thought of having just the directory and file structure, and when I need the actual contents of a file, I can download just that file. I have tried wget --spider URL, and in a short time it created the directory structure on my local disk, with no files. But I've checked all of wget's and curl's switches and there is nothing like what I need. Can this be done with wget, curl or any other tool?
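    A sketch of that two-step approach with wget; example.com stands in for the real site. In recursive spider mode wget walks the site and leaves the directory tree behind without keeping the files, and -x (--force-directories) later recreates the path when fetching a single file:

        # build the empty directory tree locally
        wget --spider -r http://example.com/

        # later, fetch one file into its proper place in the tree
        wget -x http://example.com/docs/manual.pdf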

    Read the article

  • Searching Multiple Terms

    - by nevets1219
    I know that grep -E 'termA|termB' files allows me to search multiple files for termA OR termB. What I would like to do instead is search for termA AND termB. They do not have to be on the same line, as long as the two terms exist within the same file. Essentially this is a "search within results" feature. I know I can pipe the results of one grep into another, but that seems slow when going over many files:

        grep -l "termA" * | xargs grep -l "termB" | xargs grep -E -H -n --color "termA|termB"

    Hopefully the above isn't the only way to do this. It would be extra nice if this could work on Windows (I have Cygwin) and Linux. I don't mind installing a tool to perform this task.
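    A small refinement of the pipeline above: null-delimited names make it safe for filenames containing spaces (the -Z and -0 flags pair grep with xargs):

        grep -lZ "termA" * | xargs -0 grep -lZ "termB" \
            | xargs -0 grep -E -H -n --color "termA|termB"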

    Read the article

  • How can I automate Photoshop easily from PHP?

    - by justtaffy
    I'm building a web service, and I can't seem to figure this out. I've Googled everywhere, but nothing seems to fit my needs. The user enters details into the website, and then a PSD file is edited via ExtendScript with the details they've entered. The file is saved and made available for the user to download. I have the script complete, the PSD ready and the PHP set up as I want it, but I can't figure out how to launch the script automatically from PHP. I've tried a few things and nothing seems to work. Any ideas would be great; I think I've been staring at it for too long and am just confusing myself now.

    Read the article

  • Hosting woes

    Unfortunately quite a few people have noticed our recent hosting problems, but if you are reading this they should all be over, so please accept our apologies.

    Our former web host decided to migrate to a new platform. It had all sorts of great features, but on reflection hosting wasn't one of them. We knew it was coming, and had even been proactive and requested several dates on their migration control panel so I could be around to check it afterwards. The dates came and went without anything happening, so we sat back and carried on for a couple of months, thinking they'd get back to us when they were ready. Then out of the blue I got an email saying it had happened! Now this is what I call timing: I had client work to complete, a 50-minute presentation to write, and there was a little conference called SQLBits that I help organise at the end of the week, and then our hosting provider decides to migrate our sites.

    Unfortunately they only migrated parts of the sites; they forgot things like the database for SQLDTS. The database eventually appeared, but the data didn't. Then the data pitched up, but without the stored procedures. I was even asked if I could perform a backup and send it to them, as they were getting timeout errors. Never mind the issues of performing a native backup on a hosted server; whilst I could have done something, the question actually left me speechless. You cannot access your own SQL Server and you expect me to be able to help? This site was there, but hadn't been set as an IIS application, so all path references were wrong, which meant no CSS, and all the internal navigation and links were broken. The new improved hosting platform's control panel didn't appear to like setting applications. It said it would (you'd have to wait 2 hours, of course), then just decided not to bother after all.

    So needless to say, after a very successful SQLBits I focused my attention on finding a new web host, and here we are again. Sorry it took so long.

    Read the article

  • Tool for monitoring windows processes and folders

    - by Stoimen
    I am looking for a tool that tracks and keeps information about some processes on Windows: how long they've been running, and when they were started and closed. It would also be nice to monitor folders, to see whether data has been added to or deleted from them. This is basically what I need. I tried Process Monitor, but it gave me too much information; just for creating a new folder it lists tons of useless detail, when all I need is the time of creation. I also tried Process Explorer, but it doesn't fit my needs either, because it only shows the current state of my PC; I need to run some processes for a couple of hours and then check what went wrong, but unfortunately no records are saved.

    Read the article

  • grep/search for multiple lines in a file.

    - by GSto
    Let's say I have a file with a long nested array that's formatted like this:

        array(
            'key1' => array(
                'val1' => 'val',
                'val2' => 'val',
                'val3' => 'val',
            ),
            'key2' => array(
                'val1' => 'val',
                'val2' => 'val',
                'val3' => 'val',
            ),
            //etc...
        );

    What I would like to do is have a way to grep/search the file and, by knowing key1, get all the lines (the sub-array) it contains. Is this possible?
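    grep is line-oriented, so a range tool fits better here; a sketch with sed and awk that prints everything from the 'key1' line down to the next line containing the closing '),' (this assumes the indentation style shown above; the filename is a placeholder):

        # sed: print the inclusive range between the two patterns
        sed -n "/'key1'/,/),/p" file.php

        # the same range match in awk
        awk "/'key1'/,/\),/" file.php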

    Read the article

  • What editor/viewer to use to inspect large text based files?

    - by Turismo
    Are there any text editors/viewers (preferably on Windows, but other platforms are also fine) that can handle files of 500 MB or more? The editors I checked so far (Notepad++, Notepad, Eclipse) all choked on files of that size.

    Edit: Many thanks for the great suggestions. I tried gvim, as it was the top-voted answer and was available on Windows. It opened the file in a reasonable time, and after that scrolling and searching were very smooth, as long as syntax highlighting was turned off. Of the other editors mentioned, TextPad and EmEditor both claim to handle large files very well; EmEditor seems to be built exactly for editing large files. I'll probably try both and report back.

    Read the article

  • Forwarding requests through Apache to OpenVPN

    - by Ency
    I am wondering if it is possible to redirect requests through Apache to e.g. OpenVPN. Since I need to bypass a firewall, I need to use port 80/443 for OpenVPN, but the Apache server has both ports for itself.

        Client ---> Firewall (allows 80/443 only) ---> | ---> Apache (80/443) ---> OpenVPN (1194) |
                                                       |--------------- My Server ----------------|

    I was thinking about mod_proxy, but I am not sure if it is a good idea. Have you got any ideas? I hope a possible solution will be applicable to virtual hosts as well.
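    mod_proxy only speaks HTTP, so it cannot forward the OpenVPN protocol. One commonly suggested alternative reverses the chain: let OpenVPN own port 443 and use its port-share option to pass any non-VPN TLS traffic through to the web server. A sketch, assuming Apache can be rebound to a hypothetical local port:

        # OpenVPN server config (sketch)
        proto tcp
        port 443
        port-share 127.0.0.1 8443   # non-OpenVPN clients on 443 get handed to Apache

    Since Apache still terminates the HTTP traffic itself, name-based virtual hosts keep working.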

    Read the article

  • What is the easiest way to copy Chrome's logins/passwords into KeePass without creating duplicates?

    - by ldigas
    Okay, here's the thing. I have most of my login info in two places; one is in a KeePass file and the other is in Chrome. Being a lazy sort of person, and since Chrome/KeePass integration never really started to work the way it should, a couple of times a year I use the NirSoft tool to export the Chrome logins/passwords into a textual .csv file and then import it into KeePass, creating lots of duplicates in the process, which I then clean up, and so on. In the meantime, all the new logins I accumulate just stay in Chrome. As you might notice, this is not really the best way to do it. Is there a faster way: copying logins from Chrome to KeePass without creating duplicates in KeePass? Or has anyone perhaps found a way to get KeePass to work with Chrome under Win XP SP3? KeePass 1.0 or 2.0, it doesn't make a difference as long as it works.

    Read the article

  • Active Directory GPO

    - by Phillip R.
    I am looking into some weird issues with Active Directory and Group Policy. This domain has been upgraded from Windows NT and has had a few different administrators over the years. I am looking through the Default Domain group policy and the Default Domain Controllers group policy, in the security areas. To use the "log on locally" right as an example: it shows SIDs that begin with asterisks and are quite long; they look sort of like the following: *S-1-5-21-787626... Normally, when I see something like this, I would think that the user account was no longer there and this was never cleaned up. Am I wrong in my assumption? Thanks in advance.
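    One way to check whether such a SID is orphaned is to try to resolve it back to an account name, for example with Sysinternals PsGetSid (the SID below is truncated, as in the question):

        C:\> psgetsid S-1-5-21-787626...

    If the SID still maps to an account, PsGetSid prints the account name; if it cannot be resolved, it is most likely a leftover from a deleted account.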

    Read the article
