Search Results

Search found 50994 results on 2040 pages for 'simple solution'.


  • How to deal with MySQL Connector/ODBC error "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'"

    - by user12653020
    I am sure many users have run into a mysterious problem where perfectly working ODBC configurations suddenly start failing with errors like:

        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

    The error message might be preceded by something like [nxDc[yQ]. At the same time, odbc.ini specifies in its DSN a different SOCKET=/tmp/mysql.sock or a TCP connection SERVER=<remote_host_or_ip>. The question is: what happened to make the ODBC driver start ignoring the DSN options? The clue lies in the corrupted string [nxDc[yQ], which was actually [UnixODBC][MySQL] with every 2nd character removed. This is a case of bad conversion from SQLCHAR to SQLWCHAR. The UnixODBC driver manager took a single-byte character string from the client application and tried to convert it into wide (multi-byte) characters for the Unicode version of the MyODBC driver. Initially the piece of the connection string was represented by 1-byte chars:

        [S][E][R][V][E][R][=][m][y][h][o][s][t][;]

    After the bad conversion to wide chars (commonly 2-byte UTF-16) it became

        [SE][RV][ER][=m][yh][os][t;]

    instead of

        [S\0][E\0][R\0][V\0][E\0][R\0][=\0][m\0][y\0][h\0][o\0][s\0][t\0][;\0]

    Naturally, the MyODBC driver could not parse the bad string and fell back to the default connection type (SOCKET) with the default value (/var/lib/mysql/mysql.sock). Now we know what happened, but why did it happen? In most cases it is caused by the ODBCManageDataSourcesQ4 utility or its older analog ODBCConfig. When registering ODBC drivers, they add lots of additional options, and one of these options badly affects the UnixODBC driver manager itself. The solution is simple: remove or comment out the option (empty by default) set for the driver in the odbcinst.ini file:

        [MySQL ODBC 5.2.6 Driver]
        Description    =
        Driver         = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Driver64       = /home/dbs/myodbc526/lib/libmyodbc5w.so
        Setup          = /home/dbs/myodbc526/lib/libmyodbc5S.so
        Setup64        = /home/dbs/myodbc526/lib/libmyodbc5S.so
        UsageCount     = 1
        CPTimeout      = 0
        CPTimeToLive   = 0
        IconvEncoding  =  # <--------- remove this line
        Trace          =
        TraceFile      =
        TraceLibrary   =

    After applying this simple solution (removing the IconvEncoding = line), everything returned to normal. Before removing that line I tried putting different encoding names there, but the results were not good, so I really don't know how to use it properly. Unfortunately, the UnixODBC manuals say nothing about it, so removing the option was the only way to get things done.
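    If you prefer scripting the fix, a minimal sketch (the odbcinst.ini path varies by system, so treat /etc/odbcinst.ini below as an assumption; odbcinst -j prints the locations unixODBC actually uses):

        odbcinst -j                                          # show which odbcinst.ini is in effect
        sudo sed -i '/^IconvEncoding/d' /etc/odbcinst.ini    # drop the offending option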


  • Statistics for and Details About Open Source Swing Projects

    - by user592704
    I'm looking for process-related information on open-source Swing projects: how the task was described, how many developers were involved, how much time the solution took, etc. Are there any open-source (online) chronicles along those lines? I strongly prefer projects that include the authors' names. I watched this project and it seems fine, but I still couldn't find any information concerning its current project task(s), its developer group, or its chronicles (tips, milestones, feedback, etc.). For example, if I see this swing component, I'd like to know the above information.


  • Paging problem in Data Form Webpart SP2010

    - by Patrick Olurotimi Ige
    I was working on a web part in SharePoint Designer 2010 and decided to use the default custom paging. But I noticed the "previous page" link isn't working: it basically just takes me back to the start page of the list, not the previous page. After a closer look I noticed Microsoft is using "history.back()", which is supposed to work but doesn't work well for paged data. Anyway, before I started further investigation I found Hani Amr's solution at the right time, and that did the trick. Hope that helps.


  • Rewriting a URL for tomcat through an ajp connection

    - by StudentKen
    I've made several attempts to resolve this, but all have come to naught. Currently I have Apache set up to forward all URLs at and beyond the /portal/ path to Tomcat. Unfortunately, Tomcat receives these requests under /portal/appName, a subdirectory of webapps, rather than under the webapps root where my WARs are deployed. Is there a simple solution to this that I'm not seeing? I've been trying to use mod_rewrite to rewrite ^/portal/ to /, but that doesn't yield the expected results (perhaps I'm doing it wrong?).
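    One approach worth trying instead of mod_rewrite is mod_proxy_ajp's ProxyPass, which can strip the /portal/ prefix while forwarding. A minimal sketch, assuming mod_proxy_ajp is enabled, Tomcat's AJP connector listens on the default port 8009, and the application is deployed as Tomcat's root webapp:

        # /portal/foo on Apache becomes /foo on Tomcat
        ProxyPass /portal/ ajp://localhost:8009/

    Note the application must actually live at Tomcat's root context (e.g. deployed as ROOT.war) for the stripped prefix to resolve; otherwise Tomcat will still look for it under /portal/appName.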


  • Why does Compiz keep crashing?

    - by odeh86
    I am running Ubuntu 14.04 x86_64. Compiz keeps crashing frequently and I can't work on my laptop. If I have media running, I can still hear the audio; it's just Compiz that is crashing, as I see in some crash logs. I tried to reset both Compiz and Unity, but it is still not working. Is there a solution, or should I jump back to GNOME? Here are the commands that I tried:

        dconf reset -f /org/compiz/
        unity-reset
        setsid unity
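    For what it's worth, a couple of hedged diagnostics often suggested for this symptom (paths and commands are standard on 14.04, but your crash files may be named differently):

        ls /var/crash                  # Apport crash reports, e.g. _usr_bin_compiz.*.crash
        setsid compiz --replace &      # restart the window manager in the running session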


  • Clicking links in pdfs opened with Okular opens Abiword (instead of the default browser)

    - by Marius Hofert
    I work with Xubuntu 12.04 and use Okular (version 0.14.3) to view PDF files. If I click a link in a PDF file (created with pdflatex using the hyperref package), Abiword opens instead of my default browser, google-chrome. How can I change this behavior? The settings in Okular do not seem to provide a solution. (Note that I have set google-chrome as the preferred application for web browsing under Settings > Preferred Applications, so that's not the problem.)
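    One commonly suggested fix (outside Okular's own settings) is to register the browser for http/https URL handling at the xdg level, which applications often fall back to when running outside their native desktop environment. A sketch, assuming the desktop file is named google-chrome.desktop (check /usr/share/applications for the actual name):

        xdg-settings set default-web-browser google-chrome.desktop
        xdg-mime default google-chrome.desktop x-scheme-handler/http x-scheme-handler/https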


  • SDL 2.0: is there a library to create 2D particle effects rapidly?

    - by mm24
    I would like to create a light/explosion particle effect using some built-in library. I am used to Cocos2D, where there are specific classes that you can simply initialize at a certain position to produce a certain particle effect. Is there a way to do so in SDL 2.0 with C++? I have found this tutorial, but it seems to opt for a "build it yourself" solution, which is OK, but I do not want to re-invent the wheel if someone else has already built it.


  • HTTP 302 redirect error

    - by Katherine Katie
    Response headers:

        status         HTTP/1.1 302 Found
        connection     close
        pragma         no-cache
        cache-control  no-cache
        location       /
        location       /NKiXN/

    I don't know how it got this way. I used the W3 Total Cache plugin, but I have deactivated it now. Please help me solve this: the site is dropping in the search engine rankings and Googlebot is unable to follow it. Urgent help required; if this is a configuration problem with the server, please let me know the solution. Site: http://onlinecheapestcarinsurance.co.uk/
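    To see exactly what the server is sending now (and confirm whether the rogue /NKiXN/ redirect survived deactivating the plugin), a quick check from any machine:

        curl -sI http://onlinecheapestcarinsurance.co.uk/ | head

    If the 302 persists, a common culprit on compromised WordPress sites is an injected rewrite rule, so inspecting .htaccess on the server is a reasonable next step:

        grep -n "RewriteRule" .htaccess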


  • Calibre icon missing from system tray

    - by onvas
    After installing Calibre, I immediately changed the preferences, in particular enabling the system tray icon. The program restarted, as required, but the icon didn't appear in my system tray. So I tried restarting my laptop, but got the same negative result. I had already whitelisted all apps after installing my OS, so that's likely not the problem. Is there another solution for this? I'm using 12.04 64-bit on my ThinkPad R61i.
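    On 12.04 the Unity panel only shows whitelisted tray icons, so it's worth verifying that the whitelist actually took effect; a hedged check-and-set (the 'all' value admits every application):

        gsettings get com.canonical.Unity.Panel systray-whitelist
        gsettings set com.canonical.Unity.Panel systray-whitelist "['all']"

    After changing the value, log out and back in for it to apply.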


  • 5 ways to stop code thrashing…

    - by MarkPearl
    A few days ago I was programming on a personal project and hit a roadblock. I was applying the MVVM pattern, and for some reason my view model was not updating the view when the state changed. I had applied this pattern many times before and had never had this problem. It just didn't make sense. So what did I do? I did what anyone would have done in my situation and looked to pass the blame to someone or something else. I tried to blame one of the inherited base classes, but it looked fine; then Visual Studio, but it seemed to be fine; and eventually any random segment of code I came across. My elementary problem had now mushroomed into one that had lost any logical basis, and I was in thrashing mode! So what to do when you begin to thrash?

    1) Do a general code cleanup – There is a difference between cleaning code and changing code. When you thrash, you change code, and you want to avoid this. What you really want to do is things like renaming variables to have better meaning and going over your comments.

    2) Do a proof of concept – If cleaning code doesn't help, you want to isolate the problem and identify the key concepts. When you isolate code, you ideally want it in a totally separate project with as little complexity as possible. Make the building blocks and try to replicate the functionality that you are getting in your current application.

    3) Phone a friend – I have found that speaking to someone else about the problem generally helps me solve any thrashing issues I am having. Usually they don't even have to say anything to solve the problem; just talking them through it helps you get clarity of mind.

    4) Let the dust settle – Sometimes time is the best solution. I have had a few problems that, no matter who I discussed them with and no matter how much code cleaning I had done, I just couldn't seem to fix. My brain just seemed to be going in circles. A good night's rest has always helped, and often just the break away from the problem has helped me find a solution.

    5) Stack Overflow it – Similar to phone a friend. I am really surprised to see what a melting pot Stack Overflow has been and what a help it has been in solving technology-specific problems. Just be considerate to those using the site and explain clearly exactly what problem you are having and the technologies you are using, or else you will probably not get any useful help…


  • gedit-developer-plugins on Ubuntu 13.10 Saucy Salamander

    - by Francisco Costa
    I've installed gedit-developer-plugins on a fresh Ubuntu 13.10 Saucy Salamander. I launched gedit (from the terminal), but when I turn a GDP plugin on (in this case GDP Find) I get the following error:

        (gedit:3640): libpeas-WARNING **: Could not find loader 'python' for plugin 'gdpfind'

    The same error shows only while turning on GDP plugins; the others work well. Any solution to make these plugins work?
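    The loader name 'python' suggests libpeas' Python loader isn't installed. The package name varies between releases, so rather than guessing it, search for it first; a sketch:

        apt-cache search libpeas | grep -i loader
        # then: sudo apt-get install <the loader package listed above>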


  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that they perform poorly, and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario where using a composite attribute index brought a radical improvement in query execution times: they went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discount information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
            "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
            "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        ...
        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development, the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes, the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase, a Coherence query explain plan was used to identify by how many entries each index reduced the result set, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was sub-optimal. Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed first. But examining the corporate global discount data, we realized that the "good" order of the attributes differed depending on the values of the parameters used in the query. In particular, if the attributes brand and family had specific values, it was more optimal to use a different query with a different attribute order. Ultimately, we ended up with three optimal variants of the query, each used in its relevant case:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. This did mean, however, that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multi-extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used by the LessEqualsFilter and GreaterEqualsFilter. We were now running a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index was created as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);
        // Fill composite parameters.
        String iSalesCompanyId = xxxx;
        ...
        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                                               iFamilyId, iManufacturer, iBrand, iSalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index, the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.
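    As a side note, the explain plan used in the second round can be captured with Coherence's QueryRecorder aggregator (available from Coherence 3.7.1 on). A self-contained sketch with a hypothetical cache name and filter, not the project's actual code:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.aggregator.QueryRecorder;
        import com.tangosol.util.filter.EqualsFilter;

        public class ExplainPlanDemo {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("global-discounts"); // hypothetical name
                Filter f = new EqualsFilter("getBrand", "someBrand");         // the filter under test
                // EXPLAIN estimates each index's selectivity without executing the query;
                // QueryRecorder.RecordType.TRACE would execute it and record actual per-step costs
                Object plan = cache.aggregate(f, new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
                System.out.println(plan);
            }
        }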


  • SharePoint Guidance Logger: usage, setup and extension

    - by spano
    Introduction: Log records are essential to any application for troubleshooting problems. When beginning a new SharePoint project, one of the first needs is a core logging component that can be used throughout the application code. In this post I will talk about the logging solution that we are using (based on the Patterns & Practices SharePoint Logger), and how to set it up, configure it, and read the logs. SharePoint 2010 includes enhanced functionality for logging and tracing. You can now throttle...(read more)


  • Flickering problems in VLC

    - by Hutley
    I'm having problems with VLC. When I go to watch a movie, the video and sound flicker. I saw in some places that if I turn off the Compiz animations it will work fine, but I don't want to turn off the animations. Another solution I saw was to run the command gstreamer-properties in a terminal and set the video output to "X Window System (without Xv)", but this isn't working... Does anyone have any idea how to resolve this? Thanks.


  • Community is Great

    - by GrumpyOldDBA
    I have great respect for the many people who contribute to the community; without them I would often struggle in my role, for sure. When "strange events" happen in a busy production environment, it can be quite daunting when it seems everyone around you expects you to have the answer/solution at your fingertips. I'm indebted to Paul White http://sqlblog.com/blogs/paul_white/default.aspx for confirming I'd found a bug and doing all the hard work, including raising a connect item https:/...(read more)


  • BonitaSoft wins an OW2 Best Project Award, a prize capping an excellent 2012 for the French open-source BPM vendor

    BonitaSoft wins the OW2 Best Project Awards 2012, a prize capping an excellent year for the French open-source BPM vendor. OW2, the international community for open-source middleware and application platforms, has announced the results of its first OW2 Best Project Awards. Among the four innovative open-source projects honored this year, the French BPM Bonita Open Solution appears in the "Market" category. The prize caps a year full of success for the French vendor (see also "BonitaSoft passes one and a half million downloads").


  • Application crash cleared the contents of a folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80 GB of data, is completely empty, yet the system is not reporting the correct free space. Is there a way to restore the contents of just that folder? The solutions I've found are for a whole partition; I just want to recover the contents of one folder.
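    If the partition is ext3/ext4, one frequently recommended avenue is extundelete, which can target a single directory. A hedged sketch (the device and path are placeholders; the filesystem must be unmounted or read-only, and you should stop writing to it immediately to avoid overwriting recoverable blocks):

        sudo umount /dev/sdXN    # or remount read-only
        sudo extundelete /dev/sdXN --restore-directory /home/user/Downloads

    Recovered files land in a RECOVERED_FILES directory under the current working directory. photorec is an alternative if extundelete finds nothing, though it recovers by file type rather than by folder.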


  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity and variety; mobile and social computing also need to operate on such data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge for the industry. Over the past few decades, technology trends focused on best user experience, operating systems, high-performance computing, high-performance web sites, analysis of warehouse data, service-oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can turn dark data into useful data in a single day. Useful data can be characterized by contextual relevance, personalization, and on-time delivery. In most cases, data from a single source does not complete the picture. Data has to be combined and computed from various sources, where it may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how it is retrieved and computed; they are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files and XML). The ODSI engine is built on XQuery, so an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency in repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.


  • Download PHP Solutions magazine for free

    Hello, the publisher of PHP Solutions magazine is offering some of its issues as free downloads, notably those from April and May of this year. PHP Solution is one of the few French-language magazines offering quality, well-documented articles on PHP. The articles are generally written by people well known in the PHP community (Guillaume Ponçon, Cécile Odero, Magali Contensin (CNRS project lead)). Take advantage of it. Link ...


  • After an update, Ubuntu 12.04 changes the wired interface from eth0 to eth1 and network configuration fails

    - by Don McLaren
    I have been running Ubuntu 12.04 for a couple of months on this system with no problems. After a recent update, the boot fails to define the wired network interface as eth0, so the default network configuration fails. ifconfig -a shows eth1 instead, unconfigured. Booting from CD works OK and configures eth0 as normal.

    Solution:

        sudo NetworkManager

    The network manager figures out the situation and configures eth1.

    Question: Where did this problem come from all of a sudden?
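    A likely suspect on 12.04 is the udev persistent-net rules file: when an update (or a changed MAC, e.g. after a NIC driver change) adds a second entry, the adapter gets bumped to eth1. A hedged check:

        cat /etc/udev/rules.d/70-persistent-net.rules

    If there are two lines for the same card, delete the stale one (or remove the whole file) and reboot; udev regenerates the file and the interface should come back as eth0.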


  • How to fix “Cannot connect to the configuration database.”

    - by ybbest
    The problem: When I browse to a SharePoint site, I get the "Server Error in '/' Application: Cannot connect to the configuration database" message.

    The analysis: The reason you get this message is that the SharePoint WFE cannot connect to the SQL database; you need to check whether the SQL Server service is started. A quick command-line check is sketched below.

    The solution: When checking the SQL Server service, I saw it was not started. After starting the service, everything works like a charm.
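    For reference, a way to check and start the service from an elevated command prompt (assuming a default SQL Server instance; named instances use MSSQL$<InstanceName>):

        sc query MSSQLSERVER
        net start MSSQLSERVER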


  • On Demand Webinar: Extreme Database Performance meets its Backup and Recovery Match

    - by Cinzia Mascanzoni
    Oracle’s Sun ZFS Backup Appliance is a tested, validated and supported backup appliance specifically tuned for Oracle engineered-system backup and recovery. The Sun ZFS Backup Appliance integrates easily with Oracle engineered systems and provides an integrated high-performance backup solution that reduces backup windows by up to 7x and recovery time by up to 4x compared to competitors' engineered-system backup solutions. Invite partners to register for this webcast to learn how the Sun ZFS Backup Appliance can provide superior performance, cost effectiveness, simplified management and reduced risk.


  • ISV Exastack program: IBIS, Performix, Cardtek

    - by Javier Puerta
    Impact Business Information Solutions (IBIS) accelerates insights for Health Sciences decision-makers, helping them achieve new levels of productivity using Oracle’s extreme-performance system, Oracle Exadata Database Machine. Read more. Performix Inc achieves Oracle Exadata Optimized status. Read more. Cardtek Group company SmartSoft's payment processing solution achieves Oracle Exadata Optimized status. Read more.


  • Help identify the pattern for reacting to updates

    - by Mike
    There's an entity that gets updated from external sources. Update events come at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, it is the most current state of the entity that needs to be processed. There's a point of no return during processing where the current state of the entity (a consistent state, i.e. no partial update) is saved somewhere else, and processing goes on independently of any arriving updates. Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one running processing before the point of no return, i.e. the entity state should not be processed more than once.

    So what I'm looking for is a pattern for canceling the current processing before the point of no return, or abandoning its results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database, with some files on disk, and the system is in .NET with web services and message queues.

    What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise, the processing data is persisted and it goes beyond the point of no return; a sketch of this queue table is below.

    Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, I'd need to access the queue API at the point of no return to check for the existence of new messages, and that approach also lacks elegance. Is there a name for this pattern, and an existing solution?
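    A minimal sketch of that queue table in SQL (hypothetical names; the monotonically increasing Id is what makes the point-of-no-return check cheap):

        CREATE TABLE EntityUpdates (
            Id         BIGINT IDENTITY(1,1) PRIMARY KEY,      -- monotonically increasing
            EntityId   BIGINT NOT NULL,
            ReceivedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
        );

        -- Each arriving update inserts a row; processing records @StartId = MAX(Id)
        -- for its entity when it begins. At the point of no return:
        --     SELECT MAX(Id) FROM EntityUpdates WHERE EntityId = @EntityId;
        -- if the result is greater than @StartId, a newer update arrived: discard this
        -- run and let the processing triggered by that update carry on instead.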


  • Ubuntu 12.10 hangs on checking battery state

    - by Waxolunist
    I have a ThinkPad X230. On boot it hangs at "Checking battery state... [OK]". A second boot is usually successful. What I have seen in other posts doesn't fit my problem, because:

    - My graphics card is the Intel 3rd Gen Core processor graphics controller (not NVIDIA)
    - It does not happen every time
    - I have a freshly installed 12.10 with the latest updates (not an upgrade)

    Does anyone experience the same issue or have a solution?
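    "Checking battery state" is usually just the last console message printed before X starts, so the real failure is often in the display manager or X logs rather than in power management. A couple of hedged places to look after a hung boot:

        cat /var/log/lightdm/lightdm.log
        grep "(EE)" /var/log/Xorg.0.log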

